Learning, privacy and the limits of computation

Intelligent learning agents are now deployed virtually everywhere and interact with ever more complex systems. Consequently, they must process data and make complicated decisions at an ever-increasing pace. They find applications in many domains, such as bioinformatics, control systems, networking, cognitive radio, traffic management, robotics and smart grids. However, hardware limits on fundamental computational operations, together with communication and privacy constraints, render their behaviour suboptimal. Current algorithms do not take these limits into account, yet they are especially important when the amount of data and computation is extremely large, as is the case in most modern problems.

We shall develop theory and algorithms to investigate the interaction between approximate inference and planning in constrained agents, focusing on agents that are trying to solve the reinforcement learning problem. On the theoretical side, this will be done by formalising computational limitations as approximate statistics or differential privacy constraints: two new areas in learning theory that are deeply connected to computational problems. We can then obtain general bounds on problems under such constraints. We can also leverage the statistical approximations to optimise the amount of computational effort used while planning, which will allow us to design efficient algorithms. Experimentally, we shall test our algorithms in detailed simulations across a rich library of settings, ranging from simple test benches to complex applications such as automated drug design and smart grids.

The practical impact of the research should not be underestimated. The overabundance of data makes the use of imprecise computation inevitable. This is especially true for problems in astronomy, ecology, evolution and economics. Simultaneously, the emergence of new approximate computing hardware makes consideration of such methods imperative.
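To make the notion of a differential privacy constraint concrete, the sketch below shows the standard Laplace mechanism for releasing an epsilon-differentially private mean of bounded data. This is a textbook construction used here purely as an illustration; it is not an algorithm from this project, and the function names and parameters are our own.

```python
import math
import random


def laplace_noise(scale):
    """Sample zero-mean Laplace noise with the given scale via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_mean(values, lower, upper, epsilon):
    """Epsilon-differentially private mean of values assumed to lie in [lower, upper].

    Changing one record moves the clipped mean by at most (upper - lower) / n,
    so adding Laplace noise with scale sensitivity / epsilon gives epsilon-DP.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]  # enforce the bounds
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_noise(sensitivity / epsilon)
```

The privacy parameter epsilon trades accuracy for privacy: smaller epsilon means more noise, which is exactly the kind of statistical approximation whose interaction with planning the project studies.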
The proposed research will strengthen the connections between the fields of algorithmic complexity, decision theory, differential privacy, learning theory, and statistics, and will lead to a new research track within those fields.

Participants

Christos Dimitrakakis (contact)

Senior researcher at Computer Science and Engineering, Computing Science (Chalmers)

Funding

Swedish Research Council (VR)

Funding years 2016–2019

Latest update

2015-12-22