Dynamic Enforcement of Differential Privacy
Following recent privacy failures in releases of personal data, differential privacy has received considerable attention in the research community.
Despite its young age (Dwork, 2006), this mathematical concept has attracted many researchers for its robustness against the identification of individuals, even in the presence of background information.
Besides that, its flexible definition makes it compatible with different data sources, data mining algorithms, and data release models.
Its compositionality properties facilitate the design of "differential privacy aware" programming languages.
These programming languages and frameworks empower non-experts to construct complex data mining analyses with proven differential privacy guarantees.
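Compositionality is what makes such frameworks sound: sequential composition lets a runtime charge each query's epsilon against a global privacy budget, so even a non-expert's chain of queries stays within a proven bound. The following minimal Python sketch illustrates the idea under stated assumptions (the class and method names are hypothetical; real systems such as PINQ are considerably more careful):

```python
import math
import random

class PrivateDataset:
    """A toy PINQ-style budget tracker (hypothetical API, for illustration only)."""

    def __init__(self, data, epsilon_budget):
        self._data = list(data)
        self._budget = epsilon_budget  # total epsilon available for all queries

    def noisy_count(self, predicate, epsilon):
        # Sequential composition: each query consumes part of the budget,
        # and a query that would overspend is refused outright.
        if epsilon <= 0 or epsilon > self._budget:
            raise ValueError("insufficient privacy budget")
        self._budget -= epsilon
        true_count = sum(1 for row in self._data if predicate(row))
        # Laplace mechanism: a counting query has sensitivity 1, so adding
        # Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
        u = random.random() - 0.5  # uniform on [-0.5, 0.5)
        noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
        return true_count + noise
```

The point of the sketch is the accounting, not the mechanism: because every release passes through `noisy_count`, the total privacy loss of an arbitrary analysis is bounded by the initial budget by construction.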
This thesis mostly focuses on two aspects of such systems: proving correctness of these frameworks, and improving their accuracy.
Correctness matters because some frameworks, such as PINQ, deviate from the underlying theory without proper justification.
With respect to accuracy, we present an improved method of enforcing privacy that yields better data utilisation, among other benefits. In this setting, individuals take control of their privacy requirements rather than being treated as mere rows in a database. As a result, they can opt in to a database at their expected privacy level and optionally opt out later.
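The opt-in/opt-out idea can be pictured as per-individual budget accounting: each participant declares the total privacy loss they accept, and a query only touches records whose remaining budget covers it. The toy sketch below illustrates this under assumptions of my own (all names and the API are hypothetical, not the mechanism developed in the thesis):

```python
import math
import random

class OptInDatabase:
    """Toy sketch of personalised privacy budgets (hypothetical API)."""

    def __init__(self):
        self._values = {}   # participant id -> numeric value
        self._budgets = {}  # participant id -> remaining epsilon

    def opt_in(self, pid, value, epsilon):
        # The participant states the total privacy loss they accept.
        self._values[pid] = value
        self._budgets[pid] = epsilon

    def opt_out(self, pid):
        # Future queries no longer touch this participant's record.
        self._values.pop(pid, None)
        self._budgets.pop(pid, None)

    def noisy_sum(self, epsilon, clip=1.0):
        # Only participants with enough remaining budget take part;
        # those who do are charged by sequential composition.
        active = [p for p, b in self._budgets.items() if b >= epsilon]
        for p in active:
            self._budgets[p] -= epsilon
        total = sum(min(max(self._values[p], 0.0), clip) for p in active)
        # Clipping each value to [0, clip] bounds the sensitivity by `clip`,
        # so Laplace noise with scale clip/epsilon suffices.
        u = random.random() - 0.5
        noise = -(clip / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
        return total + noise
```

Note the utilisation benefit visible even in this sketch: a participant with a small remaining budget is silently excluded from an expensive query instead of forcing the whole analysis to use a weaker (noisier) epsilon.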