Publishing neural networks in drug discovery might compromise training data privacy
Journal article, 2025

This study investigates the risks of exposing confidential chemical structures when machine learning models trained on these structures are made publicly available. We use membership inference attacks, a common method to assess privacy that is largely unexplored in the context of drug discovery, to examine neural networks for molecular property prediction in a black-box setting. Our results reveal significant privacy risks across all evaluated datasets and neural network architectures. Combining multiple attacks increases these risks. Molecules from minority classes, often the most valuable in drug discovery, are particularly vulnerable. We also find that representing molecules as graphs and using message-passing neural networks may mitigate these risks. We provide a framework to assess privacy risks of classification models and molecular representations, available at https://github.com/FabianKruger/molprivacy. Our findings highlight the need for careful consideration when sharing neural networks trained on proprietary chemical structures, informing organisations and researchers about the trade-offs between data confidentiality and model openness.
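The attacks evaluated in the paper are implemented in the molprivacy repository linked below. As a rough illustration of the black-box setting described in the abstract, the sketch that follows implements one common membership inference baseline, a confidence-based attack, in which the adversary only observes the model's predicted probabilities and scores a molecule as a likely training-set member when the model is unusually confident about it. This is a generic illustration under assumed names (the model, feature matrices, and helper function are placeholders), not the specific attack suite of the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def confidence_membership_attack(model, members_X, nonmembers_X):
    """Score membership by the model's maximum predicted class probability.

    Hypothetical helper: `model` is any fitted classifier exposing
    predict_proba (standing in for a molecular property predictor queried
    as a black box); `members_X` / `nonmembers_X` are featurized molecules
    from the training set and from a disjoint holdout set, respectively.
    """
    # Black-box queries: only predicted probabilities are used.
    member_conf = model.predict_proba(members_X).max(axis=1)
    nonmember_conf = model.predict_proba(nonmembers_X).max(axis=1)

    # Higher confidence on training molecules suggests memorization.
    # An attack AUC near 0.5 means members are indistinguishable from
    # non-members; values well above 0.5 indicate a privacy leak.
    scores = np.concatenate([member_conf, nonmember_conf])
    labels = np.concatenate([np.ones_like(member_conf),
                             np.zeros_like(nonmember_conf)])
    return roc_auc_score(labels, scores)

# Example (hypothetical names): clf fitted on fingerprints of training molecules
# attack_auc = confidence_membership_attack(clf, train_fps, holdout_fps)
```

Stronger attacks combine several such signals (the abstract notes that combining attacks increases the measured risk), but the single-signal baseline above is enough to show how membership leakage is quantified as an AUC over member/non-member queries.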

Keywords

Membership inference attack

Machine learning

Cheminformatics

Drug discovery

Privacy

QSAR

Authors

Fabian P. Krueger

Helmholtz Association of German Research Centres

Technical University of Munich

AstraZeneca AB

Johan Ostman

AI Sweden

Lewis Mervin

AstraZeneca AB

Igor V. Tetko

Helmholtz Association of German Research Centres

Ola Engkvist

Chalmers University of Technology, Computer Science and Engineering, Data Science and AI

Journal of Cheminformatics

1758-2946 (ISSN); 1758-2946 (eISSN)

Vol. 17, Issue 1, Article no. 38

Subject Categories (SSIF 2025)

Computer Sciences

DOI

10.1186/s13321-025-00982-w

PubMed

40140934

Related datasets

molprivacy [dataset]

URI: https://github.com/FabianKruger/molprivacy

More information

Latest update

4/4/2025