Publishing neural networks in drug discovery might compromise training data privacy
Journal article, 2025

This study investigates the risks of exposing confidential chemical structures when machine learning models trained on these structures are made publicly available. We use membership inference attacks, a common method to assess privacy that is largely unexplored in the context of drug discovery, to examine neural networks for molecular property prediction in a black-box setting. Our results reveal significant privacy risks across all evaluated datasets and neural network architectures. Combining multiple attacks increases these risks. Molecules from minority classes, often the most valuable in drug discovery, are particularly vulnerable. We also found that representing molecules as graphs and using message-passing neural networks may mitigate these risks. We provide a framework to assess privacy risks of classification models and molecular representations, available at https://github.com/FabianKruger/molprivacy. Our findings highlight the need for careful consideration when sharing neural networks trained on proprietary chemical structures, informing organisations and researchers about the trade-offs between data confidentiality and model openness.
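Membership inference in a black-box setting asks whether a given record was in a model's training set, using only the model's outputs. One of the simplest such attacks is a confidence-thresholding baseline: records on which the model is unusually confident are flagged as likely training members. The sketch below is a generic illustration of this idea, not the specific attack implementations used in the paper or in the molprivacy framework; the function names and toy scores are illustrative.

```python
import numpy as np

def threshold_attack(confidences: np.ndarray, threshold: float) -> np.ndarray:
    """Flag a record as a training-set 'member' when the model's
    top-class confidence on it meets or exceeds the threshold."""
    return confidences >= threshold

def attack_accuracy(member_conf: np.ndarray,
                    nonmember_conf: np.ndarray,
                    threshold: float) -> float:
    """Balanced evaluation: accuracy of the attack over known
    members (should be flagged) and non-members (should not)."""
    true_positives = np.sum(threshold_attack(member_conf, threshold))
    true_negatives = np.sum(~threshold_attack(nonmember_conf, threshold))
    total = len(member_conf) + len(nonmember_conf)
    return float(true_positives + true_negatives) / total

# Toy scores: overfit models tend to be more confident on training data.
members = np.array([0.98, 0.95, 0.91])
nonmembers = np.array([0.55, 0.70, 0.62])
print(attack_accuracy(members, nonmembers, threshold=0.85))  # prints 1.0
```

An attack accuracy well above 0.5 (random guessing) on such held-out splits is the kind of signal that indicates a privacy leak; combining several attacks, as the abstract notes, can raise this further.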

Membership inference attack

Machine learning

Cheminformatics

Drug discovery

Privacy

QSAR

Authors

Fabian P. Krueger

Helmholtz-Gemeinschaft Deutscher Forschungszentren

Technische Universität München

AstraZeneca AB

Johan Ostman

AI Sweden

Lewis Mervin

AstraZeneca AB

Igor V. Tetko

Helmholtz-Gemeinschaft Deutscher Forschungszentren

Ola Engkvist

Chalmers, Computer Science and Engineering, Data Science and AI

Journal of Cheminformatics

1758-2946 (ISSN) 1758-2946 (eISSN)

Vol. 17, Issue 1, Article no. 38

Subject categories (SSIF 2025)

Computer Science

DOI

10.1186/s13321-025-00982-w

PubMed

40140934

Related datasets

molprivacy [dataset]

URI: https://github.com/FabianKruger/molprivacy

More information

Last updated

2025-04-04