Enhancing generalization and training efficiency of neural ordinary differential equations for chemical reactor modeling
Journal article, 2025
Digitalization in chemical engineering enhances data accessibility and allows advances in machine learning to be leveraged for predictive reactor models. Nevertheless, purely data-driven machine learning models lack fundamental physical and chemical principles, which limits their interpretability and generalization capabilities. Following the principles of scientific machine learning, integrating such fundamental constraints can overcome these limitations. In this study, we propose incorporating conservation laws as a soft constraint in the reactor model. These laws, which are not known a priori, are discovered automatically from data by assessing the null space of the dependent variables in the datasets, and are then integrated with neural ODEs, enhancing model generalization. The findings show that embedding physics regularization improves not only generalization, especially when data are scarce, but also robustness. Furthermore, we demonstrate that even when sampling fails to capture key dynamics, the model can still accurately predict the evolution of individual species concentrations, suggesting that the information loss resulting from inadequate sampling can be effectively mitigated by the proposed method. By introducing noise into the training and validation datasets, we show that the methodology remains robust and consistently outperforms the benchmarks across all noise levels studied. In data-rich scenarios, where generalization is less of a concern, pre-training the model reduces the total computational time by 68 percent. Thus, suitable modeling strategies can significantly improve the accuracy, robustness, and computational efficiency of reactor models.
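The null-space idea in the abstract can be illustrated with a minimal sketch: for a batch reactor, any linear conservation law w satisfies d/dt (C w) = 0, so w lies in the null space of the time derivatives of the concentration data, recoverable via SVD. The reaction A → B → C, the tolerances, and the penalty term below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Synthetic concentration trajectories for a hypothetical A -> B -> C
# reaction (unit rate constants): total moles c_A + c_B + c_C stay constant.
t = np.linspace(0.0, 5.0, 50)
c_A = np.exp(-t)
c_B = t * np.exp(-t)
c_C = 1.0 - c_A - c_B
C = np.stack([c_A, c_B, c_C], axis=1)        # shape (n_samples, n_species)

# A conservation law w obeys dC/dt @ w = 0, so w spans the (numerical)
# null space of the finite-difference time derivatives.
dC = np.gradient(C, t, axis=0)
_, s, Vt = np.linalg.svd(dC, full_matrices=False)
tol = 1e-8 * s.max()                         # illustrative threshold
W = Vt[s < tol]                              # rows = discovered laws

w = W[0] / np.linalg.norm(W[0])              # here w is proportional to [1, 1, 1]
invariant = C @ w                            # should be constant along the trajectory

# Soft-constraint (physics-regularization) term: penalize any drift of the
# discovered invariant; this would be added to the neural-ODE training loss.
penalty = np.mean((invariant - invariant.mean()) ** 2)
print(W.shape[0], penalty)
```

On noise-free data the smallest singular value collapses to machine precision and a single law is recovered; with noisy data the threshold `tol` becomes a modeling choice, consistent with the abstract's treatment of conservation laws as soft rather than hard constraints.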
Conservation laws
Pre-training
Neural ODE
Design of experiments
Physics-regularization