Surrogate Models of Hydrogen Oxidation Kinetics based on Deep Neural Networks


Abstract

The paper presents a data-driven surrogate model of the chemical kinetics of hydrogen oxidation in air based on recurrent and feed-forward neural networks. The work is aimed at the application of surrogate models in computational fluid dynamics (CFD) simulators, which are ubiquitous in the development and optimization of modern chemical technologies. The sensitivity of the results to the size of the training set and to the network parameters is analyzed. For a seven-component reaction mechanism under adiabatic conditions, a model trained on a sample of one million sets of initial conditions predicts the time dependence of the concentrations and temperature with a standard deviation below 2% over a 20-microsecond range. However, points with large deviations, reaching 10%, are also observed, mostly for minor components with low concentrations. The surrogate model is several times faster than direct numerical solution of the kinetic equations on the temporal grid. The computational performance depends strongly on the batch size and is sensitive to the hardware. The results demonstrate the significant potential of machine learning methods for modeling chemical transformations in computational fluid dynamics solvers. Further improvement in accuracy at similar computational performance can be expected from: (a) separate models for the short-time (strongly non-equilibrium) and long-time (closer-to-equilibrium) ranges; (b) repeated optimization of the network parameters after even minor modifications of the reaction mechanism; (c) more versatile approaches to enforcing the conservation laws; and (d) application of physics-informed machine learning (e.g., models with additional physical and chemical constraints such as mass conservation).
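The surrogate idea described above can be sketched in a minimal, illustrative form: a small feed-forward network maps the current thermochemical state (seven species mass fractions plus temperature) to the state one time step later, and the predicted mass fractions are projected back onto the unit simplex to enforce mass conservation. This is a toy sketch, not the authors' trained model: the network size, the random (untrained) weights, and the initial composition are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SPECIES = 7          # seven-component H2/air mechanism (assumed)
STATE = N_SPECIES + 1  # mass fractions + scaled temperature
HIDDEN = 32            # hypothetical hidden-layer width

# Random placeholder weights; a real surrogate would be trained on
# trajectories generated by a stiff kinetics solver.
W1 = rng.normal(0.0, 0.1, (STATE, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, STATE))
b2 = np.zeros(STATE)

def step(state):
    """One surrogate time step: MLP residual update + mass renormalization."""
    h = np.tanh(state @ W1 + b1)
    new = state + h @ W2 + b2              # residual update of the state
    y = np.clip(new[:N_SPECIES], 0.0, None)
    y = y / y.sum()                        # enforce sum of mass fractions = 1
    return np.concatenate([y, new[N_SPECIES:]])

def rollout(state0, n_steps):
    """Advance the surrogate over a temporal grid, as a CFD solver would."""
    traj = [state0]
    for _ in range(n_steps):
        traj.append(step(traj[-1]))
    return np.array(traj)

# Illustrative initial state: H2, O2, ..., N2 mass fractions and a scaled T.
y0 = np.array([0.028, 0.226, 0.0, 0.0, 0.0, 0.0, 0.746])
state0 = np.concatenate([y0, [1000.0 / 3000.0]])
traj = rollout(state0, 20)
```

The renormalization step is one simple way to comply with mass conservation; the physics-informed approaches mentioned in point (d) would instead build such constraints into the loss function or the network architecture.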

About the authors

I. Akeweje

Skolkovo Institute of Science and Technology

Email: easygear3428@gmail.com
Moscow, Russia

V. Vanovskiy

Skolkovo Institute of Science and Technology

Email: easygear3428@gmail.com
Moscow, Russia

A. Vishnyakov

Skolkovo Institute of Science and Technology

Corresponding author
Email: easygear3428@gmail.com
Moscow, Russia



Copyright © I. Akeweje, V. V. Vanovskiy, A. M. Vishnyakov, 2023
