To address the current lack of rigorous theoretical models of the machine learning process, this paper models the iterative motion of machine learning with a quantum dynamics method, starting from first principles. The approach treats the iterative evolution of an algorithm as a physical motion process: a generalized objective function is defined in the parameter space of the machine learning algorithm, and the iterative process is viewed as the search for the optimal value of this generalized objective function. In physical terms, this corresponds to the system reaching its ground state. Since the dynamical equation of a quantum system is the Schrödinger equation, treating the generalized objective function as the potential-energy term of the Schrödinger equation yields the quantum dynamic equation that describes the iterative process of machine learning; machine learning thus becomes the search for the ground state of a quantum system constrained by the generalized objective function. The quantum dynamic equation casts the iterative process as a time-dependent partial differential equation, giving it a precise mathematical representation and allowing the iterative process to be studied with physical and mathematical theories. The equation also indicates that the machine learning process may contain annealing processes on multiple scales as well as a time-evolution process within a single scale, and it provides theoretical support for implementing the iterative process of machine learning on quantum computers. To further apply the quantum dynamic equation to the iterative process of machine learning on classical computers, a Wick rotation is used to convert it into a thermodynamic equation, which demonstrates the convergence of the time-evolution process: as time approaches infinity, the system converges to the ground state. Because no analytical expression is available for the generalized objective function in the parameter space, it is approximated by a Taylor expansion. Under the zero-order Taylor approximation, the quantum dynamic equation and the thermodynamic equation of machine learning reduce to the free-particle equation and the diffusion equation, respectively. This result indicates that the most basic dynamic processes in the iteration of machine learning on quantum and classical computers are wave-packet dispersion and diffusion, respectively, and it explains, from a dynamical perspective, the basic principle of the diffusion models that have been applied so successfully to generative neural networks in recent years: diffusion models indirectly realize a thermal diffusion process in the parameter space by adding Gaussian noise to images and then removing it, thereby optimizing the generalized objective function; the diffusion process is the dynamic process under the zero-order approximation of the generalized objective function. Using the thermodynamic equation of machine learning, we also derive the Softmax and Sigmoid functions commonly used in artificial intelligence. These results show that the quantum dynamics method is an effective theoretical approach to studying the iterative process of machine learning, and that it provides rigorous mathematical and physical models for studying this process on both quantum and classical computers.
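To make the construction described above concrete, the following is a minimal sketch of the equations involved; the notation (wave function ψ(w, t) over the parameter vector w, generalized objective function f(w), effective mass m, Planck constant ħ, and diffusion coefficient D) is ours and is not taken from the paper itself. Treating f(w) as the potential-energy term of the Schrödinger equation gives a quantum dynamic equation of the form

\[ \mathrm{i}\hbar\,\frac{\partial \psi(\mathbf{w},t)}{\partial t} = \left[-\frac{\hbar^{2}}{2m}\,\nabla_{\mathbf{w}}^{2} + f(\mathbf{w})\right]\psi(\mathbf{w},t). \]

Applying the Wick rotation t → −iτ turns this into an imaginary-time (thermodynamic) equation,

\[ \frac{\partial \psi(\mathbf{w},\tau)}{\partial \tau} = \left[\frac{\hbar}{2m}\,\nabla_{\mathbf{w}}^{2} - \frac{f(\mathbf{w})}{\hbar}\right]\psi(\mathbf{w},\tau), \]

whose solution is dominated by the ground state as τ → ∞. Under the zero-order Taylor approximation f(w) ≈ f(w₀), the constant potential only rescales the solution, and the two equations reduce to the free-particle equation and the diffusion equation

\[ \frac{\partial \psi(\mathbf{w},\tau)}{\partial \tau} = D\,\nabla_{\mathbf{w}}^{2}\,\psi(\mathbf{w},\tau), \qquad D = \frac{\hbar}{2m}, \]

which is the sense in which wave-packet dispersion and diffusion are the most basic dynamic processes of the iteration on quantum and classical computers, respectively.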
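The derivation of the Softmax and Sigmoid functions mentioned above can be sketched in a similarly hedged way, assuming (our assumption, not a statement of the paper's actual derivation) that the stationary solution of the thermodynamic equation is a Boltzmann distribution over a discrete set of parameter values w_k with an effective temperature T:

\[ p(\mathbf{w}_{k}) = \frac{\mathrm{e}^{-f(\mathbf{w}_{k})/T}}{\sum_{j}\mathrm{e}^{-f(\mathbf{w}_{j})/T}}, \]

which is the Softmax function applied to the scores −f(w_k)/T; restricting it to two states gives the Sigmoid form

\[ p(\mathbf{w}_{1}) = \frac{1}{1 + \mathrm{e}^{-\left(f(\mathbf{w}_{2})-f(\mathbf{w}_{1})\right)/T}}. \]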
Keywords:
- quantum dynamics
- machine learning
- diffusion model
- Schrödinger equation