Machine learning empowered electronic structure calculations: Progress, challenges, and prospects

CSTR: 32037.14.aps.75.20251253
    Density functional theory (DFT) dominates contemporary electronic structure calculations, yet its computational complexity grows cubically with system size, which restricts its application to complex systems and high-accuracy calculations. In recent years, the combination of machine learning with first-principles calculations has offered a new route around this problem. This paper reviews machine-learning methods for accelerating electronic structure calculations, focusing on the important progress that existing studies have achieved in accelerating the electronic structure calculations of materials. In addition, we give an outlook on future research that uses machine-learning techniques to further overcome the accuracy and efficiency bottlenecks of electronic structure calculations, extend their range of applicability, and realize the deep integration of computational simulation and experimental measurement in large-scale material systems.

     

    Density functional theory (DFT) serves as the primary method for calculating electronic structures in physics, chemistry, and materials science. However, its practical application is fundamentally limited by a computational cost that scales cubically with system size, making high-precision studies of complex or large-scale materials prohibitively expensive. This review addresses this key challenge by examining the rapidly evolving paradigm of integrating machine learning (ML) with first-principles calculations to significantly accelerate and expand electronic structure prediction. Our primary objective is to provide a comprehensive and critical overview of the methodological advances, physical outcomes, and transformative potential of this interdisciplinary field.
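    As a rough, self-contained illustration of this bottleneck (not taken from any specific DFT code), the Python sketch below times dense Hermitian diagonalization, the eigensolver step that dominates conventional DFT, on random stand-in matrices; the wall time grows roughly cubically with matrix dimension.

        # Rough illustration of the O(N^3) cost of dense eigendecomposition,
        # the step that dominates conventional DFT self-consistency.
        # Random Hermitian matrices stand in for Kohn-Sham Hamiltonians.
        import time
        import numpy as np

        rng = np.random.default_rng(0)
        for n in (500, 1000, 2000, 4000):
            a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
            h = (a + a.conj().T) / 2          # symmetrize to make it Hermitian
            t0 = time.perf_counter()
            np.linalg.eigvalsh(h)             # eigenvalues only; cost scales as n^3
            print(f"n = {n:5d}: {time.perf_counter() - t0:.2f} s")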
    The core methodological progress involves a shift from black-box property predictors to symmetry-preserving, transferable models that learn the fundamental Hamiltonian, the central quantity from which diverse electronic properties are derived. We detail this evolution, beginning with pioneering applications in molecular systems that use graph neural networks (e.g., SchNOrb, DimeNet) to predict energies, wavefunctions, and Hamiltonian matrices with meV-level accuracy. The review then focuses on the critical extension to periodic solids, where maintaining symmetries such as E(3) equivariance and handling vast configurational spaces are of utmost importance. We systematically analyze three leading model families that define the state of the art: the DeepH series, which uses local-coordinate message passing and E(3)-equivariant networks to achieve sub-meV accuracy and linear scaling; the HamGNN framework, built on rigorous equivariant tensor decomposition, which excels at modeling systems with spin-orbit coupling and charged defects; and the DeePTB approach, which leverages deep learning for tight-binding Hamiltonian parameterization, enabling quantum-accurate simulations of millions of atoms.
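    To make concrete why the Hamiltonian is described here as the central quantity, the minimal sketch below shows how band energies follow from a predicted Hamiltonian by solving the generalized eigenvalue problem H(k)c = E S(k)c; the matrices are fabricated placeholders rather than the actual output of DeepH, HamGNN, or DeePTB, whose real interfaces differ.

        # Minimal sketch: once H(k) (and, for a non-orthogonal basis, the
        # overlap S(k)) is known, band energies follow from a generalized
        # eigenvalue problem. The matrices below are fabricated stand-ins
        # for the output of an ML Hamiltonian model.
        import numpy as np
        from scipy.linalg import eigh

        def band_energies(hk: np.ndarray, sk: np.ndarray) -> np.ndarray:
            """Solve H(k) c = E S(k) c and return eigenvalues in ascending order."""
            return eigh(hk, sk, eigvals_only=True)

        rng = np.random.default_rng(1)
        n_orb = 8                                # number of basis orbitals
        a = rng.standard_normal((n_orb, n_orb)) + 1j * rng.standard_normal((n_orb, n_orb))
        predicted_hk = (a + a.conj().T) / 2      # stand-in for an ML-predicted H(k)
        predicted_sk = np.eye(n_orb)             # orthonormal basis for simplicity

        print(band_energies(predicted_hk, predicted_sk))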
    These methods yield significant physical results and computational breakthroughs. Key outcomes include: 1) Unprecedented accuracy and speed. Models consistently achieve Hamiltonian-prediction mean absolute errors (MAEs) below 1 meV (e.g., DeepH-E3: ~0.4 meV in graphene; HamGNN: ~1.5 meV in QM9 molecules), along with computational speedups of 3 to 5 orders of magnitude compared with traditional DFT. 2) Scale bridging. Successful applications now range from small molecules to defect-containing supercells with over 10000 atoms (e.g., HamGNN-Q applied to a 13824-atom GaAs defect supercell) and even to millions of atoms for optoelectronic property simulations (DeePTB). 3) Expanded application scope. The review highlights how these ML-accelerated tools are revolutionizing research in previously intractable areas: predicting spectroscopic properties of molecules (e.g., DetaNet for NMR/UV-Vis spectra), elucidating the electronic structures of topological materials and magnetic moiré systems, computing electron-phonon coupling and carrier mobility with DFT-level accuracy but far greater efficiency (the HamEPC framework), and enabling high-throughput screening for materials design.
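    The sub-meV accuracy figures quoted above refer to the element-wise mean absolute error between predicted and DFT Hamiltonian matrix elements; the short sketch below shows one way such a metric can be computed, using fabricated arrays in place of real model and DFT output.

        # Minimal sketch of the accuracy metric quoted above: the element-wise
        # mean absolute error (MAE) between ML-predicted and DFT Hamiltonian
        # matrix elements, reported in meV. The arrays are fabricated
        # placeholders, not output from an actual model or DFT code.
        import numpy as np

        def hamiltonian_mae_mev(h_pred: np.ndarray, h_dft: np.ndarray) -> float:
            """Element-wise MAE between two Hamiltonians given in eV, returned in meV."""
            return float(np.mean(np.abs(h_pred - h_dft)) * 1000.0)

        rng = np.random.default_rng(2)
        h_dft = rng.standard_normal((16, 16))
        h_pred = h_dft + rng.normal(scale=4e-4, size=h_dft.shape)   # noise on the 0.4 meV scale

        print(f"MAE = {hamiltonian_mae_mev(h_pred, h_dft):.2f} meV")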
    In conclusion, ML-accelerated electronic structure calculation has matured into a powerful paradigm, transitioning from proof-of-concept demonstrations to tools capable of delivering DFT-fidelity results at dramatically reduced cost for systems of realistic scale and complexity. However, challenges remain, including model interpretability (the “black-box” nature of neural networks), transferability to unseen elements, and seamless integration with existing plane-wave DFT databases. Future directions include physics-constrained unsupervised learning (e.g., DeepH-zero), more universal and element-agnostic architectures, and closed-loop, artificial intelligence (AI)-driven discovery pipelines. By overcoming these limitations, ML-accelerated methods have the potential to fundamentally change materials research, accelerating the path from atomistic simulation to rational materials design and discovery.

     
