Applications of machine learning in fission barrier height and ground state binding energy

ZHANG Xuzhe, LI Jiaxing, CHEN Wanling, ZHANG Hongfei
The fission barrier heights and ground-state binding energies of superheavy nuclei are key quantities governing the survival probability in fusion reactions, and their accuracy is the main source of uncertainty in survival-probability calculations. This work uses machine learning, specifically transfer learning with neural networks, to improve the predictions of these two quantities. Transfer learning for neural networks involves two stages, pre-training and fine-tuning, which use a pre-training dataset and a target dataset, respectively. In this work, the pre-training data are split into 60% for training and 40% for validation, while the target data are partitioned into 20% for testing, with the remaining 80% further divided into 60% for training and 40% for validation. To construct the neural-network model, we adopt the proton number Z and mass number A as the input layer, employ two hidden layers, each containing 128 neurons with rectified linear unit (ReLU) activation, and set the learning rate to 0.001. For the fission-barrier-height model, the pre-training dataset is either the FRLDM or the WS4 model data, with the experimental measurements serving as the target set. For the ground-state binding-energy model, we first calculate the residuals between the WS4 predictions and the AME2020 evaluation, then divide these residuals into a light-nucleus subset and a heavy-nucleus subset according to proton number; the light-nucleus subset is used for pre-training and the heavy-nucleus subset for fine-tuning. After optimization, the root-mean-square error (RMSE) of the FRLDM barrier model decreases from 1.03 MeV to 0.60 MeV, and that of the WS4 barrier model drops from 0.97 MeV to 0.61 MeV. For the binding-energy model, the RMSE decreases from 0.33 MeV to 0.17 MeV on the test set and from 0.29 MeV to 0.26 MeV on the full dataset. We also present the performance of the fission-barrier models before and after refinement, together with the predicted barrier heights along the isotopic chains of the new elements Z = 119 and Z = 120, and analyze the reasons for the differences among the results obtained by different models. We hope that these results will serve as a useful reference for future theoretical studies. The datasets in this paper are openly available at https://www.doi.org/10.57760/sciencedb.28388.
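A minimal sketch of the two-stage transfer-learning setup described above, assuming a PyTorch implementation: the (Z, A) input layer, the two hidden layers of 128 ReLU neurons, and the 0.001 learning rate follow the abstract, while the optimizer choice, epoch count, and placeholder data are illustrative assumptions rather than the authors' actual code.

```python
import torch
import torch.nn as nn

class BarrierNet(nn.Module):
    """Feed-forward network mapping (Z, A) to a fission barrier height (MeV)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),    # hidden layer 1: 128 neurons, ReLU
            nn.Linear(128, 128), nn.ReLU(),  # hidden layer 2: 128 neurons, ReLU
            nn.Linear(128, 1),               # output: barrier height B_f
        )

    def forward(self, x):
        return self.net(x)

def train_stage(model, x, y, epochs=2000, lr=1e-3):
    """One training stage; called once for pre-training and once for fine-tuning."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model

# Placeholder (Z, A) inputs and barrier heights (MeV); in the paper the pre-training
# set holds FRLDM or WS4 values and the target set holds experimental barriers.
x_pre = torch.tensor([[92., 238.], [94., 240.]])
y_pre = torch.tensor([[6.0], [6.1]])
x_exp = torch.tensor([[92., 239.], [95., 241.]])
y_exp = torch.tensor([[6.00], [5.35]])

model = BarrierNet()
train_stage(model, x_pre, y_pre)  # stage 1: pre-training on theoretical barriers
train_stage(model, x_exp, y_exp)  # stage 2: fine-tuning on experimental barriers
```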
Fig. 1.  Comparison of experimental and theoretical estimates of the fission barrier height Bf in the test set. Blue circles represent the experimental results, red squares the results of FRLDM or WS4, and purple triangles the results of the neural-network models pre-trained on the corresponding theoretical datasets (FRLDM or WS4).

Fig. 2.  Root-mean-square error (RMSE) of the FRLDM and WS4 test data before and after optimization.
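The RMSE compared in Fig. 2 (and in Fig. 4 below) follows the standard definition. A short sketch of the computation, using two test-set entries from Table 1 as an example:

```python
# Root-mean-square error used to compare predicted and experimental barrier heights.
# The two example entries are taken from Table 1 (239U and 237Np, FRLDM vs. experiment);
# the evaluation in the paper runs over the whole test set.
import numpy as np

def rmse(pred, exp):
    """Root-mean-square error (MeV) between predicted and experimental values."""
    pred, exp = np.asarray(pred, dtype=float), np.asarray(exp, dtype=float)
    return np.sqrt(np.mean((pred - exp) ** 2))

print(rmse([6.21, 4.94], [6.00, 5.40]))  # FRLDM barriers vs. experimental barriers (MeV)
```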

Fig. 3.  Contour plots of the fission barrier height Bf estimated by FRLDM, ANNF, WS4, and ANNW over the pre-training datasets.

Fig. 4.  Root-mean-square error of the WS4 model on the test set and on the full dataset before and after optimization.
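The binding-energy refinement behind Fig. 4 is the residual scheme described in the abstract: the network is trained on the differences between the AME2020 evaluation and the WS4 predictions, the residuals are split by proton number into a light subset (pre-training) and a heavy subset (fine-tuning), and the predicted residual is added back to the WS4 value. A minimal sketch, assuming NumPy and an illustrative proton-number cut (the actual boundary used in the paper is not given here):

```python
# Residual scheme for binding energies: residual = AME2020 - WS4, split by proton number
# into light (pre-training) and heavy (fine-tuning) subsets; the refined binding energy
# adds the network-predicted residual back to the WS4 value.
import numpy as np

Z_SPLIT = 82  # assumed proton-number boundary between the light and heavy subsets

def split_residuals(Z, e_ame2020, e_ws4):
    """Return (light, heavy) residual arrays in MeV."""
    Z = np.asarray(Z)
    residual = np.asarray(e_ame2020, dtype=float) - np.asarray(e_ws4, dtype=float)
    return residual[Z < Z_SPLIT], residual[Z >= Z_SPLIT]

def refined_binding_energy(e_ws4, predicted_residual):
    """WS4 binding energy corrected by the residual predicted by the fine-tuned network."""
    return e_ws4 + predicted_residual
```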

Table 1.  Comparison of fission barrier heights (in MeV) for the test-set nuclei with other theoretical evaluations and experimental data [33]. ANNF and ANNW denote the neural-network models pre-trained on the FRLDM and WS4 datasets, respectively.

Nuclide   Z    A     Exp.    FRLDM   ANNF    WS4     ANNW
207Po     84   207   19.3    20.0    19.5    18.47   19.24
208Po     84   208   19.9    20.8    20.5    19.92   20.36
212Po     84   212   19.6    20.2    20.7    20.95   20.90
227Ac     89   227   7.40    6.98    8.37    7.16    7.44
233Th     90   233   6.65    5.47    6.24    6.95    6.96
239U      92   239   6.00    6.21    5.94    5.50    6.17
236Np     93   236   5.40    4.81    4.88    4.98    5.34
237Np     93   237   5.40    4.94    5.03    4.95    5.43
242Pu     94   242   5.05    6.41    5.88    6.19    5.56
243Pu     94   243   5.45    6.66    5.45    6.59    5.66
240Am     95   240   6.00    6.12    5.23    4.72    4.75
241Am     95   241   5.35    6.34    5.44    5.08    4.96
243Am     95   243   5.05    6.80    5.86    5.97    5.30
241Cm     96   241   5.50    6.32    5.23    4.68    4.54
248Cm     96   248   4.80    6.80    5.27    7.03    5.49

Table 2.  Fission barrier heights (in MeV) and binding energies (in MeV) predicted by the neural-network models for nuclides with Z = 119, A = 293–299 and Z = 120, A = 294–302.

Z     A     ANNF    ANNW    ANNBinding
119   293   9.52    5.94    2060.78
119   294   9.91    6.00    2066.96
119   295   10.17   5.94    2074.71
119   296   10.41   5.87    2081.35
119   297   10.66   5.80    2088.92
119   298   10.91   5.73    2094.95
119   299   11.16   5.66    2102.11
120   294   9.35    5.98    2060.90
120   295   9.75    6.12    2067.63
120   296   10.14   6.23    2075.76
120   297   10.43   6.17    2082.32
120   298   10.68   6.10    2090.27
120   299   10.92   6.03    2096.30
120   300   11.17   5.96    2103.82
120   301   11.42   5.90    2109.78
120   302   11.66   5.83    2117.13
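Predictions such as those in Table 2 amount to evaluating the fine-tuned networks on (Z, A) grids along the Z = 119 and Z = 120 isotopic chains. A brief inference sketch, assuming a trained model of the form sketched after the abstract (not the authors' code):

```python
# Evaluate a trained network on (Z, A) pairs along one isotopic chain.
# `model` is assumed to be a fine-tuned BarrierNet instance; it is not defined here.
import torch

def predict_chain(model, Z, A_values):
    """Return the network outputs (MeV) for one isotopic chain of element Z."""
    x = torch.tensor([[float(Z), float(A)] for A in A_values])
    with torch.no_grad():
        return model(x).squeeze(1).tolist()

# Example usage (requires a trained model):
# barriers_119 = predict_chain(model, 119, range(293, 300))  # Z = 119, A = 293-299
# barriers_120 = predict_chain(model, 120, range(294, 303))  # Z = 120, A = 294-302
```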
References

[1] Schmidt K H, Jurado B 2018 Rep. Prog. Phys. 81 106301
[2] Audi G, Bersillon O, Blachot J, Wapstra A H 2003 Nucl. Phys. A 729 3
[3] Audi G, Wapstra A H, Thibault C 2003 Nucl. Phys. A 729 337
[4] Wang N, Liu M 2024 Chin. Phys. C 48 094103
[5] Bayram T, Akkoyun S, Kara S O 2014 Ann. Nucl. Energy 63 172
[6] Oganessian Y T, Utyonkov V K, Lobanov Y V, Abdullin F S, Polyakov A N, Sagaidak R N, Shirokovsky I V, Tsyganov Y S, Voinov A A, Gulbekian G G, Bogomolov S L, Gikal B N, Mezentsev A N, Iliev S, Subbotin V G, Sukhov A M, Subotic K, Zagrebaev V I, Vostokin G K, Itkis M G, Moody K J, Patin J B, Shaughnessy D A, Stoyer M A, Stoyer N J, Wilk P A, Kenneally J M, Landrum J H, Wild J F, Lougheed R W 2006 Phys. Rev. C 74 044602
[7] Oganessian Y T, Utyonkov V K, Lobanov Y V, Abdullin F S, Polyakov A N, Shirokovsky I V, Tsyganov Y S, Gulbekian G G, Bogomolov S L, Gikal B N, Mezenstev A N, Iliev S, Subbotin V G, Sukhov A M, Voinov A A, Buklanov G V, Subotic K, Zagrebaev V I, Itkis M G, Patin J B, Moody K J, Wild J F, Stoyer M A, Stoyer N J, Shaughnessy D A, Kenneally J M, Wilk P A, Lougheed R W, Il’kaev R I, Vesnovskii S P 2004 Phys. Rev. C 70 064609
[8] Rana S, Kumar R, Bhuyan M 2021 Phys. Rev. C 104 024619
[9] Jadambaa K 2017 EPJ Web Conf. 163 00030
[10] Schädel M 2015 Philos. Trans. R. Soc. A 373 2037
[11] Li J X, Zhang H F 2022 Phys. Rev. C 105 054606
[12] Feng Z Q, Jin G M, Fu F, Li J Q 2006 Nucl. Phys. A 771 50
[13] Zhu L, Xie W J, Zhang F S 2014 Phys. Rev. C 89 024615
[14] Qiang Y, Deng X Q, Shi Y, Qiao C Y, Pei J C 2024 Phys. Lett. B 858 139057
[15] Hofmann S, Münzenberg G 2000 Rev. Mod. Phys. 72 733
[16] Lunney D, Pearson J M, Thibault C 2003 Rev. Mod. Phys. 75 1021
[17] Ma N N, Zhang H F, Bao X J, Zhang H F 2019 Chin. Phys. C 43 044105
[18] Zhang H F, Wang L H, Yin J P, Chen P H, Zhang H F 2017 J. Phys. G: Nucl. Part. Phys. 44 045110
[19] Naqa I E, Murphy M J 2015 What Is Machine Learning? (Germany: Springer Nature) pp21–37
[20] Goh A T C 1995 Artif. Intell. Eng. 9 143
[21] Bayram T, Akkoyun S, Kara S O 2014 J. Phys.: Conf. Ser. 490 012105
[22] Akkoyun S, Bayram T, Turker T 2014 Radiat. Phys. Chem. 96 186
[23] Athanassopoulos S, Mavrommatis E, Gernoth K A, Clark J W 2004 Nucl. Phys. A 743 222
[24] Wen H F, Shang T S, Li J, Niu Z M, Yang D, Xue Y H, Li X, Huang X L 2023 Acta Phys. Sin. 72 152101
[25] Zhang W J, Zhang Z Y, Hu J F, Lu B N, Pang J Y, Wang Q 2025 Chin. Phys. Lett. 42 070202
[26] Li J X, Zhang H F 2024 Phys. Rev. C 110 034608
[27] Graczyk K M, Kowal B E, Ankowski A M, Banerjee R D, Bonilla J L, Prasad H, Sobczyk J T 2025 Phys. Rev. Lett. 135 052502
[28] Yuan L, Li J X, Zhang H F 2024 Chin. Phys. C 48 064106
[29] Möller P, Sierk A J, Ichikawa T, Iwamoto A, Bengtsson R, Uhrenholt H, Åberg S 2009 Phys. Rev. C 79 064304
[30] Wang N, Liu M 2024 Chin. Phys. C 48 094103
[31] Wang M, Huang W J, Kondev F G, Audi G, Naimi S 2021 Chin. Phys. C 45 030003
[32] Wang N, Liu M, Wu X Z, Meng J 2014 Phys. Lett. B 734 215
[33] Capote R, Herman M, Obložinský P, Young P G, Goriely S, Belgya T, Ignatyuk A V, Koning A J, Hilaire S, Plujko V A, Avrigeanu M, Bersillon O, Chadwick M B, Fukahori T, Ge Zhigang, Han Yinlu, Kailas S, Kopecky J, Maslov V M, Reffo G, Sin M, Soukhovitskii E Sh, Talou P 2009 Nucl. Data Sheets 110 3107

Publication history
  • Received: 2025-09-17
  • Revised: 2025-10-16
  • Published online: 2025-11-13
