Average energy data of β decay nuclei based on neural networks

CSTR: 32037.14.aps.74.20250655
    The average energies of the β particles and γ rays released in nuclear β decay are core parameters for calculating reactor decay heat and are essential for the safety of nuclear facilities and for engineering applications. However, experimental data are lacking for many nuclides, and the accuracy of existing theoretical models does not meet the requirements. Based on 543 β-decay nuclides with accurate experimental data in the ENSDF database (selected from 1136 β-decay nuclides), this work uses a neural network method to predict the average energies of the β particles, γ rays, and neutrinos emitted in decay, and compares the performance of models trained with three feature sets (containing the special feature values $T_{1/2}$, $(1/T_{1/2})^{1/5}$, and $Q/3$, respectively). The results show that, compared with the models whose feature sets contain $T_{1/2}$ or $(1/T_{1/2})^{1/5}$, the model whose feature set contains $Q/3$ performs best overall, with prediction errors for β particles and neutrinos of 28.11%/56.9% and 35.33%/37.76% (training/validation), respectively; the machine learning model trained with this feature set is then used to supplement the missing data of 291 nuclides in the fission product region (mass numbers 66–172). Comparison on the nuclide chart shows that the neural network predictions agree well with experiment for the β-particle and neutrino energies, which exhibit relatively strong regularity, but deviate significantly for γ rays (training error 76.9%) and for odd-odd nuclides and nuclides near magic numbers. This work confirms that the empirical feature value $Q/3$ can effectively improve model performance, reveals the connection between data regularity and model generalization capability, and provides a basis for subsequently incorporating physical mechanisms to optimize machine learning models.

     

    The average β and γ energies of β-decay nuclei play an important role in many fields of nuclear technology and scientific research, such as decay heat and antineutrino spectrum calculations for different types of reactors. However, reliable experimental measurements of the average energies are lacking for many nuclei, and the accuracy of theoretical calculations needs to be improved to meet the requirements of technical applications.
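    As a rough illustration of where these average energies enter, a summation-type decay-heat estimate adds up, over the nuclide inventory, the activity times the mean energy deposited per decay (β plus γ; the neutrino energy escapes). The sketch below is not taken from the paper; the nuclide names, activities, and energies are placeholder values.

        # Illustrative only: how the average β and γ energies enter a summation-type
        # decay-heat estimate. Activities and energies are placeholder values.
        MEV_TO_J = 1.602176634e-13  # joules per MeV

        inventory = {
            # nuclide: (activity in Bq, average β energy in MeV, average γ energy in MeV)
            "nuclide_A": (1.0e12, 0.20, 0.05),
            "nuclide_B": (5.0e11, 0.90, 0.00),
        }

        def decay_heat_watts(inv: dict) -> float:
            """Summation method: P = sum_i A_i * (E_beta_i + E_gamma_i)."""
            return sum(act * (eb + eg) * MEV_TO_J for act, eb, eg in inv.values())

        print(f"decay heat = {decay_heat_watts(inventory):.3e} W")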
    In this study, the average β, γ, and neutrino energies of β-decay nuclei are investigated with the neural network method, based on newly evaluated experimental data for 543 nuclei selected from a total of 1136 β-decay nuclei. In the neural network approach, three different feature sets are used for model training. Each feature set contains one characteristic feature value (one of $T_{1/2}$, $(1/T_{1/2})^{1/5}$, and $Q/3$), together with five identical feature values ($Z$, $N$, the parity of $Z$, the parity of $N$, and $\Delta Z$).
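    The abstract does not specify the network architecture or training setup; the following is only a minimal sketch of how such a six-feature regression could be arranged for the $Q/3$ variant. The DataFrame column names, the interpretation of $\Delta Z$, the network size, and the train/validation split are assumptions.

        # Minimal sketch of the Q/3 feature-set variant (assumed setup, not the paper's).
        import numpy as np
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def build_features(df: pd.DataFrame) -> np.ndarray:
            """Six inputs per nucleus: Z, N, parity of Z, parity of N, dZ, and Q/3."""
            return np.column_stack([
                df["Z"], df["N"],
                df["Z"] % 2, df["N"] % 2,
                df["dZ"],            # assumed column holding the Delta-Z feature
                df["Q_MeV"] / 3.0,   # empirical Q/3 estimate of the average energy
            ])

        # df: assumed table of the 543 evaluated nuclei; "E_beta_avg" is an assumed target.
        X, y = build_features(df), df["E_beta_avg"].to_numpy()
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32),
                                           max_iter=5000, random_state=0))
        model.fit(X_tr, y_tr)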
    The three characteristic feature values are selected on the basis of the physical considerations below. 1) The average energy is clearly related to the $Q$ value and is commonly approximated as $Q/3$ in the reactor industry; therefore $Q/3$ is chosen as one feature value. 2) The half-life is related to the $Q$ value of β decay, so $T_{1/2}$ is considered. 3) According to Sargent's law, $(1/T_{1/2})^{1/5} \propto Q$, so the quantity $(1/T_{1/2})^{1/5}$, which scales approximately linearly with $Q$, is selected as a more suitable feature.
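    The scaling in point 3 follows from the standard form of Sargent's law, which states that for allowed transitions with large decay energy the decay constant grows as the fifth power of the maximum β energy (here approximated by $Q$):

        \lambda = \frac{\ln 2}{T_{1/2}} \propto Q^{5}
        \quad\Longrightarrow\quad
        \left(\frac{1}{T_{1/2}}\right)^{1/5} \propto Q .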
    As a result, for the $T_{1/2}$ feature set, the training results for all three average energies are unsatisfactory. For the other two sets, the relative errors of the average β energy are 19.32% and 28.11% in the training set for the $(1/T_{1/2})^{1/5}$ and $Q/3$ feature sets, respectively, and 82% and 56.9% in the validation set. The relative errors of the average γ energy are 28.9% and 76.9% in the training set for the $(1/T_{1/2})^{1/5}$ and $Q/3$ feature sets, respectively, and both exceed 100% in the validation set. For the average neutrino energy, the relative errors in the training set are 27.82% and 35.33% for the $(1/T_{1/2})^{1/5}$ and $Q/3$ feature sets, and 76.32% and 37.76% in the validation set, respectively.
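    The exact error definition is not quoted in this excerpt; a common choice, assumed in the continuation of the sketch above, is the mean absolute relative deviation in percent, evaluated separately on the training and validation splits.

        # Assumed error metric: mean absolute relative deviation in percent.
        import numpy as np

        def mean_relative_error_pct(y_true: np.ndarray, y_pred: np.ndarray) -> float:
            return 100.0 * float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)))

        train_err = mean_relative_error_pct(y_tr, model.predict(X_tr))
        valid_err = mean_relative_error_pct(y_va, model.predict(X_va))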
    Considering the accuracy comparison of the three feature sets, the $Q/3$ feature set is chosen to predict the average energy data of nuclei in the fission product region (mass numbers from 66 to 172), which lacks reliable experimental data. As a result, predicted average energies are supplemented for 291 nuclei. In addition, the calculated data are compared with the evaluated experimental data on the nuclide chart. It is found that the neural network accurately reproduces the experimental average β and neutrino energies, which exhibit relatively strong regularity. However, it shows significant deviations for the average γ energy (relative error of 76.9% in the training set), and large deviations also appear for odd-odd nuclei and nuclei near magic numbers. This study confirms that integrating empirical relationships and physical principles can effectively improve the performance of the neural network, and it reveals the relationship between data regularity and model generalization capability. These findings provide a basis for using physical mechanisms to optimize machine learning models in the future.
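    Continuing the earlier sketch, one way such a nuclide-chart comparison and fission-product extrapolation could be drawn is outlined below; the table df_all, its columns, and the plot styling are assumptions, while model and build_features come from the sketch above.

        # Fill in missing nuclei in the fission product region (A = 66-172) and draw an
        # N-vs-Z chart of deviations; df_all is an assumed table with columns Z, N, dZ,
        # Q_MeV, and E_beta_avg (NaN where no evaluated experimental value exists).
        import matplotlib.pyplot as plt

        A = df_all["Z"] + df_all["N"]
        missing = df_all["E_beta_avg"].isna() & A.between(66, 172)
        df_all.loc[missing, "E_beta_avg_pred"] = model.predict(build_features(df_all[missing]))

        measured = df_all.dropna(subset=["E_beta_avg"])
        deviation_pct = 100.0 * (model.predict(build_features(measured))
                                 - measured["E_beta_avg"]) / measured["E_beta_avg"]

        plt.scatter(measured["N"], measured["Z"], c=deviation_pct,
                    cmap="coolwarm", vmin=-50, vmax=50, s=12)
        plt.colorbar(label="relative deviation of predicted average beta energy (%)")
        plt.xlabel("N")
        plt.ylabel("Z")
        plt.show()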

     
