The inevitable distortions in optical coherence tomography (OCT) imaging often lead to mismatches between the imaging space and the real physical space, significantly affecting measurement accuracy. To address this issue, this study proposes a machine learning-based OCT image distortion correction method. A calibration plate with a uniformly distributed array of circular holes is sequentially imaged at different marked planes. The marker point whose coordinates deviate least from its mean coordinates across all imaging planes is selected as the reference marker. A mathematical model is then used to reconstruct the coordinates of all marker points in the reference plane, establishing a mapping between the imaging space of the calibration plate and the real physical space. A multilayer perceptron (MLP) is employed to learn this mapping. The network consists of multiple fully-connected modules, each comprising a linear layer followed by an activation function, except for the output layer, which omits the activation. The optimal model is selected on the basis of validation-set performance and is then used to analyze the spatial distribution of points. Using a swept-source OCT system, lens images are acquired and corrected with the trained model to obtain the anterior-surface point cloud. Combined with ray-tracing reconstruction of the posterior surface, the radius of curvature and the central thickness of the lens are calculated. The experimental results show that, after correction, the radius of curvature of the lens is measured with an accuracy of 10 μm (error < 1%), while the central thickness is determined with an accuracy of 3 μm (relative error: 0.3%). The proposed method demonstrates high accuracy and reliability, providing an effective solution for improving OCT measurement accuracy.
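One way to read the reference-marker selection step is sketched below in NumPy. The array layout and the interpretation of "deviation" (the distance of each marker from its own mean position across the imaging planes, summed over planes) are assumptions for illustration; the abstract does not fix these details.

```python
# Sketch of reference-marker selection, under the assumption that the
# reference marker is the one whose position varies least across planes.
import numpy as np

def select_reference_marker(coords: np.ndarray) -> int:
    """coords: array of shape (n_planes, n_markers, dim) holding each
    marker's measured coordinates in every imaging plane (hypothetical
    layout). Returns the index of the marker whose coordinates deviate
    least from its mean coordinates across all planes."""
    mean_pos = coords.mean(axis=0)                   # (n_markers, dim)
    # Per-plane distance of each marker from its mean position,
    # accumulated over all imaging planes.
    deviation = np.linalg.norm(coords - mean_pos, axis=2).sum(axis=0)
    return int(np.argmin(deviation))
```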
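As a concrete illustration of the network described above, the following is a minimal PyTorch sketch of such a coordinate-mapping MLP. The layer widths, depth, ReLU activation, 3-D input/output dimensionality, loss, and optimizer are illustrative assumptions, not details taken from the paper; only the overall structure (stacked linear-plus-activation modules, with a plain linear output layer) follows the text.

```python
# Minimal sketch of the distortion-correction MLP: maps imaging-space
# coordinates to real physical-space coordinates. Hyperparameters are
# assumptions chosen for illustration.
import torch
import torch.nn as nn

class CorrectionMLP(nn.Module):
    def __init__(self, in_dim=3, hidden=(64, 64, 64), out_dim=3):
        super().__init__()
        layers, prev = [], in_dim
        # Each fully-connected module: a linear layer plus an activation...
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        # ...except the output layer, which has no activation.
        layers.append(nn.Linear(prev, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Training-loop sketch: pts_img stand in for marker coordinates measured
# in imaging space, pts_real for the reconstructed reference-plane
# coordinates (placeholder random data, not real measurements).
model = CorrectionMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
pts_img = torch.rand(1024, 3)
pts_real = torch.rand(1024, 3)
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(pts_img), pts_real)
    loss.backward()
    optimizer.step()
```

In practice, the checkpoint achieving the lowest validation loss would be retained as the "optimal model" mentioned in the abstract and applied to correct the acquired lens point clouds.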