
Attacking asymmetric cryptosystem based on phase truncated Fourier transform by deep learning

Xu Zhao, Zhou Xin, Bai Xing, Li Cong, Chen Jie, Ni Yang

  • Most optical encryption systems are symmetric cryptosystems in which the plaintext and the ciphertext are linearly related, so their security needs to be strengthened. The asymmetric cryptosystem based on phase-truncated Fourier transforms (PTFT) greatly improves the security of the encryption system through its nonlinear phase-truncation operation. Deep learning (DL), a branch of machine learning, was proposed decades ago; with the growth of computing power its practicality has become increasingly evident, and it has recently been applied effectively in fields such as biomedicine and object detection, with good results. In this article we propose an attack on the PTFT encryption system based on deep learning. Using the PTFT encryption system, we construct a dataset of paired plaintext-ciphertext images and train a residual network (ResNet) on it. Traditional deep neural networks face two problems. One is the vanishing or exploding gradient, which makes training difficult to converge; the other is degradation: when the depth of an already suitable model keeps increasing, its accuracy declines, and this decline is not caused by overfitting. ResNet alleviates these problems to a certain extent by bypassing the stacked layers and passing the input directly to the output, which preserves the integrity of the information. The key difference between an ordinary directly connected convolutional neural network and ResNet is that ResNet contains many shortcut branches that connect the input directly to later layers, so those layers only need to learn the residuals. In this way the ResNet can automatically learn the decryption characteristics of the encryption system. Finally, a test set is used to evaluate the decryption performance of the trained model. The results show that the model restores images with high quality and has a certain noise resistance. Compared with the two-step iterative amplitude retrieval algorithm, the method proposed in this paper recovers images of higher quality. (Illustrative sketches of the phase-truncation operations and of a residual block are given after the funding information.)
      Corresponding author: Zhou Xin, zhoxn@21cn.com
    • Funds: Project supported by the National Natural Science Foundation of China (Grant Nos. 61475104, 61177009)
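
    Below is a minimal NumPy sketch of the phase-truncation idea summarized in the abstract: two random phase masks serve as public keys, the phases discarded during encryption are retained as private keys, and decryption reverses the two Fourier transforms. It is an illustration under these assumptions, not the authors' code; the function and variable names (ptft_encrypt, rpm1, rpm2, ...) are hypothetical, and the exact transform conventions of the paper may differ.

        import numpy as np

        def ptft_encrypt(f, rpm1, rpm2):
            """Encrypt a real-valued image f with two random phase masks (public keys).
            Returns the amplitude-only ciphertext and the two phase keys (private keys)
            produced by phase truncation."""
            u1 = np.fft.fft2(f * rpm1)        # first Fourier transform
            g1 = np.abs(u1)                   # phase truncation -> intermediate amplitude
            p1 = np.exp(1j * np.angle(u1))    # phase reservation -> private key 1
            u2 = np.fft.fft2(g1 * rpm2)       # second Fourier transform
            c = np.abs(u2)                    # ciphertext (amplitude only)
            p2 = np.exp(1j * np.angle(u2))    # private key 2
            return c, p1, p2

        def ptft_decrypt(c, p1, p2):
            """Recover the plaintext from the ciphertext and the two private phase keys."""
            g1 = np.abs(np.fft.ifft2(c * p2))  # undo the second transform, truncate the phase
            return np.abs(np.fft.ifft2(g1 * p1))  # undo the first transform, truncate the phase

        # hypothetical usage on a 64 x 64 random "image"
        rng = np.random.default_rng(0)
        f = rng.random((64, 64))
        rpm1 = np.exp(1j * 2 * np.pi * rng.random((64, 64)))  # public key 1
        rpm2 = np.exp(1j * 2 * np.pi * rng.random((64, 64)))  # public key 2
        c, p1, p2 = ptft_encrypt(f, rpm1, rpm2)
        print(np.allclose(ptft_decrypt(c, p1, p2), f))        # expected: True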
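
    The abstract also describes the shortcut branches that let later layers learn only the residuals. A minimal PyTorch sketch of one such residual module follows; it illustrates the output = F(x) + x structure only and is not the specific architecture used in the paper (channel count, kernel size, and layer count are assumptions).

        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            """One residual module: two conv layers plus an identity shortcut.
            The shortcut lets the stacked layers learn only the residual F(x);
            the block output is F(x) + x."""
            def __init__(self, channels: int):
                super().__init__()
                self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
                self.bn1 = nn.BatchNorm2d(channels)
                self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
                self.bn2 = nn.BatchNorm2d(channels)
                self.relu = nn.ReLU(inplace=True)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                out = self.relu(self.bn1(self.conv1(x)))
                out = self.bn2(self.conv2(out))
                return self.relu(out + x)  # identity shortcut: output = F(x) + x

        # hypothetical usage: a batch of 8 feature maps with 64 channels, 64 x 64 pixels
        x = torch.randn(8, 64, 64, 64)
        print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64, 64, 64])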
    [1] Refregier P, Javidi B 1995 Opt. Lett. 20 767
    [2] Qin W, Peng X 2010 Opt. Lett. 35 118
    [3] Lecun Y, Bottou L, Bengio Y, Haffner P 1998 Proc. IEEE 86 2278
    [4] Krizhevsky A, Sutskever I, Hinton G 2017 Commun. ACM 60 84
    [5] Simonyan K, Zisserman A 2014 arXiv e-prints arXiv: 1409.1556
    [6] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A 2015 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Boston, USA, June 7−12, 2015 p1
    [7] He K, Zhang X, Ren S, Sun J 2015 arXiv e-prints arXiv: 1512.03385
    [8] Hai H, Pan S, Liao M, Lu D, He W, Peng X 2019 Opt. Express 27 21204
    [9] Srivastava R, Greff K, Schmidhuber J 2015 Proceedings of the 28th International Conference on Neural Information Processing Systems Montreal, Canada, December 7−10, 2015 p2377
    [10] Drozdzal M, Vorontsov E, Chartrand G, Kadoury S, Pal C 2016 arXiv e-prints arXiv: 1608.04117
    [11] Glorot X, Bordes A, Bengio Y 2011 Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS) Fort Lauderdale, USA, April 11−13, 2011 p315
    [12] Nair V, Hinton G 2010 Proceedings of the 27th International Conference on Machine Learning Madison, USA, June 21−24, 2010 p807
    [13] Ioffe S, Szegedy C 2015 Proceedings of the 32nd International Conference on Machine Learning Lille, France, July 6−11, 2015 p448
    [14] Dong C, Loy C C, He K, Tang X 2014 Proceedings of the 13th European Conference on Computer Vision Zurich, Switzerland, September 6−12, 2014 p184
    [15] Ishikawa M 1996 Neural Networks 9 509
    [16] Ciregan D, Meier U, Schmidhuber J 2012 2012 IEEE Conference on Computer Vision and Pattern Recognition Providence, USA, June 16−21, 2012 p3642
    [17] Kingma D P, Ba J 2014 arXiv e-prints arXiv: 1412.6980
    [18] Horé A, Ziou D 2010 20th International Conference on Pattern Recognition Istanbul, Turkey, August 23−26, 2010 p2366
    [19] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P 2004 IEEE Trans. Image Process. 13 600
    [20] Wang X, Zhao D 2012 Opt. Commun. 285 1078

  • Figure 1.  Schematic diagrams of the PTFT system: (a) encryption; (b) decryption.

    Figure 2.  Residual module of ResNet.

    Figure 3.  Network architecture based on ResNet.

    Figure 4.  Images reconstructed by the neural network: (a) plaintext; (b) ciphertext; (c) plaintext recovered by the neural network.

    Figure 5.  Decryption results for ciphertexts containing Gaussian noise with different energy ratios: (a) 0%; (b) 10%; (c) 20%; (d) 50%.

    Figure 6.  Test results after training with ciphertext sets containing Gaussian noise with different energy ratios: (a) 0%; (b) 20%.

    Figure 7.  Test results after training with ciphertext sets containing Gaussian noise with different energy ratios: (a) 0%; (b) 20%.

    Figure 8.  PSNR and SSIM of the images in Fig. 6: (a) PSNR; (b) SSIM.

    Figure 9.  (a) Reconstruction results by deep learning; (b) reconstruction results by the two-step iterative amplitude retrieval approach.

    Figure 10.  PSNR and SSIM of the images in Fig. 9: (a) PSNR; (b) SSIM.
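
    Figures 5–7 quote the ciphertext Gaussian noise level as an energy ratio, and Figs. 8 and 10 evaluate reconstructions with PSNR and SSIM. The sketch below shows one plausible reading of that setup: noise scaled so that its energy is a given fraction of the ciphertext energy, and the two metrics computed with scikit-image. The noise model and the helper name add_gaussian_noise are assumptions, not taken from the paper.

        import numpy as np
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def add_gaussian_noise(c, energy_ratio, seed=0):
            """Add zero-mean Gaussian noise whose total energy is a given
            fraction of the ciphertext energy (assumed meaning of 'energy ratio')."""
            rng = np.random.default_rng(seed)
            noise = rng.standard_normal(c.shape)
            noise *= np.sqrt(energy_ratio * np.sum(c ** 2) / np.sum(noise ** 2))
            return c + noise

        # stand-in arrays: a ciphertext, and a plaintext/reconstruction pair
        rng = np.random.default_rng(1)
        c = rng.random((64, 64))
        noisy_c = add_gaussian_noise(c, energy_ratio=0.20)  # e.g. the 20% case of Fig. 5(c)

        plain = rng.random((64, 64))
        recon = np.clip(plain + 0.05 * rng.standard_normal(plain.shape), 0.0, 1.0)
        print(peak_signal_noise_ratio(plain, recon, data_range=1.0))
        print(structural_similarity(plain, recon, data_range=1.0))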


Publishing process
  • Received Date:  08 December 2020
  • Accepted Date:  24 February 2021
  • Available Online:  12 July 2021
  • Published Online:  20 July 2021
