This paper presents a novel convolutional-network-based single-pixel imaging method driven by a physics-based fusion attention mechanism. A module combining channel and spatial attention is incorporated into a randomly initialized convolutional network, and the physical model of single-pixel imaging is used to constrain the network, achieving high-quality image reconstruction. Specifically, the spatial and channel attention mechanisms are fused into a single module and inserted into each layer of a multi-scale U-net. In the spatial attention branch, a convolution extracts attention weights for each spatial region of the pooled feature map; in the channel attention branch, the three-dimensional feature map is pooled into a single-channel signal and fed into a two-layer fully connected network, yielding an attention weight for each channel. This design not only exploits the critical weighting information that the attention mechanism provides over the three-dimensional data cube but also retains the strong feature-extraction capability of the U-net across different spatial frequencies, effectively capturing image details, suppressing background noise, and improving reconstruction quality. In the experiments, we used a single-pixel imaging optical path to acquire bucket signals for two target images, "snowflake" and "basket".
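The fused channel-and-spatial attention module described above can be sketched in PyTorch roughly as follows. This is a minimal CBAM-style illustration under our own assumptions: class names, the reduction ratio, and the kernel size are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Pool each channel of the 3-D feature map to a scalar, then a
    two-layer fully connected network yields a per-channel weight."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """Pool over the channel axis, then a convolution extracts an
    attention weight for each spatial region."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class FusionAttention(nn.Module):
    """Channel attention followed by spatial attention, intended to sit
    after each convolution block of a multi-scale U-net."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```

Such a module is shape-preserving, so it can be dropped into each U-net layer without changing the surrounding architecture.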
By feeding an arbitrary noise image into a randomly initialized neural network equipped with the attention mechanism, and minimizing the mean square error between the simulated and measured bucket signals, we physically constrained the network's convergence and ultimately obtained a reconstructed image consistent with the physical model. Experimental results demonstrate that, under low sampling rates, the attention-based scheme not only reconstructs image details better visually but also shows significant advantages in quantitative evaluation metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), verifying its effectiveness and application potential in single-pixel imaging.
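The physics-constrained, untrained-network reconstruction loop described in the abstract can be sketched as below. This is an assumption-laden sketch, not the authors' implementation: the function name, optimizer, learning rate, and step count are illustrative, and the forward model is the usual single-pixel one (each bucket signal is the inner product of an illumination pattern with the scene).

```python
import torch
import torch.nn as nn


def reconstruct(patterns, buckets, net, steps=2000, lr=1e-3):
    """Untrained-network single-pixel reconstruction.

    patterns : (M, H, W) illumination patterns used in the experiment
    buckets  : (M,) measured bucket signals
    net      : randomly initialized CNN (e.g. an attention U-net) that
               maps a fixed noise image to a candidate reconstruction
    """
    H, W = patterns.shape[1:]
    z = torch.randn(1, 1, H, W)                   # fixed noise input
    A = patterns.reshape(patterns.shape[0], -1)   # forward operator
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        x_hat = net(z)                        # candidate image
        y_sim = A @ x_hat.reshape(-1)         # simulated bucket signals
        loss = loss_fn(y_sim, buckets)        # physics-model constraint
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(z).detach().squeeze()
```

No ground-truth image or training set enters the loop: the only supervision is the measured bucket signal, which is what makes the scheme physics-driven rather than data-driven.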
Keywords:
- single-pixel imaging
- attention mechanisms
- convolutional neural networks
- image reconstruction