
Dense light field reconstruction algorithm based on dictionary learning

Xia Zheng-De, Song Na, Liu Bin, Pan Jin-Xiao, Yan Wen-Min, Shao Zi-Hui

  • A camera array is an important tool for capturing the light field of a target in space, but obtaining a high angular-resolution light field with a large, dense camera array complicates sampling and raises equipment cost, and the need to synchronize and transmit large volumes of data further limits the sampling rate. To achieve dense reconstruction from a sparsely sampled light field, we analyze the correlation and redundancy among multi-view images of the same scene and establish an effective mathematical model of light field dictionary learning and sparse coding. The trained light field atoms sparsely represent the local spatial-angular consistency of the light field, so a four-dimensional (4D) light field patch can be reconstructed from the two-dimensional (2D) image patch centered on each sensor pixel. The dictionary maps the global and local constraints of the 4D light field into a low-dimensional space, where they appear as the sparsity of each coefficient vector and as constraints between the positions and values of its non-zero elements. Based on these constraints among sparse-coding coefficients, we establish a model for recovering the sparse code of a virtual angular image and propose a recovery method in the transform domain: the light field atoms in the dictionary are screened, the light field patches are represented linearly by the sparse representation matrix of the virtual angular image, and the virtual angular images are finally constructed by image fusion after the inverse sparse transform. Dense-reconstruction experiments on multiple scenes verify the effectiveness of the proposed method. The experimental results show that it recovers occlusion, shadow, and complex illumination with satisfying quality;
    that is, it can be used for dense reconstruction of sparse light fields in complex scenes. In this study, dense reconstruction of linearly sparse light fields is achieved; dense reconstruction of nonlinearly sparse light fields will be studied in the future to promote the practical application of light field imaging.
      Corresponding author: Liu Bin, liubin414605032@163.com

  • Figure 1.  Algorithm workflow.

    Figure 2.  Light field overcomplete dictionary.

    Figure 3.  Quality of the reconstructed images: (a) reconstruction performance versus sparsity, 256 × 256 pixels; (b) reconstruction performance versus sparsity, 512 × 512 pixels; (c) PSNR at different resolutions; (d) reconstruction performance versus redundancy, 256 × 256 pixels.

    Figure 4.  Images reconstructed with different sparsity and redundancy parameters: (a) K = 16, N = 256; (b) K = 34, N = 1024.

    Figure 5.  Dense reconstruction of a light field containing occluded targets: (a) dense light field; (b), (e) reference images; (c), (d) recovered virtual angular images for view 2 and view 5; (g), (h) target images; (f), (i) residual maps.

    Figure 6.  Dense light field reconstruction: (a) image recovered by the proposed algorithm; (b) image recovered by the DIBR algorithm; (c) target image; (d) residual map; (e) dense light field.

    Table 1.  Quality metrics of images reconstructed with different sparsity (K) and redundancy (N) parameters.

    Parameters          MSE       PSNR/dB   SSIM     Time/s
    K = 16, N = 256     54.4215   30.7731   0.8860   1266.08
    K = 34, N = 1024    49.0044   31.2285   0.8865   14306.55

    Table 2.  Dense reconstruction of light fields in different scenes.

    Scene     Table     Bicycle   town      Boardgames  rosemary  Vinyl     bicycle*
    MSE       21.2124   54.4215   25.8005   53.9244     18.8950   22.4756   49.0044
    PSNR/dB   34.8649   30.7731   34.0145   30.8129     35.3673   34.6137   31.2285
    SSIM      0.9323    0.8860    0.9474    0.9341      0.9699    0.9421    0.8865
    * Sparsity K = 34, redundancy N = 1024.
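The MSE and PSNR columns in Tables 1 and 2 are linked by the standard definition PSNR = 10·log10(peak²/MSE): assuming 8-bit images (peak = 255), an MSE of 49.0044 gives 10·log10(255²/49.0044) ≈ 31.2285 dB, matching Table 1. A minimal sketch of these metrics (function names are ours):

```python
import numpy as np

def mse(ref, out):
    """Mean squared error between a reference and a reconstructed image."""
    diff = ref.astype(np.float64) - out.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum pixel value."""
    e = mse(ref, out)
    return float("inf") if e == 0.0 else 10.0 * np.log10(peak ** 2 / e)
```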

Publishing process
  • Received Date:  23 October 2019
  • Accepted Date:  16 December 2019
  • Published Online:  20 March 2020
