基于忆阻器阵列的下一代储池计算

任宽 张握瑜 王菲 郭泽钰 尚大山


Next-generation reservoir computing based on memristor array

Ren Kuan, Zhang Wo-Yu, Wang Fei, Guo Ze-Yu, Shang Da-Shan
  • 储池计算是类脑计算范式的一种, 具有结构简单、训练参数少等特点, 在时序信号处理、混沌动力学系统预测等方面有着巨大的应用潜力. 本文提出了一种基于存内计算范式的储池计算硬件实现方法, 利用忆阻器阵列完成非线性向量自回归过程中的矩阵向量乘法操作, 有望进一步提升储池计算的能效. 通过忆阻器阵列仿真实验, 在Lorenz63时间序列预测任务中验证了该方法的可行性, 以及该方法在噪声条件下预测结果的鲁棒性, 并探究忆阻器阵列阻值精度对预测结果的影响. 这一结果为储池计算的硬件实现提供了一种新的途径.
    As a kind of brain-inspired computing, reservoir computing (RC) has great potential in time-series signal processing and chaotic dynamical system prediction owing to its simple structure and small number of trainable parameters. However, because RC uses randomly initialized network weights, it requires abundant data and computation time for warm-up and parameter optimization. Recent results show that an RC with linear activation nodes, combined with a feature vector, is mathematically equivalent to a nonlinear vector autoregression (NVAR) machine, which is therefore named next-generation reservoir computing (NGRC). Although NGRC effectively alleviates these problems of traditional RC, it still demands considerable computing resources for the multiplication operations. In this work, a hardware implementation of NGRC based on the computing-in-memory paradigm is proposed for the first time. A memristor array is used to perform the matrix-vector multiplications involved in the nonlinear vector autoregression, which is expected to further improve the energy efficiency of NGRC. Simulation experiments with the memristor array on the Lorenz63 time-series prediction task demonstrate the feasibility of this method and its robustness under noise, and the influence of the weight precision of the memristor devices on the prediction results is discussed. These results provide a promising route toward the hardware implementation of NGRC.
      通信作者: 尚大山, shangdashan@ime.ac.cn
    • 基金项目: 国家重点基础研究发展计划(批准号: 2018YFA0701500)、国家自然科学基金 (批准号: 61874138)和中国科学院战略性先导科技专项(批准号: XDB44000000)资助的课题.
      Corresponding author: Shang Da-Shan, shangdashan@ime.ac.cn
    • Funds: Project supported by the National Basic Research Program of China (Grant No. 2018YFA0701500), the National Natural Science Foundation of China (Grant No. 61874138), and the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB44000000)
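
    To make the procedure summarized in the abstract concrete, the following is a minimal software sketch of an NGRC (nonlinear vector autoregression) predictor for the Lorenz63 system: the feature vector is built from the current state and one delayed state plus their quadratic products, and a ridge-regression readout learns the state increment. The delay, integration step, training length, and regularization strength are illustrative assumptions, not the values used in the paper.

        import numpy as np

        # Lorenz63 system and a simple RK4 integrator (illustrative step size).
        def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = state
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        def rk4_step(f, state, dt):
            k1 = f(state)
            k2 = f(state + 0.5 * dt * k1)
            k3 = f(state + 0.5 * dt * k2)
            k4 = f(state + dt * k3)
            return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        dt, n_steps = 0.025, 4000
        traj = np.empty((n_steps, 3))
        traj[0] = np.array([1.0, 1.0, 1.0])
        for t in range(1, n_steps):
            traj[t] = rk4_step(lorenz63, traj[t - 1], dt)

        # NGRC feature vector: constant + linear part (u_t, u_{t-s}) + quadratic monomials.
        s = 1
        def features(u_now, u_delayed):
            lin = np.concatenate([u_now, u_delayed])                # O_lin
            nonlin = np.array([a * b for i, a in enumerate(lin)     # O_nonlin: unique
                                     for b in lin[i:]])             # quadratic products
            return np.concatenate([[1.0], lin, nonlin])

        train_end = 3000
        X = np.array([features(traj[t], traj[t - s]) for t in range(s, train_end)])
        Y = traj[s + 1:train_end + 1] - traj[s:train_end]           # learn the increment

        # Ridge-regression readout (regularization strength is illustrative).
        lam = 1e-6
        W_out = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y).T

        # Autonomous prediction from the end of the training segment.
        u, u_prev = traj[train_end].copy(), traj[train_end - s].copy()
        prediction = []
        for _ in range(800):
            u_next = u + W_out @ features(u, u_prev)
            u_prev, u = u, u_next
            prediction.append(u_next)
        prediction = np.array(prediction)
        print("first predicted state:", prediction[0])

    The matrix-vector products between the readout weights and the feature vectors in this loop are the operations that the paper proposes to offload to the memristor array.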

  • 图 1  三种RC结构 (a) 传统RC结构; (b) 单节点延时RC结构; (c) 非线性向量自回归RC结构

    Fig. 1.  Three types of RC frameworks: (a) Conventional RC; (b) RC using a single nonlinear node reservoir with time-delayed feedback; (c) NGRC, which is equivalent to nonlinear vector autoregression.

    图 2  基于忆阻器阵列的NGRC储池结构 (a) 用于预测三维时序信号的NGRC储池结构. 输入为三维时序信号; 提取 $t_i$ 时刻(红色框)和 $t_{i-s}$ 时刻(紫色框)信号的值组成线性特征向量 $\boldsymbol{O}_{\rm lin}$, 将第 $i$ 个线性特征向量编码为时序电压和电导, 时序电压作为忆阻器阵列的输入, 电导映射到忆阻器阵列上作为权重; 非线性特征向量 $\boldsymbol{O}_{\rm nonlin}$ 由忆阻器阵列特定单元(绿色方框)的输出构成; 总特征向量由 $\boldsymbol{O}_{\rm lin}$ 和 $\boldsymbol{O}_{\rm nonlin}$ 直接拼接而成. (b) 图(a)中的线性特征向量 $\boldsymbol{O}_{{\rm lin},i}$ 映射到忆阻器阵列的方式. $\boldsymbol{O}_{{\rm lin},i}$ 中的每一个值都由两个忆阻器电导 $g^+$ 和 $g^-$ 的差分表示

    Fig. 2.  Structure of the NGRC based on a memristor crossbar array. (a) Structure of the NGRC reservoir for predicting three-dimensional (3D) time-series signals. The input is a 3D time-series signal. The linear feature vector $\boldsymbol{O}_{\rm lin}$ is formed by extracting the signal values at time $t_i$ (red box) and time $t_{i-s}$ (purple box). The $i$th linear feature vector is encoded as timed voltages and conductances: the voltages are applied as inputs to the memristor array, and the conductances are mapped onto the memristor array as weights. The nonlinear feature vector $\boldsymbol{O}_{\rm nonlin}$ consists of the outputs of specific cells of the memristor array (green boxes). The total feature vector is obtained by directly concatenating $\boldsymbol{O}_{\rm lin}$ and $\boldsymbol{O}_{\rm nonlin}$. (b) Mapping of the linear feature vector $\boldsymbol{O}_{{\rm lin},i}$ in panel (a) onto the memristor array. $g^+$ and $g^-$ denote the device conductance values representing the positive and negative parts of each weight in the differential pair, respectively.
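
    A minimal sketch of the differential-pair weight mapping described in panel (b), where each signed weight is represented by the difference of two device conductances $g^+ - g^-$. The conductance window and the linear mapping rule below are illustrative assumptions, not the calibration used in the paper.

        import numpy as np

        def map_weights_to_conductances(W, g_min=1e-6, g_max=1e-4):
            """Map a signed weight matrix onto a differential conductance pair.

            Positive weights go to the g+ devices and negative weights to the
            g- devices; both are offset into the window [g_min, g_max] (siemens).
            """
            scale = (g_max - g_min) / np.abs(W).max()
            g_plus = np.where(W > 0, W * scale, 0.0) + g_min
            g_minus = np.where(W < 0, -W * scale, 0.0) + g_min
            return g_plus, g_minus, scale

        def crossbar_mvm(v, g_plus, g_minus, scale):
            """Ideal read-out: difference of the two column currents, rescaled to weights."""
            return (v @ g_plus - v @ g_minus) / scale

        rng = np.random.default_rng(0)
        W = rng.standard_normal((6, 4))    # hypothetical 6x4 weight block
        v = rng.standard_normal(6)         # input encoded as read voltages
        gp, gm, scale = map_weights_to_conductances(W)
        print(np.allclose(crossbar_mvm(v, gp, gm, scale), v @ W))   # True in the ideal case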

    图 3  基于忆阻器阵列(包括正、负列)的矩阵乘法运算仿真平台结构示意图, $g_{\rm r}$ 为忆阻器的电导, $g_{\rm T}$ 为晶体管电导

    Fig. 3.  Schematic of the simulation platform using the memristor array (including positive and negative columns) as an analog dot-product engine. $g_{\rm r}$ denotes the memristor conductance and $g_{\rm T}$ the transistor conductance.
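
    The platform of Fig. 3 includes both the memristor conductance $g_{\rm r}$ and the transistor conductance $g_{\rm T}$ in each cell. A simple way to emulate such a cell in software, assuming the access transistor behaves as a linear conductance in series with the memristor (an idealization, not the paper's device model), is sketched below.

        import numpy as np

        def effective_conductance(g_r, g_t):
            """Series combination of memristor (g_r) and access transistor (g_t)."""
            return g_r * g_t / (g_r + g_t)

        # Toy read of one crossbar column: cell currents sum along the bit line.
        rng = np.random.default_rng(1)
        g_r = rng.uniform(1e-6, 1e-4, size=8)    # memristor conductances (illustrative)
        g_t = np.full(8, 5e-4)                   # transistors fully on (illustrative)
        v_in = rng.uniform(0.0, 0.2, size=8)     # read voltages
        i_column = np.sum(v_in * effective_conductance(g_r, g_t))
        print(f"column current: {i_column:.3e} A")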

    图 4  输入精度为定点32 bit, 输出精度为定点64 bit, 不同权重精度下800个时间步的预测XZ截面图 (a) 64 bit; (b) 32 bit; (c) 16 bit; (d) 8 bit; (e) 6 bit; (f) 4 bit

    Fig. 4.  X–Z cross sections of the 800-time-step prediction under different weight precisions, with 32-bit fixed-point inputs and 64-bit fixed-point outputs: (a) 64 bit; (b) 32 bit; (c) 16 bit; (d) 8 bit; (e) 6 bit; (f) 4 bit.
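
    Figure 4 examines how the weight (conductance) precision affects the prediction. One way to emulate an n-bit weight in software is uniform quantization over the weight range; the symmetric uniform quantizer below is an assumption, not necessarily the scheme used in the paper.

        import numpy as np

        def quantize_weights(W, n_bits):
            """Uniformly quantize weights to 2**n_bits - 1 levels over [-|W|max, |W|max]."""
            levels = 2 ** n_bits - 1
            step = 2 * np.abs(W).max() / levels
            return np.round(W / step) * step

        rng = np.random.default_rng(2)
        W = rng.standard_normal((10, 10))
        for n_bits in (64, 32, 16, 8, 6, 4):
            err = np.abs(quantize_weights(W, n_bits) - W).max()
            print(f"{n_bits:>2} bit  max quantization error: {err:.2e}")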

    图 5  短期预测(1个李雅普诺夫周期)的NRMSE随不同权重精度(8, 16, 32, 64 bit)和不同输出精度(8, 16, 32, 64 bit)的变化

    Fig. 5.  NRMSE of the short-term prediction (one Lyapunov time) as a function of weight precision (8, 16, 32, 64 bit) and output precision (8, 16, 32, 64 bit).
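
    The NRMSE used in Fig. 5 and Fig. 7 is not defined in this excerpt. A commonly used form, normalized by the standard deviation of the true trajectory over the evaluation window (an assumption about the exact normalization adopted here), is

        $$ \mathrm{NRMSE} = \sqrt{\frac{\dfrac{1}{N}\sum_{t=1}^{N}\left\lVert \hat{\boldsymbol{u}}(t) - \boldsymbol{u}(t)\right\rVert^{2}}{\sigma^{2}}}, $$

    where $\boldsymbol{u}(t)$ is the true state, $\hat{\boldsymbol{u}}(t)$ the predicted state, $N$ the number of evaluated time steps, and $\sigma^{2}$ the variance of the true trajectory.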

    图 6  (a) Lorenz63的z回归图(紫色)与不同权重精度下预测的z回归图; (b) 图(a)中红框标记区域的放大图

    Fig. 6.  (a) The z return map of Lorenz63 (purple) overlaid with the predicted z return maps under different weight precisions; (b) enlarged view of the region marked by the red box in panel (a).

    图 7  短期预测的NRMSE在不同权重精度条件下随权重噪声强度的变化

    Fig. 7.  NRMSE of the short-term prediction as a function of weight noise intensity under different weight precisions.
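
    Figure 7 studies robustness to weight noise. A sketch of how such noise might be injected in simulation is given below, assuming zero-mean Gaussian perturbations whose standard deviation scales with the weight range; this noise model and the NRMSE normalization are illustrative assumptions.

        import numpy as np

        def add_weight_noise(W, noise_ratio, rng):
            """Perturb each weight with Gaussian noise scaled to the weight range."""
            sigma = noise_ratio * np.abs(W).max()
            return W + rng.normal(0.0, sigma, size=W.shape)

        def nrmse(pred, target):
            """NRMSE normalized by the standard deviation of the target (assumed form)."""
            return np.sqrt(np.mean((pred - target) ** 2)) / np.std(target)

        rng = np.random.default_rng(3)
        W = rng.standard_normal((8, 8))
        x = rng.standard_normal(8)
        clean = W @ x
        for ratio in (0.0, 0.01, 0.05, 0.1):
            noisy = add_weight_noise(W, ratio, rng) @ x
            print(f"noise ratio {ratio:4.2f}  NRMSE of W @ x: {nrmse(noisy, clean):.3f}")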


Publication history
  • Received:  2022-01-12
  • Revised:  2022-01-26
  • Available online:  2022-07-14
  • Published:  2022-07-20
