Single-step and multiple-step prediction of chaotic time series using Gaussian process model

Li Jun, Zhang You-Peng


Abstract

A Gaussian process (GP) method based on composite covariance functions is proposed for single-step and multi-step prediction of chaotic time series. A GP prior over functions is determined mainly by its covariance function, and all hyperparameters that define the covariance and mean functions can be estimated from the training data by matrix operations and optimization algorithms within the evidence-maximization Bayesian framework. As a probabilistic kernel machine, a GP model has far fewer tunable parameters than neural-network or fuzzy models. GP models with different composite covariance functions are applied to single-step and multi-step-ahead prediction of chaotic time series and compared with other models, including the standard GP model with a single covariance function, standard support vector machines, the least-squares support vector machine, and radial basis function (RBF) neural networks. Simulation results show that GP models with composite covariance functions predict the chaotic time series accurately and perform stably and robustly. The method therefore provides an effective approach to modeling and controlling complex nonlinear systems.
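To make the procedure concrete, the minimal Python sketch below illustrates a GP model with a composite covariance function applied to single-step and iterated multi-step prediction of a chaotic series. It is an illustration under stated assumptions, not the authors' implementation: the logistic map stands in for a chaotic benchmark series, and the particular kernel combination (RBF plus rational quadratic plus white noise), the embedding dimension, and the training-set size are arbitrary choices. scikit-learn's GaussianProcessRegressor fits the hyperparameters by maximizing the log marginal likelihood, which corresponds to the evidence-maximization step described above.

```python
# Illustrative sketch (not the authors' code): GP regression with a composite
# covariance function for single-step and iterated multi-step chaotic prediction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (
    RBF, RationalQuadratic, WhiteKernel, ConstantKernel)

# Toy chaotic series: logistic map x_{n+1} = 4 x_n (1 - x_n).
x = np.empty(600)
x[0] = 0.3
for n in range(599):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Delay-coordinate embedding: predict x[t] from the previous `dim` samples.
dim = 4
X = np.array([x[t - dim:t] for t in range(dim, len(x))])
y = x[dim:]
X_train, y_train = X[:400], y[:400]

# Composite covariance: constant * RBF + rational quadratic + white noise
# (an assumed combination; hyperparameters are fitted by evidence maximization).
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + RationalQuadratic() + WhiteKernel(1e-3)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5, normalize_y=True)
gp.fit(X_train, y_train)

# Single-step prediction: true lagged values are used as inputs.
y_pred, y_std = gp.predict(X[400:], return_std=True)

# Multi-step (iterated) prediction: feed each prediction back into the delay vector.
window = list(x[400:400 + dim])
multi = []
for _ in range(50):
    nxt = gp.predict(np.array(window[-dim:]).reshape(1, -1))[0]
    multi.append(nxt)
    window.append(nxt)
```

In this sketch, multi-step prediction is simply iterated single-step prediction with the predicted values fed back as inputs; a direct multi-step formulation, training a separate model for each horizon, is an equally standard alternative.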


Publishing process
  • Received Date:  13 September 2010
  • Accepted Date:  04 November 2010
  • Published Online:  15 July 2011
