Learning dynamical systems by recurrent neural networks from orbits

Neural Netw. 1998 Dec;11(9):1589-1599. doi: 10.1016/s0893-6080(98)00098-7.

Abstract

This paper investigates the problem of approximating a dynamical system (DS) by a recurrent neural network (RNN) as an extension of the problem of approximating orbits by an RNN. We systematically investigate how an RNN can produce a DS on the visible state space that approximates a given DS, and, as a first step toward the generalization problem for RNNs, we also investigate whether a DS produced by some RNN can be identified from several observed orbits of that DS. First, it is proved that RNNs without hidden units uniquely produce a certain class of DSs. Next, neural dynamical systems (NDSs) are proposed as the DSs produced by RNNs with hidden units. Moreover, affine neural dynamical systems (A-NDSs) are provided as nontrivial examples of NDSs, and it is proved that any DS can be finitely approximated by an A-NDS to any precision. We propose an A-NDS as a DS that an RNN can actually produce on the visible state space to approximate the target DS. For the generalization problem of RNNs, a geometric criterion is derived in the case of RNNs without hidden units. This theory is also extended to RNNs with hidden units for learning A-NDSs.
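To make the setting concrete, the following is a minimal sketch (not the paper's exact A-NDS construction) of a discrete-time RNN with visible and hidden units, viewed as a dynamical system whose orbit is observed only through the visible coordinates. The network sizes, weights, and helper names (`rnn_step`, `visible_orbit`) are illustrative assumptions, not definitions from the paper.

```python
import numpy as np

# Sketch: an RNN whose full state (visible + hidden) evolves under an affine
# map followed by a sigmoid nonlinearity. Observing only the visible
# coordinates gives the kind of DS the abstract calls a neural dynamical
# system (NDS). All sizes and weights below are arbitrary examples.

rng = np.random.default_rng(0)
n_visible, n_hidden = 2, 4
n = n_visible + n_hidden

W = rng.normal(scale=0.5, size=(n, n))   # recurrent weight matrix (assumed)
b = rng.normal(scale=0.1, size=n)        # bias vector (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step(state):
    """One step of the RNN seen as a DS on the full (visible + hidden) state."""
    return sigmoid(W @ state + b)

def visible_orbit(v0, h0, steps):
    """Orbit of the induced dynamics, observed on the visible coordinates only."""
    state = np.concatenate([v0, h0])
    orbit = [state[:n_visible].copy()]
    for _ in range(steps):
        state = rnn_step(state)
        orbit.append(state[:n_visible].copy())
    return np.array(orbit)

# Example: an observed visible orbit of length 21 from a random initial state.
orbit = visible_orbit(rng.uniform(size=n_visible), rng.uniform(size=n_hidden), 20)
print(orbit.shape)  # (21, 2)
```

In this picture, learning amounts to fitting the weights from several such observed visible orbits, which is the identification/generalization question the abstract raises for RNNs with and without hidden units.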