PLoS One. 2012;7(5):e37372. doi: 10.1371/journal.pone.0037372. Epub 2012 May 24.

Transferring learning from external to internal weights in echo-state networks with sparse connectivity.

Author information

Department of Electrical Engineering, Stanford University, Stanford, California, United States of America.


Modifying weights within a recurrent network to improve performance on a task has proven to be difficult. Echo-state networks in which modification is restricted to the weights of connections onto network outputs provide an easier alternative, but at the expense of modifying the typically sparse architecture of the network by including feedback from the output back into the network. We derive methods for using the values of the output weights from a trained echo-state network to set recurrent weights within the network. The result of this "transfer of learning" is a recurrent network that performs the task without requiring the output feedback present in the original network. We also discuss a hybrid version in which online learning is applied to both output and recurrent weights. Both approaches provide efficient ways of training recurrent networks to perform complex tasks. Through an analysis of the conditions required to make transfer of learning work, we define the concept of a "self-sensing" network state, and we compare and contrast this with compressed sensing.
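The core idea described above can be illustrated with a minimal sketch. In an echo-state network whose output z = wᵀr is fed back into the network, the feedback contribution u·z = u(wᵀr) is a rank-one term in the state update, so it can be absorbed into the recurrent weights as J′ = J + u wᵀ, yielding a network that runs autonomously with no output feedback path. This is only the dense, idealized version of the transfer; all sizes, scalings, and variable names below are assumptions for illustration, not the paper's actual method, whose contribution concerns performing this transfer while respecting sparse connectivity.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): fold the output-feedback
# loop of an echo-state network into its recurrent weight matrix.
# All sizes and scalings are assumptions chosen for a stable demo.
rng = np.random.default_rng(0)
N = 50

J = rng.normal(size=(N, N))
J *= 0.8 / np.max(np.abs(np.linalg.eigvals(J)))  # contracting recurrent matrix
u = 0.1 * rng.normal(size=N)   # feedback weights (output -> network)
w = rng.normal(size=N) / N     # stand-in for trained output weights

def run_with_feedback(r0, steps):
    """Original echo-state setup: r <- tanh(J r + u z), with z = w^T r."""
    r, traj = r0.copy(), []
    for _ in range(steps):
        z = w @ r                   # network output
        r = np.tanh(J @ r + u * z)  # recurrent drive plus output feedback
        traj.append(r.copy())
    return np.array(traj)

# "Transfer of learning": absorb the rank-one feedback term into J.
J_transfer = J + np.outer(u, w)    # J' = J + u w^T

def run_autonomous(r0, steps):
    """Same dynamics, but with no explicit output feedback path."""
    r, traj = r0.copy(), []
    for _ in range(steps):
        r = np.tanh(J_transfer @ r)
        traj.append(r.copy())
    return np.array(traj)

r0 = rng.normal(size=N)
a = run_with_feedback(r0, 50)
b = run_autonomous(r0, 50)
print(np.max(np.abs(a - b)))   # trajectories agree up to floating-point error
```

Note that `np.outer(u, w)` is dense, so this naive update destroys any sparsity in J; the paper's methods address precisely how to realize the transfer when modification is restricted to a sparse connectivity pattern.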

[Indexed for MEDLINE]
Free PMC Article