ALIFE XV Late Breaking Abstract

Can we incorporate sleep-like interruptions into evolutionary robotics?

Mario A. Zarco-Lopez and Tom Froese

Traditional use of Hopfield networks can be divided into two main categories: (1) constraint satisfaction based on a predefined weight space, and (2) model induction based on a training set of patterns. Recently, Watson et al. (2011) demonstrated that combining these two aspects, i.e., inducing a model of the network’s own attractors by applying Hebbian learning after constraint satisfaction, can lead to self-optimization of network connectivity. A key element of their approach is a repeated randomized reset and relaxation of the network state, which has been interpreted as similar to the function of sleep (Woodward, Froese, & Ikegami, 2015). This perspective might give rise to an alternative “wake-sleep” algorithm (Hinton, Dayan, Frey, & Neal, 1995). All of this research, however, has taken place with isolated artificial neural networks, which goes against decades of work on situated robotics (Cliff, 1991). We consider the challenges involved in extending this work on sleep-like self-optimization to the dynamical approach to cognition, in which behavior is seen as emerging from the interactions of brain, body, and environment (Beer, 2000).
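
To make the reset-relax-learn cycle concrete, the following Python sketch illustrates the kind of self-optimization procedure reported by Watson et al. (2011), as we understand it. All parameter values (network size, learning rate, number of resets) are illustrative assumptions rather than values taken from the original paper.

import numpy as np

rng = np.random.default_rng(0)

N = 100          # number of neurons (assumed)
STEPS = 10 * N   # asynchronous updates per relaxation (assumed)
RESETS = 1000    # number of sleep-like reset/relax cycles (assumed)
ALPHA = 0.0001   # Hebbian learning rate (assumed)

# Random symmetric weights with zero diagonal define the constraint network.
W = rng.uniform(-1, 1, (N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def relax(weights, state, steps):
    """Asynchronous Hopfield dynamics: constraint satisfaction."""
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

for _ in range(RESETS):
    # Sleep-like interruption: reset to a random state...
    state = rng.choice([-1, 1], size=N)
    # ...relax to an attractor (constraint satisfaction)...
    state = relax(W, state, STEPS)
    # ...then reinforce the visited attractor with Hebbian learning
    # (model induction), gradually reshaping the weights so that
    # deeper, better-optimized attractors come to dominate.
    W += ALPHA * np.outer(state, state)
    np.fill_diagonal(W, 0)

The small learning rate matters here: because each relaxation samples a different attractor, learning must be slow enough to average over many visits rather than lock in the first attractor found.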

Beer, R. D. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences, 4(3), 91-99.

Cliff, D. (1991). Computational neuroethology: A provisional manifesto. In J.-A. Meyer & S. W. Wilson (Eds.), From Animals to Animats (pp. 29-39). MIT Press.

Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The “wake-sleep” algorithm for unsupervised neural networks. Science, 268, 1158-1161.

Watson, R. A., Buckley, C. L., & Mills, R. (2011). Optimization in “self-modeling” complex adaptive systems. Complexity, 16(5), 17-26.

Woodward, A., Froese, T., & Ikegami, T. (2015). Neural coordination can be enhanced by occasional interruption of normal firing patterns: A self-optimizing spiking neural network model. Neural Networks, 62, 39-46.