learning

Self-modeling in Hopfield Neural Networks with Continuous Activation Function

Finally, a large part of Mario's thesis on unsupervised learning in artificial neural networks has been published and is now available open access:

Self-modeling in Hopfield Neural Networks with Continuous Activation Function

Mario Zarco and Tom Froese

Hopfield networks can exhibit many different attractors, most of which are local optima. It has been demonstrated that combining state randomization and Hebbian learning enlarges the basins of attraction of globally optimal attractors. This procedure is called self-modeling, and it has so far been applied to symmetric Hopfield networks with discrete states and without self-recurrent connections. We are interested in knowing which of these topological constraints can be relaxed, so the self-modeling process is tested here in asymmetric Hopfield networks with continuous states and self-recurrent connections. The best results are obtained in networks with a modular structure.
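The core procedure is compact enough to sketch. Below is a minimal Python illustration of the self-modeling loop under the relaxed constraints described in the abstract (asymmetric weights, self-recurrent connections, continuous tanh states); the parameter values and function names are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def relax(W, s, steps=200, gain=4.0):
    """Iterate continuous Hopfield dynamics until the state settles onto an attractor."""
    for _ in range(steps):
        s = np.tanh(gain * (W @ s))   # continuous activation; W may be asymmetric with self-connections
    return s

def self_model(W, n_resets=500, alpha=1e-4, seed=0):
    """Repeatedly randomize the state, relax to an attractor, and reinforce it with Hebbian learning."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    for _ in range(n_resets):
        s = rng.uniform(-1, 1, n)          # state randomization ("reset")
        s = relax(W, s)                    # constraint satisfaction by relaxation
        W = W + alpha * np.outer(s, s)     # Hebbian reinforcement of the visited attractor
    return W

# Example: a small random network, self-recurrent connections left in place
rng = np.random.default_rng(1)
W_learned = self_model(rng.uniform(-1, 1, (20, 20)))
```

The design point is the separation of timescales: each reset-and-relax episode samples one attractor, and the slow Hebbian update gradually reshapes the weight landscape so that the widest, typically best, basins grow.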


Mario Zarco graduates with honors!

Today Mario Zarco graduated with honors from UNAM's Master's program in Computer Science and Engineering for his work on self-optimization in neural networks.

The title and extended abstract of his thesis are as follows:

Study of Self-Optimization in Hopfield Neural Networks (Estudio de Auto-Optimización en Redes Neuronales de Hopfield)
Mario Alberto Zarco López

Discrete-time Hopfield neural networks, whose dynamics exhibit multiple fixed-point attractors, have been widely used in two settings: (1) associative memory, based on learning a set of training patterns that are represented by attractors, and (2) optimization, based on representing a constraint satisfaction problem in the network's topology such that the attractors are solutions of that problem. In the latter case, the network's energy function must have the same form as the function to be optimized, so that minima of the former are also minima of the latter. Although it has been shown that low-energy attractors have large basins of attraction, the network usually becomes trapped in local minima. It was recently demonstrated that discrete-time Hopfield networks can converge on globally optimal attractors by enlarging the best basins of attraction. The network combines learning its own attractors through Hebbian learning with randomization of the neural states once the network has reinforced its current configuration.
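For context, the energy function referred to here is, in its simplest textbook form (not quoted from the thesis),

E(s) = -\frac{1}{2} \sum_{i,j} w_{ij} s_i s_j,

so encoding a constraint satisfaction problem amounts to choosing the weights w_{ij} such that E coincides with the cost function to be minimized; the relaxation dynamics then descend on E, which is why the network can become trapped in local minima unless the basins of the best attractors are enlarged.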

ALIFE XV Late Breaking Abstract

Can we incorporate sleep-like interruptions into evolutionary robotics?

Mario A. Zarco-Lopez and Tom Froese

Traditional use of Hopfield networks can be divided into two main categories: (1) constraint satisfaction based on a predefined weight space, and (2) model induction based on a training set of patterns. Recently, Watson et al. (2011) have demonstrated that combining these two aspects, i.e. by inducing a model of the network's attractors by applying Hebbian learning after constraint satisfaction, can lead to self-optimization of network connectivity. A key element of their approach is a repeated randomized reset and relaxation of the network state, which has been interpreted as similar to the function of sleep (Woodward, Froese, & Ikegami, 2015). This perspective might give rise to an alternative "wake-sleep" algorithm (Hinton, Dayan, Frey, & Neal, 1995). All of this research, however, has taken place with isolated artificial neural networks, which goes against decades of work on situated robotics (Cliff, 1991). We consider the challenges involved in extending this work on sleep-like self-optimization to the dynamical approach to cognition, in which behavior is seen as emerging from the interactions of brain, body and environment (Beer, 2000).
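Purely as an illustration of the kind of setup this abstract points toward (and not something reported in the abstract itself), here is a sketch of a sensorimotor loop with sleep-like interruptions. The environment interface, the choice of motor neurons, and all parameter values are hypothetical placeholders.

```python
import numpy as np

class DummyEnv:
    """Placeholder environment: a fixed sensory input and a no-op motor command (illustrative only)."""
    def sense(self):
        return 0.1
    def act(self, motor):
        pass

def agent_step(W, s, sensor, dt=0.05, gain=4.0):
    """One Euler step of a simple continuous-time neural controller driven by a sensor reading."""
    return s + dt * (-s + np.tanh(gain * (W @ s + sensor)))

def run_with_sleep(W, env, lifetime=5000, sleep_every=500, alpha=1e-4, seed=0):
    """Sensorimotor loop with occasional 'sleep': the neural state is randomized and the
    configuration visited just before the interruption is reinforced by Hebbian learning."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    s = rng.uniform(-1, 1, n)
    for t in range(lifetime):
        s = agent_step(W, s, env.sense())
        env.act(s[:2])                        # first two neurons read out as motor commands (illustrative)
        if (t + 1) % sleep_every == 0:
            W = W + alpha * np.outer(s, s)    # reinforce the current configuration
            s = rng.uniform(-1, 1, n)         # sleep-like interruption: randomize the neural state
    return W

W = run_with_sleep(np.random.default_rng(1).uniform(-1, 1, (10, 10)), DummyEnv())
```

Note that the interruption resets only the neural state, not the weights, mirroring the separation between fast relaxation and slow Hebbian learning in the original self-optimization procedure.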

Beer, R. D. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences, 4(3), 91-99.

Cliff, D. (1991). Computational neuroethology: A provisional manifesto. In J.-A. Meyer & S. W. Wilson (Eds.), From Animals to Animats (pp. 29-39). MIT Press.

Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The “wake-sleep” algorithm for unsupervised neural networks. Science, 268, 1158-1161.

Watson, R. A., Buckley, C. L., & Mills, R. (2011). Optimization in “self-modeling” complex adaptive systems. Complexity, 16(5), 17-26.

Woodward, A., Froese, T., & Ikegami, T. (2015). Neural coordination can be enhanced by occasional interruption of normal firing patterns: A self-optimizing spiking neural network model. Neural Networks, 62, 39-46.