CFP: 6th Int. Conf. on the Theory and Practice of Natural Computing

***************************************************************************
6th INTERNATIONAL CONFERENCE ON THE THEORY AND PRACTICE OF NATURAL COMPUTING (TPNC 2017)

Prague, Czech Republic

December 18-20, 2017

Organized by:

Institute of Computer Science
Czech Academy of Sciences

Faculty of Mathematics and Physics
Charles University

Research Group on Mathematical Linguistics (GRLMC)
Rovira i Virgili University

http://grammars.grlmc.com/TPNC2017/
***************************************************************************

AIMS:

TPNC is a conference series intended to cover the wide spectrum of computational principles, models and techniques inspired by information processing in nature. TPNC 2017 will reserve significant room for young scholars at the beginning of their careers, and particular focus will be put on methodology. The conference aims at attracting contributions to nature-inspired models of computation, synthesizing nature by means of computation, nature-inspired materials, and information processing in nature.

VENUE:

TPNC 2017 will take place in Prague, whose historic centre is a UNESCO World Heritage Site and which is home to famous attractions such as Prague Castle and Charles Bridge. The venue will be:

Faculty of Mathematics and Physics
Charles University
Ke Karlovu 3
121 16 Praha 2

SCOPE:
(more…)

ALIFE XV Late Breaking Abstract

Can we incorporate sleep-like interruptions into evolutionary robotics?

Mario A. Zarco-Lopez and Tom Froese

Traditional use of Hopfield networks can be divided into two main categories: (1) constraint satisfaction based on a predefined weight space, and (2) model induction based on a training set of patterns. Recently, Watson et al. (2011) demonstrated that combining these two aspects, i.e., inducing a model of the network's attractors by applying Hebbian learning after constraint satisfaction, can lead to self-optimization of network connectivity. A key element of their approach is a repeated randomized reset and relaxation of the network state, which has been interpreted as similar to the function of sleep (Woodward, Froese, & Ikegami, 2015). This perspective might give rise to an alternative "wake-sleep" algorithm (Hinton, Dayan, Frey, & Neal, 1995). All of this research, however, has taken place with isolated artificial neural networks, which goes against decades of work on situated robotics (Cliff, 1991). We consider the challenges involved in extending this work on sleep-like self-optimization to the dynamical approach to cognition, in which behavior is seen as emerging from the interactions of brain, body and environment (Beer, 2000).
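The reset-relax-learn cycle described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the network size, learning rate, number of resets, and relaxation length are all assumed, illustrative values. Each cycle resets the state randomly (the sleep-like interruption), relaxes the Hopfield network to an attractor by asynchronous updates, and then reinforces that attractor with a small Hebbian weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                          # network size (illustrative)

# Random symmetric constraint weights with zero self-connections.
W = rng.uniform(-1.0, 1.0, (N, N))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

def relax(weights, state, steps=10 * N):
    """Asynchronous Hopfield updates; descends the energy toward an attractor."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

def energy(weights, state):
    """Hopfield energy: lower means better constraint satisfaction."""
    return -0.5 * state @ weights @ state

alpha = 0.001                   # Hebbian learning rate (illustrative)
W_learn = W.copy()
for _ in range(100):            # number of sleep-like resets (illustrative)
    s = rng.choice([-1, 1], N)          # randomized reset of the state
    s = relax(W_learn, s)               # constraint satisfaction (relaxation)
    W_learn += alpha * np.outer(s, s)   # Hebbian reinforcement of the attractor
    np.fill_diagonal(W_learn, 0.0)      # keep self-connections at zero

# Attractor quality is judged against the ORIGINAL constraints W,
# even though relaxation now runs on the learned weights W_learn.
e = energy(W, relax(W_learn, rng.choice([-1, 1], N)))
```

In the self-optimization result of Watson et al. (2011), attractors found after learning tend to satisfy the original constraints better (lower energy under W) than those reachable by relaxation alone, because the Hebbian updates enlarge the basins of the better attractors.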

Beer, R. D. (2000). Dynamical approaches to cognitive science. Trends in Cognitive Sciences, 4(3), 91-99.

Cliff, D. (1991). Computational neuroethology: A provisional manifesto. In J.-A. Meyer & S. W. Wilson (Eds.), From Animals to Animats (pp. 29-39). MIT Press.

Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The “wake-sleep” algorithm for unsupervised neural networks. Science, 268, 1158-1161.

Watson, R. A., Buckley, C. L., & Mills, R. (2011). Optimization in “self-modeling” complex adaptive systems. Complexity, 16(5), 17-26.

Woodward, A., Froese, T., & Ikegami, T. (2015). Neural coordination can be enhanced by occasional interruption of normal firing patterns: A self-optimizing spiking neural network model. Neural Networks, 62, 39-46.