The core part of Mario’s Master’s thesis has now been published in Frontiers in Robotics and AI!
We were able to generalize the powerful self-optimization process to continuous-time neural networks, the class of neural networks most widely used in evolutionary robotics.
Mario Zarco and Tom Froese
A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors to which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application…
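The paragraph above can be illustrated with a minimal sketch of the self-optimization loop in its original, discrete Hopfield-network setting (the paper's contribution, extending this to continuous-time networks, is not shown here). All names and parameter values below are illustrative assumptions: the network relaxes to an attractor, takes a small Hebbian learning step on that attractor state (the associative memory), and repeats; the learned weights gradually come to favor better attractors of the original problem.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of units (illustrative size)

# Random symmetric weights define a constraint-satisfaction problem
# whose attractors are local minima of the Hopfield energy.
W = rng.normal(size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def energy(weights, s):
    """Hopfield energy; lower is a more optimal configuration."""
    return -0.5 * s @ weights @ s

def relax(weights, s, steps=10 * N):
    """Asynchronous updates; energy never increases, so the state
    settles toward an attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def self_optimize(problem_W, epochs=200, alpha=0.0005):
    """Repeatedly converge from random initial states and imprint each
    attractor with a small Hebbian step (the associative memory)."""
    L = problem_W.copy()  # learned weights start as the original problem
    for _ in range(epochs):
        s = rng.choice([-1, 1], size=N)
        s = relax(L, s)
        L += alpha * np.outer(s, s)  # Hebbian learning on the attractor
        np.fill_diagonal(L, 0)
    return L

# Evaluate attractors found by given dynamics against the ORIGINAL
# problem weights W; after self-optimization the learned dynamics
# typically converge to lower-energy (better) attractors of W.
def mean_attractor_energy(dynamics_W, trials=20):
    return np.mean([
        energy(W, relax(dynamics_W, rng.choice([-1, 1], size=N)))
        for _ in range(trials)
    ])

before = mean_attractor_energy(W)
after = mean_attractor_energy(self_optimize(W))
```

Note the key asymmetry: learning happens on the learned weights `L`, but solution quality is always scored with the original weights `W`, since self-optimization is only useful if the reshaped dynamics solve the *original* problem better.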