New paper: Self-Optimization in Continuous-Time Recurrent Neural Networks

The core part of Mario’s Master’s thesis has now been published in Frontiers in Robotics and AI!

Dr. Tom Froese

We were able to generalize the powerful self-optimization process to continuous-time recurrent neural networks (CTRNNs), the class of neural networks most widely used in evolutionary robotics.
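For readers unfamiliar with CTRNNs, the sketch below shows the standard formulation of their dynamics with a simple Euler integration step. This is only an illustration of the model class, not code from the paper, and all names (y, W, theta, tau, I) are chosen for the example.

```python
import numpy as np

def ctrnn_step(y, W, theta, tau, I, dt=0.01):
    """One Euler step of the standard CTRNN equation:
    tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i,
    where sigma is the logistic function.
    """
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))  # firing rates
    dydt = (-y + W @ sigma + I) / tau
    return y + dt * dydt
```

Here y holds the neuron potentials, W the connection weights, theta the biases, tau the time constants, and I any external input.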

Self-Optimization in Continuous-Time Recurrent Neural Networks

Mario Zarco and Tom Froese

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application…
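To make the procedure described in the abstract concrete, here is a minimal sketch of the discrete, Hopfield-style version of self-optimization on which the technique was originally demonstrated. The paper's contribution is the generalization to CTRNNs, which differs in its details; everything below (function names, parameters, learning rate) is illustrative rather than taken from the paper.

```python
import numpy as np

def self_optimize(W, n_resets=1000, relax_sweeps=10, alpha=1e-4, rng=None):
    """Self-optimization on a Hopfield-style network with symmetric
    weights W (zero diagonal). Each reset: start from a random state,
    relax to an attractor under the combined weights W + L, then
    reinforce that attractor with a small Hebbian update to the
    learned weights L. Over many resets the updates generalize across
    the visited (typically suboptimal) attractors and enlarge the
    basins of lower-energy configurations of W.
    """
    n = W.shape[0]
    rng = rng or np.random.default_rng()
    L = np.zeros_like(W)                      # learned associative weights
    for _ in range(n_resets):
        M = W + L                             # effective weights this reset
        s = rng.choice([-1.0, 1.0], size=n)   # random initial state
        for _ in range(relax_sweeps * n):     # asynchronous relaxation
            i = rng.integers(n)
            s[i] = 1.0 if M[i] @ s >= 0 else -1.0
        L += alpha * np.outer(s, s)           # Hebbian reinforcement
    return L
```

After enough resets, relaxing under W + L should reliably reach lower-energy states of the original W than relaxation under W alone, which is the sense in which the network "optimizes its structure".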

