Finally, a large part of Mario’s thesis on unsupervised learning in artificial neural networks has been published and is available open access:
Self-modeling in Hopfield Neural Networks with Continuous Activation Function
Mario Zarco and Tom Froese
Hopfield networks can exhibit many different attractors, most of which are local optima. It has been demonstrated that combining state randomization with Hebbian learning enlarges the basins of attraction of globally optimal attractors. This procedure, called self-modeling, has so far been applied to symmetric Hopfield networks with discrete states and without self-recurrent connections. We are interested in knowing which of these topological constraints can be relaxed, so we test the self-modeling process in asymmetric Hopfield networks with continuous states and self-recurrent connections. The best results are obtained in networks with a modular structure.
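The self-modeling loop described in the abstract can be sketched in a few lines: repeatedly randomize the network state, let the dynamics settle to an attractor, and reinforce that attractor with a Hebbian update. This is only a minimal illustration, not the paper's implementation; the network size, tanh activation, update rule, learning rate, and iteration counts are all assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                                # number of units (illustrative choice)
W = rng.uniform(-1, 1, (N, N))        # asymmetric weights; self-connections allowed

def converge(W, s, steps=200, tau=0.1):
    """Relax continuous states toward an attractor (discrete-time dynamics, assumed form)."""
    for _ in range(steps):
        s = (1 - tau) * s + tau * np.tanh(W @ s)
    return s

def energy(W, s):
    """Hopfield-style energy of state s under weights W; lower is better."""
    return -0.5 * s @ W @ s

alpha = 0.001                         # Hebbian learning rate (assumed)
W_learn = W.copy()

for epoch in range(100):
    s = rng.uniform(-1, 1, N)         # randomize the state
    s = converge(W_learn, s)          # settle to an attractor
    W_learn += alpha * np.outer(s, s) # Hebbian reinforcement of the visited attractor

# Compare attractor quality under the ORIGINAL weights, with and without self-modeling:
before = np.mean([energy(W, converge(W, rng.uniform(-1, 1, N))) for _ in range(20)])
after = np.mean([energy(W, converge(W_learn, rng.uniform(-1, 1, N))) for _ in range(20)])
```

The idea is that the Hebbian updates deepen and widen the basins of the attractors the network actually visits, so random initial states become more likely to land in globally good configurations of the original constraint problem.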
Reblogged this on Dr. Tom Froese and commented:
The unsupervised learning technique our group has been working on has been extended to a more general class of artificial neural networks.