Kohonen Self Organizing Networks

Posted on May 16, 2016



Kohonen’s networks are one of the basic types of self-organizing neural networks. The ability to self-organize opens up new possibilities: adaptation to formerly unknown input data. It seems to be the most natural way of learning, the one used in our brains, where no patterns are predefined; patterns take shape during the learning process, which is combined with normal operation. “Kohonen’s networks” is in fact a name for a whole group of nets which use a self-organizing, competitive learning method. We present a signal at the net’s inputs and then choose the winning neuron, the one which corresponds to the input vector best. The precise scheme of the rivalry, and of the subsequent modification of the synaptic weights, may take various forms; there are many rivalry-based sub-types which differ in the exact self-organizing algorithm.

 

    ::Architecture of self-organizing maps::

The structure of a neural network is a crucial matter. A single neuron is a simple mechanism and is not able to do much by itself; only a compound of neurons makes complicated operations possible. Because of our limited knowledge about the actual rules of human brain functioning, many different architectures were created which try to imitate the structure and behaviour of the human nervous system. Most often a one-way, one-layer network architecture is used. This follows from the fact that all neurons must participate in the rivalry with the same rights, so each of them must have as many inputs as the whole system.

[Figure: Neural network]

[Figure: 2-D map of neurons]

    ::Stages of operations::

The functioning of a self-organizing neural network is divided into three stages:

  • construction
  • learning
  • identification

     A system which is supposed to realize the functioning of a self-organizing network should consist of a few basic elements. The first of them is a matrix of neurons stimulated by input signals. Those signals should describe some attributes of events that occur in the surroundings; thanks to that description, the net is able to group those events. Information about events is translated into impulses which stimulate neurons. The group of signals transferred to each neuron does not have to be identical, and even their number may vary. However, they have to satisfy one condition: they must define those events unambiguously.

     Another part of the net is a mechanism which measures the degree of similarity between each neuron’s weights and the input signal, and selects the unit with the best match: the winner. At the beginning the weights are small random numbers; it is important that no symmetry occurs. During learning, those weights are modified so as to best reflect the internal structure of the input data. However, there is a risk that neurons could lock onto some values before the groups are correctly recognized; in that case the learning process should be repeated with different initial weights.
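The winner-selection mechanism described above can be sketched in Python with NumPy; the function and variable names are illustrative, and Euclidean distance is assumed as the similarity measure:

```python
import numpy as np

def best_matching_unit(weights, x):
    """Return the index of the neuron whose weight vector is closest
    (in Euclidean distance) to the input vector x.

    weights: (n_neurons, n_inputs) array of synaptic weights
    x:       (n_inputs,) input vector
    """
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances))

# Weights start as small random numbers, which breaks symmetry.
rng = np.random.default_rng(0)
weights = rng.uniform(-0.1, 0.1, size=(9, 2))  # 9 neurons, 2 inputs each
winner = best_matching_unit(weights, np.array([0.05, -0.02]))
```

Any measure of similarity could be substituted for the Euclidean distance, as long as it unambiguously ranks the neurons against the input.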

     Finally, it is absolutely necessary for the self-organizing process that the net is able to adapt the weight values of the winning neuron and its neighbours, according to the strength of the response. The net topology can be defined in a very simple way by specifying the neighbours of every neuron. Let us call the unit whose response to a stimulation is maximal the image of that stimulation. Then we can say that the net is ordered if the topological relations between input signals and their images are identical.

 

    ::Algorithm of learning::

The name of the whole class of networks comes from the algorithm known as Kohonen’s self-organizing maps, described in the publication “Self-Organizing Map”. Kohonen proposed two kinds of neighbourhood function: rectangular and Gaussian. The first is:

    h(d) = 1 if d ≤ λ, and 0 otherwise

and the second:

    h(d) = exp(−d² / 2λ²)

where d is the distance between a neuron and the winner on the map, and λ (“lambda”) is the radius of the neighbourhood, which decreases over time.
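The two neighbourhood functions can be sketched as follows (a minimal illustration, assuming `dist` is an array of map distances from the winner and `lam` is the radius λ):

```python
import numpy as np

def rectangular(dist, lam):
    # Full influence for neurons within radius lambda of the winner,
    # no influence outside it.
    return np.where(dist <= lam, 1.0, 0.0)

def gaussian(dist, lam):
    # Smoothly decaying influence; lambda controls the spread.
    return np.exp(-(dist ** 2) / (2 * lam ** 2))
```

Shrinking `lam` over the course of training narrows the neighbourhood, so early iterations order the map globally while later ones fine-tune individual neurons.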

Use of Kohonen’s method gives better results than the “Winner Takes All” method: the organization of the net is better (the arrangement of the neurons represents the distribution of the input data more faithfully) and the algorithm converges faster. On the other hand, a single iteration takes several times longer, because the weights of many neurons, not only the winner’s, have to be modified.
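Putting the pieces together, the learning algorithm can be sketched as a short training loop (an illustrative implementation with a Gaussian neighbourhood; the grid shape, decay schedules, and parameter names are assumptions, not part of the original description):

```python
import numpy as np

def train_som(data, grid_shape=(5, 5), epochs=20, eta0=0.5, lam0=2.0, seed=0):
    """Minimal Kohonen SOM sketch: every neuron inside the shrinking
    neighbourhood of the winner is pulled toward the input, not only
    the winner itself (unlike Winner Takes All)."""
    rows, cols = grid_shape
    rng = np.random.default_rng(seed)
    # Grid coordinates of each neuron, used for the map-distance to the winner.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    # Small random initial weights to avoid symmetry.
    weights = rng.uniform(-0.1, 0.1, size=(rows * cols, data.shape[1]))
    for t in range(epochs):
        eta = eta0 * (1 - t / epochs)          # learning rate decays over time
        lam = lam0 * (1 - t / epochs) + 0.01   # neighbourhood radius decays too
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            grid_dist = np.linalg.norm(coords - coords[winner], axis=1)
            h = np.exp(-(grid_dist ** 2) / (2 * lam ** 2))  # Gaussian neighbourhood
            weights += eta * h[:, None] * (x - weights)
    return weights
```

Because the update touches every neuron weighted by `h`, one iteration is more expensive than a pure winner-only update, which is exactly the trade-off noted above.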


Posted by Akash Kurup

Founder and C.E.O, World4Engineers Educationist and Entrepreneur by passion. Orator and blogger by hobby

Website: http://world4engineers.com