To follow this post you should know how artificial neural networks work; for that you can refer to my video: https://www.youtube.com/watch?v=GNit49Djv3g&lc=z12wubx4do2tyn3gx04cifnj3zj5cjug10s
A self-organizing map is a neurobiologically motivated technique. In the human brain, different types of sensory input, such as visual (eyes) or auditory (ears), are understood in different regions of the cerebral cortex. For this reason one can believe that there is a topological ordering to the computations that take place in the brain, i.e. a spatial relationship between the type of input and the part of the brain that processes it. Self-organizing maps are based on this function of the brain.
For example, suppose in the future we have to program a robot that captures different types of sensory input from its surroundings and has a different algorithm stored in a different region of its memory for each type of input. From the raw sensory input alone, it has to decide which region's memory to access so that it can make meaning out of the captured data using the right algorithm. That is where self-organizing maps come into the scene.
Self-organizing maps are an unsupervised learning technique that uses artificial neural networks. In a nutshell, you can think of them as mapping high-dimensional instances of input data, represented by the input layer of neurons, to a low-dimensional output, represented by the output layer of neurons. Self-organizing maps rest on two very important pillars: competitive learning and the winner-takes-all approach. Neurons in the output layer compete with each other, and the winning neuron takes all. We will discuss this in detail.
Models of Self organizing maps.
Willshaw-von der Malsburg model
The Willshaw-von der Malsburg model explains retinotopic mapping, the mapping from the retina to the visual cortex in our brain. Here the retina consists of presynaptic neurons, our input layer, which are fully connected to the postsynaptic neurons in the visual cortex, our output layer. In this model both the input layer and the output layer are organized.
Note:- The electrical signals of presynaptic neurons are based on geometric proximity, meaning that neurons close to each other will have highly correlated electrical signals.
In this model the dimensionality of the input layer and the output layer has to be the same, which is its biggest limitation, as it does not allow dimensionality reduction of the input space.
Kohonen model, or Kohonen self-organizing maps.
This model is more popular than its previously discussed counterpart because it allows dimensionality reduction: only its output layer is organized, while its input layer is unorganized.
The Kohonen model is based on a vector-coding algorithm, which can simply be described as the optimal placement of a fixed number of vectors, called codewords, into a much higher-dimensional input space.
So the Kohonen model can be used for data compression and dimensionality reduction.
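As a small illustration of the vector-coding idea, here is a minimal NumPy sketch (the data, the codebook size, and the helper name `encode` are all illustrative) in which each input vector is compressed down to the index of its nearest codeword:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))    # 200 input vectors in 8 dimensions
# pick 16 distinct data points as the initial codewords
codebook = data[rng.choice(200, size=16, replace=False)].copy()

def encode(x, codebook):
    """Return the index of the nearest codeword: the compressed representation."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

codes = np.array([encode(x, codebook) for x in data])
reconstructed = codebook[codes]     # decode: each vector replaced by its codeword
# 200 x 8 floats are now stored as 200 small integers plus a 16 x 8 codebook
```

Training a SOM refines the placement of these codewords so they cover the input space well.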
Essential processes in the formation of Self-Organizing maps.
* Competition.
Each neuron computes a discriminant function for each input pattern, and this discriminant function provides the basis for the competition. The neuron with the largest discriminant function value is the winner. This process corresponds to long-range inhibition in self-organizing maps.
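With Euclidean distance as the discriminant function (a common choice; here the *smallest* distance corresponds to the largest response, and the helper name below is illustrative), the competition step amounts to finding the best-matching unit on the output grid:

```python
import numpy as np

def best_matching_unit(x, weights):
    """weights has shape (rows, cols, dim); returns the grid coordinates
    of the neuron whose weight vector is closest to the input x."""
    distances = np.linalg.norm(weights - x, axis=2)  # discriminant for every neuron
    return np.unravel_index(np.argmin(distances), distances.shape)

rng = np.random.default_rng(1)
weights = rng.random((5, 5, 3))   # a 5x5 output grid of 3-dimensional weights
x = weights[2, 4] + 0.001         # an input very close to neuron (2, 4)
best_matching_unit(x, weights)    # → (2, 4): the nearest neuron wins
```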
* Cooperation.
The winning neuron determines the spatial location of a topological neighborhood of excited neurons. In simple terms, the winning neuron has the power to decide the boundaries of the neighborhood region of excited neurons around it. It does so through a neighborhood function that determines the spatial extent of the topological neighborhood.
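A common choice of neighborhood function is a Gaussian centred on the winner. This sketch assumes a rectangular output grid (the function name and parameters are illustrative):

```python
import numpy as np

def neighborhood(winner, grid_shape, sigma):
    """Gaussian topological neighborhood centred on the winning neuron.
    Returns a (rows, cols) array of values in (0, 1]; sigma sets the radius."""
    rows, cols = np.indices(grid_shape)
    d2 = (rows - winner[0]) ** 2 + (cols - winner[1]) ** 2  # squared grid distance
    return np.exp(-d2 / (2 * sigma ** 2))

h = neighborhood((2, 2), (5, 5), sigma=1.0)
# h[2, 2] == 1.0 for the winner; h decays smoothly with distance from it
```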
* Synaptic Adaptation.
Synaptic adaptation enables the excited neurons to increase their individual values of the discriminant function in relation to the input pattern. Basically, if presynaptic and postsynaptic properties are correlated, the connection is strengthened by increasing the weight.
Applications of SOMs.
SOMs are commonly used as visualization aids. They can make it easy for us humans to see relationships between vast amounts of data. Let me show you an example.
World Poverty Map.
A SOM has been used to classify statistical data describing various quality-of-life factors such as state of health, nutrition, and educational services. Countries with similar quality-of-life factors end up clustered together. Countries with a better quality of life are situated toward the upper left, while the most poverty-stricken countries are toward the lower right. The hexagonal grid is a unified distance matrix, commonly known as a u-matrix. Each hexagon represents a node in the SOM.
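As a rough sketch of how a u-matrix is computed, here is a version for a square grid (a simplification of the hexagonal layout described above; the function name is illustrative): each node's value is its average weight-vector distance to its immediate grid neighbours, so high values mark cluster boundaries and low values mark dense clusters.

```python
import numpy as np

def u_matrix(weights):
    """Average distance from each node's weight vector to its grid neighbours.
    weights has shape (rows, cols, dim)."""
    rows, cols, _ = weights.shape
    um = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            dists = []
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    dists.append(np.linalg.norm(weights[i, j] - weights[ni, nj]))
            um[i, j] = np.mean(dists)
    return um
```

On a grid with two well-separated clusters of weight vectors, the u-matrix is near zero inside each cluster and large along the boundary between them.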
This colour information can then be plotted onto a map of the world like so:
SOMs have been applied in many areas. Here are just some of them.
Image browsing systems
Interpreting seismic activity
Speech recognition (this is what Kohonen used them for initially)
Separating sound sources
Coming up next: the mathematical intuition behind each process of self-organizing maps, and their implementation in Python code.
Self-organizing maps lecture: https://www.youtube.com/watch?v=LjJeT7rwvF4&index=35&list=PL53BE265CE4A6C056
A.I. Junkie blog on SOMs: http://www.ai-junkie.com/ann/som/som1.html