Self Organizing Maps - II

Prerequisites:

My previous blog: https://machinelearners.net/2017/07/24/first-blog-post/

Some basic knowledge of the Python language and linear algebra.

The mathematics behind each process of self-organizing maps:

 

·      Competitive Process

In the competitive process, each input vector is compared against the weight vector of every neuron using a discriminant function, typically the squared Euclidean distance. The neuron whose weight vector is closest to the input wins the competition and is called the best matching unit (BMU).

 

[Image: competitive-process equations]
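The competitive step can be written out explicitly. Assuming the standard Kohonen formulation (which the distance_matrix and bmu methods in the code below implement):

```latex
% Discriminant function: squared Euclidean distance between
% the input x and the weight vector w_j of neuron j
d_j(\mathbf{x}) = \sum_{i=1}^{m} \left( x_i - w_{ji} \right)^2

% The winning neuron (best matching unit) is the one that minimizes it
i(\mathbf{x}) = \arg\min_j \, d_j(\mathbf{x})
```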

 

·      Co-operative Process

In the co-operative process, the winning neuron determines the center of a topological neighborhood of co-operating neurons. Neurons close to the BMU on the map grid are excited more strongly than distant ones, typically through a Gaussian neighborhood function whose radius shrinks over time, so the map becomes more locally tuned as training proceeds.

 

 

[Images: co-operative-process equations and neighborhood function]
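In the standard formulation (matching the bmu_distance and hood_radius methods in the code below), the neighborhood function and its shrinking radius are:

```latex
% Gaussian topological neighborhood around the winner i(x),
% where d_{j,i} is the lateral (grid) distance of neuron j from the BMU
h_{j,i(\mathbf{x})}(t) = \exp\!\left( -\frac{d_{j,i}^{2}}{2\sigma(t)^{2}} \right)

% The neighborhood radius decays exponentially with time,
% with initial radius sigma_0 and time constant tau
\sigma(t) = \sigma_0 \exp\!\left( -\frac{t}{\tau} \right)
```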

·      Synaptic Weight Adaptation process.

Synaptic weight adaptation is based on the Hebbian learning principle, which states that if presynaptic and postsynaptic activities are correlated, the connection between the neurons is strengthened by increasing its weight. To this Hebbian term we add a forgetting term so that the weights cannot grow without bound; once the weights saturate, learning stops.

[Image: weight-update equation]
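Combining the Hebbian term with the forgetting term yields the standard SOM update rule (implemented in the teach_row and teach methods below): each neuron in the neighborhood moves a fraction of the way toward the input.

```latex
% eta(t) is the learning rate, h the Gaussian neighborhood around the BMU
w_j(t+1) = w_j(t) + \eta(t)\, h_{j,i(\mathbf{x})}(t)\, \big( \mathbf{x} - w_j(t) \big)
```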

·      Algorithm of Self-Organizing Maps

1. Initialization: assign random values to the weight vectors.
2. Sampling: draw an input vector x from the training set.
3. Competition: find the best matching unit for x.
4. Co-operation: determine the neighborhood of excited neurons around the BMU.
5. Adaptation: move the weight vectors of the excited neurons toward x.
6. Repeat steps 2-5 for the chosen number of iterations.

[Image: algorithm flowchart]

·      Python Implementation of Self-Organizing Maps

In this code we use a self-organizing map to cluster inputs with similar properties from a high-dimensional (RGB) image.

import numpy as np
import math
from PIL import Image


class SOM:
    def __init__(self, x_size, y_size, trait_num, t_iter, t_step):
        # Randomly initialize the weight vectors
        self.weights = np.random.randint(256, size=(x_size, y_size, trait_num)).astype('float64')
        # Number of iterations, initial neighborhood radius and time constant
        self.t_iter = t_iter
        self.map_radius = max(self.weights.shape) / 2
        self.t_const = self.t_iter / math.log(self.map_radius)
        self.t_step = t_step

    def show(self):
        # Clip to the valid pixel range before displaying as an RGB image
        im = Image.fromarray(np.clip(self.weights, 0, 255).astype('uint8'), mode='RGB')
        im.show()

    def distance_matrix(self, vector):
        # Discriminant function: squared Euclidean distance to every node
        return np.sum((self.weights - vector) ** 2, 2)

    def bmu(self, vector):
        # Best matching unit: the node with the smallest discriminant value
        distance = self.distance_matrix(vector)
        return np.unravel_index(distance.argmin(), distance.shape)

    def bmu_distance(self, vector):
        # Topological neighborhood: squared grid distance of every node from the BMU
        x, y, rgb = self.weights.shape
        xi = np.arange(x).reshape(x, 1).repeat(y, 1)
        yi = np.arange(y).reshape(1, y).repeat(x, 0)
        return np.sum((np.dstack((xi, yi)) - np.array(self.bmu(vector))) ** 2, 2)

    def hood_radius(self, iteration):
        # Neighborhood radius shrinks exponentially over time
        return self.map_radius * math.exp(-iteration / self.t_const)

    def teach_row(self, vector, i, dis_cut, dist):
        # Gaussian influence of the BMU on its neighborhood
        hood_radius_2 = self.hood_radius(i) ** 2
        bmu_distance = self.bmu_distance(vector).astype('float64')
        if dist is None:
            temp = hood_radius_2 - bmu_distance
        else:
            temp = dist ** 2 - bmu_distance
        influence = np.exp(-bmu_distance / (2 * hood_radius_2))
        if dis_cut:
            # Zero out the influence beyond the cutoff radius
            influence *= ((np.sign(temp) + 1) / 2)
        # Scale the Hebbian update by the learning-rate step
        return self.t_step * np.expand_dims(influence, 2) * (vector - self.weights)

    def teach(self, t_set, distance_cutoff=False, distance=None):
        # Weight update: move each node toward the training vectors
        for i in range(self.t_iter):
            for x in t_set:
                self.weights += self.teach_row(x, i, distance_cutoff, distance)
        self.show()


# We apply the SOM on a 200*200 map of RGB (3-component) nodes,
# trained for 100 iterations with a learning-rate step of 0.1
s = SOM(200, 200, 3, 100, 0.1)
# t_set = np.array([[200, 0, 0], [0, 200, 0], [0, 0, 200], [120, 0, 100]])
t_set = np.random.randint(256, size=(15, 3))
s.teach(t_set)
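As a quick standalone sanity check of the competitive step (a minimal sketch, separate from the class above), we can verify that the BMU computation picks the node nearest to an input color:

```python
import numpy as np

# A tiny 1x3 "map" whose three nodes are pure red, green and blue
weights = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype='float64')

def bmu(weights, vector):
    # Discriminant: squared Euclidean distance from the input to every node
    distance = np.sum((weights - vector) ** 2, axis=2)
    return np.unravel_index(distance.argmin(), distance.shape)

# A dark-green input is closest to the green node at grid position (0, 1)
x, y = bmu(weights, np.array([10, 200, 10]))
print(x, y)  # -> 0 1
```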

The nodes of different colors in the following self-organized map represent clusters of similar inputs from the training set.

[Image: trained SOM showing clusters of similar colors]
