Learning Vector Quantization

Posted by Akash Kurup on May 16, 2016

Learning vector quantization (LVQ) is a prototype-based supervised classification algorithm; it is the supervised counterpart of vector quantization systems.

LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all, Hebbian-learning-based approach. It is a precursor to self-organizing maps (SOM) and related to neural gas and to the k-nearest neighbors algorithm (k-NN). LVQ was invented by Teuvo Kohonen.

An LVQ system is represented by prototypes W = (\vec{W_1}, \dots, \vec{W_M}) which are defined in the feature space of the observed data. In winner-take-all training algorithms one determines, for each data point, the prototype that is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted: the winner is moved closer to the input if it classifies the data point correctly, or moved away if it classifies it incorrectly.

An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain. LVQ systems can be applied to multi-class classification problems in a natural way, and they are used in a variety of practical applications.

A key issue in LVQ is the choice of an appropriate measure of distance or similarity for training and classification. Recently, techniques have been developed which adapt a parameterized distance measure in the course of training the system.
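As an illustration, here is a minimal sketch of one such parameterized distance: a weighted squared Euclidean distance whose per-feature relevance weights could themselves be adapted during training, as in relevance-learning variants such as RLVQ and GRLVQ. The function name and the NumPy formulation are assumptions for illustration, not part of the original algorithm.

```python
import numpy as np

def weighted_squared_euclidean(x, w, relevances):
    """Parameterized distance d(x, w) = sum_j lambda_j * (x_j - w_j)^2.

    `relevances` (the lambda_j) is a vector of non-negative per-feature
    weights; adapting it alongside the prototypes during training is the
    idea behind relevance-learning LVQ variants (e.g. RLVQ / GRLVQ).
    """
    diff = np.asarray(x, dtype=float) - np.asarray(w, dtype=float)
    return float(np.sum(np.asarray(relevances, dtype=float) * diff ** 2))
```

With all relevance weights equal to 1, this reduces to the ordinary squared Euclidean distance used in the basic algorithm below.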

LVQ can be of great help in classifying text documents.


An informal description of the algorithm follows. It consists of three basic steps. The algorithm's input is:

  • the number of neurons the system will have, M
  • the initial weight vector \vec{W_i} of each neuron, for i = 0, 1, ..., M - 1
  • the learning rate \eta, which controls how fast the neurons learn
  • an input list L containing the vectors used to train the neurons

The algorithm’s flow is:

  1. For the next input \vec{X} in L, find the neuron \vec{W_m} for which d(\vec{X}, \vec{W_m}) attains its minimum value, where d is the metric used (Euclidean, etc.).
  2. Update \vec{W_m}; that is, move \vec{W_m} closer to the input \vec{X}:
     \vec{W_m} \gets \vec{W_m} + \eta \cdot \left( \vec{X} - \vec{W_m} \right).
     (In the supervised LVQ setting described above, the sign of the update is reversed, \vec{W_m} \gets \vec{W_m} - \eta \cdot \left( \vec{X} - \vec{W_m} \right), when the winner's class label does not match that of \vec{X}, so a misclassifying prototype is pushed away.)
  3. While there are vectors left in L, go to step 1; otherwise terminate.
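
Putting the three steps together, the sketch below is a minimal Python/NumPy rendering of the procedure, using plain Euclidean distance for d and including the supervised rule in which the winner is pushed away when its class label disagrees with the input's. The function names, signatures, and defaults (train_lvq, classify, eta=0.1, epochs=10) are illustrative assumptions, and the optional multiple passes over L (epochs) are a common practical extension rather than part of the description above.

```python
import numpy as np

def train_lvq(X, y, prototypes, proto_labels, eta=0.1, epochs=10):
    """Minimal LVQ1-style training sketch (names/defaults are illustrative).

    X            : (n_samples, n_features) training vectors, the list L
    y            : (n_samples,) class labels of the training vectors
    prototypes   : (M, n_features) initial neuron weight vectors W_i
    proto_labels : (M,) class label assigned to each prototype
    eta          : learning rate
    """
    prototypes = np.array(prototypes, dtype=float)  # work on a copy
    for _ in range(epochs):
        # Step 3: sweep through every vector in L (optionally several times).
        for x, label in zip(np.asarray(X, dtype=float), y):
            # Step 1: find the winner, i.e. the prototype W_m that
            # minimizes d(X, W_m) under the Euclidean metric.
            m = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
            # Step 2: move the winner toward x if its class matches,
            # away from x otherwise (the supervised LVQ update).
            sign = 1.0 if proto_labels[m] == label else -1.0
            prototypes[m] += sign * eta * (x - prototypes[m])
    return prototypes

def classify(x, prototypes, proto_labels):
    """Label x with the class of its nearest prototype."""
    m = int(np.argmin(np.linalg.norm(prototypes - np.asarray(x), axis=1)))
    return proto_labels[m]
```

Classification then amounts to a nearest-neighbor search over the M prototypes rather than over the full training set, which is what makes the learned prototypes compact and easy to interpret.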

Posted by Akash Kurup

Founder and C.E.O. of World4Engineers. Educationist and entrepreneur by passion; orator and blogger by hobby.

Website: http://world4engineers.com