Hi, I am Sunimali. I would like to get details on Java code for the vector quantization algorithm. This is for my college academic project.
Thanks
Learning Vector Quantization
Coded in VB.Net
View LVQ1Class.vb
View Form1.vb
Coded in C++
View cLVQ.h
View cLVQ.cpp
View globals.h
View maincode.cpp
Coded in Java
View LVQ_Example1.java
Coded in Python
View LVQ_Example1.py
LVQ Example 1
Most of this code was taken from the self-organizing map examples and modified for LVQ. The weights are not initialized to random values, and they're also updated according to a different scheme.
This is one of the ways in which LVQ is heavily dependent on supervised learning; the weights are initially set to match specific patterns, as well as training through the input nodes. Note both the "initializeWeights()" and "training()" functions.
Also, you can see how positive and negative reinforcement come into play in the UpdateWeights() function.
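The LVQ1 update rule described above can be sketched in Java. The names here (`weights`, `pattern`, `alpha`, `updateWeights`) are illustrative, mirroring the VB.Net example rather than any actual class members:

```java
public class Lvq1Update {
    // LVQ1 weight update: pull the winning prototype toward the input when its
    // class label matches the pattern's label (positive reinforcement), and push
    // it away when it does not (negative reinforcement).
    static void updateWeights(double[][] weights, double[] pattern,
                              int winner, boolean correctClass, double alpha) {
        for (int i = 0; i < pattern.length; i++) {
            double delta = alpha * (pattern[i] - weights[winner][i]);
            if (correctClass) {
                weights[winner][i] += delta;  // move toward the pattern
            } else {
                weights[winner][i] -= delta;  // move away from the pattern
            }
        }
    }

    public static void main(String[] args) {
        double[][] w = { { 0.0, 0.0 }, { 1.0, 1.0 } };
        double[] p = { 0.5, 0.5 };
        // Prototype 0 wins and has the correct class label.
        updateWeights(w, p, 0, true, 0.1);
        System.out.println(w[0][0] + " " + w[0][1]);  // prints "0.05 0.05"
    }
}
```

The winning prototype is the one closest to the input; only that prototype moves, which is what makes the scheme competitive rather than a full gradient update over all weights.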
Example Results 1
You could already guess that with that much supervised learning going on, the results would be a little too perfect.
All A's belong to cluster 0, B's to cluster 1, C's to cluster 2, etc.
Weights for cluster 0 initialized to pattern A1
Weights for cluster 1 initialized to pattern B1
Weights for cluster 2 initialized to pattern C1
Weights for cluster 3 initialized to pattern D1
Weights for cluster 4 initialized to pattern E1
Weights for cluster 5 initialized to pattern J1
Weights for cluster 6 initialized to pattern K1
Pattern A1 belongs to cluster 0
Pattern B1 belongs to cluster 1
Pattern C1 belongs to cluster 2
Pattern D1 belongs to cluster 3
Pattern E1 belongs to cluster 4
Pattern J1 belongs to cluster 5
Pattern K1 belongs to cluster 6
Pattern A2 belongs to cluster 0
Pattern B2 belongs to cluster 1
Pattern C2 belongs to cluster 2
Pattern D2 belongs to cluster 3
Pattern E2 belongs to cluster 4
Pattern J2 belongs to cluster 5
Pattern K2 belongs to cluster 6
Pattern A3 belongs to cluster 0
Pattern B3 belongs to cluster 1
Pattern C3 belongs to cluster 2
Pattern D3 belongs to cluster 3
Pattern E3 belongs to cluster 4
Pattern J3 belongs to cluster 5
Pattern K3 belongs to cluster 6
All are correct
Example Results 2
You might be wondering if there's any learning going on at all if the weights are already adjusted before the algorithm even starts.
Try this: comment out the following line in the UpdateWeights() function:
"w(DMin, i) -= Alpha * (mPattern(VectorNumber)(i) - w(DMin, i))"
And notice the difference in the results:
Weights for cluster 0 initialized to pattern A1
Weights for cluster 1 initialized to pattern B1
Weights for cluster 2 initialized to pattern C1
Weights for cluster 3 initialized to pattern D1
Weights for cluster 4 initialized to pattern E1
Weights for cluster 5 initialized to pattern J1
Weights for cluster 6 initialized to pattern K1
Pattern A1 belongs to cluster 0
Pattern B1 belongs to cluster 1
Pattern C1 belongs to cluster 2
Pattern D1 belongs to cluster 3
Pattern E1 belongs to cluster 4
Pattern J1 belongs to cluster 5
Pattern K1 belongs to cluster 6
Pattern A2 belongs to cluster 0
Pattern B2 belongs to cluster 1
Pattern C2 belongs to cluster 2
Pattern D2 belongs to cluster 2 - incorrect
Pattern E2 belongs to cluster 1 - incorrect
Pattern J2 belongs to cluster 5
Pattern K2 belongs to cluster 0 - incorrect
Pattern A3 belongs to cluster 0
Pattern B3 belongs to cluster 4 - incorrect
Pattern C3 belongs to cluster 2
Pattern D3 belongs to cluster 3
Pattern E3 belongs to cluster 4
Pattern J3 belongs to cluster 5
Pattern K3 belongs to cluster 6
As you can see, both positive and negative reinforcement play a role in the LVQ learning process.
/*
 * A simple learning vector quantization (LVQ) neural network used to map datasets
 * (currently without normalization of the input data).
 *
 * Copyright © stes 2011
 */
import java.io.IOException;
import java.util.ArrayList;
import java.util.Random;
public class VectorQuantizationNetwork
{
// for test purposes
public static void main(String[] args)
{
VectorQuantizationNetwork map = new VectorQuantizationNetwork(3, 2, 0.5);
map.addTrainingSet(new double[] { 0.0, 0.0, 5.0 });
map.addTrainingSet(new double[] { 0.0, 0.0, 5.0 });
map.addTrainingSet(new double[] { 0.0, 0.0, 5.0 });
map.addTrainingSet(new double[] { 1.0, 0.0, 0.0 });
map.addTrainingSet(new double[] { 1.0, 0.0, 0.0 });
map.addTrainingSet(new double[] { 1.0, 0.0, 0.0 });
map.addTrainingSet(new double[] { 1.0, 0.0, 0.0 });
map.addTrainingSet(new double[] { 1.0, 0.0, 0.0 });
System.out.println("start");
while (true)
{
for (int i = 0; i < 20; i++)
{
System.out.println();
}
map.trainIteration();
map.printResults();
try
{
System.in.read();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
private static Random _random = new Random();
private int _inputNeurons;
private int _outputNeurons;
private double[][] _weights;
private double[] _netInputs;
private double[] _outputs;
private double[] _currentInput;
private double[] _normalizer;
private int _winner;
private double _learnrate;
public double[] getCurrentInput()
{
return _currentInput;
}
public void setCurrentInput(double[] trainingSet)
{
_currentInput = trainingSet.clone();
}
private ArrayList<double[]> _trainingSets;
public VectorQuantizationNetwork(int inputNeurons, int outputNeurons, double learnrate)
{
_trainingSets = new ArrayList<double[]>();
_inputNeurons = inputNeurons;
_outputNeurons = outputNeurons;
_learnrate = learnrate;
_normalizer = new double[_inputNeurons];
_weights = new double[_inputNeurons][_outputNeurons];
_netInputs = new double[_outputNeurons];
_outputs = new double[_outputNeurons];
setCurrentInput(new double[_inputNeurons]);
this.initWeights();
}
public void addTrainingSet(double[] trainingSet)
{
if (trainingSet.length != _inputNeurons)
throw new IllegalArgumentException();
_trainingSets.add(trainingSet);
this.updateNormalizer(trainingSet);
}
// One training pass: for each training set, find the winning output neuron
// and reinforce its weights (competitive learning).
public void trainIteration()
{
for (double[] trainingSet : _trainingSets)
{
setCurrentInput(trainingSet);
this.calcOutput();
for (int i = 0; i < _inputNeurons; i++)
{
_weights[i][_winner] += _learnrate * _outputs[_winner] * getCurrentInput()[i];
}
}
}
private double neuronOutput(double netInput)
{
return Math.tanh(netInput);
}
private void calcNetInput()
{
for (int j = 0; j < _outputNeurons; j++)
{
_netInputs[j] = 0.0;
for (int i = 0; i < _inputNeurons; i++)
{
_netInputs[j] += _weights[i][j] * getCurrentInput()[i];
}
}
}
public void calcOutput()
{
_winner = -1;
double maxValue = Double.NEGATIVE_INFINITY;
this.calcNetInput();
for (int j = 0; j < _weights[0].length; j++)
{
_outputs[j] = this.neuronOutput(_netInputs[j]);
if (_outputs[j] > maxValue)
{
maxValue = _outputs[j];
_winner = j;
}
}
}
private void updateNormalizer(double[] trainingSet)
{
for (int i = 0; i < _inputNeurons; i++)
{
_normalizer[i] += trainingSet[i];
}
}
// Scales the current input to [-1, 1]; note this is not called anywhere yet,
// matching the "without normalization" caveat in the header comment.
private void normalize()
{
for (int i = 0; i < _inputNeurons; i++)
{
if (_normalizer[i] != 0)
{
getCurrentInput()[i] /= _normalizer[i];
getCurrentInput()[i] = getCurrentInput()[i] * 2 - 1;
}
}
}
private void initWeights()
{
for (int i = 0; i < _weights.length; i++)
{
for (int j = 0; j < _weights[0].length; j++)
{
_weights[i][j] = _random.nextDouble() * 2 - 1;
}
}
}
public void printWeights()
{
for (int i = 0; i < _inputNeurons; i++)
{
for (int j = 0; j < _outputNeurons; j++)
{
System.out.print(_weights[i][j] + "; ");
}
System.out.println();
}
}
public void printResults()
{
for (double[] trainingSet : _trainingSets)
{
setCurrentInput(trainingSet);
this.calcOutput();
for (int i = 0; i < _inputNeurons; i++)
{
System.out.print(trainingSet[i] + " ;");
}
System.out.print(" => ");
System.out.println(_winner);
}
}
}
Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.
Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model and to sparse coding models used in deep learning algorithms such as autoencoders.
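The nearest-centroid encoding described above can be sketched in Java. The codebook here is fixed for illustration; in practice it would be learned, for example with k-means or the LVQ training shown earlier in the thread:

```java
import java.util.Arrays;

public class VqEncode {
    // Quantize an input vector to the index of its nearest codebook vector,
    // using squared Euclidean distance. The codebook is assumed to be given.
    static int nearestCentroid(double[][] codebook, double[] x) {
        int best = 0;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int j = 0; j < codebook.length; j++) {
            double d = 0.0;
            for (int i = 0; i < x.length; i++) {
                double diff = x[i] - codebook[j][i];
                d += diff * diff;
            }
            if (d < bestDist) {
                bestDist = d;
                best = j;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] codebook = { { 0.0, 0.0 }, { 1.0, 1.0 } };
        double[][] data = { { 0.1, 0.2 }, { 0.9, 0.8 }, { 0.4, 0.7 } };
        // Each vector is replaced by a single codebook index - this index
        // stream is the lossy compressed representation.
        int[] indices = new int[data.length];
        for (int n = 0; n < data.length; n++) {
            indices[n] = nearestCentroid(codebook, data[n]);
        }
        System.out.println(Arrays.toString(indices));  // prints "[0, 1, 1]"
    }
}
```

Decompression simply looks each index back up in the codebook, which is why common vectors (near a centroid) reconstruct with low error while rare ones reconstruct with high error.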