ARTIFICIAL NEURAL NETWORKS BASED DEVNAGRI NUMERAL RECOGNITION USING S.O.M.
ABSTRACT:
In the present paper, a self-organizing map (SOM) based system is proposed to recognize Devanagari numerals. The SOM is trained on one hundred handwritten Devanagari numerals. The network was tested for different parameters such as the number of output nodes, the neighborhood size, and the number of training cycles, and the training process was repeated a number of times.
INTRODUCTION:
We begin by considering an artificial neural network architecture in which every node is connected to every other node, and these connections are either excitatory, inhibitory, or irrelevant.
A single node is insufficient for many practical problems, so a large number of nodes is frequently used. The way nodes are connected determines how computations proceed and constitutes an important early design decision by a neural network developer. A brief discussion of biological neural networks is relevant before examining artificial neural network architectures.
Different parts of the central nervous system are structured differently, so it is incorrect to claim that a single architecture models all neural processing. The cerebral cortex, where most processing is believed to occur, consists of five to seven layers of neurons, with each layer supplying inputs to the next. However, layer boundaries are not strict, and connections that cross layers are known to exist. Feedback pathways are also known to exist, e.g. between (to and fro) the visual cortex and the lateral geniculate nucleus. Each neuron is connected with many, but not all, of its neighboring neurons. Some veto neurons have the overwhelming power of neutralizing the effects of a large number of excitatory inputs to a neuron. Some amount of indirect self-excitation also occurs: one node's activation excites its neighbor, which excites the first again. In the following subsections, we discuss artificial neural network architectures, some of which derive inspiration from biological neural networks.
KOHONEN FEATURE MAP
Kohonen's self-organizing feature map is a two-layered network that can organize a topological map from a random starting point. The resulting map shows the natural relationships among the patterns that are given to the network. The network combines an input layer with a competitive layer of processing units and is trained by unsupervised learning. Kohonen presented this paradigm, although seeds of the same idea appear elsewhere. Kohonen's feature map can lend some insight into how a topological mapping can be organized by a neural network model.
The Kohonen feature map finds the organization of relationships among patterns. Incoming patterns are classified by the units that they activate in the competitive layer. Similarities among patterns are mapped into closeness relationships on the competitive layer grid. After training is complete, pattern relationships and groupings are observed from the competitive layer. The Kohonen network provides advantages over classical pattern-recognition techniques because it utilizes the parallel architecture of a neural network and provides a graphical organization of pattern relationships.
BASIC STRUCTURE
The Kohonen feature map is a two-layered network. The first layer of the network is the input layer. Typically, the second (competitive) layer is organized as a two-dimensional grid. All interconnections go from the first layer to the second; the two layers are fully interconnected, as each input unit is connected to all of the units in the competitive layer. Figure 1 shows this basic network structure.
When an input pattern is presented, each unit in the first layer takes on the value of the corresponding entry in the input pattern. The second layer units then sum their inputs and compete to find a single winning unit. The overall operation of the Kohonen network is similar to the competitive learning paradigm.
Each interconnection in the Kohonen feature map has an associated weight value. The initial state of the network has randomized values for the weights. Typically the initial weight values are set by adding a small random number to the average value for the entries in the input patterns.
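As a rough illustration of this initialization scheme, the following sketch (in Python with NumPy; the array names, the 10 x 10 grid size, and the size of the random perturbation are assumptions, not values from the paper) sets every weight to the per-component average of the training patterns plus a small random offset:

import numpy as np

# Assumed sizes: 64 input units (8 x 8 feature vector), a 10 x 10 competitive grid.
n_inputs = 64
grid_rows, grid_cols = 10, 10
n_units = grid_rows * grid_cols

# Placeholder training patterns: one row per pattern, entries in [0, 1).
patterns = np.random.rand(100, n_inputs)

# Initialize each unit's weight vector to the average of the corresponding
# entries in the input patterns plus a small random number, as described above.
mean_pattern = patterns.mean(axis=0)
weights = mean_pattern + 0.05 * (np.random.rand(n_units, n_inputs) - 0.5)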
An example training set for the network in Figure 1 is generated by a random process that makes each entry in the pattern vector uniformly distributed between 0 and 1. In this example, each input pattern is a vector with n entries. As a result, the input patterns are uniformly spread over an n-dimensional hypercube. If n = 2, the input patterns are uniformly spread over a square; such two-dimensional input patterns will be used in our first example. In other examples and in actual applications, any set of patterns may be used.
An input pattern to the Kohonen feature map is denoted here as
E = [e1, e2, e3, ..., en]
The connections from this input to a single unit in the competitive layer are shown in figure 3. The weights are given by
Ui = [ui1, ui2, ui3, ..., uin]
where i identifies the unit in the competitive layer (these weights go to unit i; we identify the unit in the competitive layer by a single index, even though there is a two-dimensional grid of units in this layer).
The first step in the operation of a Kohonen network is to compute a matching value for each unit in the competitive layer. This value measures the extent to which the weights of each unit match the corresponding values of the input pattern. The matching value for unit i is

|E - Ui|

which is the distance between the vectors E and Ui and is computed by

|E - Ui| = sqrt( Σj (ej - uij)² )
The unit with the lowest matching value (the best match) wins the competition. Here we denote the unit with the best match as unit c, and c is chosen such that

|E - Uc| = min over i of { |E - Ui| }

where the minimum is taken over all units i in the competitive layer. If two units have the same matching value, then, by convention, the unit with the lower index value i is chosen.
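A minimal sketch of this matching step, reusing the weights array from the initialization sketch above (one row of weights per competitive unit; the function name is illustrative):

import numpy as np

def find_winner(pattern, weights):
    # Matching value for unit i: Euclidean distance |E - Ui| between the
    # input pattern and the unit's weight vector.
    distances = np.linalg.norm(weights - pattern, axis=1)
    # np.argmin returns the lowest index on ties, matching the convention above.
    return int(np.argmin(distances))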
After the winning unit is identified, the next step is to identify the neighborhood around it. The neighborhood, illustrated in the figure, consists of those processing units that are close to the winner in the competitive layer grid. The neighborhood in this case consists of the units within a square that is centered on the winning unit c. The size of the neighborhood changes during training, as shown by squares of different sizes in the figure. The neighborhood is denoted by the set of units Nc. Weights are updated for all units that are in the neighborhood of the winning unit. The update equation is
Δuij = η (ej - uij)  if unit i is in the neighborhood Nc, and Δuij = 0 otherwise,

and

uij(new) = uij(old) + Δuij
This adjustment results in the winning unit and its neighbors having their weights modified, becoming more like the input pattern. The winner then becomes more likely to win the competition should the same or a similar input pattern be presented subsequently.
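The update rule might be sketched as follows, where neighborhood is a list of unit indices belonging to Nc and eta is the learning rate (the names and calling convention are assumptions):

def update_weights(pattern, weights, neighborhood, eta):
    # Move the weight vectors of the winner and its neighbors a fraction eta
    # of the way towards the input pattern; all other units are unchanged.
    for i in neighborhood:
        weights[i] += eta * (pattern - weights[i])
    return weights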
Note that there are two parameters that must be specified: the value of η, the learning rate parameter in the weight-adjustment equation, and the size of the neighborhood Nc.
The learning rate is decreased over the course of training:

η = η0 (1 - t/T)

where t is the current training iteration and T is the total number of training iterations to be done. Thus η starts at η0 and is decreased until it reaches the value of 0. The decrease is linear with the number of training iterations completed.
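For example, this schedule can be written directly from the formula (a one-line sketch; eta0, t, and T correspond to the symbols defined above):

def learning_rate(eta0, t, T):
    # Linear decay: eta0 at t = 0, approaching 0 as t approaches T.
    return eta0 * (1.0 - t / T)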
The size of the neighborhood is the second parameter to be specified. Typically the initial neighborhood width is relatively large, and the width is decreased over many training iterations. For illustration, consider a neighborhood centered on the winning unit c, at position (xc, yc). Let d be the distance from c to the edge of the neighborhood. The neighborhood is then all (x, y) such that
xc - d < x < xc + d

and

yc - d < y < yc + d
This defines a square neighborhood about c. Sometimes this calculated neighborhood goes outside the grid of units in the competitive layer; in this case the actual neighborhood is cut off at the edge of the grid.
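A sketch of this neighborhood computation, returning the indices of all units inside the square of half-width d around the winner and cutting the square off at the grid edges (the row-major index convention is an assumption):

def square_neighborhood(winner, d, grid_rows, grid_cols):
    # Convert the winner's flat index to its (row, column) grid position.
    xc, yc = divmod(winner, grid_cols)
    # Clip the square so it stays inside the competitive layer grid.
    x_lo, x_hi = max(0, xc - d), min(grid_rows - 1, xc + d)
    y_lo, y_hi = max(0, yc - d), min(grid_cols - 1, yc + d)
    return [x * grid_cols + y
            for x in range(x_lo, x_hi + 1)
            for y in range(y_lo, y_hi + 1)]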
Since the width of the neighborhood decreases over the training iterations, d decreases. Initially d is set to a chosen value, denoted d0, which may be chosen as a half or a third of the width of the competitive layer of processing units. The value of d is then made to decrease according to the equation

d = d0 (1 - t/T)

where t is the current training iteration and T is the total number of training iterations to be done. This process assures a gradual linear decrease in d, starting with d0 and going down to 1. The same amount of time is spent at each value.
HANDWRITTEN NUMERAL RECOGNITION SYSTEM
Input data
The scope of the system was restricted to the ten digits, in view of the limitation imposed by training time. The database was obtained by scanning handwritten numerals of different persons. The samples are shown in the figure.
An H.P. scanner with 300 dpi resolution is used to scan the handwritten numerals. During the scanning process the image size for each numeral was set to 64 x 64 pixels, and the images are scanned with 256 gray levels.
Data Representation
Scanned images are normalized by scaling down each pixel value by 256. The normalized image is fitted into a minimum bounding rectangle, and then a bitmap image of 8 x 8 pixel size is obtained, as shown in Fig 5.2 and 5.3.
In the process of obtaining the bitmap image, 64 local windows are obtained for each numeral. For each local window the average of the pixel values is computed, and a threshold value is used to derive a feature for the window: if the average value is greater than the threshold, the window value is set to 1, otherwise it is set to 0. The final feature vector has 64 elements, as shown below.
0 0 1 1 1 1 1 0
0 1 0 0 0 0 1 0
1 0 0 0 0 0 0 1
1 0 0 0 0 0 0 1
1 0 0 0 0 0 0 1
1 0 0 0 0 0 0 1
0 1 0 0 0 1 1 0
0 0 1 1 1 1 0 0


Feature vector for the scanned numeral zero.
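A sketch of this feature extraction, assuming the scanned numeral is available as a 64 x 64 NumPy array of gray values in the range 0-255 (the threshold value of 0.5 and all names are illustrative; the paper does not state the exact threshold):

import numpy as np

def extract_features(image_64x64, threshold=0.5):
    # Normalize: scale each pixel value down by 256 so values lie in [0, 1).
    img = image_64x64 / 256.0
    # Split the 64 x 64 image into an 8 x 8 grid of 8 x 8 local windows.
    blocks = img.reshape(8, 8, 8, 8)
    # Average of the pixel values in each local window.
    block_means = blocks.mean(axis=(1, 3))
    # Threshold each window average to 1 or 0; flatten to 64 elements.
    return (block_means > threshold).astype(int).flatten()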
Training and testing:
The feature vectors computed for the different data sets are stored in a file. In the process of training, these features are applied to a SOM and the network is trained. In the training process, features for each digit from 0 to 9 are applied to the SOM sequentially and the network is trained for a number of iterations. After training, the resultant prototypes are used for fine-tuning. For the purpose of fine-tuning, the type 1 learning vector quantization (LVQ1) scheme is used, and the resulting prototypes are used in the recognition process to test the performance of the network. The network is trained and tested for different parameters such as the number of output nodes, the neighborhood size, and the number of iterations. Here the scalar-valued gain coefficient and the neighborhood size decrease monotonically. The initial value of the scalar-valued gain coefficient is kept at one, and the initial neighborhood size is equal to half the number of output nodes. The relations used for alpha and Nc are as follows.
α = α0 (1 - g/h)

Nc = Nc0 (1 - g/h)

where α0 and Nc0 are the initial values, g is the current training iteration, and h is the total number of training iterations to be done.
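Putting the pieces together, the training procedure described above might look like the following sketch. It reuses the helper functions sketched earlier (find_winner, square_neighborhood, update_weights); the grid size, the iteration count, and the interpretation of the initial neighborhood size as half the grid width are assumptions, and the LVQ1 fine-tuning step is only indicated by a comment:

import numpy as np

def train_som(patterns, grid_rows=10, grid_cols=10, alpha0=1.0, iterations=1000):
    n_units = grid_rows * grid_cols
    # Weight initialization: average input pattern plus small random offsets.
    weights = patterns.mean(axis=0) + 0.05 * (np.random.rand(n_units, patterns.shape[1]) - 0.5)
    nc0 = grid_rows // 2  # assumed initial neighborhood half-width

    for g in range(iterations):
        alpha = alpha0 * (1.0 - g / iterations)                  # alpha = alpha0 (1 - g/h)
        d = max(1, int(round(nc0 * (1.0 - g / iterations))))     # neighborhood shrinks like Nc0 (1 - g/h)
        pattern = patterns[g % len(patterns)]                    # present the digit features sequentially
        winner = find_winner(pattern, weights)
        hood = square_neighborhood(winner, d, grid_rows, grid_cols)
        weights = update_weights(pattern, weights, hood, alpha)

    # The resulting prototypes would then be fine-tuned with LVQ1 and used
    # for recognition, as described in the text.
    return weights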
CONCLUSION
The SOM is used for pattern classification. The performance achieved by the SOM is better than that of the back-propagation algorithm, which is generally used for such tasks. The disadvantages associated with the back-propagation algorithm, such as local minima and deciding the number of hidden units, are not observed in the implemented system.
A handwritten, isolated, size-invariant Devanagari numeral recognition system using a self-organizing map with the type one learning vector quantization method is implemented. As the SOM is an unsupervised neural network, training the network initially seems difficult, so the SOM is trained a number of times for different parameters. The classification accuracy obtained is 93% for 300 nodes. The implemented system was found to perform reasonably well if the number of output nodes and the number of input patterns presented to the SOM during the training process are increased. Training time is proportional to the number of patterns used for training, the number of output nodes, and the number of iterations. Since learning is a stochastic process, the final statistical accuracy of the mapping depends on the number of iterations. On the other hand, the number of components in the input vector x has no effect on the number of iteration steps. The type one learning vector quantization (LVQ1) method helps to demarcate the class borders more accurately, improving the classification accuracy from 93% to 96%. The implemented system gives 100% classification accuracy for printed, fixed-font numerals. The SOM algorithm can also be used in practical speech recognition, robotics, telecommunications, etc.