Back propagation algorithm

What is a neural network?

The term neural network was traditionally used to refer to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes.
In the artificial intelligence field, artificial neural networks have been applied successfully to tasks such as speech recognition and image analysis, and to the construction of software agents and autonomous robots.
Neural networks resemble the human brain in the following two ways:
A neural network acquires knowledge through learning
A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
How does a multi-layer neural network work?
The inputs to the network correspond to the attributes measured for each training tuple
Inputs are fed simultaneously into the units making up the input layer
They are then weighted and fed simultaneously to a hidden layer
The number of hidden layers is arbitrary, although in practice there is usually only one
The weighted outputs of the last hidden layer are input to units making up the output layer, which emits the network's prediction
The network is feed-forward in that none of the weights cycles back to an input unit or to an output unit of a previous layer
From a statistical point of view, networks perform nonlinear regression: Given enough hidden units and enough training samples, they can closely approximate any function
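
To make the forward pass concrete, here is a minimal sketch in Python (the layer sizes, sigmoid activation, and use of NumPy are assumptions for illustration, not details from the original):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed toy dimensions: 3 input attributes, 4 hidden units, 2 output units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(0, 0.1, size=(3, 4))   # input-to-hidden weights
b_hidden = np.zeros(4)                       # hidden-layer biases
W_output = rng.normal(0, 0.1, size=(4, 2))   # hidden-to-output weights
b_output = np.zeros(2)                       # output-layer biases

x = np.array([0.5, -1.0, 2.0])                       # attributes of one training tuple
hidden = sigmoid(x @ W_hidden + b_hidden)            # inputs weighted and fed to the hidden layer
prediction = sigmoid(hidden @ W_output + b_output)   # output layer emits the prediction
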
Back propagation algorithm
Backpropagation: A neural network learning algorithm
Originated with psychologists and neurobiologists who sought to develop and test computational analogues of neurons
A neural network: A set of connected input/output units where each connection has a weight associated with it
During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples
Also referred to as connectionist learning due to the connections between units
The algorithm iteratively processes a set of training tuples, comparing the network's prediction with the actual known target value
For each training tuple, the weights are modified to minimize the mean squared error between the network's prediction and the actual target value
Modifications are made in the “backwards” direction: from the output layer, through each hidden layer down to the first hidden layer, hence “backpropagation”
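
In symbols (a standard formulation, spelled out here for clarity): for one training tuple with target values t_j and network outputs o_j, the error being minimized is E = (1/2) * sum_j (t_j - o_j)^2, and each weight w is moved against the error gradient, w := w - eta * dE/dw, where eta is the learning rate discussed below.
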
Steps
Initialize weights (to small random numbers) and biases in the network
Propagate the inputs forward (by applying activation function)
Backpropagate the error (by updating weights and biases)
Terminating condition (when error is very small, etc.)
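
Step 1 might look like the following sketch (the initialization range and layer sizes are assumptions for illustration; the remaining steps appear in the sketches further below):

import numpy as np

rng = np.random.default_rng(42)
# Initialize weights and biases to small random numbers, e.g. uniform in [-0.05, 0.05].
W_hidden = rng.uniform(-0.05, 0.05, size=(3, 4))
b_hidden = rng.uniform(-0.05, 0.05, size=4)
W_output = rng.uniform(-0.05, 0.05, size=(4, 2))
b_output = rng.uniform(-0.05, 0.05, size=2)
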
Efficiency of backpropagation: Each epoch (one iteration through the training set) takes O(|D| * w), with |D| tuples and w weights, but the number of epochs can be exponential in n, the number of inputs, in the worst case
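As an illustrative calculation (not from the original): with |D| = 10,000 training tuples and w = 500 weights, one epoch costs on the order of 10,000 * 500 = 5,000,000 operations.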
Rule extraction from networks: network pruning
Simplify the network structure by removing weighted links that have the least effect on the trained network
Then perform link, unit, or activation value clustering
The sets of input and activation values are studied to derive rules describing the relationship between the input and hidden-unit layers
Sensitivity analysis: assess the impact that a given input variable has on a network output. The knowledge gained from this analysis can be represented in rules
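
As an illustration of sensitivity analysis, the following sketch perturbs each input variable in turn and observes the change in the network's outputs (the small random network here exists purely to show the mechanics; all names and sizes are assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(x, W_h, b_h, W_o, b_o):
    return sigmoid(sigmoid(x @ W_h + b_h) @ W_o + b_o)

rng = np.random.default_rng(1)
W_h, b_h = rng.normal(0, 0.1, size=(3, 4)), np.zeros(4)
W_o, b_o = rng.normal(0, 0.1, size=(4, 2)), np.zeros(2)

x = np.array([0.5, -1.0, 2.0])
baseline = predict(x, W_h, b_h, W_o, b_o)
for i in range(len(x)):                      # perturb one input variable at a time
    x_perturbed = x.copy()
    x_perturbed[i] += 0.01
    change = predict(x_perturbed, W_h, b_h, W_o, b_o) - baseline
    print(f"input {i}: output change for a 0.01 perturbation = {change}")
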
Two phases: propagation and weight update.
Phase 1: Propagation
Each propagation involves the following steps:
Forward propagation of a training pattern's input through the neural network in order to generate the output activations.
Backward propagation of those output activations through the neural network, using the training pattern's target, in order to generate the deltas of all output and hidden neurons.
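
For sigmoid units, the deltas of phase 1 can be computed as in this sketch (a standard formulation, assumed here; the activation values stand in for the results of a forward pass):

import numpy as np

rng = np.random.default_rng(2)
W_output = rng.normal(0, 0.1, size=(4, 2))   # hidden-to-output weights

# Values assumed to come from the forward pass:
hidden = np.array([0.6, 0.4, 0.7, 0.5])      # hidden activations
output = np.array([0.8, 0.3])                # output activations
target = np.array([1.0, 0.0])                # the training pattern's target

# Delta = error signal times the sigmoid derivative, a * (1 - a).
delta_output = (output - target) * output * (1.0 - output)
delta_hidden = (delta_output @ W_output.T) * hidden * (1.0 - hidden)
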
Phase 2: Weight update
For each weight-synapse:
Multiply its output delta and input activation to get the gradient of the weight.
Move the weight in the opposite direction of the gradient by subtracting a fraction of the gradient from the weight.
This fraction influences the speed and quality of learning and is called the learning rate. The sign of a weight's gradient indicates the direction in which the error is increasing, which is why the weight must be updated in the opposite direction.
Repeat phases 1 and 2 until the performance of the network is good enough.
Actual algorithm for a 3-layer network (only one hidden layer):
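
A minimal runnable sketch of such an algorithm follows, covering both phases and the repeat-until-good-enough loop (the sigmoid activation, squared error, learning rate, and XOR toy dataset are all assumptions for illustration, not necessarily the exact pseudocode intended here):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset (an assumption for illustration): the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W_h = rng.uniform(-0.5, 0.5, size=(2, 4))    # input-to-hidden weights
b_h = np.zeros(4)
W_o = rng.uniform(-0.5, 0.5, size=(4, 1))    # hidden-to-output weights
b_o = np.zeros(1)
eta = 0.5                                    # learning rate

for epoch in range(10000):
    total_error = 0.0
    for x, t in zip(X, T):
        # Phase 1: forward propagation, then delta computation.
        h = sigmoid(x @ W_h + b_h)
        o = sigmoid(h @ W_o + b_o)
        delta_o = (o - t) * o * (1.0 - o)
        delta_h = (delta_o @ W_o.T) * h * (1.0 - h)
        # Phase 2: move each weight against its gradient (delta times input activation).
        W_o -= eta * np.outer(h, delta_o)
        b_o -= eta * delta_o
        W_h -= eta * np.outer(x, delta_h)
        b_h -= eta * delta_h
        total_error += 0.5 * np.sum((t - o) ** 2)
    if total_error < 0.01:                   # terminating condition
        break

print("stopped after", epoch + 1, "epochs; error =", total_error)

Run this way, the sketch typically drives the total error on XOR down within a few thousand epochs; varying eta illustrates the speed/quality trade-off of the learning rate mentioned above.
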
Weaknesses
Long training time
Require a number of parameters that are typically best determined empirically, e.g., the network topology or "structure"
Poor interpretability: it is difficult to interpret the symbolic meaning behind the learned weights and the "hidden units" in the network
Strengths
High tolerance to noisy data
Ability to classify patterns on which the network has not been trained
Well-suited for continuous-valued inputs and outputs
Successful on a wide array of real-world data
Algorithms are inherently parallel
Techniques have recently been developed for the extraction of rules from trained neural networks
