Neural Networks seminars report
#1

[attachment=4099]
INTRODUCTION
Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and the brain's biological architecture is debated.
A subject of current research in theoretical neuroscience is the question of how much complexity and which properties individual neural elements need in order to reproduce something resembling animal intelligence.
Historically, computers evolved from the von Neumann architecture, which is based on sequential processing and execution of explicit instructions. The origins of neural networks, on the other hand, lie in efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, at its very heart a neural network is a complex statistical processor (as opposed to a sequential processor executing explicit instructions).
An artificial neural network (ANN), also called a simulated neural network (SNN) or commonly just neural network (NN) is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
In more practical terms, neural networks are non-linear statistical data modeling or decision-making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

OVERVIEW
What is a Neural Network?
An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.

Historical background
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived several eras. Many important advances have been boosted by the use of inexpensive computer emulations. The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts.

First Attempts: There were some initial simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked. Their networks were based on simple neurons, which were considered to be binary devices with fixed thresholds.

Promising & Emerging Technology: Not only neuroscientists, but also psychologists and engineers contributed to the progress of neural network simulations. Rosenblatt (1958) stirred considerable interest and activity in the field when he designed and developed the Perceptron. The Perceptron had three layers, with the middle layer known as the association layer. This system could learn to connect or associate a given input to a random output unit. Another system was the ADALINE (Adaptive Linear Element), which was developed in 1960 by Widrow and Hoff (of Stanford University). The ADALINE was an analogue electronic device made from simple components. The method used for learning was different from that of the Perceptron: it employed the Least-Mean-Squares (LMS) learning rule.
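To make the LMS idea concrete, here is a minimal sketch in Python of the Widrow-Hoff (LMS, or delta) rule; the learning rate, epoch count, and example data are illustrative choices, not taken from the original ADALINE hardware.

    import numpy as np

    rng = np.random.default_rng(0)

    def lms_train(X, d, lr=0.05, epochs=50):
        """Least-Mean-Squares rule: nudge each weight along the input,
        in proportion to the error between desired and actual output."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, d):
                error = target - w @ x   # linear (ADALINE-style) output
                w += lr * error * x
        return w

    # Toy problem: recover y = 2*x1 - 3*x2 from noisy samples
    X = rng.normal(size=(200, 2))
    d = X @ np.array([2.0, -3.0]) + 0.01 * rng.normal(size=200)
    print(np.round(lms_train(X, d), 2))  # approximately [ 2. -3.]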

Today: Progress during the late 1970s and early 1980s was important to the re-emergence of interest in the neural network field. Significant progress has been made in the field of neural networks, enough to attract a great deal of attention and to fund further research. Neurally based chips are emerging and applications to complex problems are developing. Clearly, today is a period of transition for neural network technology.

Why use neural networks?

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions. Other advantages include:

1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.

2. Self-Organisation: An ANN can create its own organization or representation of the information it receives during learning time.

3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Why would anyone want a 'new' sort of computer?

What are (everyday) computer systems good at... and not so good at?
HUMAN AND ARTIFICIAL NEURONS: INVESTIGATING THE SIMILARITIES

How does the Human Brain Learn?
Much is still unknown about how the brain trains itself to process information, so theories abound. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon.

Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.

[Figures: components of the neuron; the synapse]
Typically, neurons are five to six orders of magnitude slower than silicon logic gates: events in a silicon chip happen in the nanosecond range, whereas neural events happen in the millisecond range. However, the brain makes up for the slow rate of operation of a neuron by massive interconnection between neurons. It is estimated that the human brain consists of about one hundred billion neural cells, about the same number as the stars in our galaxy.

From Human Neurons to Artificial Neurons
Neural networks are realized by first trying to deduce the essential features of neurons and their interconnections.
Inputs, xi:
Typically, these values are external stimuli from the environment or come from the outputs of other artificial neurons. They can be discrete values from a set, such as {0,1}, or real-valued numbers.
Weights, wi:
These are real-valued numbers that determine the contribution of each input to the neuron's weighted sum and eventually its output. The goal of neural network training algorithms is to determine the best possible set of weight values for the problem under consideration. Finding the optimal set is often a trade-off between computation time and minimizing the network error.
Threshold, u:
The threshold is also referred to as a bias value; it is a real number that shifts the weighted sum before the activation function is applied. For simplicity, the threshold can be regarded as another input/weight pair, with w0 = u and x0 = -1.
Activation Function, f:
The activation function for the original McCulloch-Pitts neuron was the unit step function. However, the artificial neuron model has been expanded to include other functions such as the sigmoid, piecewise linear, and Gaussian.
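Putting these four components together, the artificial neuron reduces to a few lines of code. The sketch below, in Python, folds the threshold in as the pair w0 = u, x0 = -1 exactly as described above; the AND example and function names are illustrative.

    import math

    def neuron_output(inputs, weights, threshold, activation="step"):
        """Artificial neuron: weighted sum of inputs, with the threshold u
        folded in as an extra input/weight pair (w0 = u, x0 = -1)."""
        xs = [-1.0] + list(inputs)        # x0 = -1 absorbs the threshold
        ws = [threshold] + list(weights)  # w0 = u
        s = sum(w * x for w, x in zip(ws, xs))
        if activation == "step":          # the original McCulloch-Pitts choice
            return 1.0 if s >= 0 else 0.0
        return 1.0 / (1.0 + math.exp(-s)) # sigmoid alternative

    # A 2-input neuron that behaves like logical AND
    print(neuron_output([1, 1], weights=[1.0, 1.0], threshold=1.5))  # 1.0
    print(neuron_output([1, 0], weights=[1.0, 1.0], threshold=1.5))  # 0.0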
INTERCONNECTION LAYERS


The most common neural network model is the multilayer perceptron (MLP). This type of neural network is known as a supervised network because it requires a desired output in order to learn. The goal of this type of network is to create a model that correctly maps the input to the output using historical data so that the model can then be used to produce the output when the desired output is unknown. A graphical representation of an MLP is shown below.
Block diagram of a two-hidden-layer multilayer perceptron (MLP). The inputs are fed into the input layer and get multiplied by interconnection weights as they are passed from the input layer to the first hidden layer. Within the first hidden layer, they are summed and then processed by a nonlinear function (usually the hyperbolic tangent). As the processed data leaves the first hidden layer, it is again multiplied by interconnection weights, then summed and processed by the second hidden layer. Finally, the data is multiplied by interconnection weights, then processed one last time within the output layer to produce the neural network output.
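The layer-by-layer flow just described (multiply by weights, sum, squash) can be sketched in a few lines of Python; the layer sizes and random weights here are purely illustrative, standing in for weights a trained network would have learned.

    import numpy as np

    rng = np.random.default_rng(0)

    def mlp_forward(x, weights):
        """Forward pass: at each layer, multiply by the interconnection
        weights and sum, then apply tanh (output layer left linear)."""
        a = x
        for i, W in enumerate(weights):
            z = W @ a
            a = np.tanh(z) if i < len(weights) - 1 else z
        return a

    # 3 inputs -> two hidden layers of 5 units each -> 1 output
    sizes = [3, 5, 5, 1]
    weights = [rng.normal(size=(n_out, n_in))
               for n_in, n_out in zip(sizes[:-1], sizes[1:])]
    print(mlp_forward(np.array([0.2, -0.7, 1.0]), weights))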
BACK-PROPAGATION ALGORITHM


The MLP and many other neural networks learn using an algorithm called backpropagation. With backpropagation, the input data is repeatedly presented to the neural network. With each presentation, the output of the neural network is compared to the desired output and an error is computed. This error is then fed back (backpropagated) to the neural network and used to adjust the weights such that the error decreases with each iteration and the neural model gets closer and closer to producing the desired output.



Demonstration of a neural network learning to model the exclusive-or (XOR) data. The XOR data is repeatedly presented to the neural network. With each presentation, the error between the network output and the desired output is computed and fed back to the neural network. The neural network uses this error to adjust its weights such that the error will be decreased. This sequence of events is usually repeated until an acceptable error has been reached or until the network no longer appears to be learning.

In order to train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the neural network compute the error derivative of the weights (EW).
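A minimal sketch of this XOR demonstration in Python is shown below. The architecture (four tanh hidden units, one sigmoid output), learning rate, and epoch count are illustrative assumptions; training usually converges to outputs near [0, 1, 1, 0].

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)            # forward pass
        out = sigmoid(h @ W2 + b2)
        err = out - y                       # error to feed back
        d_out = err * out * (1 - out)       # sigmoid derivative
        d_h = (d_out @ W2.T) * (1 - h**2)   # tanh derivative
        W2 -= 0.5 * (h.T @ d_out)           # weight adjustments that
        b2 -= 0.5 * d_out.sum(axis=0)       # decrease the error
        W1 -= 0.5 * (X.T @ d_h)
        b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))         # approaches [0. 1. 1. 0.]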
APPLICATIONS
Given this description of neural networks and how they work, what real-world applications are they suited for? Neural networks have broad applicability to real-world problems. In fact, they have already been successfully applied in many industries, across a broad spectrum of data-intensive applications, such as:

Voice Recognition - Transcribing spoken words into ASCII text.

Target Recognition - Military application which uses video and/or infrared image data to determine if an enemy target is present.

Medical Diagnosis - Assisting doctors with their diagnoses by analyzing the reported symptoms and/or image data such as MRIs or X-rays.

Process Modeling and Control - Creating a neural network model for a physical plant, then using that model to determine the best control settings for the plant.

Credit Rating - Automatically assigning a company's or an individual's credit rating based on their financial condition.

Targeted Marketing - Finding the set of demographics which have the highest response rate for a particular marketing campaign.

Financial Forecasting - Using the historical data of a security to predict the future movement of that security.

Now we shall look into a few interesting applications developed across the world.

NETTALK

The most famous example of a neural-network pattern classifier is the NETtalk system developed by Terry Sejnowski and Charles Rosenberg, used to generate synthetic speech.
Once the network is trained to produce the correct phonemes for the 5000-word training set, it performs quite reasonably when presented with words that it was not explicitly taught to recognize. The data representation scheme employed allows a temporal pattern sequence to be represented spatially, while simultaneously providing the network with a means of easily extracting the important features of the input pattern.

NETtalk Data Representation

Anyone who has learned to read the English language knows that for every pronunciation rule, there is an exception. For example, consider the English pronunciation of the following words:
FIND FIEND FRIEND FEINT
While these four words are very similar in their form and structure, the pronunciation of each is vastly different. In each case, the pronunciation of the vowel(s) is dependent on a learned relationship between the vowel and its neighboring characters. The NETtalk system captures the implicit relationship between text and sounds by using a BPN to learn these relationships through experience. Sejnowski and Rosenberg adopted a sliding-window technique for representing words as patterns. Essentially, the window is nothing more than a fixed-width sequence of characters that forms the complete input pattern for the network. The window slides across a word, from left to right, each time capturing (and simultaneously losing) one character. Sejnowski and Rosenberg used a window of seven characters, with the middle (fourth) position designated as the focus character. According to language studies, three characters on either side were adequate to exert the proper influence on the pronunciation of any one character in an English word. Sejnowski and Rosenberg chose to represent the input characters as pattern vectors composed of 29 binary elements: one for each of the 26 English alphabet characters, and one for each of the three punctuation characters that influence pronunciation.
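A sketch of this sliding-window encoding in Python follows. The exact character ordering and the three punctuation characters Sejnowski and Rosenberg used are not specified here, so the choices in the sketch (space, period, comma) are assumptions for illustration; the 7 x 29 = 203-element result matches the dimensions given in the training section below.

    # Assumed alphabet: 26 letters plus 3 pronunciation-relevant
    # punctuation marks (illustrative choices, 29 symbols in total).
    ALPHABET = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + [' ', '.', ',']

    def encode_window(text, focus, width=7):
        """Binary input vector for the 7-character window whose middle
        position is text[focus]; 7 x 29 = 203 elements."""
        half = width // 2
        padded = ' ' * half + text.upper() + ' ' * half
        window = padded[focus:focus + width]
        vec = []
        for ch in window:
            one_hot = [0] * len(ALPHABET)
            if ch in ALPHABET:
                one_hot[ALPHABET.index(ch)] = 1
            vec.extend(one_hot)
        return vec

    v = encode_window("FRIEND", focus=2)  # window centered on 'I'
    print(len(v))                          # -> 203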

NETtalk Training

The training data for the NETtalk application consist of 5000 common English words, together with the corresponding phonetic sequence for each word. For each word, a set of n input patterns is defined such that each input pattern contains one instance of the seven-character sliding window, with each character represented as a 29-element vector, where n is the number of characters in the word. Using this scheme, the dimension of each input pattern was 203 elements (7 characters x 29 elements per character). The output was encoded as a 26-element vector, and the 5000 training words yielded roughly 30,000 exemplars. The network was a three-layer BPN with 80 sigmoidal units on the hidden layer, completely interconnected with all elements on the input and output layers.

NETtalk Results

Training the NETtalk BPN from the exemplar data required about 10 hours of computer time on a VAX 11/780-class computer system. While the network was training, Sejnowski periodically stopped the process and allowed the network to simply produce whatever classifications it could, given a partial set of the training words as input. The classification produced by the network was used to drive a speech synthesizer to produce sounds that were recorded on audiotape. Before the training started, the network produced random sounds, freely mixing consonant and vowel sounds. After 100 epochs, the network had begun to separate words, recognizing the role of the blank character in text. After 500 epochs, the network was making clear distinctions between the consonant sounds and the vowel sounds. After 1000 epochs, the words that the network was classifying had become distinguishable, although not phonetically correct. After 1500 epochs, the network had clearly captured the phonetic rules, as the sounds produced by the BPN were nearly perfect, albeit somewhat mechanical. Training was stopped after epoch 1500, and the network state was frozen. At that point, the NETtalk system was asked to pronounce 2000 words that it had not been explicitly trained to recognize. Using the relationships that the network had found during training, the NETtalk system had only minor problems reading these new words aloud. Virtually all of the words were recognizable to the researchers, and they are also easily recognized by people not familiar with the system when they hear the audiotape recording. Sejnowski and Rosenberg reported that NETtalk can read English text with an accuracy of about 95%.

RADAR SIGNATURE CLASSIFIER

The primary application of pulse Doppler radar is to detect an airborne target and determine the range and velocity of the target relative to the radar station. Pulse Doppler radar operates on two very simple principles of physics: first, electromagnetic radiation (EMR) travels at a constant speed; and second, EMR waves reflected from a moving body are frequency shifted in the direction of travel. Usually, the radar system provides a digital readout of these parameters for each target acquired, leaving the chore of using the information to the radar operator. The operator makes a target identification based on the electronic signature of the radar return. As we indicated previously, radar-signature recognition is currently a strictly human phenomenon; there are no automatic means of identifying a target based on its radar signature incorporated in radar systems.
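The two principles translate directly into the formulas a pulse Doppler system applies; the sketch below, in Python, shows the standard relations (range from round-trip echo time, radial velocity from Doppler shift) with illustrative numbers.

    C = 3.0e8  # speed of light in m/s (principle 1: EMR speed is constant)

    def target_range(echo_delay_s):
        """Range from round-trip echo time: R = c * t / 2."""
        return C * echo_delay_s / 2.0

    def radial_velocity(doppler_shift_hz, carrier_freq_hz):
        """Radial velocity from the frequency shift of the reflected wave
        (principle 2): f_d = 2 * v * f / c, so v = f_d * c / (2 * f)."""
        return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

    # Example: 0.2 ms echo delay, 10 kHz shift on a 10 GHz carrier
    print(target_range(2e-4))          # -> 30000.0 m (30 km)
    print(radial_velocity(1e4, 1e10))  # -> 150.0 m/s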
#2

[attachment=7580]

1.1 INTRODUCTION:
Since time immemorial, the one thing that has made human beings stand apart from the rest of the animal kingdom is the brain. The most intelligent device on earth, the "human brain" is the driving force that has kept our species progressing, diving deeper into technology and development as each day passes.
Due to his inquisitive nature, man tried to make machines that could do intelligent job processing and take decisions according to instructions fed to them. What resulted was the machine that revolutionized the whole world: the "computer" (more technically speaking, the von Neumann computer). Even though it could perform millions of calculations every second, display incredible graphics and 3-dimensional animations, and play audio and video, it made the same mistake every time.
Practice could not make it perfect. So the quest for a more intelligent device continued. This research led to the birth of more powerful processors with high-tech equipment attached to them, supercomputers with capabilities to handle more than one task at a time, and finally networks with resource-sharing facilities. But still the problem of designing machines with intelligent self-learning loomed large in front of mankind. Then the idea of imitating the human brain struck the designers, who began research on one of the technologies that will change the way computers work: "Artificial Neural Networks".

1.1.1 WHAT IS A NEURAL NETWORK?
An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working together to solve specific problems. Typically, a neural network is trained on, or fed, a large amount of data and rules about data relationships (e.g., "a grandfather is older than a person's father"). A program can then tell the network how to behave in response. As people learn from experience, the network is trained by learning. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.

1.1.2 WHY USE NEURAL NETWORKS?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
Other advantages include:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

1.1.3 NEURAL NETWORKS VERSUS CONVENTIONAL COMPUTERS:
Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.
Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. They cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted, or even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small unambiguous instructions. These instructions are then converted into a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
Neural networks do not perform miracles. But if used sensibly they can produce some amazing results.
#3

[attachment=8701]
Neuro-Computing
Introduction
 Neurocomputing is concerned with information processing.
 A neurocomputing approach to information processing first involves a learning process within a neural network architecture that adaptively responds to inputs according to a learning rule.
 After the neural network has learned what it needs to know, the trained network can be used to perform certain tasks depending on a particular application.
 Neural networks have the capability to learn from their environment and to adapt to it in an interactive manner.
Which do you think is faster?
A DIGITAL COMPUTER OR A HUMAN BEING?
For multiplying two 7-digit numbers, of course, a digital computer is faster, but...
At perceiving or identifying an object of interest in a natural scene, or at interpreting natural language, human beings are faster than a digital computer.
BUT WHY?
How can we perform certain tasks better and faster than a digital computer?
Because our BRAIN is organized differently.
DO YOU KNOW THE DIFFERENCE BETWEEN A BRAIN AND A DIGITAL COMPUTER?
The difference lies in their PROCESSING UNITS.
The basic processing unit of the BRAIN is the NEURON, or nerve cell.
The main processing unit of a digital computer is the SILICON LOGIC GATE.
Neurons are approximately six orders of magnitude slower than silicon logic gates.
However, the brain can compensate for the relatively slow operational speed of the neuron by processing data in a highly parallel architecture that is massively interconnected. It is estimated that the human brain contains on the order of 10^11 neurons and approximately three orders of magnitude more connections, or synapses.
Therefore, the BRAIN is an adaptive, nonlinear, parallel computer that is capable of organizing neurons to perform certain tasks.

#4
[attachment=11075]
Introduction:
 A neural network is analogous to the biological neural network in the human brain.
In other words, A neural network is, in essence, an attempt to simulate the brain
 A Neural network is a massively parallel distributed processor made up of simple processing units, which has a propensity for storing experiential knowledge and making it available for use.
 Neural networks represent a technology that is rooted in many disciplines: neurosciences, mathematics, statistics, physics, computer science, and engineering.
 A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.
About biological Neural Networks
• Neurons are primary structural components of Biological neural network.
• Neurons are interconnected by synapses.
• We are born with about 100 billion neurons
• A neuron may connect to as many as 100,000 other neurons
• [Figures: a biological neuron; neurone vs. node]
• ANNs – The basics
• ANNs incorporate the two fundamental components of biological neural nets: neurones (as nodes) and synapses (as weighted connections).
History of Neural Networks
• McCulloch & Pitts (1943) are generally recognised as the designers of the first neural network
• Many of their ideas still used today (e.g. many simple units combine to give increased computational power and the idea of a threshold)
• Hebb (1949) developed the first learning rule (on the premise that if two neurons were active at the same time the strength between them should be increased)
• During the 50’s and 60’s many researchers worked on the perceptron amidst great excitement.
• 1969 saw the death of neural network research for about 15 years
• Only in the mid 80’s (Parker and LeCun) was interest revived (in fact Werbos had discovered the backpropagation algorithm in 1974)
• Model of a neuron
• Comparison between Feedforward and Recurrent Networks
Feed-forward networks:
– Information only flows one way
– One input pattern produces one output
– No sense of time (or memory of previous state)
Recurrent networks:
– Nodes connect back to other nodes or themselves
– Information flow is multidirectional
– Sense of time and memory of previous state(s)
– Biological nervous systems show high levels of recurrency (but feed-forward structures exist too)
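A minimal sketch in Python contrasting the two update rules; the sizes and random weights are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    W_in = rng.normal(size=(3, 4))   # input -> hidden weights
    W_rec = rng.normal(size=(3, 3))  # hidden -> hidden (recurrent) weights

    def feedforward_step(x):
        """One input pattern -> one output; no memory of previous inputs."""
        return np.tanh(W_in @ x)

    def recurrent_step(x, h_prev):
        """Output depends on the current input AND the previous hidden
        state, giving the network a sense of time."""
        return np.tanh(W_in @ x + W_rec @ h_prev)

    h = np.zeros(3)
    for x in [np.ones(4), np.zeros(4), np.ones(4)]:
        h = recurrent_step(x, h)  # the same input can yield different states
    print(h)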
Example: Face Recognition
• From Machine Learning by Tom M. Mitchell
• Input: 30 by 32 pictures of people with the following properties:
– Wearing eyeglasses or not
– Facial expression: happy, sad, angry, neutral
– Direction in which they are looking: left, right, up, straight ahead
• Output: Determine which category it fits into for one of these properties (we will talk about direction)
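A hypothetical sketch of this setup in Python is shown below: the 30 by 32 image is flattened to a 960-element input and the direction property is encoded as a 1-of-4 output, as described above. The tiny hidden layer and the untrained random weights are illustrative assumptions; training would proceed by backpropagation, as in the XOR sketch earlier.

    import numpy as np

    DIRECTIONS = ["left", "right", "up", "straight"]

    rng = np.random.default_rng(3)
    W1 = rng.normal(scale=0.1, size=(960, 3))  # 30*32 pixels -> hidden units
    W2 = rng.normal(scale=0.1, size=(3, 4))    # hidden -> 1-of-4 output

    def classify_direction(image_30x32):
        """Flatten the grey-scale image to 960 inputs and pick the
        direction unit with the highest activation (untrained here)."""
        x = image_30x32.reshape(-1) / 255.0    # scale pixels to [0, 1]
        h = np.tanh(x @ W1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid outputs
        return DIRECTIONS[int(np.argmax(out))]

    print(classify_direction(rng.integers(0, 256, size=(30, 32))))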
#5
[attachment=15547]
INTRODUCTION TO NEURAL NETWORKS
1.1 What is a Neural Network?

An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
1.2 Historical background
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras.
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period when funding and professional support was minimal, important advances were made by relatively few researchers.
These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert. Minsky and Papert had published a book (in 1969) in which they summed up a general feeling of frustration (against neural networks) among researchers, and this view was accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts. But the technology available at that time did not allow them to do too much.
1.3 Why use neural networks?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.
Other advantages include:
• Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
• Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
• Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
• Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.
1.4 Criticism
A common criticism of neural networks, particularly in robotics, is that they require a large diversity of training for real-world operation. Dean Pomerleau, in his research presented in the paper "Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving," uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.). A large amount of his research is devoted to [1] extrapolating multiple training scenarios from a single training experience, and [2] preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns – it should not learn to always turn right). These issues are common in neural networks that must decide from amongst a wide variety of responses.
1.5 Neural networks versus conventional computers – a comparison
Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve.
Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. They cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted, or even worse, the network might function incorrectly.
The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small unambiguous instructions. These instructions are then converted into a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency [1].
