ACUMENE NEURAL CONVERTER

Presented By:
Rajesh.A (Y2ECO73),
Ushoday Srinivas.S (Y2EC107),
III/IV B.Tech,
V.R. Siddhartha Engineering College,
Vijayawada.


ABSTRACT

This paper brings to light the various nuances of radio propagation and suggests different models that can be used in various practical situations. Over the years of electronic evolution, different A/D converters have been designed and analyzed on the basis of performance, accuracy, and acumen, but this evolution has failed to match the natural A/D converter: the human brain. This paper presents a method that works toward realizing it artificially: the Neural Converter.

LIST OF CONTENTS:

• Introduction to radio propagation
• Factors affecting radio propagation
• Various models used in radio propagation
• Introduction to neural networks
• Back-propagation algorithm
• Application of a neural network to a 4-bit A/D converter
• Applications and uses
• Conclusion
This article reviews popular propagation models for wireless communication channels. One of the most important characteristics of the propagation environment is the path (propagation) loss. By knowing the propagation losses, one can efficiently determine the field signal strength, signal-to-noise ratio (SNR), carrier-to-interference (C/I) ratio, etc.
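As a simple worked example of a path-loss quantity, the sketch below evaluates the free-space model, a standard baseline formula rather than one of the models reviewed in this paper; the link distance and frequency are invented for illustration:

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Free-space path loss in dB, using the standard Friis-derived form:
    FSPL = 20*log10(d_km) + 20*log10(f_MHz) + 32.44"""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Example: a 900 MHz link over 5 km.
print(f"{free_space_path_loss_db(5, 900):.1f} dB")
```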

Propagation Phenomena:

Propagation mechanisms are very complex and diverse. First, because of the separation between the receiver and the transmitter, attenuation of the signal strength occurs. In addition, the signal propagates by means of diffraction, scattering, reflection, transmission, refraction, etc.
Diffraction occurs when the direct line-of-sight (LoS) path between the transmitter and the receiver is obstructed by an opaque obstacle whose dimensions are considerably larger than the signal wavelength.
Scattering occurs when the propagation path contains obstacles whose dimensions are comparable to the wavelength.
Reflection occurs when the radio wave impinges on an obstacle whose dimensions are considerably larger than the wavelength of the incident wave. A reflected wave can either decrease or increase the signal level at the reception point.
Transmission occurs when the radio wave encounters an obstacle that is to some extent transparent to radio waves.
Refraction is very important in macrocell radio system design. Because the refractive index of the atmosphere is not constant, radio waves do not propagate along straight lines but rather along curved ones.

Propagation Models:

A propagation model is a set of mathematical expressions, diagrams, and algorithms used to represent the radio characteristics of a given environment. Generally, prediction models are either empirical (also called statistical), theoretical (also called deterministic), or a combination of the two. While empirical models are based on measurements, theoretical models deal with the fundamental principles of radio wave propagation. In empirical models, all environmental influences are implicitly taken into account; this is their main advantage. Deterministic models are based on the principles of physics and can therefore be applied to different environments without loss of accuracy, but the algorithms they use are usually very complex and computationally inefficient.
Further, with respect to the size of the coverage area, outdoor propagation models can be subdivided into two additional classes: macrocell and microcell prediction models.

MODELS:
Model of Okumura et al.:

The Okumura et al. method is based on empirical data collected in detailed propagation tests over various situations of irregular terrain and environmental clutter. The basic prediction of the median field strength is obtained for quasi-smooth terrain in an urban area. Correction factors, such as those for rolling hilly terrain, an isolated mountain, mixed land-sea paths, street direction, and the general slope of the terrain, bring the final prediction closer to the actual field-strength values. The method was originally intended for VHF and UHF land-mobile radio systems.
ITU (CCIR) Model:

The CCIR method is based on the statistical analysis of a considerable amount of measurement data. The curves for field-strength prediction refer to rolling, irregular terrain, characterized by a parameter Δh that defines the degree of terrain irregularity. The parameter Δh is defined as the difference between the heights exceeded by 10 percent and by 90 percent of the terrain over propagation paths in the range of 10 km to 50 km from the transmitter. The original curves are intended for use in planning broadcasting services and for solving interference problems over a wide area. However, due to its simplicity, the model is also used for frequency coordination and frequency planning in border areas (for example, between countries).
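As a concrete illustration, Δh can be estimated from sampled profile heights along the path; a minimal sketch with invented terrain data (the height exceeded by 10 percent of the terrain is the 90th percentile of the samples, and the height exceeded by 90 percent is the 10th percentile):

```python
import numpy as np

# Hypothetical terrain heights (metres) sampled along a 10-50 km path.
heights = np.array([312, 298, 340, 365, 301, 287, 330, 355, 290, 310,
                    345, 360, 305, 295, 325, 350, 300, 315, 335, 342])

h_10 = np.percentile(heights, 90)   # height exceeded by 10% of the terrain
h_90 = np.percentile(heights, 10)   # height exceeded by 90% of the terrain

delta_h = h_10 - h_90               # degree of terrain irregularity
print(f"terrain irregularity parameter = {delta_h:.1f} m")
```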

Lee Microcell Model:

The Lee model for predicting the electric field in microcells assumes that there is a high correlation between the signal attenuation and the total depth of the building blocks along the radio path. An aerial photograph can be used to calculate the proportional length of a direct wave path that is attenuated by building blocks. The accuracy of the model can be significantly improved by introducing specific corrections based on the arrangement of the streets and their types (for example, a main street under LoS conditions, a main street under NLoS conditions, a narrow side street, a wide side street, and a street parallel to the main street).

Artificial Neural Networks Macrocell Model:

Recently, several prediction models utilizing artificial neural networks (ANNs) have been proposed. The main intention of this work is to obtain high accuracy in real time using just ordinary databases. The proposed ANN model is based on a very popular feed-forward neural network architecture (precisely, the multilayer perceptron). Feed-forward neural networks with sigmoid activation functions have demonstrated very good performance in solving problems with mild nonlinearities on sets of noisy data. Another key feature of neural networks is their intrinsic parallelism, which allows fast evaluation of solutions. The proposed neural network has three groups of inputs. The first group consists of a single input: the normalized distance from the transmitter to the receiver. The second group (4 inputs) is based on the terrain-profile analysis. These inputs are: 1) the portion of the path that passes through the terrain; 2) and 3) modified "clearance angle" factors for the transmitter and the receiver sites, respectively; and 4) the rolling factor.
Fig. 1: Training of Input
The third group of input parameters is based on the land-category analysis along the straight line drawn between the transmitter and the receiver. The implementation of the proposed ANN model requires two databases. The first is a standard digital terrain elevation database; the other is a ground cover (i.e., land usage or "clutter") database. When the inputs described above are presented to the neural network, it responds to them and produces an output. The network carries the effect of previous training stages within it; when a new input matches what it has learned, the network gives the desired output. This process is known as training the neural network. Compared to other popular prediction models, the ANN model has demonstrated very good performance. It has been realized and used in the 450 MHz and 900 MHz frequency bands for TETRA and GSM system design, respectively.
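A minimal sketch of this kind of model follows. The feature layout, network size, and all data are invented for illustration, and scikit-learn's MLPRegressor stands in for the custom network of the original work:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training set: each row holds the three input groups described
# above -- normalized Tx-Rx distance (1 value), terrain-profile factors
# (4 values), and land-category fractions along the path (here 3 values:
# urban, forest, open). Targets are measured field strengths.
rng = np.random.default_rng(0)
X_train = rng.random((200, 8))        # invented data, for shape only
y_train = rng.normal(-90, 10, 200)    # invented measurements (dBm)

# Multilayer perceptron with sigmoid ("logistic") hidden units, as in the model.
net = MLPRegressor(hidden_layer_sizes=(12,), activation="logistic",
                   solver="adam", max_iter=2000, random_state=0)
net.fit(X_train, y_train)

x_new = rng.random((1, 8))            # one new path to predict
print("predicted field strength:", net.predict(x_new)[0])
```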
Basic Neuron:
A neuron is the fundamental unit of a biological neural network. At the junction of the signal-sending axon and the signal-receiving dendrite lies a small gap called a synapse.

The signals from different neurons are thus weighted differently based on the strength of the synaptic connections. If the total effect of all the received signals is adequate, the neuron is activated and it will begin to send a signal to the other neurons via its axon.
Fig. 2: Biological Neuron

DEFINITION OF NEURAL NETWORK:

An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Neural networks are a form of multiprocessor computer system, with:
• simple processing elements
• a high degree of interconnection
• simple scalar messages
• adaptive interaction between elements

Fig. 3: Complicated Neural Network

NETWORK PROPERTIES:

The topology of a neural network refers to its framework as well as its interconnection scheme. The framework is often specified by the number of layers and the number of nodes per layer. The types of layers include:
• The input layer: its nodes are called input units, which encode the instance presented to the network for processing. Input units do not process information; they simply distribute it to the other units.
• The hidden layer: its nodes are called hidden units, which are not directly observable and hence hidden. They provide the network's nonlinearities.
• The output layer: its nodes are called output units, which encode the possible concepts to be assigned to the instance under consideration.

Fig. 4: Structure of Layers
NEURAL INFORMATION PROCESSING:

Each artificial neuron receives a set of inputs, each of which is multiplied by a weight analogous to a synaptic strength. The sum of all weighted inputs determines the degree of firing, called the activation level. Notationally, each input $X_i$ is modulated by a weight $W_i$, and the total input is expressed as $\sum_i W_i X_i$. The input signal is further processed by an activation function to produce the output signal which, if not zero, is transmitted along the connection. The activation function can be a threshold function or a smooth function such as a sigmoid or a hyperbolic tangent.

A neural network is represented by a set of nodes and arrows, a fundamental concept in graph theory. A node corresponds to a neuron, and an arrow corresponds to a connection together with the direction of signal flow between neurons; some nodes are connected to the system input and others to the system output for information processing. Neural networks solve problems by self-learning and self-organization. They derive their intelligence from the collective behavior of simple computational mechanisms at individual neurons. Neural networks can recognize, classify, convert, and learn patterns.

There are two broad types of network: feedback and feed-forward. Feed-forward networks are unconditionally stable. Because recurrent networks have feedback paths from their outputs back to their inputs, the response of such networks is dynamic; that is, after a new input is applied, the output is calculated and fed back to modify the input, and the process repeats until the response becomes stable. Networks of this kind are called Hopfield networks.
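A minimal sketch of this single-neuron computation, with invented inputs and weights:

```python
import math

def neuron_output(inputs, weights, threshold=0.0, smooth=True):
    """Weighted sum of the inputs (the activation level), passed through an
    activation function: a sigmoid if smooth is True, else a hard threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    if smooth:
        return 1.0 / (1.0 + math.exp(-(activation - threshold)))  # sigmoid
    return 1.0 if activation > threshold else 0.0                 # threshold unit

# Example: three input signals modulated by three synaptic weights.
print(neuron_output([0.5, 0.9, -0.3], [0.8, 0.2, 0.5]))
```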

Multilayer Perceptrons

In this type of network the units are arranged in a layered feed-forward topology; each unit performs a biased weighted sum of its inputs and passes this activation level through a transfer function to produce its output. Such networks can model functions of almost arbitrary complexity, with the number of layers and the number of units in each layer determining the complexity of the function. The number of input and output units is defined by the problem.
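A minimal sketch of such a layered forward pass, with random (untrained) weights standing in for a fitted network:

```python
import numpy as np

def forward(x, layers):
    """Propagate an input through a feed-forward network. Each layer is a
    (W, b) pair: units compute a biased weighted sum, then a sigmoid."""
    for W, b in layers:
        x = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return x

rng = np.random.default_rng(1)
# 2 inputs -> 3 hidden units -> 1 output (untrained, random weights).
layers = [(rng.normal(size=(3, 2)), rng.normal(size=3)),
          (rng.normal(size=(1, 3)), rng.normal(size=1))]
print(forward(np.array([0.2, 0.7]), layers))
```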

Training Multilayer Perceptrons

Once the number of layers and the number of units in each layer have been selected, the network's weights and thresholds must be set so as to minimize the prediction error made by the network. This is the role of the training algorithms. The error of a particular configuration of the network can be determined by running all the training cases through the network and comparing the actual outputs generated with the desired or target outputs. The differences are combined by an error function to give the network error. The most common error function is the sum-squared error, in which the individual errors of the output units on each case are squared and summed together. Each of the N weights and thresholds of the network (i.e., the free parameters of the model) is taken to be a dimension in space; the (N+1)th dimension is the network error. The objective of network training is to find the lowest point on this many-dimensional surface. In a linear model with a sum-squared error function, the error surface is a parabola (a quadratic), which means it is a smooth bowl shape with a single minimum. Neural network error surfaces are much more complex and are characterized by a number of unhelpful features, such as local minima (which are lower than the surrounding terrain but above the global minimum), flat spots and plateaus, saddle points, and long narrow ravines.
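For instance, the sum-squared error can be computed as follows (the target and output values are invented):

```python
import numpy as np

def sum_squared_error(targets, outputs):
    """Network error: the individual output errors on each training case,
    squared and summed together."""
    return float(np.sum((np.asarray(targets) - np.asarray(outputs)) ** 2))

# Hypothetical desired vs. actual outputs for four training cases.
print(sum_squared_error([0, 1, 1, 0], [0.1, 0.8, 0.6, 0.2]))
```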

Back-Propagation Algorithm:

In order to train a neural network to perform some task, we must adjust the weights of each unit in such a way that the error between the desired output and the actual output is reduced. This process requires that the neural network compute the error derivative of the weights (EW). The back-propagation algorithm is the most widely used method for determining the EW.
The back-propagation algorithm is easiest to understand if all the units in the network are linear. The algorithm computes each EW by first computing the EA, the rate at which the error changes as the activity level of a unit is changed. For output units, the EA is simply the difference between the actual and the desired output. To compute the EA for a hidden unit in the layer just before the output layer, we first identify all the weights between that hidden unit and the output units to which it is connected. We then multiply those weights by the EAs of those output units and add the products. This sum equals the EA for the chosen hidden unit. After calculating all the EAs in the hidden layer just before the output layer, we can compute in like fashion the EAs for other layers, moving from layer to layer in a direction opposite to the way activities propagate through the network. Once the EA has been computed for a unit, it is straightforward to compute the EW for each incoming connection of the unit: the EW is the product of the EA and the activity through the incoming connection. For nonlinear units, before back-propagating, the EA must be converted into the EI, the rate at which the error changes as the total input received by a unit is changed.
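A minimal sketch of this procedure, assuming sigmoid units (so each EA is converted into an EI via the sigmoid slope, as noted above) and a sum-squared error; the two-layer network and its shapes are invented for illustration:

```python
import numpy as np

def backprop_gradients(x, target, layers):
    """One backward pass following the EA/EI/EW scheme described above:
    EA = dE/d(activity), EI = dE/d(total input), EW = dE/d(weight).
    Sigmoid units; error E = 1/2 * sum((actual - desired)^2)."""
    # Forward pass, remembering every layer's activities.
    activities = [np.asarray(x, dtype=float)]
    for W, b in layers:
        activities.append(1.0 / (1.0 + np.exp(-(W @ activities[-1] + b))))

    grads = []
    EA = activities[-1] - target            # output units: actual minus desired
    for (W, _), y, y_prev in zip(reversed(layers),
                                 reversed(activities[1:]),
                                 reversed(activities[:-1])):
        EI = EA * y * (1.0 - y)             # convert EA to EI via sigmoid slope
        EW = np.outer(EI, y_prev)           # EW = EI times incoming activity
        grads.append((EW, EI))              # EI doubles as the bias gradient
        EA = W.T @ EI                       # EA one layer down: weights times
                                            # the EIs above, summed
    return list(reversed(grads))

rng = np.random.default_rng(2)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),
          (rng.normal(size=(1, 3)), np.zeros(1))]
for i, (EW, EI) in enumerate(backprop_gradients([0.2, 0.7], np.array([1.0]), layers)):
    print(f"layer {i}: weight-gradient shape {EW.shape}")
```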

APPLICATION OF A NEURAL NETWORK TO A 4-BIT A/D CONVERTER:

Fig. 6: Four-Bit Analog-to-Digital Converter Using a Hopfield Net
Fig. 6 shows a block diagram of the circuit, with amplifiers serving as artificial neurons. Resistors, representing weights, connect each neuron's output to the inputs of all the others. To satisfy the stability constraint, no resistor connects a neuron's output to its own input, and the weights are symmetrical; that is, a resistor from the output of neuron i to the input of neuron j has the same value as the one from the output of neuron j to the input of neuron i. The amplifiers have both normal and inverting outputs. In a realistic circuit, each amplifier has a finite input resistance and input capacitance that must be included to characterize the dynamic response. Network stability does not require that these elements be the same for all amplifiers.
The application assumes that a threshold function is used (the limit of the sigmoid function as its gain approaches infinity). Furthermore, all of the outputs are changed at the beginning of discrete time intervals called epochs. At the start of each epoch, the summation of the inputs to each neuron is examined. If it is greater than the threshold, the output becomes one; if it is less than the threshold, it becomes zero. Neuron outputs remain unchanged during an epoch.
The objective is to select the resistors (weights) so that a continuously increasing voltage X applied to the single-input terminal produces a set of four outputs representing a binary number whose value approximates the input voltage (see Fig. 7). First, the energy function is defined as follows:
$$E = \frac{1}{2}\left[X - \sum_{i=0}^{3} 2^i\,\mathrm{OUT}_i\right]^2 + \sum_{i=0}^{3} 2^{2i-1}\,\mathrm{OUT}_i\left(1 - \mathrm{OUT}_i\right) \qquad (1)$$
where X is the input voltage. When E is minimized, the desired outputs have been reached. The first bracketed expression is minimized when the binary number formed by the outputs is as close as possible to the analog value of the input X. The second bracketed expression goes to 0 when all of the outputs are either 1 or 0. If equation (1) is rearranged and compared with the general energy equation
$$E = -\frac{1}{2}\sum_{i}\sum_{j \ne i} W_{ij}\,\mathrm{OUT}_i\,\mathrm{OUT}_j - \sum_{i} X\,Y_i\,\mathrm{OUT}_i \qquad (2)$$
the resulting expressions for the weights are
$$W_{ij} = -2^{(i+j)}, \qquad Y_i = 2^i$$
where $W_{ij}$ is the conductance from the output of neuron i to the input of neuron j, and $Y_i$ is the conductance from the input X to the input of neuron i.
Fig. 7: Four-Bit Analog-to-Digital Converter, Ideal Input-Output Relationship
The idealized input-output relationship of Fig. 7 will be realized only if the inputs are set to zero prior to performing a conversion.
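Equation (1) can be checked numerically: over the sixteen binary output states the second term vanishes, so the minimum-energy state is simply the 4-bit binary number closest to X. A minimal sketch (brute-force enumeration standing in for the network's own relaxation dynamics; the test voltages are invented):

```python
from itertools import product

def energy(X, outs):
    """Energy function (1): the first term is minimized when the 4-bit binary
    number formed by the outputs is closest to X; the second term is zero
    whenever every output is exactly 0 or 1."""
    value = sum(out * 2**i for i, out in enumerate(outs))
    return 0.5 * (X - value) ** 2 + sum(2 ** (2 * i - 1) * o * (1 - o)
                                        for i, o in enumerate(outs))

# Over all 16 binary output states, the minimum-energy state is the
# binary approximation of the analog input X.
for X in (3.2, 5.0, 11.7):
    best = min(product((0, 1), repeat=4), key=lambda outs: energy(X, outs))
    value = sum(b * 2**i for i, b in enumerate(best))
    print(f"X = {X:4.1f} -> outputs (LSB first) {best}, binary value {value}")
```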

APPLICATIONS:

• Detection of medical phenomena. A variety of health-related indices (e.g., a combination of heart rate, levels of various substances in the blood, and respiration rate) can be monitored. Neural networks have been used to recognize predictive patterns in such indices so that the appropriate treatment can be prescribed.
• Stock market prediction. Fluctuations of stock prices and stock indices are another example of a complex, multidimensional process. Neural networks are used by many technical analysts to make predictions about stock prices based upon a large number of factors, such as the past performance of other stocks and various economic indicators.
• Monitoring the condition of machinery. A neural network can be trained to distinguish between the sounds a machine makes when it is running normally and the sounds it makes when it is on the verge of a problem.
• Engine management. Neural networks have been used to analyze the inputs of sensors from an engine. The neural network controls the various parameters within which the engine functions in order to achieve a particular goal, such as minimizing fuel consumption.
USES:
1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.
3. Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

Conclusion:

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. They are also well suited to real-time systems because of the fast response and computation times that follow from their parallel architecture. They also contribute to other areas of research, such as neurology and psychology. Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced.
Finally, even though neural networks have huge potential, we will only get the best out of them when they are integrated with conventional computing.
References:

1. L. Fu, Neural Networks in Computer Intelligence, Tata McGraw-Hill, 1994.
2. D. L. Alkon, "Memory Storage and Neural Systems," Scientific American, 1989.
3. B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Prentice-Hall, 1992.
4. S. Haykin, Neural Networks, Macmillan College Publishing Company, 1994.
5. Data & Analysis Center for Software, Neural Networks Technology, 1992.