Fault-Tolerant and Bayesian Approaches to Self-Organizing Neural Networks

ABSTRACT:
Heterogeneous types of gene expression data may provide better insight into the biological role of gene interaction with the environment, disease development, and drug effects at the molecular level. In this paper, a Time Lagged Recurrent Neural Network with trajectory learning is proposed, for both exploratory and predictive purposes, to identify and classify gene functional patterns from heterogeneous nonlinear time series microarray experiments. The proposed procedure identifies gene functional patterns from the dynamics of a state trajectory learned from the heterogeneous time series and from the gradient information over time. In addition, trajectory learning with the Backpropagation Through Time algorithm can recognize gene expression patterns that vary over time, which may reveal much more information about the regulatory network underlying gene expression.
The analyzed data were extracted from spotted DNA microarray measurements of budding yeast expression produced by Eisen et al. The gene matrix contained 79 experiments over a variety of heterogeneous experimental conditions. The number of recognized gene patterns in our study ranged from two to ten and was divided into three cases.
Optimal network architectures with different memory structures were selected using the Akaike and Bayesian information criteria in a two-way factorial design. The performance of the optimal model was compared to that of other popular gene classification algorithms such as Nearest Neighbor, Support Vector Machine, and Self-Organizing Map. The reliability of the performance was verified with multiple iterated runs.
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION TO NEURAL NETWORK:

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why would it be necessary to implement artificial neural networks? Although computing these days is truly advanced, there are certain tasks that a program written for a common microprocessor is unable to perform; even so, a software implementation of a neural network can be made, with its own advantages and disadvantages.
Another aspect of artificial neural networks is that there are different architectures, which consequently require different types of algorithms; but despite being an apparently complex system, a neural network is relatively simple.
Artificial neural networks are among the newest signal processing technologies. The field is very interdisciplinary, but the explanation I give here is restricted to an engineering perspective.
In the world of engineering, neural networks serve two main functions: as pattern classifiers and as nonlinear adaptive filters. Like its biological predecessor, an artificial neural network is an adaptive system: each parameter is changed during operation as the network is deployed to solve the problem at hand. This is called the training phase.
An artificial neural network is developed with a systematic step-by-step procedure that optimizes a criterion commonly known as the learning rule. The input/output training data are fundamental for these networks, as they convey the information necessary to discover the optimal operating point. In addition, their nonlinear nature makes neural network processing elements a very flexible system.
Basically, an artificial neural network is a system: a structure that receives an input, processes the data, and provides an output. Commonly, the input consists of a data array, which can hold anything from the pixels of an image file to a WAVE sound, or any kind of data that can be represented in an array. Once an input is presented to the neural network and a corresponding desired or target response is set at the output, an error is computed from the difference between the desired response and the actual system output.
The error information is fed back to the system, which adjusts its parameters in a systematic fashion according to the learning rule. This process is repeated until the output is acceptable. It is important to notice that the performance hinges heavily on the data; hence the data should be pre-processed with third-party algorithms such as DSP algorithms.
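The loop just described (present an input, compare the output with the desired response, feed the error back, and adjust the parameters) can be sketched with a single linear neuron trained by the delta rule. This is a hypothetical minimal example, not a network from this report:

```python
# Minimal sketch of error-driven learning: one linear neuron whose
# weights are nudged in proportion to the output error (delta rule).

def train(samples, targets, epochs=100, lr=0.1):
    """Train a single linear neuron with the delta (Widrow-Hoff) rule."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, targets):
            output = sum(w * xi for w, xi in zip(weights, x)) + bias
            error = target - output              # desired minus actual output
            for i, xi in enumerate(x):           # the "learning rule": adjust
                weights[i] += lr * error * xi    # each parameter in proportion
            bias += lr * error                   # to the error it caused
    return weights, bias

# Repeat until the output is acceptable; here, learn an OR-like mapping.
weights, bias = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 1])
```

Thresholding the trained neuron's output at 0.5 reproduces the OR targets, illustrating how repeated error feedback drives the parameters toward a usable operating point.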
In neural network design, the engineer or designer chooses the network topology, the trigger (activation) or performance function, the learning rule, and the criteria for stopping the training phase. It is quite difficult to determine the size and parameters of the network, as there is no rule or formula for doing so; the best we can do is experiment with the design. The problem with this method is that when the system does not work properly, it is hard to refine the solution. Despite this issue, neural-network-based solutions are very efficient in terms of development time and resources. From experience, I can say that artificial neural networks provide real solutions that are difficult to match with other technologies.
Fifteen years ago, Denker said: "artificial neural networks are the second best way to implement a solution," a remark motivated by their simplicity, design, and universality. Nowadays, neural network technologies are emerging as the technology of choice for many applications, such as pattern recognition, prediction, system identification, and control.
1.2 INTRODUCTION TO BAYESIAN APPROACH:
A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
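The disease/symptom computation mentioned above reduces, for a single disease and a single symptom, to Bayes' rule. The probabilities below are made up purely for illustration:

```python
# Sketch of computing P(disease | symptom) with Bayes' rule,
# using illustrative (made-up) probabilities.

p_disease = 0.01                      # prior P(disease)
p_symptom_given_disease = 0.9         # P(symptom | disease)
p_symptom_given_healthy = 0.05        # P(symptom | no disease)

# Total probability of observing the symptom (law of total probability).
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Posterior: P(disease | symptom) = P(symptom | disease) P(disease) / P(symptom)
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
```

Even with a highly indicative symptom, the low prior keeps the posterior modest (about 0.15 here), which is exactly the kind of reasoning a Bayesian network automates over many variables at once.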
Formally, Bayesian networks are directed acyclic graphs whose nodes represent random variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes which are not connected represent variables which are conditionally independent of each other. Each node is associated with a probability function that takes as input a particular set of values for the node's parent variables and gives the probability of the variable represented by the node. For example, if the parents are m Boolean variables then the probability function could be represented by a table of 2^m entries, one entry for each of the 2^m possible combinations of its parents being true or false.
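Such a parent-combination table can be written down directly. The node names and numbers below follow the well-known burglary/alarm textbook example and are purely illustrative:

```python
from itertools import product

# Hypothetical node "Alarm" with m = 2 Boolean parents, "Burglary" and
# "Earthquake". Its conditional probability table has 2**m = 4 entries,
# one per combination of parent truth values (illustrative numbers).

parents = ["Burglary", "Earthquake"]
cpt = {  # maps (burglary, earthquake) -> P(Alarm = True | parents)
    (True, True): 0.95,
    (True, False): 0.94,
    (False, True): 0.29,
    (False, False): 0.001,
}

# The table enumerates every one of the 2**m parent combinations.
assert set(cpt) == set(product([True, False], repeat=len(parents)))
```

The exponential growth of this table in the number of parents is why compact representations (e.g. noisy-OR) are often used for nodes with many parents.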
Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
1.3 INTRODUCTION TO SELF-ORGANIZING NEURAL NETWORK:
SOMs generally present a simplified, relational view of a highly complex data set. Once map objects or nodes are organized, all the data associated with a given node may be made available via that node. However, this does not mean that all of this data participated in the process of self-organization. A data set of nations might self-organize by annual rainfall, and once organized provide additional information such as color-coding by GNP.
A Self-Organizing Map is a data visualization technique developed by Professor Teuvo Kohonen in the early 1980s. SOMs map multidimensional data onto lower-dimensional subspaces where geometric relationships between points indicate their similarity. The reduction in dimensionality that SOMs provide allows people to visualize and interpret what would otherwise be, for all intents and purposes, indecipherable data. SOMs generate subspaces with an unsupervised learning neural network trained with a competitive learning algorithm. Neuron weights are adjusted based on their proximity to "winning" neurons (i.e. neurons that most closely resemble a sample input). Training over several iterations of input data sets results in similar neurons grouping together and dissimilar neurons moving apart. The components of the input data and details on the neural network itself are described in the "Basics" section. The process of training the neural network itself is presented in the "Algorithm" section. Optimizations used in training are discussed in the "Optimizations" section.
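The competitive learning step described above (find the winning neuron, then pull its weights toward the input) can be sketched as follows. For brevity this hypothetical example updates only the winner; a full SOM also updates the winner's map neighbors with a decaying neighborhood function:

```python
import random

def closest(neurons, x):
    """Index of the neuron whose weight vector most closely resembles x."""
    return min(range(len(neurons)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(neurons[i], x)))

def train_som(data, n_neurons=4, epochs=50, lr=0.3, seed=0):
    """Winner-take-all competitive learning (neighborhood omitted)."""
    rng = random.Random(seed)
    neurons = [[rng.random() for _ in data[0]] for _ in range(n_neurons)]
    for _ in range(epochs):
        for x in data:
            win = closest(neurons, x)                        # competition
            neurons[win] = [w + lr * (v - w)                 # adaptation: move
                            for w, v in zip(neurons[win], x)]  # winner toward x
    return neurons

# Two clusters of inputs end up captured by different winning neurons.
neurons = train_som([[0, 0], [0.1, 0], [1, 1], [0.9, 1]])
```

After training, inputs near (0, 0) and inputs near (1, 1) activate different winners, which is the grouping behavior the map relies on.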
SOMs have been applied to several problems. The simple yet powerful algorithm has been able to reduce incredibly complex problems to easily interpreted data mappings. The main drawback of the SOM is that it requires the neuron weights to be necessary and sufficient to cluster the inputs. When an SOM is given too little information, or too much extraneous information, in the weights, the groupings found in the map may not be entirely accurate or informative. This shortcoming, along with some other problems with SOMs, is addressed in the "Conclusions" section.