MATLAB code for functional link artificial neural networks
#1

Hi, I would like to know how to set up the ANN toolbox in MATLAB, or an ANN procedure in MATLAB, for a functional link artificial neural network (FLANN). The standard ANN methods always require a hidden layer, while a FLANN has no hidden layer.
Reply
#2

Abstract— Here we present an alternative ANN structure, the functional link ANN (FLANN), for image denoising. In contrast to a feed-forward ANN structure such as the multilayer perceptron (MLP), the FLANN is basically a single-layer structure in which nonlinearity is introduced by enhancing the input pattern with a nonlinear functional expansion. In this work three different expansions are applied. With a proper choice of functional expansion, the FLANN performs as well as, and in some cases even better than, the MLP structure for the problem of denoising an image corrupted with Salt and Pepper noise. In the single-layer functional link ANN the need for a hidden layer is eliminated, and the novelty of this structure is that it requires much less computation than the MLP. In the presence of additive white Gaussian noise in the image, the performance of the proposed network is found to be superior to that of an MLP. In particular, the FLANN structure with Chebyshev functional expansion works best for Salt and Pepper noise suppression from an image.
Index Terms—MLP, FLANN, Chebyshev FLANN, Salt
and Pepper noise.
I. INTRODUCTION
Denoising of images is a major field of image processing. When image data is transmitted over a channel, noise gets added to the image; this noise varies from time to time and can change within a fraction of a second. A human expert cannot decide which filter to use to suppress the noise in such a short time. To avoid the limitations of fixed filters, adaptive filters are designed that adapt themselves to the changing conditions of signal and noise. In such an application, the image filter must adapt to the local statistics of the image, the noise type, and the noise power level, and it must adjust its characteristics so that the overall filtering performance is enhanced. One of the most important examples of this is the neural-network-based adaptive image filter.
Artificial neural networks (ANNs) have emerged as a powerful learning technique to perform complex tasks in highly nonlinear environments [1]. Some of the advantages of ANN models are: (i) their ability to learn based on optimization of an appropriate error function, and (ii) their excellent performance in approximating nonlinear functions. Most ANN-based systems rely on multilayer feed-forward networks such as the MLP trained with back propagation (BP), because these networks are robust and effective in image denoising. As an alternative to the MLP, there has been considerable interest in the radial basis function (RBF) network [2]. The functional link artificial neural network (FLANN) proposed by Pao [5] can be used for function approximation and pattern classification with faster convergence and lower computational complexity than an MLP network. A FLANN using sine and cosine functions for functional expansion has been reported for the problem of nonlinear dynamic system identification [6]. For functional expansion of the input pattern, we choose the trigonometric, exponential, and Chebyshev expansions and compare the outputs with the MLP. The primary purpose of this paper is to highlight the effectiveness of the proposed simple ANN structure for the problem of denoising an image corrupted with Salt and Pepper noise.
II. STRUCTURE OF THE ARTIFICIAL NEURAL NETWORK
FILTERS
Here, we briefly describe the architectures and learning algorithms of the multilayer neural network and the FLANN.
A. Multilayer perceptron
The MLP has a multilayer architecture with one or
more hidden layers between its input and output layers.
All the nodes of a lower layer are connected with all the
nodes of the adjacent layer through a set of weights.
All the nodes in all layers (except the input layer) of the
MLP contain a nonlinear tanh(·) activation function. A pattern is applied to the input layer, but no computation takes place in this layer; the output of the nodes of this layer is therefore the input pattern itself. The weighted sum of the outputs of a lower layer is passed through the nonlinear function of a node in the upper layer to produce its output, and in this way the outputs of all the nodes of the network are computed. The outputs of the output layer are compared with the target pattern associated with the input pattern, and the error between the target pattern and the output-layer response is used to update the weights of the network. The MSE is used as the cost function, and the BP algorithm attempts to minimize this cost function by updating all the weights of the network [1].
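As a rough illustration of the update just described, the following MATLAB sketch performs one BP iteration for a small {9-4-1} network with tanh nodes. This is a minimal sketch under stated assumptions, not the authors' code; all variable names are illustrative, and only the learning rate 0.03 is taken from the paper.

% One BP iteration for a {9-4-1} MLP with tanh nodes (illustrative sketch).
x  = rand(9,1);              % one 3x3 noisy-window pattern as a column vector
t  = rand;                   % corresponding target pixel
W1 = 0.1*randn(4,10);        % hidden-layer weights (last column acts as bias)
W2 = 0.1*randn(1,5);         % output-layer weights (last element acts as bias)
eta = 0.03;                  % learning rate used in the paper

h = tanh(W1*[x; 1]);         % forward pass: hidden-layer outputs
y = tanh(W2*[h; 1]);         % forward pass: network output

e      = t - y;                                 % output error
delta2 = e*(1 - y^2);                           % output-node error derivative
delta1 = (W2(1:4)'*delta2).*(1 - h.^2);         % back-propagated hidden derivatives

W2 = W2 + eta*delta2*[h; 1]';                   % weight updates
W1 = W1 + eta*delta1*[x; 1]';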
B. Functional link ANN
The FLANN, which was initially proposed by Pao, is a single-layer artificial neural network structure capable of forming complex decision regions by generating nonlinear decision boundaries. In a FLANN the need for a hidden layer is removed. In contrast to the linear weighting of the input pattern produced by the linear links of an MLP, the functional link acts on the entire pattern by generating a set of linearly independent functions. If the network has two inputs, i.e.

X = [x1  x2]^T,

an enhanced pattern obtained by functional expansion is given by

X' = [1  x1  T1(x1)  x2  T2(x2)  ...]^T.    (1)
Figure 1. A FLANN structure.
In this paper the input pattern taken from the noisy image is applied to the input nodes of the FLANN structure and an enhanced pattern is obtained. The target is the corresponding single pixel from the original image. This process continues iteratively until all patterns of the image have been processed, and the whole process is repeated 100 times to record the error power against iteration. The BP algorithm used to train the FLANN becomes simple and converges faster because of the single-layer architecture. For functional expansion of the input pattern, the trigonometric, power-series, and exponential polynomials are chosen individually.
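To make this single-layer training loop concrete, here is a minimal MATLAB sketch of one FLANN weight update for one pattern, assuming a trigonometric expansion and a delta-rule (LMS-style) update. The expansion, the helper handle, and all variable names are illustrative assumptions, not taken from the paper.

% One FLANN training step (illustrative sketch, not the authors' code).
expand_trig = @(x) [1; x; cos(pi*x); sin(pi*x)];   % bias plus trig terms per input

x   = rand(9,1);                % 3x3 noisy-image window as a column vector
t   = rand;                     % target pixel from the clean image
phi = expand_trig(x);           % enhanced pattern (1 + 3 terms per input = 28 values)
w   = 0.1*randn(numel(phi),1);  % the single layer of weights; no hidden layer exists
eta = 0.03;                     % learning rate used in the paper

y = tanh(w'*phi);               % single output node with tanh nonlinearity
e = t - y;                      % error against the target pixel
w = w + eta*e*(1 - y^2)*phi;    % delta-rule update of the only weight layer

Because there is no hidden layer, the error derivative is computed only at the output node, which is what makes the BP update for the FLANN so much cheaper than for the MLP.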
C. Different functional expansions
Here the functional expansion block makes use of a functional model comprising a subset of orthogonal sine and cosine basis functions and the original pattern along with its outer products. For example, for a two-dimensional input pattern

X = [x1  x2]^T,

the enhanced pattern obtained by using the trigonometric functions is

X' = [x1  cos(pi*x1)  sin(pi*x1) ...  x2  cos(pi*x2)  sin(pi*x2) ...  x1*x2]^T.    (2)
Using the exponential expansion, the enhanced pattern is

X' = [x1  exp(x1)  exp(2*x1) ...  x2  exp(x2)  exp(2*x2) ...]^T.    (3)
The Chebyshev polynomials are a set of orthogonal polynomials defined as the solution to the Chebyshev differential equation. The structure of a ChNN is shown in Fig. 2. The higher-order Chebyshev polynomials for -1 < x < 1 may be generated using the recursive formula

T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x).    (4)

The first few Chebyshev polynomials are

T_0(x) = 1,  T_1(x) = x,  T_2(x) = 2x^2 - 1,  T_3(x) = 4x^3 - 3x.    (5)
Figure 2. A ChNN structure.
The exponential polynomial expansion needs fewer computations and is easier to implement than the other types of polynomial expansion. The Chebyshev polynomial expansion gives better performance for the prediction of financial time series.
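As an illustration of how such an expansion can be generated in MATLAB, the following sketch builds a Chebyshev-enhanced pattern from a vector of inputs using the recursion in (4). The function name cheb_expand and the choice of terms are assumptions for illustration, not the authors' implementation.

function phi = cheb_expand(x, n)
% CHEB_EXPAND  Chebyshev functional expansion of an input pattern.
%   x   : column vector of inputs, assumed scaled to [-1, 1]
%   n   : number of Chebyshev terms generated per input element
%   phi : enhanced pattern [1; T1(x); T2(x); ...; Tn(x)]
%         (all T1 values, then all T2 values, and so on)
    T = zeros(numel(x), n);
    T(:,1) = x;                              % T1(x) = x
    if n >= 2
        T(:,2) = 2*x.^2 - 1;                 % T2(x) = 2x^2 - 1
    end
    for k = 3:n
        T(:,k) = 2*x.*T(:,k-1) - T(:,k-2);   % recursion (4)
    end
    phi = [1; T(:)];                         % bias element plus all expanded values
end

With nine inputs (one 3×3 window) and n = 5, this yields 45 expanded values plus the bias element, in line with the five-fold expansion used in the simulations of Section IV.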
III. COMPUTATIONAL COMPLEXITY
Here we present a comparison of the computational complexity of an MLP and of FLANNs having different expansions, all with tanh(·) as the nonlinear function. In all cases, multiplications, additions, and evaluations of tanh(·) are required. In the case of the FLANN, however, additional computations of the sine and cosine functions are needed for its functional expansion. In the training and updating of the weights of the MLP, extra computations are incurred because of its hidden layer, since the error must be propagated back to calculate the square-error derivative of each neuron in the hidden layer. For each iteration the computations are: (1) forward calculations to find the activation values of all the nodes of the entire network; (2) back-error propagation to calculate the square-error derivatives; (3) updating of the weights of the entire network. In the case of an MLP with structure {I-J-K}, the total number of weights is (I+1)J + (J+1)K, whereas in the case of a FLANN with structure {D-K} it is (D+1)K. The numbers of computations for both the MLP and the FLANN are shown in Table 1.
TABLE 1
COMPARISON OF COMPUTATIONAL COMPLEXITY IN ONE ITERATION
From this table it may be seen that the numbers of additions, multiplications, and tanh evaluations are much lower for a FLANN than for an MLP network. As the number of hidden layers increases, the computations in an MLP increase; due to the absence of a hidden layer, the computational complexity of the FLANN is drastically reduced.
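As a quick arithmetic check of the weight-count formulas for the structures used later in Section IV (a throwaway MATLAB snippet, not from the paper):

% Weight counts from the formulas above, for the structures used in Section IV.
I = 9; J = 4; K = 1;                 % MLP {9-4-1}
mlp_weights = (I+1)*J + (J+1)*K      % = 45
D = 45;                              % FLANN {45-1}: nine inputs expanded five-fold
flann_weights = (D+1)*K              % = 46, i.e. approximately the same as the MLP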
IV. SIMULATION STUDIES
Extensive simulation studies were carried out with several examples to compare the performance of the MLP with that of the FLANN for image denoising. This work addresses the case in which the noise is Salt and Pepper noise. In our simulations the MLP structure is {9-4-1}. The various parameters were decided after experimenting with different values; it was observed that a larger window size, more hidden layers, or more hidden-layer neurons do not guarantee better results. In all types of FLANN the input pattern is expanded in such a way that the total numbers of weights in the ANNs are approximately the same. The structure of the FLANN is {9-1}, and each element of the input pattern was expanded five times using the different expansions, so the total number of weights for the MLP and for the FLANNs with different expansions is approximately the same, about 45. The learning rate for the ANN and the FLANN is set to 0.03, and the number of iterations was set to 3000 for all the models. The BP learning algorithm has been used, and the simulations were implemented in MATLAB. The training inputs and corresponding targets were normalized to fall within the interval [0, 1]. The MLP has a logistic sigmoid nonlinear function at the hidden layer, and in all cases the output node has a hyperbolic tangent nonlinear function.
For training the neural network, we use the back-propagation algorithm. Since this is supervised learning, a test image to which the noise has been applied is used. During training, the noisy pixels of a 3×3 window from the noisy image are entered into the network as a vector, and the associated desired value is the corresponding pixel value from the original image. Because of this, the network does not take into account the border values of the noisy image. The images used are of size 256×256, so the training patterns cover an interior region of about 253×253 pixels. For the training of the network, the different intensity combinations that may arise in a noisy image are needed; for this the Lena image is used, which is rich in different patterns. It is important to note that the neural network receives a general training and can be applied to any kind of image with Salt and Pepper noise. Hence the network can be trained with one noisy image and tested with any other noisy image.
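A minimal MATLAB sketch of how such window/target training pairs can be assembled, assuming a grayscale image and a 3×3 sliding window over the interior of the image (the file name and variable names are illustrative):

% Build training pairs: 3x3 noisy windows -> clean centre pixel (illustrative sketch).
clean = im2double(imread('lena.png'));          % original image, assumed grayscale, values in [0,1]
noisy = imnoise(clean, 'salt & pepper', 0.05);  % Salt and Pepper noise of density 0.05

[rows, cols] = size(clean);
inputs  = zeros(9, (rows-2)*(cols-2));          % one 3x3 window per column
targets = zeros(1, (rows-2)*(cols-2));          % corresponding clean centre pixels
n = 0;
for r = 2:rows-1                                % skip the image border
    for c = 2:cols-1
        n = n + 1;
        win = noisy(r-1:r+1, c-1:c+1);
        inputs(:, n) = win(:);                  % window as a 9-element vector
        targets(n)   = clean(r, c);             % desired value: original pixel
    end
end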
A. Peak Signal to Noise Ratio

TABLE 2
RESULTS FOR FILTERS IN TERMS OF PSNR VALUE (dB)

          Noisy    ANN      Ch-NN
Image 1   20.28    26.52    27.42
Image 2   20.54    26.77    27.42
Image 3   20.43    26.67    27.36
Image 4   20.74    26.12    27.28
In this work, computer simulations are carried out to compare the PSNR values of the filtered images obtained from these adaptive models. Before filtering, the images were corrupted by Salt and Pepper noise of density 0.05. The numbers shown correspond to the peak signal-to-noise ratio (PSNR) values of the images. From this table it can be seen that the nonlinear adaptive FLANN filter with Chebyshev functional expansion shows better results than the MLP or any other expansion
used in the FLANN, for all the images. The table shows the results obtained when applying the neural networks to a set of standard test images, which are shown in the figure.
Figure 3. Original images: Lena, Barbara, Cameraman, Bridge.
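For reference, the PSNR between an original image and a filtered (or noisy) image can be computed in MATLAB as follows. This is a small sketch under the assumption of images scaled to [0, 1]; the variable names are illustrative and this is not the authors' code.

% PSNR between a reference image and a test image (both double, values in [0,1]).
psnr_db = @(ref, test) 10*log10(1 / mean((ref(:) - test(:)).^2));

clean = im2double(imread('lena.png'));           % assumed grayscale test image
noisy = imnoise(clean, 'salt & pepper', 0.05);   % corruption used in the paper
fprintf('PSNR of noisy image: %.2f dB\n', psnr_db(clean, noisy));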
B. The Convergence Characteristics
Figure 5. Convergence characteristics (NMSE in dB versus number of iterations) of the MLP and the three different FLANNs (T-FLANN, E-FLANN, Ch-NN).
The general convergence characteristics of the ANN and of the FLANNs with different expansions are shown in Figure 5, where T-FLANN denotes the FLANN with trigonometric expansion, E-FLANN the FLANN with exponential expansion, and Ch-NN the FLANN with Chebyshev expansion. It can be observed that the FLANN with Chebyshev expansion shows a much better convergence rate and a lower MSE floor than the other FLANNs and the ANN, demonstrating its superior performance in terms of convergence speed and steady-state MSE level.
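The NMSE plotted above is presumably the mean squared error normalized by the power of the target signal and expressed in dB; a one-line MATLAB sketch of that assumed definition:

% Normalized MSE in dB between targets t and network outputs y (assumed definition).
nmse_db = @(t, y) 10*log10( mean((t(:) - y(:)).^2) / mean(t(:).^2) );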
C. Subjective Evaluation
The performance of the ANN and of the FLANN structures with different expansions can also be judged by subjective evaluation, i.e. by viewing the denoised images.



Figure 4. Filtered images using Ch-NN, E-FLANN, T-FLANN, and MLP.
D. The Computational Complexity.
The computational complexities of the ANN and the FLANN are analyzed and compared in Table 3. It can be seen that the number of additions is almost the same for the ANN and FLANN structures, but the FLANN's numbers of multiplications and of tanh(·) evaluations are much lower than those of the MLP.
TABLE 3
COMPARISON OF COMPUTATIONAL COMPLEXITY IN ONE ITERATION

Number of operations   MLP {9-4-1}                         FLANN {45-1}
Additions              2×9×4 + 3×4×1 + 3×1 = 87            2×1×(45+1) + 1 = 93
Multiplications        3×9×4 + 4×4×1 + 3×4 + 5×1 = 141     3×1×(45+1) + 2×1 = 140
tanh(·)                4 + 1 = 5                           1
E. CPU Training Time
The training time is the average time taken for the completion of the training phase of each of the ANNs on a computer with an AMD 1.8 GHz processor and 1024 MB of RAM.
TABLE 4
COMPARISON OF TRAINING TIME BETWEEN THE ANN AND THE C-FLANN

Avg. training time (s)   MLP {9-4-1}   C-FLANN {9-1}
3000 iterations          454.5         211.64
1000 iterations          152.83        71.17
The table shows that the MLP requires about 152 seconds for training with 1000 iterations, whereas the FLANN needs only about 70 seconds. The average time required to compute the polynomial expansions was found to be about 4 seconds.
V. CONCLUSION
Here we have proposed the use of a single-layer FLANN structure that is computationally efficient for denoising images corrupted with Salt and Pepper noise. The functional expansion may be thought of as analogous to the nonlinear processing of signals in the hidden layer of an MLP; this functional expansion of the input increases the dimension of the input pattern. In the FLANN structure proposed for image denoising, the input functional expansion is carried out using trigonometric, exponential, or Chebyshev polynomials. The prime advantage of the FLANN structure is that it reduces the computational complexity without sacrificing performance.
Simulation results indicate that the performance of the FLANN is better than that of the MLP for Salt and Pepper noise suppression in an image, and that the FLANN with Chebyshev functional expansion is better for this task than the other FLANN structures. The FLANN structure with Chebyshev functional expansion may therefore be used for online image-processing applications because of its low computational requirements and satisfactory performance. The new nonlinear adaptive FLANN filter has shown satisfactory results in its application to noisy images. Its adaptability to different parameters when the image is corrupted with Gaussian noise remains to be studied, and generalization of this filter to other types of noise has to be developed.
REFERENCES
[1] S. Haykin, Neural Networks. Ottawa, ON, Canada: Maxwell Macmillan, 1994.
[2] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Comput., vol. 3, pp. 246–257, 1991.
[3] S. Chen, S. A. Billings, and P. M. Grant, "Recursive hybrid algorithm for nonlinear system identification using radial basis function networks," Int. J. Contr., vol. 55, no. 5, pp. 1051–1070, 1992.
[4] Q. Zhang and A. Benveniste, "Wavelet networks," IEEE Trans. Neural Networks, vol. 3, pp. 889–898, Mar. 1992.
[5] Y. H. Pao, Adaptive Pattern Recognition and Neural Networks. Reading, MA: Addison-Wesley, 1989.
[6] J. C. Patra, R. N. Pal, B. N. Chatterji, and G. Panda, "Identification of nonlinear dynamic systems using functional link artificial neural networks," IEEE Trans. Systems, Man and Cybernetics, Part B, vol. 29, pp. 254–262, Apr. 1999.
[7] A. Namatame and N. Ueda, "Pattern classification with Chebyshev neural networks," Int. J. Neural Networks, vol. 3, pp. 23–31, Mar. 1992.
[8] J. C. Patra and R. N. Pal, "Functional link artificial neural network-based adaptive channel equalization of nonlinear channels with QAM signal," in Proc. IEEE Int. Conf. Systems, Man and Cybernetics, vol. 3, Oct. 1995, pp. 2081–2086.
[9] R. Griño, G. Cembrano, and C. Torres, "Nonlinear system identification using additive dynamic neural networks: two on-line approaches," IEEE Trans. Circuits and Systems I, vol. 47, pp. 150–165, Feb. 2000.
[10] A. R. Foruzan and B. N. Araabi, "Iterative median filtering for restoration of images with impulsive noise," in Proc. IEEE Int. Conf. Electronics, Circuits and Systems (ICECS 2003), 14–17 Dec. 2003.
[11] L. Corbalán, G. Osella Massa, C. Russo, L. Lanzarini, and A. De Giusti, "Image recovery using a new nonlinear adaptive filter based on neural networks," Journal of Computing and Information Technology (CIT), vol. 14, pp. 315–320, Apr. 2006.
[12] F. Russo, "A method for estimation and filtering of Gaussian noise in images," IEEE Trans. Instrumentation and Measurement, vol. 52, no. 4, pp. 1148–1154, Aug. 2003.
Reply
