Generic Visual Perception Processor
#1

Generic visual perception processor is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the human eye's neural networks and the brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks ranging from sensing variable parameters, in the form of video signals, to processing them for control purposes.
#2


GENERIC VISUAL PERCEPTION PROCESSOR
ABSTRACT
Generic visual perception processor is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the human eye's neural networks and the brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks ranging from sensing variable parameters, in the form of video signals, to processing them for control purposes.

Presented By:
VACHAN ADITYA
Information Science and Engineering, Visvesvaraya Technological University, Belgaum

1. A visual perception processor for automatically detecting an event occurring in a multidimensional space (i, j) evolving over time with respect to at least one digitized parameter in the form of a digital signal on a data bus, said digital signal being in the form of a succession aijT of binary numbers associated with synchronization signals enabling to define a given instant (T) of the multidimensional space and the position (i, j) in this space, the visual perception processor comprising: the data bus; a control unit; a time coincidences bus carrying at least a time coincidence signal; and at least two histogram calculation units for the treatment of the at least one parameter, the histogram calculation units being configured to form a histogram representative of the parameter as a function of a validation signal and to determine by classification a binary classification signal resulting from a comparison of the parameter and a selection criterion C, wherein the classification signal is sent to the time coincidences bus, and wherein the validation signal is produced from time coincidences signals from the time coincidence bus so that the calculation of the histogram depends on the classification signals carried by the time coincidence bus.
2. A visual perception processor according to claim 1, further comprising, to process several parameters, several histogram calculation units organized into a matrix, wherein each of the calculation units is connected to the data bus and to the time coincidences bus.
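For readers skimming these claims, the coupling in claim 1 between histogram formation, classification against a selection criterion C, and the time-coincidences bus can be sketched in software. The following Python model is purely illustrative: the function name, the range form of the criterion, and the per-pixel coincidence lists are assumptions, not the patented hardware.

```python
def histogram_unit(frame, criterion, coincidence_signals, n_bits=8):
    """Illustrative model of one histogram calculation unit.

    frame               -- iterable of integer pixel values a_ijT
    criterion           -- (lo, hi) range standing in for selection criterion C
    coincidence_signals -- per-pixel lists of booleans taken from the
                           time-coincidences bus (other units' outputs)
    """
    lo, hi = criterion
    histogram = [0] * (2 ** n_bits)   # one counter per possible n-bit value
    classification = []               # binary signal sent back to the bus
    for value, signals in zip(frame, coincidence_signals):
        # Validation: the histogram is only updated when every individual
        # time-coincidence signal is positive (the global enabling signal).
        if all(signals):
            histogram[value] += 1
        # Classification: compare the parameter with the criterion C.
        classification.append(lo <= value <= hi)
    return histogram, classification
```

Several such units wired to a common bus, with each unit's `classification` feeding the other units' `coincidence_signals`, gives the matrix arrangement of claim 2.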
3. A visual perception processor, comprising: a data bus; a time coincidences bus; and two or more histogram calculation units that receive the data DATA(A), DATA(B), . . . DATA(E) via the data bus and supply classification information to the single time coincidences bus, wherein at least one of said two or more histogram calculation units processes data aijT associated with pixels forming together a multidimensional space (i, j) evolving over time and represented at a succession of instants (T), wherein said data reaches said at least one calculation unit in the form of a digital signal DATA(A) in the form of a succession aijT of binary numbers of n bits associated with synchronization signals enabling to define the given instant (T) of the multidimensional space and the position (i, j) of the pixels in this space, to which the signal aijT received at a given instant (T) is associated, said unit comprising: an analysis memory including a memory with addresses, each address associated with possible values of the numbers of n bits of the signal DATA(A) and whose writing process is controlled by a WRITE signal; a classifier unit comprising a memory intended for receiving a selection criterion C of the parameter DATA(A), said classifier unit receiving the signal DATA(A) at the input and outputting a binary output signal having a value that depends on a result of the comparison of the signal DATA(A) with the selection criterion C; a time coincidences unit that receives the output signal from the classifier unit and, from outside the histogram calculation unit, individual binary enabling signals affecting parameters other than DATA(A), wherein said time coincidences unit outputs a positive global enabling signal when all the individual time coincidences signals are positive; a test unit; an analysis output unit including output memory; an address multiplexer; an incrementation enabling unit; and a learning multiplexer; wherein a counter of each address in the memory 
corresponds to the value d of aijT at a given instant, and is incremented by one unit when the time coincidences unit outputs a positive global enabling signal; wherein the test unit is provided for calculating and storing statistical data, and processes, after receiving the data aijT corresponding to the space at an instant T, the content of the analysis memory in order to update the output memory of the analysis output unit, wherein the output memory is cleared before the beginning of each frame for a space at an instant T by an initialization signal;
wherein the learning multiplexer is configured to receive an external command signal and initiate an operation according to a learning mode in which registers of the classifier unit and of the time coincidences unit are deleted when starting to process a frame, wherein the analysis output unit supplies values typical of a sequence of each of these registers.
4. A visual perception processor according to claim 3, wherein the memory of the classifier is an addressable memory enabling real time updating of the selection criterion C and having a data input register, an address command register and a writing command register, receiving on its input register the output from the analysis memory and a signal End on its writing command register, the processor further comprising a data input multiplexer with two inputs and one output, receiving on one of its inputs a counting signal and on its other input the succession of data aijT to the address command of the memory of the classifier and an operator OR controlling the address multiplexer and receiving on its inputs an initialization signal and the end signal END.
5. A visual perception processor according to claim 4, wherein the space (i, j) is two-dimensional and wherein the signal DATA(A) is associated with the pixels of a succession of images.
6. A visual perception processor according to claim 3, further comprising means for anticipating the value of the classification criterion C.
7. A visual perception processor according to claim 6, wherein the means for anticipating the value of the classification criterion C comprises memories intended for containing the values of statistical parameters relating to two successive frames T0 and T1.
8. A visual perception processor according to claim 7, wherein the statistical parameters are the average values of the data aijT enabled.
9. A visual perception processor according to claim 3, wherein the analysis output register stores in its memory at least one of the following values: the minimum 'MIN', the maximum 'MAX', the maximum number of pixels for which the signal Vijt has a particular value 'RMAX', the corresponding particular value 'POSRMAX', and the total number of enabled pixels 'NBPTS'.
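The registers listed in claim 9 are all simple reductions of the histogram contents. A hedged Python sketch follows; the register names come from the claim, but the function itself and its dictionary output are invented for illustration.

```python
def analysis_registers(histogram):
    """Derive the claim-9 statistical registers from a histogram
    (a list with one counter per possible parameter value)."""
    nonzero = [v for v, count in enumerate(histogram) if count > 0]
    rmax = max(histogram)                        # height of the histogram peak
    return {
        "MIN": nonzero[0] if nonzero else None,  # smallest enabled value
        "MAX": nonzero[-1] if nonzero else None, # largest enabled value
        "RMAX": rmax,                            # number of pixels at the peak
        "POSRMAX": histogram.index(rmax),        # value at which the peak occurs
        "NBPTS": sum(histogram),                 # total number of enabled pixels
    }
```

Claim 10's comparison parameter is then simply `analysis_registers(h)["RMAX"] / 2`.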
10. A visual perception processor according to claim 3, wherein the statistical comparison parameter used by the classifier is RMAX/2.
11. A visual perception processor according to claim 3, further comprising a control multiplexer configured to receive at its input several statistical parameters and wherein the comparison made by the classifier depends on a command issued by the control multiplexer.
12. A visual perception processor according to claim 3, wherein the memory of the classifier includes a set of independent registers D, each comprising one input, one output and one writing command register, wherein the number of these registers D is equal to the number n of bits of the numbers of the succession Vijt, the classifier further comprising a decoder configured to output a command signal corresponding to the related input value (address) and a multiplexer controlled by this input value, thus enabling to read the chosen register.
13. A visual perception processor according to claim 12, further comprising register input multiplexers, each being associated with the input of a register, and combinatory modules connecting the registers to one another, wherein the register input multiplexers are configured to choose between a sequential writing mode and a writing mode common to all the registers connected together by the combinatory modules.
14. A visual perception processor according to claim 13, wherein the combinatory modules comprise a morphological expansion operator including a three-input logic unit 'OR', wherein the first input unit receives the output signal of the 'Q'-order register, wherein the second input unit is connected to the output of a two-input logic unit 'AND' receiving respectively the output signal of the 'Q+1'-order register and a positive expansion signal, and wherein the third input unit is connected to the output of a two-input logic unit 'AND' receiving respectively the output signal of the 'Q-1'-order register and a negative expansion signal.
15. A visual perception processor according to claim 14, wherein the combinatory modules comprise a morphological erosion operator including a three-input logic unit 'AND', wherein the first input unit receives the output signal of the 'Q'-order register, wherein the second input unit is connected to the output of a logic unit 'AND', wherein one four-input inverter receives respectively the output signal of the 'Q'-order register, the output signal of the 'Q-1'-order register, the output signal of the 'Q+1'-order register and a positive erosion signal, and wherein the third input unit is connected to the output of a four-input logic unit 'AND', wherein one inverter receives respectively the output signal of the 'Q'-order register, the output signal of the 'Q-1'-order register, the output signal of the 'Q+1'-order register and a negative erosion signal.
16. A histogram calculation unit according to claim 14, wherein each combinatory module comprises a multiplexer associating a morphological expansion operator and a morphological erosion operator.
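The expansion operator of claim 14 ORs each register's output with its neighbours, gated by the expansion signals; the erosion operator of claim 15 is roughly its dual. The sketch below is a simplified software rendering under that reading, not a gate-level transcription of the claims; function names and the boolean-list representation are assumptions.

```python
def expand(bits, pos=True, neg=True):
    """Morphological expansion: bit Q stays set, and is also set from its
    Q+1 / Q-1 neighbour when the positive / negative expansion signal is on."""
    out = []
    for q in range(len(bits)):
        up = bits[q + 1] if q + 1 < len(bits) else False   # 'Q+1'-order output
        down = bits[q - 1] if q > 0 else False             # 'Q-1'-order output
        out.append(bits[q] or (up and pos) or (down and neg))
    return out

def erode(bits):
    """Simplified erosion: a set bit survives only if a neighbour is set."""
    out = []
    for q in range(len(bits)):
        up = bits[q + 1] if q + 1 < len(bits) else False
        down = bits[q - 1] if q > 0 else False
        out.append(bits[q] and (up or down))
    return out
```

A per-module multiplexer choosing between `expand` and `erode` at run time corresponds to the arrangement of claim 16.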
17. A visual perception processor according to claim 3, wherein the histogram calculation units are organized into a matrix.
18. A device for detecting one or more events including aural and/or visual phenomena, the device comprising: a controller coupled to a controller bus and a transfer bus; an input portal adapted to receive data describing one or more parameters of the event being detected; and a data processing block coupled to the input portal, the transfer bus and the controller bus, the data processing block including: a histogram unit coupled to the input portal and configured to calculate a histogram for a selected parameter; a classification unit coupled to the input portal and the histogram unit, and configured to determine the data in the histogram that satisfy a selected criterion, and to generate an output accordingly, the classification unit supplying the output to the transfer bus; and a coincidence unit coupled to receive the output of the classification unit from the transfer bus and to receive selected coincidence criteria from the controller bus, the coincidence unit being configured to generate an enable signal for the histogram unit when the output of the classification unit satisfies the selected coincidence criterion, wherein classification is performed automatically by processing statistical information associated with the calculated histogram.
19. The device of claim 18, wherein the classification unit includes a memory table for storing selection criteria, and wherein automatic classification involves updating the selection criteria in the memory table based on the processed statistical information.
20. The device of claim 19, wherein the processed statistical information includes a value RMAX defining the number of data points at the maximum of the calculated histogram, and wherein automatic classification involves updating the selection criteria in the memory table based on the value RMAX.
21. The device of claim 18, wherein the classification unit includes a memory table for storing selection criteria, and wherein automatic classification involves changing an address input to the memory table based on the processed statistical information.
22. A device for detecting one or more events including aural and/or visual phenomena, the device comprising: a controller coupled to a controller bus and a transfer bus;
an input multiplexer adapted to receive data describing one or more parameters of the event being detected, and to output data describing a selected one of the one or more parameters in response to a selection signal; and a data processing block coupled to the multiplexer, the transfer bus and the controller bus, the data processing block including: a histogram unit coupled to the input portal and configured to calculate a histogram for the selected parameter; a classification unit coupled to the input portal and the histogram unit, and configured to determine the data in the histogram that satisfy a selected criterion, and to generate an output accordingly, the classification unit supplying the output to the transfer bus; and a coincidence unit coupled to receive the output of the classification unit from the transfer bus and to receive selected coincidence criteria from the controller bus, the coincidence unit being configured to generate an enable signal for the histogram unit when the output of the classification unit satisfies the selected coincidence criterion.
23. A device for detecting one or more events including aural and/or visual phenomena, the device comprising: a controller coupled to a controller bus and a transfer bus; an input portal adapted to receive data sets describing one or more parameters of the event being detected, each data set being associated with an instant of time; and a data processing block coupled to the input portal, the transfer bus and the controller bus, the data processing block including: a histogram unit coupled to the input portal and configured to calculate a histogram for a selected parameter for a particular instant of time T1; a classification unit coupled to the input portal and the histogram unit, and configured to determine the data in the histogram that satisfy a selected criterion, and to generate an output accordingly, the classification unit supplying the output to the transfer bus; and a coincidence unit coupled to receive the output of the classification unit from the transfer bus and to receive selected coincidence criteria from the controller bus, the coincidence unit being configured to generate an enable signal for the histogram unit when the output of the classification unit satisfies the selected coincidence criterion, wherein the classification unit automatically anticipates values associated with the selected parameter at a next instant of time T2 based on statistical information associated with the calculated histograms at time T1 and at a previous time T0.
24. The device of claim 23, wherein the statistical information at each time T0 and T1 includes a value POSMOY defined as the value, for a set of parameters, which is greater than or equal to half of the values of the set of parameters.
25. The device of claim 24, wherein automatic anticipation is based on a function of POSMOY at T0 minus POSMOY at T1 (P0-P1).
26. The device of claim 25, wherein the function includes one of Y=(P0-P1), Y=a(P0-P1)+b, and Y=a(P0-P1)², where a and b are predetermined constants.
27. The device of claim 26, wherein two or more of the functions are multiplexed.
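The three candidate anticipation functions of claim 26 can be written out directly. In the Python sketch below, only the three formulas come from the claim; the function name and the mode-string dispatch are illustrative assumptions.

```python
def anticipate(p0, p1, mode="linear", a=1.0, b=0.0):
    """Predict the criterion shift for the next frame from POSMOY values
    P0 (at time T0) and P1 (at time T1), per the functions of claim 26."""
    d = p0 - p1
    if mode == "identity":
        return d              # Y = (P0 - P1)
    if mode == "linear":
        return a * d + b      # Y = a(P0 - P1) + b
    if mode == "quadratic":
        return a * d * d      # Y = a(P0 - P1)^2
    raise ValueError(f"unknown mode: {mode}")
```

Claim 27's multiplexing of two or more functions then corresponds to selecting `mode` at run time.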
28. A method of analyzing parameters associated with an event by an electronic device, comprising:
a) receiving data sets representative of one or more parameters of the event being detected, each data set being associated with an instant of time;
b) calculating, for each instant of time, a statistical distribution, defined as a histogram, of a selected parameter of the event being detected;
c) classifying the data set by comparing its parameter values to classification criteria stored in a classification memory;
d) enabling the calculating step when classified data satisfies predetermined time coincidence criteria; and
e) anticipating values associated with the selected parameter for a next instant of time T2 based on statistical information associated with the calculated histograms at an instant of time T1 and at a previous instant of time T0.
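Steps a) through e) above can be strung together in a few lines. This Python sketch makes several simplifying assumptions that are not in the claim: the classification criterion is a fixed range, the coincidence test collapses to a single flag, and POSMOY is approximated by a median of the enabled values.

```python
def analyze_sequence(frames, criterion, coincidence_ok):
    """Sketch of method steps a)-e): per-frame histogram, classification
    against a stored criterion, coincidence-gated counting, and linear
    anticipation from the last two frames' POSMOY values."""
    posmoy_history = []
    for frame in frames:                        # a) receive each data set
        histogram = {}
        enabled = []
        for value in frame:
            ok = criterion[0] <= value <= criterion[1]          # c) classify
            if ok and coincidence_ok:                           # d) enable
                histogram[value] = histogram.get(value, 0) + 1  # b) histogram
                enabled.append(value)
        if enabled:
            enabled.sort()
            posmoy_history.append(enabled[len(enabled) // 2])   # median-like POSMOY
    # e) anticipate the value at T2 from the values at T0 and T1
    if len(posmoy_history) >= 2:
        p0, p1 = posmoy_history[-2], posmoy_history[-1]
        return p1 + (p1 - p0)          # linear extrapolation to the next instant
    return posmoy_history[-1] if posmoy_history else None
```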
29. A method of analyzing parameters associated with an event by an electronic device, comprising:
a) receiving data representative of one or more parameters of the event being detected;
b) calculating, for a given instant of time, a statistical distribution, defined as a histogram, of a selected parameter of the event being detected;
c) classifying the data by comparing its value to classification criteria stored in a classification memory;
d) enabling the calculating step when classified data satisfies predetermined time coincidence criteria; and
e) automatically updating, for each instant of time, the classification criteria stored in the classification memory based on statistical information associated with the histogram.
#3
Generic Visual Perception Processor
Abstract
Generic visual perception processor is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and locate and track them in real time. Imitating the human eye's neural networks and the brain, the chip can handle some 20 billion instructions per second. This electronic eye on a chip can handle tasks ranging from sensing variable parameters, in the form of video signals, to processing them for control purposes.
#4
Generic Visual Perception Processor

PRESENTED BY,
ANJALI MOHAN
S7 CSE
GECI

Generic Visual Perception Processor - "the electronic eye"

Developed after 10 years of scientific study
Is a single chip modelled on the perception capabilities of the human brain
Can detect objects in a motion video signal
Can detect and track them in real time
Can handle about 20 billion instructions per second (BIPS)
Can handle tasks ranging from sensing variable parameters in the form of video signals to processing them for control

Can handle most tasks performed by the human eye
#6

1 INTRODUCTION
While computing technology is growing in leaps and bounds, the human brain continues to be the world's fastest computer. Combine brain power with seeing power, and you have the fastest, cheapest, most extraordinary processor ever: the human eye. Little wonder, then, that research labs the world over are striving to produce a near-perfect electronic eye.
The 'generic visual perception processor (GVPP)' has been developed after 10 long years of scientific effort. Generic Visual Perception Processor (GVPP) can automatically detect objects and track their movement in real-time. The GVPP, which crunches 20 billion instructions per second (BIPS), models the human perceptual process at the hardware level by mimicking the separate temporal and spatial functions of the eye-to-brain system. The processor sees its environment as a stream of histograms regarding the location and velocity of objects.
GVPP has been demonstrated as capable of learning-in-place to solve a variety of pattern recognition problems. It boasts automatic normalization for varying object size, orientation and lighting conditions, and can function in daylight or darkness.
This electronic "eye" on a chip can now handle most tasks that a normal human eye can: that includes driving safely, selecting ripe fruits, and reading and recognizing things. Sadly, though modeled on the visual perception capabilities of the human brain, the chip is not a medical marvel poised to cure the blind.
2 BACKGROUND OF THE INVENTION
The invention relates generally to methods and devices for automatic visual perception, and more particularly to methods and devices for processing image signals using two or more histogram calculation units to localize one or more objects in an image signal using one or more characteristics of an object, such as its shape, size and orientation. Such devices can be termed electronic spatio-temporal neurons, and are particularly useful for image processing, but may also be used for other signals, such as audio signals. The techniques of the present invention are also particularly useful for tracking one or more objects in real time.
It is desirable to provide devices including combined data processing units of a similar nature, each addressing a particular parameter extracted from the video signal. In particular, it is desirable to provide devices including multiple units for calculating histograms, or electronic spatio-temporal neurons (STN), each processing a parameter DATA(A) by a function in order to generate an output value individually.
The present invention also provides a method for perception of an object using characteristics, such as its shape, its size or its orientation, using a device composed of a set of histogram calculation units.
Using the techniques of the present invention, a general outline of a moving object is determined with respect to a relatively stable background; then, inside this outline, elements characterized by their tone, color, relative position, etc. are determined.
3 POTENTIAL SIGHTED
The GVPP was invented in 1992 by BEV founder Patrick Pirim, who realized it would be relatively simple for a CMOS chip to implement in hardware the separate contributions of temporal and spatial processing in the brain. The brain-eye system uses layers of parallel-processing neurons that pass the signal through a series of preprocessing steps, resulting in real-time tracking of multiple moving objects within a visual scene.
Pirim created a chip architecture that mimicked the work of the neurons, with the help of multiplexing and memory. The result is an inexpensive device that can autonomously "perceive" and then track up to eight user-specified objects in a video stream based on hue, luminance, saturation, spatial orientation, speed and direction of motion.
The GVPP tracks an "object," defined as a certain set of hue, luminance and saturation values in a specific shape, from frame to frame in a video stream by anticipating where its leading and trailing edges make "differences" with the background. That means it can track an object through varying light sources or changes in size, as when an object gets closer to the viewer or moves farther away.
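The edge-anticipation idea described above, under a simple constant-velocity assumption, amounts to linear extrapolation of each edge position between frames. A minimal illustrative sketch (the function and its arguments are invented for this example, not taken from the chip's design):

```python
def anticipate_edges(lead_prev, trail_prev, lead_now, trail_now):
    """Predict where an object's leading and trailing edges will next
    'differ' from the background, assuming constant edge velocity."""
    lead_next = lead_now + (lead_now - lead_prev)      # extrapolate leading edge
    trail_next = trail_now + (trail_now - trail_prev)  # extrapolate trailing edge
    return lead_next, trail_next
```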
The GVPP's major performance strength over current-day vision systems is its adaptation to varying lighting conditions. Today's vision systems dictate uniform, shadowless illumination, and even next-generation prototype systems, designed to work under "normal" lighting conditions, can be used only from dawn to dusk. The GVPP, on the other hand, adapts to real-time changes in lighting without recalibration, day or night.
For many decades the field of computing has been trapped by the limitations of traditional processors, and many futuristic technologies have been bound by them. These limitations stem from the basic architecture of these processors: they work by slicing each complex program into simple tasks that the processor can execute, which requires the existence of an algorithm for the solution of the particular problem. But there are many situations where no algorithm exists, or where a human is unable to formulate one. Even in these extreme cases the GVPP performs well: it can solve a problem with its neural learning function. Neural networks are extremely fault tolerant. By their design, even if a group of neurons is damaged, the network only suffers a graceful degradation of performance; it won't abruptly fail to work. This is a crucial difference from traditional processors, which fail to work even if a few components are damaged. The GVPP recognizes, stores, matches and processes patterns. Even if a pattern in the input is not recognizable to a human programmer, the neural network will dig it out. Thus the GVPP becomes an efficient tool for applications like pattern matching and recognition.