Smart Cameras in Embedded Systems
Abstract— A smart camera performs real-time analysis to recognize scenic elements. Smart cameras are useful in a variety of scenarios: surveillance, medicine, etc. We have built a real-time system for recognizing gestures. Our smart camera uses novel algorithms to recognize gestures based on low-level analysis of body parts as well as hidden Markov models for the moves that comprise the gestures. These algorithms run on a Trimedia processor. Our system can recognize gestures at the rate of 20 frames/second. The camera can also fuse the results of multiple cameras.
I. INTRODUCTION
Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications, including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has an insatiable demand for real-time performance. Fortunately, Moore's law provides an increasing pool of available computing power to apply to real-time analysis. Smart cameras leverage very large-scale integration (VLSI) to provide such analysis in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these systems run a wide range of algorithms to extract meaning from streaming video. Because they push the design space in so many dimensions, smart cameras are a leading-edge application for embedded systems research.
II. DETECTION AND RECOGNITION ALGORITHMS
Although there are many approaches to real-time video analysis, we chose to focus initially on human gesture recognition: identifying whether a subject is walking, standing, waving his arms, and so on. Because much work remains to be done on this problem, we sought to design an embedded system that can incorporate future algorithms as well as those we created exclusively for this application. Our algorithms use both low-level and high-level processing. The low-level component identifies different body parts and categorizes their movement in simple terms. The high-level component, which is application-dependent, uses this information to recognize each body part's action and the person's overall activity based on scenario parameters.
The human detection and activity/gesture recognition algorithm has two major parts: low-level processing (the blue blocks in Figure 1) and high-level processing (the green blocks in Figure 1).
A. Low-level processing
The system captures images from the video input, which can be either uncompressed or compressed (MPEG and motion JPEG), and applies four different algorithms to detect and identify human body parts.
Region extraction: The first algorithm transforms the pixels of an image like that shown in Figure 2a into an M × N bitmap and eliminates the background. It then detects the body part's skin area using a YUV color model with downsampled chrominance values.
Next, as Figure 2b illustrates, the algorithm hierarchically segments the frame into skin-tone and non-skin-tone regions by extracting foreground regions adjacent to detected skin areas and combining these segments in a meaningful way.
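As a minimal illustration of the skin-detection step, the sketch below thresholds the chrominance channels of a YUV frame to produce the skin bitmap. The threshold values and function name are our assumptions; the text does not give the exact ranges the system uses, and the real system works on downsampled chrominance planes.

```python
import numpy as np

# Illustrative 8-bit chrominance bounds for skin tone; the actual
# thresholds used by the system are not given in the text.
U_RANGE = (77, 127)
V_RANGE = (133, 173)

def skin_mask(yuv: np.ndarray) -> np.ndarray:
    """Return an M x N boolean bitmap marking skin-tone pixels.

    yuv: M x N x 3 array holding the Y, U, V channels. Full-resolution
    chrominance is assumed here to keep the sketch short.
    """
    u, v = yuv[:, :, 1], yuv[:, :, 2]
    return ((U_RANGE[0] <= u) & (u <= U_RANGE[1]) &
            (V_RANGE[0] <= v) & (v <= V_RANGE[1]))
```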
Contour following: The next step in the process, shown in Figure 2c, involves linking the separate groups of pixels into contours that geometrically define the regions. This algorithm uses a 3 × 3 filter to follow the edge of the component in any of eight different directions.
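A simplified sketch of such an eight-direction edge following, in the spirit of Moore-neighbor tracing; the starting convention and stopping rule here are our assumptions, not the paper's.

```python
import numpy as np

# The eight neighbor offsets (dy, dx), clockwise starting from east.
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def follow_contour(mask: np.ndarray, start: tuple) -> list:
    """Trace the boundary of the foreground component containing `start`.

    From each boundary pixel, the 3 x 3 neighborhood is scanned clockwise
    through the eight directions for the next foreground pixel. `start`
    is assumed to lie on the boundary with background to its west.
    """
    contour = [start]
    y, x = start
    scan_from = 0  # begin scanning toward the east
    while True:
        for i in range(8):
            d = (scan_from + i) % 8
            dy, dx = DIRS[d]
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx]):
                y, x = ny, nx
                scan_from = (d + 5) % 8  # resume just past the backtrack direction
                break
        else:
            return contour  # isolated pixel: no foreground neighbors
        if (y, x) == start:
            return contour  # boundary closed
        contour.append((y, x))
```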
Ellipse fitting: To correct for deformations in image processing caused by clothing, objects in the frame, or some body parts blocking others, an algorithm fits ellipses to the pixel regions, as Figure 2d shows, to provide simplified part attributes. The algorithm uses these parametric surface approximations to compute geometric descriptors for segments, such as area, compactness (circularity), weak perspective invariants, and spatial relationships.
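The text does not name the fitting method, so the sketch below assumes a common moment-based approach: a region's mean and second-order moments define an equivalent ellipse, whose parameters yield descriptors such as area and compactness.

```python
import numpy as np

def fit_ellipse(region: np.ndarray) -> dict:
    """Fit an ellipse to a pixel region via its second-order moments.

    region: K x 2 array of (y, x) coordinates of one segment.
    """
    mean = region.mean(axis=0)
    cov = np.cov(region.T)                    # 2 x 2 scatter of the region
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    # Semi-axes of the uniform-density ellipse with the same moments
    minor, major = 2.0 * np.sqrt(np.maximum(eigvals, 0.0))
    vy, vx = eigvecs[0, 1], eigvecs[1, 1]     # major-axis eigenvector
    return {
        "center": tuple(mean),
        "major_axis": major,
        "minor_axis": minor,
        "orientation": np.arctan2(vy, vx),    # radians from the x-axis
        "area": np.pi * major * minor,
        "compactness": minor / major if major > 0 else 1.0,  # 1.0 = circle
    }
```

The moment-based fit is attractive in an embedded setting because it needs only one pass over the region's pixels and no iterative optimization.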
Graph matching: Each extracted region modeled with ellipses corresponds to a node in a graphical representation of the human body. A piecewise quadratic Bayesian classifier uses the ellipse parameters to compute feature vectors consisting of binary and unary attributes. It then matches these attributes to feature vectors of body parts, or meaningful combinations of parts, that are computed offline. To expedite the branching process, the algorithm begins with the face, which is generally the easiest part to detect.
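A minimal sketch of a quadratic Bayesian classifier of this kind, assuming Gaussian class-conditional models whose means, covariances, and priors were estimated offline; all names and parameters here are illustrative.

```python
import numpy as np

class QuadraticBayesClassifier:
    """Gaussian (quadratic) Bayesian classifier over ellipse-derived
    feature vectors. Each body-part class is summarized offline by a
    mean, a covariance matrix, and a prior; at run time a segment is
    assigned to the class with the highest log-posterior score.
    """

    def __init__(self, class_models: dict):
        # class_models: name -> (mean, covariance, prior probability)
        self._models = {}
        for name, (mu, cov, prior) in class_models.items():
            self._models[name] = (mu, np.linalg.inv(cov),
                                  np.log(np.linalg.det(cov)), np.log(prior))

    def classify(self, x: np.ndarray) -> str:
        def log_posterior(model):
            mu, cov_inv, log_det, log_prior = model
            d = x - mu
            # Quadratic discriminant: Gaussian log-likelihood plus log prior
            return -0.5 * (d @ cov_inv @ d + log_det) + log_prior
        return max(self._models, key=lambda n: log_posterior(self._models[n]))
```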
B. High-level processing
The high-level processing component, which can be adapted to different applications, compares the motion pattern of each body part, described as a spatiotemporal sequence of feature vectors, across a set of frames to the patterns of known postures and gestures, and then uses several hidden Markov models (HMMs) in parallel to evaluate the body's overall activity. We use discrete HMMs that generate eight directional code words, capturing the up, down, left, right, and circular movement of each body part.
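A minimal sketch of how a frame-to-frame displacement might be quantized into the eight directional code words the HMMs consume; the bin layout is an assumption, since the text states only that eight directions are used.

```python
import numpy as np

def directional_codeword(dy: float, dx: float) -> int:
    """Quantize a body part's frame-to-frame displacement (dy, dx) into
    one of eight directional code words: 0 = right, 2 = up, 4 = left,
    6 = down, with diagonals in between. These symbols are the discrete
    observations fed to the HMMs.
    """
    angle = np.arctan2(-dy, dx)                  # image y-axis points down
    return int(np.round(angle / (np.pi / 4))) % 8
```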
Human actions often involve a complex series of movements. We therefore combine each body part's motion pattern with the one immediately following it to generate a new pattern. Using dynamic programming, we calculate the probabilities for the original and combined patterns to identify what the person is doing. Gaps between gestures help indicate the beginning and end of discrete actions.
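The dynamic-programming probability calculation can be sketched as the standard forward recursion for a discrete HMM; the function and parameter names below are placeholders, not the system's actual interface.

```python
import numpy as np

def sequence_prob(obs: list, pi: np.ndarray, A: np.ndarray,
                  B: np.ndarray) -> float:
    """Probability of a code-word sequence under one discrete HMM,
    computed with the forward dynamic-programming recursion.

    obs: directional code words (ints in 0..7)
    pi:  initial state distribution, shape (S,)
    A:   state transition matrix, shape (S, S)
    B:   observation matrix, shape (S, 8)
    """
    alpha = pi * B[:, obs[0]]           # forward variables at the first frame
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # advance the recursion one frame
    return float(alpha.sum())
```

Evaluating one such model per gesture, over both the original and the combined patterns, the recognizer would report the gesture whose HMM assigns the highest probability; for long sequences a log-space version avoids numerical underflow.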
A quadratic Mahalanobis distance classifier combines HMM output with different weights to generate reference models for various gestures. For example, a pointing gesture could be recognized as a command to "go to the next slide" in a smart meeting room or "open the window" in a smart car, whereas a smart security camera might interpret the gesture as suspicious or threatening.
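A sketch of a weighted quadratic Mahalanobis score against one gesture's reference model; how the weights enter the distance is our assumption, as the text says only that HMM outputs are combined with different weights.

```python
import numpy as np

def mahalanobis_score(hmm_outputs: np.ndarray, ref_mean: np.ndarray,
                      ref_cov: np.ndarray, weights: np.ndarray) -> float:
    """Weighted quadratic Mahalanobis distance between a vector of HMM
    outputs and a gesture's reference model; smaller means a closer
    match. The weighting scheme here is illustrative.
    """
    d = weights * (hmm_outputs - ref_mean)
    return float(d @ np.linalg.inv(ref_cov) @ d)
```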