REAL TIME IMAGE PROCESSING
ABSTRACT
As a famous quotation goes:
“Safety is a cheap and effective insurance policy”
Safety is one of the foremost considerations in the automotive industry, and considerable effort is devoted to it. Dozens of processors control every performance aspect of today’s automobiles.
Most currently available safety features rely on a wide array of sensors (principally microwave, infrared, laser, accelerometer, or position detectors) for applications such as front and rear collision warning, lane departure warning, and automatic emergency braking.
This paper presents a novel attempt to improve automotive safety with the help of image processing, which aids the driver in efficient driving.
The safety feature considered is the LANE DEPARTURE SYSTEM. A high-speed camera captures images of the road scene ahead at regular intervals. These images are moved to memory for further processing and then passed through EDGE DETECTION TECHNIQUES, using Sobel, Canny, or Prewitt edge detectors, which detect the lane markings on the road. The HOUGH TRANSFORM is then applied to extract the lane lines, which are compared with the current car position. This lets the system keep a constant vigil on the car’s position within its lane.
This concept also paves the way toward future automated driving on highways. Moreover, the use of high-speed processors such as the Blackfin makes this system straightforward to implement in real time.
1. VIDEO IN AUTOMOTIVE SAFETY SYSTEMS
In many ways, car safety can be greatly enhanced by video-based systems that use high-performance media processors. Because short response times are critical to saving lives, however, image processing and video filtering must be done deterministically in real time. There is a natural tendency to use the highest video frame rates and resolution that a processor can handle for a given application, since this provides the best data for decision making. In addition, the processor needs to compare vehicle speeds and relative vehicle-object distances against desired conditions—again in real time. Furthermore, the processor must interact with many vehicle subsystems (such as the engine, braking, steering, and airbag controllers), process sensor information from all these systems, and provide appropriate audiovisual output to the driver. Finally, the processor should be able to interface to navigation and telecommunication systems to react to and log malfunctions, accidents, and other problems.
Figure 1 shows the basic video operational elements of an automotive safety system, indicating where image sensors might be placed throughout a vehicle, and how a lane departure system might be integrated into the chassis. There are a few things worth noting. First, multiple sensors can be shared by different automotive safety functions. For example, the rear-facing sensors can be used when the vehicle is backing up, as well as to track lanes as the vehicle moves forward. In addition, the lane-departure system might accept feeds from any of a number of camera sources, choosing the appropriate inputs for a given situation. In a basic system, a video stream feeds its data to the embedded processor. In more advanced systems, the processor receives other sensor information, such as position data from GPS receivers.
2. LANE DEPARTURE—A SYSTEM EXAMPLE
The overall system diagram of Figure 2 is fairly straightforward, considering the complexity of the signal processing functions being performed. Interestingly, in a video-based lane departure system, the bulk of the processing is image-based, and is carried out within a signal processor rather than by an analog signal chain. This represents a big savings on the system bill-of-materials. The output to the driver consists of a warning to correct the car’s projected path before the vehicle leaves the lane unintentionally. It may be an audible “rumble-strip” sound, a programmed chime, or a voice message.
The video input system to the embedded processor must perform reliably in a harsh environment, including wide and drastic temperature shifts and changing road conditions. As the data stream enters the processor, it is transformed—in real time—into a form that can be processed to output a decision. At the simplest level, the lane departure system looks for the vehicle’s position with respect to the lane markings in the road. To the processor, this means the incoming stream of road imagery must be transformed into a series of lines that delineate the road surface.
The processor can find lines within a field of data by looking for edges. These edges form the boundaries within which the driver should keep the vehicle while it is moving forward. The processor must track these line markers and determine whether to notify the driver of irregularities.
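The edge-finding step described above can be sketched with a Sobel operator, a common choice among the detectors the paper names. This is an illustrative host-side NumPy model rather than the optimized Blackfin implementation; the function name and threshold value are assumptions for the example.

```python
import numpy as np

# Sobel kernels for horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray, threshold=100.0):
    """Return a binary edge map: gradient magnitude above `threshold`."""
    g = gray.astype(np.float32)
    h, w = g.shape
    p = np.pad(g, 1, mode="edge")
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # Accumulate the 3x3 correlation one kernel tap at a time.
    for dy in range(3):
        for dx in range(3):
            win = p[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * win
            gy += SOBEL_Y[dy, dx] * win
    return np.hypot(gx, gy) > threshold
```

Applied to a frame containing a bright lane marking against dark asphalt, the output is a sparse binary map marking the boundaries of each stripe, which is exactly what the later line-extraction stage consumes.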
Figure 3: basic steps in a lane-departure algorithm and how the processor might connect to the outside world.
Let’s now drill deeper into the basic components of the lane-departure system example. Figure 3 follows the same basic operational flow as Figure 2 but with more insight into the algorithms being performed. The video stream coming into the system needs to be filtered and smoothed to reduce noise caused by temperature, motion, and electromagnetic interference. Without this step, it would be difficult to find clean lane markings.
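The filtering-and-smoothing stage can be as simple as a 3x3 averaging (box) filter, which suppresses isolated noisy pixels before edge extraction. The sketch below is a minimal NumPy model of that step; the function name is an assumption for illustration.

```python
import numpy as np

def box_filter_3x3(frame):
    """Smooth a gray-scale frame with a 3x3 averaging filter.

    A simple low-pass filter like this suppresses sensor and EMI
    noise before edges are extracted from the frame.
    """
    frame = frame.astype(np.float32)
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    # Sum the nine shifted copies of the frame, then average.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + frame.shape[0],
                          1 + dx : 1 + dx + frame.shape[1]]
    return out / 9.0
```

A single hot pixel of value 9 in an otherwise black frame is knocked down to 1.0 at its location, which keeps it from registering as a spurious edge in the next stage.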
The next processing step involves edge detection; if the system is set up properly, the edges found will represent the lane markings. These lines must then be matched to the direction and position of the vehicle. The Hough transform will be used for this step. Its output will be tracked across frames of images, and a decision will be made based on all the compiled information. The final challenge is to send a warning in a timely manner without sounding false alarms.
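The Hough transform step above maps each edge pixel into votes in (rho, theta) parameter space; peaks in the accumulator correspond to the straight lane markings. The following is a minimal, unoptimized sketch of the standard algorithm, not the processor-specific implementation; the function name and resolution parameters are assumptions.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel.

    Each edge pixel (x, y) lies on all lines satisfying
    rho = x*cos(theta) + y*sin(theta); accumulator peaks
    correspond to dominant straight lines (the lane markings).
    """
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta
    return acc, thetas, diag
```

A perfectly vertical edge at x = 5 in a 20-line image produces a peak of 20 votes at theta = 0, rho = 5; in the real system these peaks are tracked across frames before the warning decision is made.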
2.1. IMAGE ACQUISITION
An important feature of the processor is its parallel peripheral interface (PPI), which is designed to handle incoming and outgoing video streams. The PPI connects without external logic to a wide variety of video converters.
For automotive safety applications, image resolutions typically range from VGA (640 × 480 pixels/image) down to QVGA (320 × 240 pixels/image). Regardless of the actual image size, the format of the data transferred remains the same—but lower clock speeds can be used when less data is transferred. Moreover, in the most basic lane-departure warning systems, only gray-scale images are required. The data bandwidth is therefore halved (from 16 bits/pixel to 8 bits/pixel) because chroma information can be ignored.
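The bandwidth halving claimed above is easy to verify with a back-of-the-envelope calculation; the 30 frames/s rate below is an assumption for illustration, not a figure from the paper.

```python
def video_bandwidth(width, height, fps, bits_per_pixel):
    """Raw pixel bandwidth in megabytes per second."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

# VGA at an assumed 30 frames/s:
color = video_bandwidth(640, 480, 30, 16)  # YCbCr 4:2:2 -> 18.432 MB/s
gray  = video_bandwidth(640, 480, 30, 8)   # luma only   ->  9.216 MB/s
```

Dropping chroma halves the load on the PPI and the memory buses regardless of the chosen frame rate or resolution.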
2.2. MEMORY AND DATA MOVEMENT
Efficient memory usage is an important consideration for system designers because external memories are expensive, and their access times can have high latencies. While Blackfin processors have an on-chip SDRAM controller to support the cost-effective addition of larger, off-chip memories, it is still important to be judicious in transferring only the video data needed for the application. By intelligently decoding ITU-R 656 preamble codes, the PPI can aid this “data-filtering” operation. For example, in some applications, only the active video fields are required. In other words, horizontal and vertical blanking data can be ignored and not transferred into memory, resulting in up to a 25% reduction in the amount of data brought into the system. What’s more, this lower data rate helps conserve bandwidth on the internal and external data buses.
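The "up to 25%" figure can be sanity-checked against the 525-line (NTSC-family) ITU-R BT.601/656 timing, where each line carries 858 total samples of which 720 are active, over 525 total lines of which 486 are active picture. (These timing numbers come from the standard, not from the paper; treat the snippet as a rough check.)

```python
# ITU-R BT.601/656, 525-line system timing:
#   858 total samples per line, 720 active
#   525 total lines per frame, 486 active
total = 858 * 525
active = 720 * 486
savings = 1 - active / total  # fraction of the stream that is blanking
```

The result is roughly 0.22, consistent with the "up to 25%" reduction cited above for skipping blanking intervals.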
Because video data rates are very demanding, frame buffers must be set up in external memory, as shown in Figure 4. In this scenario, while the processor operates on one buffer, a second buffer is being filled by the PPI via a DMA transfer. A simple semaphore can be set up to maintain synchronization between the frames. With Blackfin’s flexible DMA controller, an interrupt can be generated at virtually any point in the memory fill process, but it is typically configured to occur at the end of each video line or frame.
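The ping-pong buffering and semaphore scheme described above can be modeled in a few lines. This is a host-side Python sketch of the synchronization logic only (class and method names are assumptions); on the Blackfin the "DMA" side would be the PPI's DMA channel and the release would happen in the end-of-frame interrupt handler.

```python
import threading

class PingPongFrames:
    """Two frame buffers: the PPI/DMA fills one while the core
    processes the other; a counting semaphore signals frame-ready."""

    def __init__(self, frame_bytes=640 * 480):
        self.buffers = [bytearray(frame_bytes), bytearray(frame_bytes)]
        self.fill_index = 0                        # buffer DMA fills next
        self.frame_ready = threading.Semaphore(0)  # released per frame

    def dma_frame_done(self, data):
        """Model of the end-of-frame interrupt: publish the filled
        buffer to the core and start filling the other one."""
        self.buffers[self.fill_index][:len(data)] = data
        self.fill_index ^= 1
        self.frame_ready.release()

    def wait_for_frame(self):
        """Core side: block until a frame is ready, then return it."""
        self.frame_ready.acquire()
        return self.buffers[self.fill_index ^ 1]
```

Because the core only ever touches the buffer the DMA is *not* filling, no copying or locking of pixel data is needed, only the cheap semaphore handshake per frame.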
Once a complete frame is in SDRAM, the data is normally transferred into internal L1 data memory so that the core can access it with single-cycle latency. To do this, the DMA controller can use two-dimensional transfers to bring in pixel blocks. Figure 5 shows an example of how a 16 × 16 “macroblock,” a construct used in many compression algorithms, can be stored linearly in L1 memory via a 2D DMA engine.
To efficiently navigate through a source image, four parameters need to be controlled: X Count, Y Count, X Modify, and Y Modify. X and Y Counts describe the number of elements to read in/out in the “horizontal” and “vertical” directions, respectively. Horizontal and vertical are abstract terms in this application because the image data is actually stored linearly in external memory. X and Y Modify values achieve this abstraction by specifying an amount to “stride” through the data after the requisite X Count or Y Count has been transferred.
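The interaction of the four parameters can be made concrete with a small software model of a 2D DMA descriptor. This is a sketch of the addressing scheme as described above, with X Modify applied between elements of a row and Y Modify applied after each row; consult the Blackfin hardware reference for the exact register semantics. The image width of 64 in the example is an assumption.

```python
def dma_2d(mem, start, x_count, x_modify, y_count, y_modify):
    """Model a 2D DMA descriptor over linearly stored memory.

    X Modify strides between elements within a row; after each row
    of X Count elements, Y Modify jumps from the row's last element
    to the first element of the next row.
    """
    out, addr = [], start
    for _ in range(y_count):
        for col in range(x_count):
            out.append(mem[addr])
            if col < x_count - 1:
                addr += x_modify
        addr += y_modify  # jump to the start of the next row
    return out

# Pull a 16x16 macroblock from the top-left of an (assumed)
# 64-pixel-wide image: Y Modify = width - (X Count - 1) * X Modify.
image = list(range(64 * 64))
block = dma_2d(image, start=0, x_count=16, x_modify=1,
               y_count=16, y_modify=64 - 15)
```

With these settings the 256 macroblock pixels arrive as one linear burst in L1 memory, which is exactly the layout the core wants for block-based processing.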