Statistical Region Merging

Presented by:
PEDDI.SRINIVASA RAO
NALABOTHULA.SRIRAM
PUVADA.PAVAN KUMAR

ABSTRACT:
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short C-code, display the quality of the segmentations obtained.
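The region-merging process summarized in the abstract can be illustrated with a short C sketch. This is a simplified illustration, not the authors' code: pixel pairs of a gray-level image are sorted by intensity difference, regions are tracked with a union-find forest, and two regions are merged when the difference of their mean intensities falls below a deviation term b(R). The image size, the scale parameter Q, and the exact form of b(R) used here are simplifying assumptions made only for illustration.

/* Simplified sketch of SRM-style region merging on an 8-bit gray-level image.
   The constants below (image size, Q, the ln(6*|I|) term in b(R)) are
   illustrative assumptions, not the paper's exact formulation. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define W 64
#define H 64
#define G 256.0          /* number of gray levels                  */
#define Q 32.0           /* scale (granularity) parameter, assumed */

static int    parent[W*H];        /* union-find forest             */
static double region_sum[W*H];    /* sum of intensities per region */
static int    region_size[W*H];   /* number of pixels per region   */

static int find(int x) {          /* find with path halving */
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

/* Merging predicate: merge when the difference of region means is below the
   combined per-region deviations b(R) (simplified form of the SRM bound). */
static int predicate(int ra, int rb) {
    double ma = region_sum[ra] / region_size[ra];
    double mb = region_sum[rb] / region_size[rb];
    double ba = G * sqrt(log(6.0 * W * H) / (2.0 * Q * region_size[ra]));
    double bb = G * sqrt(log(6.0 * W * H) / (2.0 * Q * region_size[rb]));
    return fabs(ma - mb) <= sqrt(ba * ba + bb * bb);
}

typedef struct { int a, b, diff; } Pair;

static int cmp_pair(const void *p, const void *q) {
    return ((const Pair *)p)->diff - ((const Pair *)q)->diff;
}

static void srm(const unsigned char img[W*H]) {
    Pair *pairs = malloc(2 * W * H * sizeof(Pair));
    int i, x, y, n = 0;

    for (i = 0; i < W*H; i++) {                  /* one region per pixel    */
        parent[i] = i; region_sum[i] = img[i]; region_size[i] = 1;
    }
    for (y = 0; y < H; y++)                      /* 4-connected pixel pairs */
        for (x = 0; x < W; x++) {
            int p = y*W + x;
            if (x + 1 < W) { pairs[n].a = p; pairs[n].b = p+1; pairs[n].diff = abs(img[p] - img[p+1]); n++; }
            if (y + 1 < H) { pairs[n].a = p; pairs[n].b = p+W; pairs[n].diff = abs(img[p] - img[p+W]); n++; }
        }
    qsort(pairs, n, sizeof(Pair), cmp_pair);     /* increasing gradient order */

    for (i = 0; i < n; i++) {
        int ra = find(pairs[i].a), rb = find(pairs[i].b);
        if (ra != rb && predicate(ra, rb)) {     /* merge rb into ra */
            parent[rb] = ra;
            region_sum[ra]  += region_sum[rb];
            region_size[ra] += region_size[rb];
        }
    }
    free(pairs);
}

int main(void) {
    unsigned char img[W*H];
    int i;
    for (i = 0; i < W*H; i++)                    /* left half dark, right half bright */
        img[i] = (i % W < W/2) ? 50 : 200;
    srm(img);
    printf("adjacent dark pixels merged:   %d\n", find(0) == find(1));    /* expect 1 */
    printf("dark and bright pixels merged: %d\n", find(0) == find(W-1));  /* expect 0 */
    return 0;
}

On this synthetic two-region image the diff-0 pairs inside each half are processed first, so each half collapses into one large region, after which the 150-level jump across the boundary fails the predicate and the two halves stay separate.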
Chapter 1:
1.1 INTRODUCTION

Vision is the most advanced of the five human senses and plays the most important role in human perception. Although the sensitivity of human vision is limited to the visible band, imaging machines can operate on images generated by sources that the human eye cannot perceive. Thus, machine vision encompasses a wide and varied field of applications, including areas where human vision cannot function, e.g. infrared (IR), ultraviolet (UV), X-ray, magnetic resonance imaging (MRI), and ultrasound imaging.
Although there is no clear distinction among image processing, image analysis, and computer vision, they are usually considered as hierarchies in the processing continuum. Low-level processing, which involves primitive operations such as noise filtering, contrast enhancement, and image sharpening, is considered image processing; both its inputs and outputs are images. Mid-level processing, which involves segmentation and pattern classification, is considered image analysis or image understanding. Its inputs are generally images, but its outputs are attributes extracted from those images, e.g. edges, contours, and the identity of individual objects (their class).
High-level processing, which involves ‘making sense’ of an ensemble of recognized objects and performing the cognitive functions at the far end of the processing continuum, is considered computer vision. The various techniques used in image analysis, together with the segmentation methods presented here, are discussed in detail in the subsequent chapters.
1.2 DIGITIZING AN IMAGE
A digital image a[m, n] described in a 2D discrete space is derived from an analog image a(x, y) in a 2D continuous space through a sampling process frequently referred to as digitization. The effect of digitization is shown in the figure below.
The 2D continuous image a(x, y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m, n], with m = 0, 1, 2, ..., M-1 and n = 0, 1, 2, ..., N-1, is a[m, n]. In most cases a(x, y) -- which might be considered to be the physical signal that impinges on the face of a 2D sensor -- is actually a function of many variables, including depth (z), color (λ), and time (t).
The image shown in the figure above has been divided into N = 16 rows and M = 16 columns. The value assigned to every pixel is the average brightness in the pixel, rounded to the nearest integer value. The process of representing the amplitude of the 2D signal at a given coordinate as an integer value with L different gray levels is usually referred to as amplitude quantization or simply quantization.
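As an illustration of the sampling and quantization just described, the following short C sketch samples a synthetic continuous image a(x, y) on a 16 x 16 grid and quantizes its amplitude to L = 256 gray levels. The ramp-shaped test function and the pixel-centre sampling are assumptions made only for demonstration.

/* Illustrative sketch of digitization: sampling a continuous image a(x, y)
   into an M x N array and quantizing its amplitude to L gray levels. */
#include <stdio.h>
#include <math.h>

#define M 16      /* columns     */
#define N 16      /* rows        */
#define L 256     /* gray levels */

/* Stand-in for the physical signal a(x, y), defined on [0,1] x [0,1]. */
static double a_continuous(double x, double y) {
    return 0.5 * (x + y);                 /* values in [0, 1] */
}

int main(void) {
    int img[N][M];
    for (int n = 0; n < N; n++)
        for (int m = 0; m < M; m++) {
            double x = (m + 0.5) / M;     /* sample at the pixel centre */
            double y = (n + 0.5) / N;
            double v = a_continuous(x, y);
            /* amplitude quantization: map [0, 1] onto {0, ..., L-1} */
            img[n][m] = (int)floor(v * (L - 1) + 0.5);
        }
    printf("a[0][0] = %d   a[%d][%d] = %d\n", img[0][0], N-1, M-1, img[N-1][M-1]);
    return 0;
}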
1.3 IMAGE SEGMENTATION
In most image analysis operations, pattern classifiers require individual objects to be separated from the image so that the description of those objects can be transformed into a form suitable for computer processing. Image segmentation is the fundamental task responsible for this separation. Its function is to partition an image into constituent, disjoint sub-regions that are uniform according to properties such as intensity, color, and texture within each sub-region, although some segmentation algorithms rely on both discontinuity and uniformity.
The distinction between image segmentation and pattern classification is often not clear. The function of segmentation is to partition an image into multiple sub-regions, while the function of pattern classification is to identify the partitioned sub-regions. Thus, segmentation and pattern classification usually function as separate and sequential processes.
However, they might function as an integrated process depending on the image analysis problem and the performance of the segmentation method. Either way, segmentation critically affects the results of pattern classification and often determines the eventual success or failure of the image analysis.
The level to which segmentation is carried depends on the problem being solved. That is, segmentation should stop once the regions of interest (ROI) in the application have been isolated. Because of this problem dependence, autonomous segmentation is one of the most difficult tasks in image analysis. Noise and mixed pixels caused by the poor resolution of sensor images make the segmentation problem even more difficult. More segmentation-related details are explained in Chapter 2.
1.4 NEED FOR SEGMENTATION
Image segmentation is one of the most fundamental issues in the fields of image processing and computer vision. It is the basis of higher-level applications such as medical imaging. Its objective is to determine a partition of an image into a finite number of semantically important regions. Since segmentation is such a central task in image analysis, it is involved in most image analysis applications, particularly those related to pattern classification, e.g. medical imaging, remote sensing, security surveillance, and military target detection. The scope of segmentation also includes the automatic detection of man-made objects such as buildings or roads in digital aerial images, which is useful for scene understanding, image retrieval, surveillance, and the updating of geographic information system databases. This is a scientifically challenging task, since images of natural scenes contain a large amount of clutter, and much research has been devoted to the detection and recognition of man-made objects in aerial images.
Chapter 2:
2.1 INTRODUCTION

This chapter is dedicated to the various aspects of image segmentation. Image segmentation is one of the most important steps in image analysis. Its main aim is to divide an image into parts that correspond strongly to real-world objects. Image segmentation is a difficult task, mainly because of the large variability of object shapes as well as differences in image quality. Images are often corrupted by noise and artifacts that arise during sampling, which can cause serious problems for common segmentation techniques.
Conceptually, there are two main approaches in image segmentation:
1) Edge-based methods
2) Region-based methods
Edge-based segmentation partitions an image based on discontinuities among sub-regions, while region-based segmentation performs the same function based on the uniformity of a desired property within each sub-region. In this chapter, we briefly discuss existing image segmentation techniques as background.
2.2 EDGE – BASED SEGMENTATION
Edge-based segmentation looks for discontinuities in the intensity of an image. It is closer to edge detection or boundary detection than to image segmentation in the literal sense introduced in Section 1.3. An edge can be defined as the boundary between two regions with relatively distinct properties. The assumption of edge-based segmentation is that every sub-region in an image is sufficiently uniform that the transition between two sub-regions can be determined on the basis of discontinuities alone. When this assumption is not valid, region-based segmentation, discussed in the next section, usually provides more reasonable segmentation results.
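A common way to realize the edge-based approach described here is to threshold a gradient-magnitude image. The following small C sketch does this with the Sobel operator on a synthetic two-region image; the Sobel operator, the threshold value, and the test image are illustrative choices and are not taken from the paper.

/* Illustrative sketch of edge-based segmentation: compute a Sobel gradient
   magnitude and threshold it to mark boundary pixels.  The threshold and the
   tiny synthetic image are assumptions made only for demonstration. */
#include <stdio.h>
#include <stdlib.h>

#define W 8
#define H 8
#define THRESH 100            /* gradient-magnitude threshold (assumed) */

int main(void) {
    unsigned char img[H][W];
    int edge[H][W] = {{0}};

    for (int y = 0; y < H; y++)            /* synthetic image: two flat regions */
        for (int x = 0; x < W; x++)
            img[y][x] = (x < W/2) ? 40 : 220;

    for (int y = 1; y < H-1; y++)
        for (int x = 1; x < W-1; x++) {
            /* Sobel responses to vertical and horizontal discontinuities */
            int gx = -img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1]
                     + img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1];
            int gy = -img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1]
                     + img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1];
            edge[y][x] = (abs(gx) + abs(gy) > THRESH);   /* L1 magnitude */
        }

    for (int y = 0; y < H; y++) {          /* print the resulting edge map */
        for (int x = 0; x < W; x++) putchar(edge[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}

On this test image the only pixels marked are those next to the step between the dark and bright halves, which is exactly the boundary an edge-based method is assumed to recover.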