PYRAMIDAL WATERSHED SEGMENTATION OF MEDICAL IMAGES


MAIN PROJECT
DONE BY


ANJALI ANIL.S (07406004)
FIONA MIRIAM ABRAHAM (07406018)
S.JEETHU REGHUNATH (07406049)
SANGITA ANN JACOB (07406052)



DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

LBS INSTITUTE OF TECHNOLOGY FOR WOMEN

POOJAPPURA, THIRUVANANTHAPURAM


ABSTRACT




Image segmentation is the process of partitioning an image into regions. There are two ways of approaching image segmentation: the first is boundary based and searches for local changes, while the second is region based and searches for pixel and region similarities. The watershed transformation is a region-based morphological segmentation technique. Medical image segmentation refers to the segmentation of known anatomic structures from medical images. This project deals with a new segmentation algorithm for medical images which prevents the over-segmentation and under-segmentation found in the conventional algorithm. The wavelet transform is applied to the image to describe it at multiple resolutions, and a suitable resolution is chosen. The gradient image is estimated by grey-scale morphology. To avoid over-segmentation, the regional minima of the image are imposed on the gradient image. The watershed transform is then applied, and the segmentation result is projected to a higher resolution using the inverse wavelet transform until the required resolution of the segmented image is obtained.

1. INTRODUCTION


Medical image segmentation refers to the segmentation of known anatomic structures from medical images. Structures of interest include organs or parts thereof, such as cardiac ventricles or kidneys, abnormalities such as tumors and cysts, as well as other structures such as bones, vessels, brain structures etc. The overall objective of such methods is to assist doctors in evaluating medical imagery or in recognizing abnormal findings in a medical image.
Image segmentation is the process of partitioning an image into constituent regions or objects. It is object oriented and hence useful in high-resolution image analysis. The level to which subdivision is carried depends on the problem being solved, i.e., segmentation should stop when the objects of interest in an application have been isolated. For example, in the automated inspection of electronic assemblies, interest lies in analyzing images of the products with the objective of determining the presence or absence of specific anomalies, such as missing components or broken connection paths. There is no point in carrying segmentation past the level of detail required to identify those elements. Segmentation of nontrivial images is one of the most difficult tasks in image processing, and segmentation accuracy determines the eventual success or failure of computerized analysis procedures.

Image segmentation algorithms are generally based on one of two basic properties of intensity values: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in intensity, such as edges in an image. The principal approaches in the second category are based on partitioning an image into regions that are similar according to a set of predefined criteria, using techniques such as thresholding, region growing and region splitting.
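As a concrete illustration of the similarity-based category, the following minimal sketch segments an image with a single global threshold chosen by Otsu's method. It assumes scikit-image is available; the file name is only a hypothetical placeholder.

# A minimal sketch of similarity-based segmentation by global thresholding
# (Otsu's method); the input file name is a hypothetical placeholder.
from skimage import io, filters

image = io.imread("mri_slice.png", as_gray=True)   # placeholder input image
threshold = filters.threshold_otsu(image)          # one global intensity threshold
mask = image > threshold                           # pixels grouped by intensity similarity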



The multi-scale behavior of image features has been analyzed in different ways. Tracking of intensity extrema across scales has been used for image segmentation. Another approach to image segmentation is based on the watershed transform, wherein the transform is applied to the gradient magnitude image to obtain the segmented image. However, small fluctuations in the grey levels produce spurious gradients, which cause over-segmentation. The proposed approach is expected to overcome the problem of over-segmentation.

Wavelet analysis is one of the most popular techniques for detecting local intensity variation, and hence the wavelet transform is used to analyze the image. The wavelet transform is applied to the image, producing detail (horizontal, vertical and diagonal) and approximation coefficients. The image gradient, with selected regional minima imposed, is estimated with grey-scale morphology for the approximation image at a suitable resolution, and the watershed is then applied to this gradient image to avoid over-segmentation. The segmented image is projected up to higher resolutions using the inverse wavelet transform. Because the watershed segmentation is applied to a much smaller image, it demands less computational time.
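The sketch below strings these steps together in Python, assuming the PyWavelets and scikit-image libraries. The function name, wavelet family, decomposition level and minima depth h are illustrative choices rather than values fixed by the project, and the final nearest-neighbour upsampling merely stands in for the inverse-wavelet projection described above.

# A minimal sketch of the pipeline described above (assumptions: PyWavelets and
# scikit-image are available; "haar", level and h are illustrative choices).
import pywt
from scipy import ndimage as ndi
from skimage import filters
from skimage.morphology import extrema
from skimage.segmentation import watershed

def pyramidal_watershed(image, level=2, h=0.05):
    # 1. Multi-resolution decomposition; keep the approximation at the chosen level.
    approx = pywt.wavedec2(image, "haar", level=level)[0]
    # 2. Gradient of the approximation image (Sobel magnitude).
    gradient = filters.sobel(approx)
    # 3. Markers from the extended minima of the gradient (shallow minima suppressed).
    markers, _ = ndi.label(extrema.h_minima(gradient, h))
    # 4. Marker-controlled watershed on the low-resolution gradient.
    labels = watershed(gradient, markers)
    # 5. Project the result back to full resolution. The report uses the inverse
    #    wavelet transform for this projection; nearest-neighbour upsampling of the
    #    label image is used here as a simple stand-in.
    factors = [s / l for s, l in zip(image.shape, labels.shape)]
    return ndi.zoom(labels, factors, order=0)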

2. WAVELET TRANSFORM



The wavelet transform is a mathematical tool that can be used to describe images at multiple resolutions. Like the Fourier transform, the wavelet transform can be written as an integral of the signal against a family of analyzing functions:

C(scale, position) = ∫ s(t) ψ(scale, position, t) dt

The results of the wavelet transform are many wavelet coefficients C, which are functions of scale and position; s(t) is the image function and ψ(scale, position, t) is the wavelet function. If the scales and positions are based on powers of two, the so-called dyadic scales and positions, then the analysis is more efficient and just as accurate. Such an analysis is obtained from the Discrete Wavelet Transform (DWT). For many images, the low-frequency content is the most important part; it is what gives the image its identity. The high-frequency content imparts flavor or nuance.
According to Mallat's pyramidal algorithm, the original image is convolved with the low-pass and high-pass filters associated with a mother wavelet, and down-sampled afterwards as shown in Figure 1. Four sub-images, each with half the size of the original image, are produced. The first sub-image, corresponding to low frequencies in both directions (LL), is called the approximation image cA1 of the original image. The second corresponds to low frequencies in the horizontal and high frequencies in the vertical direction (LH) and is called the horizontal coefficients cH1 of the original image.





The sub-image with high frequencies in the horizontal and low frequencies in the vertical direction (HL) gives the vertical coefficients cV1, whereas high frequencies in both directions (HH) give the diagonal coefficients cD1. The process is repeated on the LL subband cA1 to generate the next level of the decomposition using the same filters. The original image S is then represented in terms of its wavelet coefficients as

S = A_J + Σ (H_j + V_j + D_j),  j = 1, ..., J

where A_J is the approximation reconstructed from cA_J at the coarsest level J, and H_j, V_j and D_j are the horizontal, vertical and diagonal detail images reconstructed from cH_j, cV_j and cD_j.
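A small sketch of one decomposition level and its inverse, assuming the PyWavelets library (the "haar" mother wavelet and the random input are illustrative only):

# One level of Mallat's pyramidal decomposition and its inverse, sketched with
# PyWavelets; the "haar" wavelet and the random input are placeholders.
import numpy as np
import pywt

image = np.random.rand(256, 256)                        # placeholder for the input image
cA1, (cH1, cV1, cD1) = pywt.dwt2(image, "haar")         # approximation and detail subbands, half size
restored = pywt.idwt2((cA1, (cH1, cV1, cD1)), "haar")   # inverse DWT recovers the image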

3. IMAGE GRADIENT

The original image is decomposed into different resolutions using the wavelet transform explained above. A resolution of appropriate size is chosen to avoid the under-segmentation caused by the loss of image detail in further decomposition; the segmentation result is later computed back up to the full resolution. The approximation image cA1 is used as the starting point, and the Sobel operator (a horizontal edge-emphasizing filter) is used to estimate the image gradient.
An image gradient is a directional change in the intensity or color in an image. Image gradients may be used to extract information from images. In graphics software for digital image editing, the term gradient is used for a gradual blend of color, which can be considered as an even gradation from low to high values, for example from white to black; another name for this is color progression. Mathematically, the gradient of a two-variable function (here the image intensity function) is, at each image point, a 2D vector whose components are the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction.
Since the intensity function of a digital image is only known at discrete points, derivatives of this function cannot be defined unless we assume that there is an underlying continuous intensity function which has been sampled at the image points. With some additional assumptions, the derivative of the continuous intensity function can be computed as a function of the sampled intensity function, i.e., the digital image. It turns out that the derivatives at any particular point are functions of the intensity values at virtually all image points. However, approximations of these derivative functions can be defined with lesser or greater degrees of accuracy.

The Sobel operator represents a rather inaccurate approximation of the image gradient, but it is still of sufficient quality to be of practical use in many applications. More precisely, it uses intensity values only in a 3×3 region around each image point to approximate the corresponding image gradient, and it uses only integer values for the coefficients which weight the image intensities to produce the gradient approximation. The gradient of the image is one of the fundamental building blocks in image processing; for example, the Canny edge detector uses the image gradient for edge detection.
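A minimal sketch of this gradient estimate with 3×3 Sobel kernels, assuming only NumPy and SciPy (the input array is a placeholder for the approximation image):

# Estimating the image gradient with 3x3 Sobel kernels, as described above.
import numpy as np
from scipy import ndimage as ndi

image = np.random.rand(128, 128)        # placeholder for the approximation image
gx = ndi.sobel(image, axis=1)           # derivative approximation in the horizontal direction
gy = ndi.sobel(image, axis=0)           # derivative approximation in the vertical direction
gradient_magnitude = np.hypot(gx, gy)   # length of the gradient vector at each pixel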
In computer vision, image gradients can be used to extract information from images. Gradient images are created from the original image (generally by convolving with a filter, one of the simplest being the Sobel filter) for this purpose. Each pixel of a gradient image measures the change in intensity of that same point in the original image, in a given direction. To get the full range of direction, gradient images in the x and y directions are computed.
One of the most common uses is in edge detection. After gradient images have been computed, pixels with large gradient values become possible edge pixels. The pixels with the largest gradient values in the direction of the gradient become edge pixels, and edges may be traced in the direction perpendicular to the gradient direction. One example of an edge detection algorithm that uses gradients is the Canny edge detector. Image gradients can also be used in robust feature and texture matching. Different lighting or camera properties can cause two images of the same scene to have drastically different pixel values, which can cause matching algorithms to fail to match very similar or identical features. One way to solve this is to compute texture or feature signatures based on gradient images computed from the original images. These gradients are less susceptible to lighting and camera changes, so matching errors are reduced.

4. EXTENDED MINIMA TRANSFORM
The extended minima transform is the regional minima of the h-minima transform, where h is a non-negative scalar. Regional minima are connected components of pixels with the same intensity value t whose external boundary pixels all have a value greater than t. The h-minima transform suppresses all minima in the intensity image whose depth is less than h. The output of the extended minima transform is a binary image of the same size as the original, in which pixels with value 1 represent the regional minima of the original image; all other pixels are set to 0.
The choice of the parameter h is the central issue, because the h-minima transform suppresses all minima whose depth is less than h. Comparing the extended minima for different values of h, we can verify that when h is increased, the area of some objects increases and some other objects disappear. In some cases the images are noisy and present low contrast, making the choice of the parameter h critical. Consider the union of the pixel sets of the regions obtained by the extended minima when h varies between 1 and k: if k is small, we obtain small regions centered on the regional minima of the image; as k increases, the regions grow and can merge.
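A minimal sketch of deriving watershed markers from the extended minima, assuming scikit-image's extrema module (the gradient input and the value of h are illustrative):

# Extended minima of the gradient image as watershed markers; the gradient array
# and the depth threshold h are illustrative placeholders.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import extrema

gradient = np.random.rand(128, 128)         # placeholder for the gradient image
h = 0.1                                     # minima shallower than h are suppressed
minima = extrema.h_minima(gradient, h)      # binary image: 1 at the surviving regional minima
markers, num_markers = ndi.label(minima)    # label each connected minimum as one marker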


5. WATERSHED TRANSFORM


Watersheds are one of the classics in the field of topography. In the field of image processing, grey-scale pictures are often considered as topographic reliefs: in the topographic representation of a given image I, the numerical value (i.e., the grey tone) of each pixel stands for the elevation at that point. Such a representation is extremely useful, since it allows one to better appreciate the effect of a given transformation on the image under study. In such a topographic interpretation, there are three types of points:
a. Points belonging to a regional minimum
b. Points at which a drop of water, if placed at the location of any of those points, would fall with certainty to a single minimum
c. Points at which water would be equally likely to fall to more than one such minimum
For a particular regional minimum, the set of points satisfying condition b is called the catchment basin or watershed of that minimum. The points satisfying condition c form crest lines on the topographic surface and are termed divide lines or watershed lines.
The watershed segmentation algorithm used in this project, instead of scanning the entire image to modify only two pixels, has direct access to those pixels. This is possible only if the image pixels are stored in a simple array and the following two conditions are satisfied:
a. Random access to the pixels of an image.
b. Direct access to the neighbors of a given pixel (its 4 neighbors in 4-connectivity, 6 on a hexagonal grid, 8 in 8-connectivity, etc.).
The two steps in this particular algorithm are:
i. Sorting
ii. Flooding
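A minimal sketch of this marker-controlled flooding step, assuming scikit-image's watershed implementation (which performs the sorting and flooding internally); the gradient and marker arrays are placeholders for the outputs of the previous sections:

# Marker-controlled watershed flooding of the gradient relief; the gradient and
# the extended-minima markers are illustrative placeholders.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

gradient = np.random.rand(128, 128)                          # topographic relief to flood
markers, _ = ndi.label(np.random.rand(128, 128) > 0.999)     # placeholder extended-minima markers
labels = watershed(gradient, markers)                        # one label per catchment basin
lines = watershed(gradient, markers, watershed_line=True) == 0  # pixels on the watershed lines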

