digital image processing full report
#1

[attachment=1474]

A
Paper Presentation
On
DIGITAL IMAGE PROCESSING

Abstract:
Over the past dozen years forensic and medical applications of technology first developed to record and transmit pictures from outer space have changed the way we see things here on earth, including Old English manuscripts. With their talents combined, an electronic camera designed for use with documents and a digital computer can now frequently enhance the legibility of formerly obscure or even invisible texts. The computer first converts the analogue image, in this case a videotape, to a digital image by dividing it into a microscopic grid and numbering each part by its relative brightness. Specific image processing programs can then radically improve the contrast, for example by stretching the range of brightness throughout the grid from black to white, emphasizing edges, and suppressing random background noise that comes from the equipment rather than the document. Applied to some of the most illegible passages in the Beowulf manuscript, this new technology indeed shows us some things we had not seen before and forces us to reconsider some established readings.
Introduction to Digital Image Processing:
• Vision allows humans to perceive and understand the world surrounding us.
• Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image.
• Giving computers the ability to see is not an easy task - we live in a three-dimensional (3D) world, and when computers try to analyze objects in 3D space, available visual sensors (e.g., TV cameras) usually give two-dimensional (2D) images, and this projection to a lower number of dimensions incurs an enormous loss of information.
• In order to simplify the task of computer vision understanding, two levels are usually distinguished: low-level image processing and high-level image understanding.
• Low-level image processing uses very little knowledge about the content of images.
• High-level processing is based on knowledge, goals, and plans of how to achieve those goals. Artificial intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image.
• This course deals almost exclusively with low-level image processing; high-level image processing is discussed in the course Image Analysis and Understanding, which is a continuation of this course.
History:
Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, the University of Maryland, and a few other places, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated, when cheaper computers and dedicated hardware became available.
Digitization is the creation of a film or electronic image of any picture or paper form. It is accomplished by scanning or photographing an object and turning it into a matrix of dots (a bitmap), the meaning of which is unknown to the computer, only to the human viewer. Scanned images of text may be encoded into computer data (ASCII or EBCDIC) with page-recognition software (OCR).
Basic Concepts:
• A signal is a function depending on some variable with physical meaning.
• Signals can be
o One-dimensional (e.g., dependent on time),
o Two-dimensional (e.g., images dependent on two co-ordinates in a plane),
o Three-dimensional (e.g., describing an object in space),
o Or higher dimensional.
Pattern recognition is a field within the area of machine learning. Alternatively, it can be defined as "the act of taking in raw data and taking an action based on the category of the data" [1]. As such, it is a collection of methods for supervised learning.
Pattern recognition aims to classify data (patterns) based on either a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space.
Image functions:
• The image can be modeled by a continuous function of two or three variables.
• Arguments are co-ordinates x, y in a plane, while if images change in time a third variable t might be added.
• The image function values correspond to the brightness at image points.
• The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.).
• The brightness integrates different optical quantities - using brightness as a basic quantity allows us to avoid the description of the very complicated process of image formation.
• The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image bearing information about brightness points an intensity image.
• The real world, which surrounds us, is intrinsically 3D.
• The 2D intensity image is the result of a perspective projection of the 3D scene.
• When 3D objects are mapped into the camera plane by perspective projection, a lot of information disappears, as such a transformation is not one-to-one.
• Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem.
• Recovering information lost by perspective projection is only one, mainly geometric, problem of computer vision.
• The second problem is how to understand image brightness. The only information available in an intensity image is the brightness of the appropriate pixel, which is dependent on a number of independent factors such as
o Object surface reflectance properties (given by the surface material, microstructure and marking),
o Illumination properties,
o And object surface orientation with respect to a viewer and light source.
Digital image properties:
Metric properties of digital images:
• Distance is an important example.
• The distance between two pixels in a digital image is a significant quantitative measure.
• The Euclidean distance is defined by (Eq. 2.42)

  D_E((i, j), (h, k)) = sqrt((i - h)^2 + (j - k)^2)

o City block distance (Eq. 2.43)

  D_4((i, j), (h, k)) = |i - h| + |j - k|

o Chessboard distance (Eq. 2.44)

  D_8((i, j), (h, k)) = max(|i - h|, |j - k|)
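As a small illustration (our own Python sketch; the function name is illustrative, not from the text), the three metrics follow directly from their definitions:

import numpy as np

def pixel_distances(p, q):
    # Euclidean (Eq. 2.42), city block (Eq. 2.43) and chessboard (Eq. 2.44)
    (i, j), (h, k) = p, q
    d_e = np.hypot(i - h, j - k)         # Euclidean distance
    d_4 = abs(i - h) + abs(j - k)        # city block distance
    d_8 = max(abs(i - h), abs(j - k))    # chessboard distance
    return d_e, d_4, d_8

print(pixel_distances((0, 0), (3, 4)))   # -> (5.0, 7, 5)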

• Pixel adjacency is another important concept in digital images.
• 4-neighborhood
• 8-neighborhood
• It will become necessary to consider important sets consisting of several adjacent pixels -- regions.
• A region is a contiguous set.
• Contiguity paradoxes of the square grid

• One possible solution to contiguity paradoxes is to treat objects using 4-neighborhood and the background using 8-neighborhood (or vice versa).
• A hexagonal grid solves many problems of the square grid: any point in the hexagonal raster has the same distance to all its six neighbors.
• The border of a region R is the set of pixels within the region that have one or more neighbors outside R; inner borders and outer borders exist.
• An edge is a local property of a pixel and its immediate neighborhood; it is a vector given by a magnitude and direction.
• The edge direction is perpendicular to the gradient direction, which points in the direction of image function growth (a small gradient sketch follows this list).
• Border and edge: the border is a global concept related to a region, while an edge expresses local properties of an image function.
• Crack edges: four crack edges are attached to each pixel, defined by its relation to its 4-neighbors. The direction of the crack edge is that of increasing brightness, and is a multiple of 90 degrees, while its magnitude is the absolute difference between the brightness of the relevant pair of pixels. (Fig. 2.9)
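To make the gradient/edge relationship concrete, here is a minimal sketch using SciPy's Sobel operators (the helper name gradient_field is ours, not from the text):

import numpy as np
from scipy import ndimage

def gradient_field(img):
    # Gradient magnitude and direction; the edge direction is
    # perpendicular to the gradient direction computed here.
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)      # derivative along x (columns)
    gy = ndimage.sobel(img, axis=0)      # derivative along y (rows)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)       # points in the direction of image function growth
    return magnitude, direction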
Topological properties of digital images
• Topological properties of images are invariant to rubber-sheet transformations. Stretching does not change the contiguity of object parts and does not change the number of holes in regions. One such image property is the Euler--Poincaré characteristic, defined as the difference between the number of regions and the number of holes in them.
• The convex hull is used to describe topological properties of objects.
• The convex hull is the smallest region which contains the object, such that any two points of the region can be connected by a straight line, all points of which belong to the region.
Uses
A scalar function may be sufficient to describe a monochromatic image, while vector functions are to represent, for example, color images consisting of three component colors.
CONCLUSION
Further, surveillance by humans depends on the quality of the human operator, and factors like operator fatigue and negligence may lead to degraded performance. These factors make an intelligent vision system a better option, as in systems that use gait signatures for recognition or in-vehicle video sensors for driver assistance.
Reply
#2
[attachment=2521]

DIGITAL IMAGE PROCESSING


INTRODUCTION

Pictures are the most common and convenient means of conveying or transmitting information.
A picture is worth a thousand words. Pictures concisely convey information about positions, sizes and inter-relationships between objects. They portray spatial information that we can recognize as objects.
Human beings are good at deriving information from such images, because of our innate visual and mental abilities. About 75% of the information received by humans is in pictorial form.

DIGITAL IMAGE

A digital image is typically composed of picture elements (pixels) located at the intersection of each row i and column j in each of the K bands of imagery.
Each pixel is associated with a number known as a Digital Number (DN) or Brightness Value (BV) that depicts the average radiance of a relatively small area within a scene (Fig. 1).
A smaller number indicates low average radiance from the area, and a higher number indicates high radiance from the area.

COLOR COMPOSITES

When the different bands of a multispectral data set are displayed in image planes other than their own, the resulting color composite is regarded as a False Color Composite (FCC).
A color-infrared composite ('standard false color composite') is displayed by placing the infrared, red and green bands in the red, green and blue frame buffer memory, respectively (Fig. 2).
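A minimal sketch of assembling such a composite, assuming three co-registered 8-bit 2D band arrays (the function name is illustrative):

import numpy as np

def standard_fcc(nir, red, green):
    # Standard false color composite: infrared -> red, red -> green, green -> blue
    return np.dstack([nir, red, green]).astype(np.uint8)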

IMAGE RECTIFICATION

Geometric distortions manifest themselves as errors in the position of a pixel relative to other pixels in the scene and with respect to their absolute position within some defined map projection.
If left uncorrected, these geometric distortions render any data extracted from the image useless.

REASONS FOR DISTORTIONS

For instance, distortions occur due to changes in platform attitude (roll, pitch and yaw), altitude, earth rotation, earth curvature, panoramic distortion and detector delay.
Rectification is a process of geometrically correcting an image so that it can be represented on a planar surface (Fig. 3).

IMAGE ENHANCEMENT TECHNIQUES

Image enhancement techniques improve the quality of an image as perceived by a human.
Spatial Filtering Technique
Contrast Stretch

Contrast

Contrast generally refers to the difference in luminance or grey level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image.

Contrast Enhancement

Contrast enhancement techniques expand the range of brightness values in an image so that the image can be efficiently displayed in a manner desired by the analyst.

Linear Contrast Stretch

The grey values in the original image and the modified image follow a linear relation in this algorithm. A density number in the low range of the original histogram is assigned to extremely black, and a value at the high end is assigned to extremely white.
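A minimal sketch of such a linear stretch in Python (our own illustration; the default cut-offs, the image minimum and maximum, are an assumption, not from the text):

import numpy as np

def linear_stretch(img, low=None, high=None):
    # Map the chosen low/high density numbers linearly onto 0 (black) .. 255 (white).
    img = img.astype(float)
    low = img.min() if low is None else low
    high = img.max() if high is None else high
    out = (img - low) / max(high - low, 1e-9) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)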

SPATIAL FILTERING

Low-Frequency Filtering in the Spatial Domain
Image enhancements that de-emphasize or block the high spatial frequency detail are low-frequency or low-pass filters.
The simple smoothing operation will, however, blur the image, especially at the edges of objects.

High-Frequency Filtering in the Spatial Domain
High-pass filtering is applied to imagery to remove the slowly varying components and enhance the high-frequency local variations.
Thus, the high-frequency filtered image will have a relatively narrow intensity histogram.
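Both kinds of filter can be sketched with simple neighbourhood averaging (a minimal illustration under the usual definitions; uniform_filter is SciPy's box smoothing):

import numpy as np
from scipy import ndimage

def low_pass(img, size=3):
    # Smoothing de-emphasizes high spatial frequencies and blurs edges.
    return ndimage.uniform_filter(img.astype(float), size=size)

def high_pass(img, size=3):
    # Original minus its slowly varying (low-pass) component keeps local variations.
    return img.astype(float) - low_pass(img, size)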


CONCLUSIONS

So, with the above-said stages and techniques, a digital image can be made noise-free, and it can be made available in any desired format (X-rays, photo negatives, improved images, etc.).
Reply
#3
Abstract

Digital image processing techniques are generally more versatile, reliable, and accurate; they have the additional benefit of being easier to implement than their analog counterparts. Digital computers are used to process the image: the image is converted to digital form using a digitizer and then processed. Today, hardware solutions are commonly used in video processing systems. However, commercial image processing tasks are more commonly done by software running on conventional personal computers.

In this paper we have presented the stages of image processing, commonly used (two-dimensional) image processing techniques, digital image editing, image editor features, and more.

Overall, image processing is a good option that deserves a careful look. The statement 'image processing has revolutionized the world we live in' fits exactly because of the diverse applications of image processing in various fields. We hope that by going through this paper one can get a brief idea of image processing.
Reply
#4
[attachment=3275]

IMAGE PROCESSING

PRESENTED BY:
1. T.Krishna Kanth
2.N.V.Ram Kishore
D.V.R. College of Engineering and Technology.
Kandi, Hyderabad
Andhra Pradesh.


ABSTRACT:

An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization procedure can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image is digitized, two things are most important: it is to be stored or transmitted with minimum bits, and it should be restored with maximum clarity. This makes way for the various image processing operations.
Image processing operations are divided into three major categories: Image Compression, Image Enhancement and Restoration, and Measurement Extraction. Image compression is familiar to most people. It involves reducing the amount of memory needed to store a digital image, whereas Image Enhancement and Restoration deals with recovering the image.
This paper deals with Image Enhancement and Restoration, which helps restore the image with maximum clarity and enhance the image quality.
The first section describes what Image Enhancement and Restoration is, the second section tells us about the techniques used for Image Enhancement and Restoration, and the final section describes the advantages and disadvantages of using these techniques.



A Short Introduction to Digital Image Processing
An image is digitized to convert it to a form which can be stored in a computer's memory or on some form of storage media such as a hard disk or CD-ROM. This digitization procedure can be done by a scanner, or by a video camera connected to a frame grabber board in a computer. Once the image has been digitized, it can be operated upon by various image processing operations.
Image processing operations can be roughly divided into three major categories:
Image Compression
Image Enhancement and Restoration
Measurement Extraction
Image compression is familiar to most people. It involves reducing the amount of memory needed to store a digital image.
Image defects which could be caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using Image Enhancement techniques. Once the image is in good condition, the Measurement Extraction operations can be used to obtain useful information from the image.
Some examples of Image Enhancement and Measurement Extraction are given below. The examples shown all operate on 256-level grey-scale images. This means that each pixel in the image is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel and values in between represent shades of grey. These operations can be extended to operate on colour images.
The examples below represent only a few of the many techniques available for operating on images. Details about the inner workings of the operations have not been given, but some references to books containing this information are given at the end for the interested reader.
Image Enhancement and Restoration
The image at the left of Figure 1 has been corrupted by noise during the digitization process. The 'clean' image at the right of Figure 1 was obtained by applying a median filter to the image.
Figure 1. Application of the median filter
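A median filter of the kind used for Figure 1 can be applied in one line with SciPy (a sketch; the test image here is synthetic):

import numpy as np
from scipy import ndimage

noisy = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
clean = ndimage.median_filter(noisy, size=3)  # each pixel replaced by the median
                                              # of its 3 x 3 neighbourhood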
An image with poor contrast, such as the one at the left of Figure 2, can be improved by adjusting the image histogram to produce the image shown at the right of Figure 2.
Figure 2. Adjusting the image histogram to improve image contrast
The image at the top left of Figure 3 has a corrugated effect due to a fault in the acquisition process. This can be removed by doing a 2-dimensional Fast Fourier Transform on the image (top right of Figure 3), removing the bright spots (bottom left of Figure 3), and finally doing an inverse Fast Fourier Transform to return to the original image without the corrugated background (bottom right of Figure 3).
Figure 3. Application of the 2-dimensional Fast Fourier Transform
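The Figure 3 procedure can be sketched with NumPy's FFT routines (peak_mask, a boolean array marking the bright spots to remove, is assumed to be built by inspecting the spectrum):

import numpy as np

def suppress_spectral_peaks(img, peak_mask):
    # Forward FFT, zero the bright spots, inverse FFT back to the image domain.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    spectrum[peak_mask] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))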
An image which has been captured in poor lighting conditions, and shows a continuous change in the background brightness across the image (top left of Figure 4), can be corrected using the following procedure. First remove the foreground objects by applying a 25 by 25 greyscale dilation operation (top right of Figure 4). Then subtract the original image from the background image (bottom left of Figure 4). Finally, invert the colors and improve the contrast by adjusting the image histogram (bottom right of Figure 4).
Figure 4. Correcting for a background gradient
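The four steps of Figure 4 translate almost directly into SciPy calls (a sketch under the stated 25 by 25 dilation; the final histogram adjustment is approximated here by a simple min/max stretch):

import numpy as np
from scipy import ndimage

def correct_background(img):
    background = ndimage.grey_dilation(img, size=(25, 25))  # remove foreground objects
    diff = background.astype(float) - img                   # original subtracted from background
    inverted = diff.max() - diff                            # invert the result
    lo, hi = inverted.min(), inverted.max()
    return ((inverted - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)  # stretch contrast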
Image Measurement Extraction
The example below demonstrates how one could go about extracting measurements from an image. The image at the top left of Figure 5 shows some objects. The aim is to extract information about the distribution of the sizes (visible areas) of the objects. The first step involves segmenting the image to separate the objects of interest from the background. This usually involves thresholding the image, which is done by setting the values of pixels above a certain threshold value to white, and all the others to black (top right of Figure 5). Because the objects touch, thresholding at a level which includes the full surface of all the objects does not show separate objects. This problem is solved by performing a watershed separation on the image (lower left of Figure 5). The image at the lower right of Figure 5 shows the result of performing a logical AND of the two images at the left of Figure 5. This shows the effect that the watershed separation has on touching objects in the original image.
Finally, some measurements can be extracted from the image. Figure 6 is a histogram showing the distribution of the area measurements. The areas were calculated based on the assumption that the width of the image is 28 cm.
Figure 5. Thresholding an image and applying a Watershed Separation Filter
Figure 6. Histogram showing the Area Distribution of the Objects
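A stripped-down version of this measurement pipeline (thresholding, labelling, and area calculation; the watershed separation of touching objects is omitted here, and the 28 cm image width is the stated assumption):

import numpy as np
from scipy import ndimage

def object_areas(img, threshold, image_width_cm=28.0):
    binary = img > threshold                  # white above the threshold, black below
    labels, n = ndimage.label(binary)         # connected-component labelling
    pixel_counts = ndimage.sum(binary, labels, index=range(1, n + 1))
    cm_per_pixel = image_width_cm / img.shape[1]
    return pixel_counts * cm_per_pixel ** 2   # areas in cm^2, ready to histogram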
Basic Enhancement and Restoration Techniques
• Unsharp masking
• Noise suppression
• Distortion suppression
The process of image acquisition frequently leads (inadvertently) to image degradation. Due to mechanical problems, out-of-focus blur, motion, inappropriate illumination, and noise, the quality of the digitized image can be inferior to the original. The goal of enhancement is--starting from a recorded image c[m,n]--to produce the most visually pleasing image â[m,n]. The goal of restoration is--starting from a recorded image c[m,n]--to produce the best possible estimate â[m,n] of the original image a[m,n]. The goal of enhancement is beauty; the goal of restoration is truth.
The measure of success in restoration is usually an error measure between the original a[m,n] and the estimate â[m,n]: E{â[m,n], a[m,n]}. No mathematical error function is known that corresponds to human perceptual assessment of error. The mean-square error function is commonly used because:
1. It is easy to compute;
2. It is differentiable implying that a minimum can be sought;
3. It corresponds to "signal energy" in the total error, and;
4. It has nice properties vis à vis Parseval's theorem, eqs. (22) and (23).
The mean-square error is defined by:

  E = (1 / MN) * Σ_m Σ_n ( â[m,n] - a[m,n] )²,  with m = 0,...,M-1 and n = 0,...,N-1.
In some techniques an error measure will not be necessary; in others it will be essential for evaluation and comparative purposes.
Unsharp masking
A well-known technique from photography to improve the visual quality of an image is to enhance the edges of the image. The technique is called unsharp masking. Edge enhancement means first isolating the edges in an image, amplifying them, and then adding them back into the image. Examination of Figure 33 shows that the Laplacian is a mechanism for isolating the gray level edges. This leads immediately to the technique:
  â[m,n] = a[m,n] - k * ∇²a[m,n]

The term k is the amplifying term, with k > 0. The effect of this technique is shown in Figure 48.
The Laplacian used to produce Figure 48 is given by eq. (120), and the amplification term k = 1.
Figure 48: Edge-enhanced image compared to the original (left: original; right: Laplacian-enhanced).
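A minimal sketch of unsharp masking as defined above (SciPy's ndimage.laplace stands in for the Laplacian of eq. (120)):

import numpy as np
from scipy import ndimage

def unsharp_mask(img, k=1.0):
    # â = a - k * Laplacian(a): isolate the edges, amplify by k > 0, add back.
    img = img.astype(float)
    return img - k * ndimage.laplace(img)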
Noise suppression
The techniques available to suppress noise can be divided into those techniques that are based on temporal information and those that are based on spatial information. By temporal information we mean that a sequence of images {ap[m,n] | p=1,2,...,P} are available that contain exactly the same objects and that differ only in the sense of independent noise realizations. If this is the case and if the noise is additive, then simple averaging of the sequence:
  ā[m,n] = (1/P) * Σ_{p=1..P} a_p[m,n]    (temporal averaging)

will produce a result where the mean value of each pixel will be unchanged. For each pixel, however, the standard deviation will decrease from σ to σ/√P.
If temporal averaging is not possible, then spatial averaging can be used to decrease the noise. This generally occurs, however, at a cost to image sharpness. Four obvious choices for spatial averaging are the smoothing algorithms that have been described in Section 9.4 - Gaussian filtering (eq. (93)), median filtering, Kuwahara filtering, and morphological smoothing.
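Temporal averaging itself is a one-liner; a sketch (frames is assumed to be a list of registered images of the same scene):

import numpy as np

def temporal_average(frames):
    # For additive, independent noise the per-pixel standard deviation
    # drops from sigma to sigma / sqrt(P) when P frames are averaged.
    return np.mean(np.stack(frames, axis=0), axis=0)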
Within the class of linear filters, the optimal filter for restoration in the presence of noise is given by the Wiener filter. The word "optimal" is used here in the sense of minimum mean-square error (mse). Because the square root operation is monotonic increasing, the optimal filter also minimizes the root mean-square error (rms). The Wiener filter is characterized in the Fourier domain, and for additive noise that is independent of the signal it is given by:

  W(u,v) = Saa(u,v) / ( Saa(u,v) + Snn(u,v) )

where Saa(u,v) is the power spectral density of an ensemble of random images {a[m,n]} and Snn(u,v) is the power spectral density of the random noise. If we have a single image, then Saa(u,v) = |A(u,v)|². In practice it is unlikely that the power spectral density of the uncontaminated image will be available. Because many images have a similar power spectral density that can be modeled by Table 4-T.8, that model can be used as an estimate of Saa(u,v).
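A sketch of applying this Wiener filter in the Fourier domain, assuming the two power spectral densities are supplied as 2D arrays on the FFT grid:

import numpy as np

def wiener_filter(noisy, s_aa, s_nn):
    # W(u,v) = Saa / (Saa + Snn), applied multiplicatively in the Fourier domain.
    W = s_aa / (s_aa + s_nn)
    return np.real(np.fft.ifft2(np.fft.fft2(noisy) * W))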
A comparison of the five different techniques described above is shown in Figure 49. The Wiener filter was constructed directly from the equation above, because the image spectrum and the noise spectrum were known. The parameters for the other filters were determined by choosing the value (either σ or window size) that led to the minimum rms.

Figure 49: Noise suppression using various filtering techniques.
a) Noisy image (SNR = 20 dB), rms = 25.7
b) Wiener filter, rms = 20.2
c) Gauss filter (σ = 1.0), rms = 21.1
d) Kuwahara filter (5 x 5), rms = 22.4
e) Median filter (3 x 3), rms = 22.6
f) Morphological smoothing (3 x 3), rms = 26.2
The root mean-square errors (rms) associated with the various filters are shown in Figure 49. For this specific comparison, the Wiener filter generates a lower error than any of the other procedures that are examined here. The two linear procedures, Wiener filtering and Gaussian filtering, performed slightly better than the three non-linear alternatives.
Distortion suppression
The model presented above--an image distorted solely by noise--is not, in general, sophisticated enough to describe the true nature of distortion in a digital image. A more realistic model includes not only the noise but also a model for the distortion induced by lenses, finite apertures, possible motion of the camera and/or an object, and so forth. One frequently used model is of an image a[m,n] distorted by a linear, shift-invariant system ho[m,n] (such as a lens) and then contaminated by noise η[m,n]. Various aspects of ho[m,n] and η[m,n] have been discussed in earlier sections. The most common combination of these is the additive model:

  c[m,n] = (a ⊗ ho)[m,n] + η[m,n]

The restoration procedure that is based on linear filtering coupled to a minimum mean-square error criterion again produces a Wiener filter:

  W(u,v) = Ho*(u,v) Saa(u,v) / ( |Ho(u,v)|² Saa(u,v) + Snn(u,v) )

Once again Saa(u,v) is the power spectral density of an image, Snn(u,v) is the power spectral density of the noise, and Ho(u,v) = F{ho[m,n]}. Examination of this formula for some extreme cases can be useful. For those frequencies where Saa(u,v) >> Snn(u,v), where the signal spectrum dominates the noise spectrum, the Wiener filter is given by 1/Ho(u,v), the inverse filter solution. For those frequencies where Saa(u,v) << Snn(u,v), where the noise spectrum dominates the signal spectrum, the Wiener filter is proportional to Ho*(u,v), the matched filter solution. For those frequencies where Ho(u,v) = 0, the Wiener filter W(u,v) = 0, preventing overflow.
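The distortion-plus-noise Wiener filter can be sketched the same way (ho is the point spread function; where Ho vanishes and Snn > 0 the filter is zero, as noted above):

import numpy as np

def wiener_deconvolve(c, ho, s_aa, s_nn):
    H = np.fft.fft2(ho, s=c.shape)            # Ho(u,v) = F{ho[m,n]}
    W = np.conj(H) * s_aa / (np.abs(H) ** 2 * s_aa + s_nn)
    return np.real(np.fft.ifft2(np.fft.fft2(c) * W))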
The Wiener filter is a solution to the restoration problem based upon the hypothesized use of a linear filter and the minimum mean-square (or rms) error criterion. In the example below the image a[m,n] was distorted by a bandpass filter and then white noise was added to achieve an SNR = 30 dB. The results are shown in Figure 50.

a) Distorted, noisy image  b) Wiener filter (rms = 108.4)  c) Median filter (3 x 3; rms = 40.9)
Figure 50: Noise and distortion suppression using the Wiener filter and the median filter.
The rms after Wiener filtering but before contrast stretching was 108.4; after contrast stretching with eq. (77), the final result as shown in Figure 50b has a mean-square error of 27.8. Using a 3 x 3 median filter as shown in Figure 50c leads to an rms error of 40.9 before contrast stretching and 35.1 after contrast stretching. Although the Wiener filter gives the minimum rms error over the set of all linear filters, the non-linear median filter gives a lower rms error. The operation of contrast stretching is itself a non-linear operation. The "visual quality" of the median filtering result is comparable to the Wiener filtering result. This is due in part to periodic artifacts introduced by the linear filter, which are visible in Figure 50b.
CONCLUSION:
Audio streams contain extremely valuable data, whose content is also very rich and diverse. The combination of audio and image techniques will definitely generate interesting results, and very likely improve the quality of the present analysis.
REFERENCES:
[1] H. C. Andrews, Computer Techniques in Image Processing, 1970.
[2] Article in the November issue of the journal Electronics Today.
[3] Article in the January issue of the journal Electronics For You.
[4] H. C. Andrews and B. R. Hunt, Digital Image Restoration, 1977.
[5] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2003.
Reply
#5
More info about digital image processing full report:
http://studentbank.in/report-image-processing--12931
Reply
#6

[attachment=5112]

DIGITAL IMAGE PROCESSING


Reference: http://studentbank.in/report-digital-ima...z11YJyBaWw
Reply
#7


Image processing

Monochrome black/white image
In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
Image processing usually refers to digital image processing, but optical and analog image processing also are possible. This article is about general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.

For more:
http://en.wikipedia.org/wiki/Computer_science
Reply
#8
[attachment=5926]
Image Processing
presented by
Yunus Sonu (ECE), Abhinav (ECE)
RAMAPPA
ENGINEERING COLLEGE

Introduction
This paper highlights information regarding “IMAGE PROCESSING” and the drawbacks of passwords. These drawbacks can be minimized by the usage of biometrics, and applications of “BIOMETRICS” will be discussed.
Biometrics turns your body into your password, so it is going to be the next generation's powerful security tool!



Reply
#9

PRESENTED BY:
T.VAMSHI

[attachment=6453]


Introduction

“Morphing” is an interpolation technique used to create a series of intermediate objects from two objects.
“The face-morphing algorithm” automatically extracts feature points on the face, and morphing is performed.
This algorithm was proposed by M. Biesel within a Bayesian framework to do automatic face morphing.

Pre-Processing


removing the noisy backgrounds

clipping to get a proper facial image, and

scaling the image to a reasonable size. 
Reply
#10
[attachment=6648]
digital image processing full report

PRESENTED BY
S.Sudeepthi
T.V.L.Anusha



DIGITAL IMAGE

A two-dimensional representation of values.
These values are called “PIXELS” (picture elements).
Pixels are stored in computer memory.


DIGITAL IMAGE PROCESSING

Processing done using computer software.

Avoids build-up of noise and signal distortion


How can we process an Image?

Transfer image to a computer
Digitize the image
* Digitization – translating the image into a numerical code understood by the computer.
Processing can be done through software programs in a “Digital Dark-room”
Image is broken down into thousands of pixels
Reply
#11
[attachment=7092]
image processing full report

INTRODUCTION:


Over the last two decades, we have witnessed an explosive growth in both the diversity of techniques and the range of applications of image processing. However, the area of color image processing is still not fully covered, despite having become commonplace, with consumers choosing the convenience of color imaging over traditional grayscale imaging. With advances in image sensors, digital TV, image databases, and video and multimedia systems, and with the proliferation of color printers, color image displays, DVD devices, and especially digital cameras and image-enabled consumer electronics, color image processing appears to have become the main focus of the image-processing research community. Processing color images or, more generally, processing multichannel images, such as satellite images, color filter array images, microarray images, and color video sequences, is a nontrivial extension of classical grayscale processing. Recently, there have been many color image processing and analysis solutions, and many interesting results have been reported concerning filtering, enhancement, restoration, edge detection, analysis, compression, preservation, manipulation, and evaluation of color images. The surge of emerging applications, such as single-sensor imaging, color-based multimedia, digital rights management, art, and biomedical applications, indicates that the demand for color imaging solutions will grow considerably in the next decade [4].
Reply
#12


[attachment=7973]

By
Alok K. Watve


Applications of image processing
Gamma ray imaging
X-ray imaging
Multimedia systems
Satellite imagery
Flaw detection and quality control
And many more…

Fundamental Steps in digital image processing
Image acquisition
Image enhancement (gray or color images)
Wavelet and multi-resolution processing
Compression
Morphological processing
Segmentation
Representation & description
Object recognition
Image enhancement in spatial domain

Binary images
Only two colors
Gray images
A range of colors (not more than 256) from black to white
Color images
Contain several colors (as many as 2^24)




Reply
#13
PRESENTED BY
M.VAMSI KRISHNA
S.BABAJAN

[attachment=9712]
ABSTRACT
In the era of multimedia and Internet, image processing is a key technology.
Image processing is any form of information processing for which the input is an image, such as photographs or frames of video; the output is not necessarily an image, but can be for instance a set of features of the image.
Image processing is of two types: analog image processing and digital image processing. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing - it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing. But the cost of analog image processing is fairly high compared to digital image processing.
An analog image can be converted to a digital image, which can be processed affordably and with greater advantages; the processes involved in converting an analog image to a digital image, such as sampling, quantization, image acquisition and image segmentation, are explained in this report.
Image processing has very good scope in the signal-processing aspects of imaging systems and in image scanning, display and printing. This includes theory, algorithms, and architectures for image coding, filtering, enhancement, restoration, segmentation, and motion estimation; image formation in tomography, radar, sonar, geophysics, astronomy, microscopy, and crystallography; and image scanning, digital half-toning and display, and color reproduction.
HISTORY
Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, University of Maryland, and a few other places, with application to satellite imagery, wirephoto standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated, when cheaper computers and dedicated hardware became available. Images could then be processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations.
With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.
INTRODUCTION
Digital image processing is the use of computer algorithms to perform image processing on digital images. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing — it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing.
We will restrict ourselves to two-dimensional (2D) image processing although most of the concepts and techniques that are to be described can be extended easily to three or more dimensions.
We begin with certain basic definitions. An image defined in the "real world" is considered to be a function of two real variables, for example, a(x,y) with a as the amplitude (e.g. brightness) of the image at the real coordinate position (x,y). An image may be considered to contain sub-images sometimes referred to as regions-of-interest, ROIs, or simply regions. This concept reflects the fact that images frequently contain collections of objects each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.
The amplitudes of a given image will almost always be either real numbers or integer numbers. The latter is usually a result of a quantization process that converts a continuous range (say, between 0 and 100%) to a discrete number of levels. In certain image-forming processes, however, the signal may involve photon counting which implies that the amplitude would be inherently quantized. In other image forming procedures, such as magnetic resonance imaging, the direct physical measurement yields a complex number in the form of a real magnitude and a real phase.
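A sketch of such a uniform quantization, assuming amplitudes normalized to [0, 1] (the helper name is ours):

import numpy as np

def quantize(a, levels=256):
    # Convert a continuous range to a discrete number of integer grey levels.
    return np.minimum((a * levels).astype(int), levels - 1)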
IMAGE
It is a 2D function f(x, y), where x and y are spatial co-ordinates and f (the amplitude of the function) is the intensity of the image at (x, y). Thus, an image is a 2-dimensional function of the co-ordinates x, y.
DIGITAL IMAGE
If x, y and the amplitude of f are all discrete quantities, then the image is called a Digital Image. A digital image is a collection of elements called pixels, where each pixel has a specific co-ordinate value and a particular gray level. Processing of this image using a digital computer is called Digital Image Processing, e.g., fingerprint scanning, handwriting recognition systems, face recognition systems, and the biometric scanning used for authentication in modern pen drives. The effect of digitization is shown in Figure 1.
The 2D continuous image a(x, y) is divided into N rows and M columns. The intersection of a row and a column is termed a pixel. The value assigned to the integer coordinates [m,n] with {m=0,1,2,...,M-1} and {n=0,1,2,...,N-1} is a[m,n]. In fact, in most cases a(x, y)--which we might consider to be the physical signal that impinges on the face of a 2D sensor--is actually a function of many variables including depth (z), color (λ), and time (t). Unless otherwise stated, we will consider the case of 2D, monochromatic, static images in this report.




Reply
#14
[attachment=10095]
IMAGE PROCESSING
ABSTRACT

In this paper, the basics of capturing an image and of image processing to modify and enhance the image are discussed. There are many applications for image processing, like surveillance, navigation, and robotics. Robotics is a very interesting field that promises future development, so it is chosen as an example to explain the various aspects involved in image processing.
The various techniques of image processing are explained briefly and the advantages and disadvantages are listed. There are countless different routines that can be used for a variety of purposes. Most of these routines are created for specific operations and applications. However, certain fundamental techniques, such as convolution masks, can be applied to many classes of routines. We have concentrated on these techniques, which enable us to adapt, develop, and use other routines and techniques for other applications. The advances in technology have created tremendous opportunities for visual systems and image processing. There is no doubt that the trend will continue into the future.
INTRODUCTION
Image Processing:

Image processing pertains to the alteration and analysis of pictorial information. A common case of image processing is the adjustment of the brightness and contrast controls on a television set; by doing this we enhance the image until its subjective appearance is most appealing to us. The biological system (eye, brain) receives, enhances, dissects, analyzes, and stores images at enormous rates of speed.
Basically there are two-methods for processing pictorial information. They are:
1. Optical processing
2. Electronic processing.
Optical processing uses an arrangement of optics or lenses to carry out the process. An important form of optical image processing is found in the photographic darkroom.
Electronic image processing is further classified as:
1. Analog processing
2. Digital processing.
Analog processing:
An example of this kind is the control of brightness and contrast of a television image. The television signal is a voltage level that varies in amplitude to represent brightness throughout the image; by electrically altering these signals, we correspondingly alter the final displayed image appearance.
Digital image processing:
Processing of digital images by means of a digital computer is referred to as digital image processing. Digital images are composed of a finite number of elements, each of which has a particular location and value. Picture elements, image elements, and pixels are the terms used for the elements of a digital image.
Digital Image Processing is concerned with the processing of an image. In simple words, an image is a representation of a real scene, either in black and white or in color, and either in print form or in digital form; technically, an image is a two-dimensional light intensity function. In other words, it is data intensity values arranged in a two-dimensional form, and the required properties of an image can be extracted by processing the image. An image is typically described by stochastic models; it is represented by an AR (autoregressive) model, while degradation is represented by an MA (moving average) model. Another form is the orthogonal series expansion. An image processing system is typically a non-causal system. Image processing is two-dimensional signal processing; due to the linearity property, we can operate on rows and columns separately. Image processing is vastly being implemented by “Vision Systems” in robotics. Robots are designed, and meant, to be controlled by a computer or similar devices, while “Vision Systems” are the most sophisticated sensors used in robotics. They relate the function of a robot to its environment, as all other sensors do.
“Vision Systems” may be used for a variety of applications, including manufacturing, navigation and surveillance.
Some of the applications of Image Processing are:
1. Robotics
2. Medical field
3. Graphics and animations
4. Satellite imaging
Reply
#15
presented by:
Ranjith & Waquas

[attachment=10546]
Introduction to Image Processing
What is an Image?

An Image is an Array, or a Matrix, of square pixels (Picture elements) arranged in Columns and Rows.
• There are two groups of Images:
  o Vector Graphics (or line art)
  o Bitmaps (pixel-based images)
• There are two groups of Colors:
  o RGB
• Fourier Transform: a Review
• Fourier Transform Basic Functions
Image Enhancements
• Image Enhancement techniques:
  o Spatial Domain Methods
  o Frequency Domain Methods
• Spatial (time) domain techniques are techniques that operate directly on pixels.
• Frequency domain techniques are based on modifying the Fourier Transform of an image.
Frequency Domain Filtering
• Edges and transitions (e.g., noise) in an image contribute significantly to the high-frequency content of its Fourier Transform.
• Low-frequency content in the Fourier Transform is responsible for the general appearance of the image over smooth areas.
• Blurring (smoothing) is achieved by attenuating a range of high-frequency components of the Fourier Transform.
Embedded Image Processing System on FPGA
Abstract
The design of an Embedded Image Processing System (called DIPS) on FPGA is presented. DIPS is based on the Xilinx MicroBlaze 32-bit soft processor core and implemented in a Spartan-3.
Introduction
Today, embedded systems can be microcontroller-based, DSP-based, ASIC-based, or FPGA-based systems. Xilinx, an FPGA vendor, provides the MicroBlaze 32-bit soft processor core, which is licensed as part of the Xilinx Embedded Development Kit.
Overview of the Xilinx MicroBlaze
• The MicroBlaze soft processor has a 32-bit architecture.
• The backbone of the architecture is a single-issue, 3-stage pipeline with 32 general-purpose registers, an Arithmetic Logic Unit (ALU), a shift unit, and two levels of interrupt.
• The MicroBlaze processor has two memory interfaces, plus a co-processor link:
  o Local Memory Bus (LMB)
  o Xilinx Cache Link (XCL)
  o Fast Simplex Link (FSL)
• The Local Memory Bus provides low-latency access to on-chip storage, such as interrupt and exception handlers.
• The Xilinx Cache Link is a high-performance point-to-point connection to an external memory controller.
• The Fast Simplex Link is a simple yet powerful point-to-point interface that connects user-developed co-processors to the MicroBlaze processor pipeline.
Image Processing Vs Computer Graphics
• There is generally a bit of confusion in recognising the difference between the fields of image processing and computer graphics.
• These two topics are entirely different, almost the opposite of each other: computer graphics is involved with image synthesis, not recognition or analysis, as is the case in image processing.
• Morphing used in advertisements could be said to be the most commonly witnessed computer graphics technique.
• The input to image processing is always a real image formed via some physical phenomenon such as scanning, filming, etc.
Conclusions…
• Imaging professionals, scientists, and engineers who use image processing as a tool can develop a deeper understanding of it and create custom solutions to imaging problems in their field.
• IT professionals wanting a self-study course featuring easily adaptable code and completely worked-out examples can become productive right away.
• Image processing can be done using all programming languages, like C, C++, Java, etc.
• It is used in all fields: medical, web standards, etc.
• The visual system of a single human being does more image processing than the entire world’s supply of supercomputers.
Reply
#16
Presented by:
CH.SAHITHI
K.BHAGYA SAHITYA

[attachment=10645]
ABSTRACT:
Steganography is the art of hiding the fact that communication is taking place, by hiding information in other information. In image steganography the information is hidden exclusively in images. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet. Generation of stego images containing hidden messages using LSB is a very common and most primitive method of steganography. In this method, the least significant bit of some or all of the bytes inside an image is changed. With a well-chosen image, one can even hide the message in the least as well as second to least significant bit and still not see the difference. The present paper compares these two schemes. Good conclusions are drawn from the experimental results.
1. INTRODUCTION:
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. Unlike cryptography, where the existence of the message is clear but the meaning is obscured, the steganographic technique strives to hide the very presence of the message itself from an observer.
The word steganography is derived from the Greek words “stegos” meaning “cover” and “grafia” meaning “writing”, defining it as “covered writing”.
Steganography simply takes one piece of information and hides it within another. Computer files (images, sound recordings, even disks) contain unused or insignificant areas of data. Steganography takes advantage of these areas, replacing them with information. One can replace the least significant bits of the original file (audio/image) with the secret bits, and the resultant cover will not be distorted. The aim is not to keep others from knowing the hidden information, but to keep others from thinking that the information even exists. If a steganography method causes someone to suspect that there is secret information in the carrier medium, then this method fails. The noise or any modulation induced by the message should not change the characteristics of the cover and should not produce any kind of distortion. The paper is organized as follows: Section II gives the methodology, Section III gives the types of LSB techniques, and Section IV gives the experimental results, followed by conclusions.
2. METHODOLOGY:
LSB is a simple approach for embedding information in an image. In this scheme the hidden message is inserted in the LSBs (least significant bits) of the image.
When using a 24-bit image, a bit of each of the red, green and blue colour components can be used, since they are each represented by a byte. In other words, one can store 3 bits in each pixel. An 800 × 600 pixel image can thus store a total amount of 1,440,000 bits or 180,000 bytes of embedded data.
For example a grid for 3 pixels of a 24-bit image can be as follows:
(00101101 00011100 11011100)
(10100110 11000100 00001100)
(11010010 10101101 01100011)
When the number 200, whose binary representation is 11001000, is embedded into the least significant bits of this part of the image, the resulting grid is as follows:
(00101101 00011101 11011100)
(10100110 11000101 00001100)
(11010010 10101100 01100011)
Although the number was embedded into the first 8 bytes of the grid, only the 3 underlined bits needed to be changed according to the embedded message. On average, only half of the bits in an image will need to be modified to hide a secret message using the maximum cover size. Since there are 256 possible intensities of each primary colour, changing the LSB of a pixel results in small changes in the intensity of the colours. These changes cannot be perceived by the human eye - thus the message is successfully hidden.
With a well-chosen image, one can even hide the message in the least as well as second to least significant bit and still not see the difference.
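The grid example above can be reproduced with a few lines of Python (a sketch; the helper name and MSB-first bit order simply follow the worked example):

import numpy as np

def embed_byte(cover_bytes, value):
    # Hide one byte in the LSBs of the first 8 cover bytes.
    out = np.array(cover_bytes, dtype=np.uint8).copy()
    for i in range(8):
        bit = (value >> (7 - i)) & 1     # take message bits MSB first
        out[i] = (out[i] & 0xFE) | bit   # clear the LSB, then set the message bit
    return out

grid = [0b00101101, 0b00011100, 0b11011100,
        0b10100110, 0b11000100, 0b00001100,
        0b11010010, 0b10101101, 0b01100011]
print([format(int(b), '08b') for b in embed_byte(grid, 200)])  # only 3 bits change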
Reply
#17
presented by
AMEER BASHA
SANJEEVIAH

[attachment=11287]
ABSTRACT
Medical imaging is a field which researches and develops tools and technology to acquire, manipulate and archive digital images. An image can be viewed as a two-dimensional function, f, that takes as input two spatial coordinates x and y and returns a value f(x, y). The value f(x, y) is the gray level of the image at that point; the gray level is also called the intensity. Digital images are a discretized partition of the spatial image into small cells which are referred to as pixels - picture elements. Digital image processing is a field concerned with processing digital images using a digital computer. Processing of digital images includes operations such as acquisition, storage, retrieval, translation, compression, etc.
Quantifying disease progression of patients with early-stage Rheumatoid Arthritis (RA) presents special challenges. Establishing a robust and reliable method that combines the ACR criteria with bone and soft-tissue measurement techniques would make possible the diagnosis of early RA and/or the monitoring of the progress of the disease. In this paper, an automated, reliable and robust system that combines the ACR criteria with radiographic-absorptiometry-based bone and soft-tissue density measurement techniques is presented. The system is comprised of an image digitization component and an automated image analysis component. Radiographs of the hands and the calibration wedges are acquired and digitized following a standardized procedure. The image analysis system segments the relevant joints into soft-tissue and bone regions and computes density values of each of these regions relative to the density of the reference wedges. Each of the joints is also scored by trained radiologists using the well-established ACR criteria. The results of this work indicate that the use of standardized imaging procedures and robust image analysis techniques can significantly improve the reliability of quantitative measurements for rheumatoid arthritis assessment. Furthermore, the methodology has the potential to be clinically used in assessing the disease condition of early-stage RA subjects.
Keywords: Automated hand image analysis, Hand image segmentation, Radiographic absorptiometry, Rheumatoid arthritis
1. INTRODUCTION
Conventional examination of hand radiographs is well established as a diagnostic as well as an outcome measure in Rheumatoid Arthritis (RA). It is readily available and has been correlated with measures of disease activity and function. X-ray changes are, however, historical rather than predictive, and there is significant observer variation in quantifying erosive changes. The earliest radiographic changes seen in the hand are soft-tissue swelling symmetrically around the joints involved, juxta-articular osteoporosis and erosion of the ‘bare’ areas of bone (i.e. areas lacking articular cartilage). These changes help to confirm the presence of an inflammatory process.
The presence of early soft-tissue swelling is easily recognized on plain radiographs but not readily quantified. Although the presence of early osteoporosis is recognized in the affected hand, a mild osteoporosis may be extremely subtle to the eyes. The recognition of the changes in soft-tissue and bone density is subjective and is known to vary from assessor to assessor. Therefore, attention has been focused on the more objective erosion and joint narrowing assessment. Use of magnetic resonance (MR) technique has been shown to sensitively detect early local edema and inflammation prior to a positive finding on plain film radiographs. However, MR is an expensive examination and may not be used as a routine technique.
Presently, radiographs of the hands and wrists are employed to assess disease progression. The parameters used to determine progression are the changes in erosions and joint-space narrowing observed on the radiographs. There are some problems with both of these parameters. First, both erosion and joint narrowing are not the earliest changes in RA and further they may be substantially irreversible. Second, these two changes may occur independently of each other. Third, there is a tremendous variability in erosive disease: some patients never develop erosions; some go into spontaneous remission of their erosive disease; and for some, the progression is relentless. Fourth, joint-space or cartilage loss may be caused by either the disease itself or by mechanical stress. Present scoring methods require that any degree of joint-space loss be recorded as a progressive change due to RA.
Quantitative techniques currently available may provide a new approach in monitoring disease progression in patients with RA. Adoption of these techniques may have implications for the management of patients with RA and for possible detection of the disease at an early stage.
1.1. Hand bone densitometry
Considerable advances have been made over the past two decades in developing radiological techniques for assessing bone density. However, all of these techniques have been applied to aging-related osteoporosis, a pathological change involving general bone mineral reduction. Owing to the wide availability of DXA, recently published research describes the use of Bone Mineral Density (BMD) measurements in the hands of patients with chronic RA. Most published observations on RA have examined BMD changes, focusing only on the general bone loss around the joints. Quantifying the difference in bone loss between the juxta-articular bone and the shaft of the tubular bones in the hands could be a sensitive index for quantitative analysis of RA patients. Hand BMD measurements offer an observer-independent and reproducible means of quantifying the cumulative effects of local and systemic inflammation. The technique could be of use in the assessment of patients with early RA, in whom conventional measures of disease are not helpful until the disease is (irreversibly) more advanced.
1.2. Hand radiographic absorptiometry
In conventional Radiographic Absorptiometry, radiographs of the hand are acquired with reference wedges placed on the films. The films are subsequently analyzed using an optical densitometer. The resulting density values computed by the densitometer are calibrated relative to that of the reference wedge and are expressed in arbitrary units.
Recent improvements in hardware and software available for digital image processing have led to the quantitative assessment of radiological abnormalities in diagnostic radiology. Such improvements have also enabled the introduction of several radiographic absorptiometry techniques. One such technique uses centralized analysis of hand radiographs and averages the BMD of the second to fourth middle phalanges. Another technique developed in Japan uses the diaphysis of the second metacarpal to determine BMD. A third technique developed in Europe measures the diaphysis and proximal metaphysis of the second middle phalanx. Based on published short-term precision errors, computer-assisted Radiographic Absorptiometry appears to be suitable for measurements of the BMD of phalanges and metacarpals, and is used in several hundred centers worldwide.
In this work we present preliminary results of an ongoing research work aimed at developing an automated radiographic absorptiometry system for the assessment and monitoring of both BMD and soft tissue swelling in early stage RA. This paper focuses on the reproducibility and accuracy of the methodology being developed. The paper is organized as follows: the next section provides an overview of the image acquisition procedure. In section 3 the image analysis algorithms used in this work are presented. In section 4 we present results obtained by analyzing the data collected in a small reproducibility study involving 10 normal subjects.
2. IMAGE ACQUISITION
One key factor influencing the outcome of any radiographic absorptiometry technique is the standardization of the image acquisition technique. Variability in acquisition parameters can significantly affect the measured values. In order to carry out this work, a standard image acquisition protocol was defined. This protocol has been successfully used in earlier large scale multinational phase 3 clinical trials for Rheumatoid Arthritis related drugs. Radiographs of the left and right hands are taken one at a time. Templates were developed to guide the positioning of the hand with respect to the center of the x-ray beam. The X-ray beam was centered between the 2nd and 3rd metacarpo-phalangeal joints and angled at 90° to the film surface. This results in a tangential image of the joints. Improper beam centering generally results in overlapping joint margins. The X-ray exposure parameters were maintained constant for all subjects. All normal subjects were imaged at the same clinic at UCSF. In addition to providing a template for hand positioning, two sets of calibration wedges were also provided to the clinic. Each set of wedges consisted of one Acrylic wedge for soft tissue and one Aluminum wedge for bone tissue. These wedges were custom designed for the purposes of this research work.
Image Enhancement and Restoration
The image at the left of Figure 1 has been corrupted by noise during the digitization process. The 'clean' image at the right of Figure 1 was obtained by applying a median filter to the image.
An image with poor contrast, such as the one at the left of Figure 2, can be improved by adjusting the image histogram to produce the image shown at the right of Figure 2.
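As an illustration of these two operations, here is a minimal Python sketch (assuming NumPy and SciPy are available); the synthetic array stands in for the digitized radiographs shown in the figures:

import numpy as np
from scipy import ndimage

def denoise_median(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood,
    which suppresses impulse ('salt and pepper') noise while preserving edges."""
    return ndimage.median_filter(img, size=size)

def stretch_contrast(img):
    """Linearly rescale the occupied gray-level range to the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:                                  # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

# Example on synthetic data: a low-contrast gradient corrupted by salt noise.
rng = np.random.default_rng(0)
img = np.tile(np.linspace(60, 180, 64, dtype=np.uint8), (64, 1))
noisy = img.copy()
noisy[rng.random(img.shape) < 0.05] = 255         # corrupt 5% of the pixels
cleaned = stretch_contrast(denoise_median(noisy))
print(cleaned.min(), cleaned.max())               # full 0..255 range after stretching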
3. IMAGE ANALYSIS
One of the major difficulties in analyzing hand radiograph images is the high level of noise present in the images. Additionally, the trabecular texture of the hand in the vicinity of the joints increases the noise in edge maps of these regions. Use of a non-standard acquisition protocol can add further challenges, as it can result in further degradation of image quality. This last challenge is minimized in this work, as a standard image acquisition protocol is followed. Given a particular application, varying degrees of accuracy in anatomy segmentation can be considered acceptable. For instance, in detecting joint-space narrowing there is a need for accurate and reliable determination of the joint-space of any finger and the bone edges in this region. However, accurate delineation of the bone edges elsewhere is not as relevant. Depending upon the application there can be additional constraints on performance as well. In an application for which off-line processing of the data is acceptable, more sophisticated algorithms can be employed. This particular application requires that the overall process be fast, accurate and reproducible enough for on-line processing. Accurate estimation of the bone edges in the middle shaft and in the joint vicinity is of high relevance in this work. This is primarily because the disease progression follows different patterns in the joint area as compared to the middle phalange area. Also, the manifestations of the disease symptoms in the early stage have different effects on soft tissue and bone, which requires reliable segmentation of these two tissue types at different time points. The algorithm for hand segmentation can be outlined in the following main stages (a simplified sketch follows the list):
• Hand outline delineation
• Joint identification
• Bone outline delineation
• Segmentation of soft tissue and bone
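The paper does not spell out the algorithms behind these stages, so the following Python fragment is only an illustrative sketch: it approximates stages 1 and 4 with two global percentile thresholds, which are placeholder assumptions rather than the authors' method.

import numpy as np

def segment_hand(img):
    """Label each pixel 0 = background, 1 = soft tissue, 2 = bone.

    Stage 1 (hand outline) and stage 4 (tissue/bone split) are approximated
    here with two global intensity thresholds; in the actual system the joints
    (stage 2) are located from radiologist-placed control points, and the bone
    outlines (stage 3) are delineated explicitly.
    """
    t_hand = np.percentile(img, 60)   # assumed: dimmer pixels are background
    t_bone = np.percentile(img, 90)   # assumed: brightest pixels are bone
    labels = np.zeros(img.shape, dtype=np.uint8)
    labels[img > t_hand] = 1          # soft tissue
    labels[img > t_bone] = 2          # bone
    return labels

Relative density, as described above, would then be computed per region, e.g. the mean gray level of the bone pixels near a joint calibrated against the aluminum reference wedge.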
The first stage of the algorithm has been well studied in the literature and is not described here. The second stage can be more challenging, especially when dealing with the hands of patients in an advanced stage of disease progression. As this system will be applied to a patient population in the early disease stage, it is expected that the joints will be well defined. The system provided to the radiologists allows them to adjust the location of the automatically identified joints. Results presented in this work were obtained by having the radiologists place control points to identify the joints, rather than having them automatically computed by the system.
3.1. Control point placement
A simple user interface was provided to enable placement of control points on the joints. This was primarily done to investigate the sensitivity of the system to the initial control point positioning, which in an automated system would invariably be the same for the same image. The user placed 16 control points on each image. These joints are shown in Figure 4. In addition to placing the control points for the joints, the control points for the two wedges are also placed by the radiologists. For each wedge, six control points are placed, with four at the corners and two in the middle. Once all the control points are placed, the remaining steps of the generalized algorithm stated above are carried out autonomously. The middle phalange, or cortex, control points are computed automatically and are located at the middle of a straight line connecting the two joints, one above and one below the middle phalange. The diameters of the circular regions of interest placed around each joint are computed proportionally to the length
#18
[attachment=11808]
DIGITAL IMAGE PROCESSING
Abstract

Digital Image Processing enhances digital images and extracts information and features from them. It has become the most common form of image processing, and it is generally used because it is not only the most versatile method but also the cheapest. It is widely used for editing digital images taken with digital cameras, and the technology is especially useful in criminal investigation. Digital Image Processing has the advantage that a wide range of algorithms can be applied to the input data, and problems such as the build-up of noise and signal distortion during processing can be avoided. To this end, NASA and the US military have developed advanced computer software; using this software, the clarity of, and the amount of detail visible in, still and video images can be improved.
The main feature of this technology is digital image editing. Image editors provide the means for altering and improving images in an almost endless number of ways, and they accept images in a large variety of image formats. The other features of this technology are image size alteration, cropping of an image, removal of noise and unwanted elements, merging of images and, finally, color adjustments.
In this paper we present the categories of digital image processing, image compression, image viewing and image types, digital image editing, and finally the advantages and disadvantages of digital image processing.
Introduction
Digital Image Processing is concerned with acquiring and processing an image. In simple words, an image is a representation of a real scene, either in black and white or in color, and either in print form or in digital form; technically, an image is a two-dimensional light intensity function. In other words, it is a set of intensity values arranged in a two-dimensional form like an array, and a required property of the image can be extracted by processing it. An image is typically described by stochastic models: the image itself may be represented by an autoregressive (AR) model, and degradation by a moving-average (MA) model.
Image Processing
Image processing is the enhancement of an image or the extraction of information or features from it: any activity that transforms an input image into an output image, or, more generally, the manipulation and alteration of images using computer software.
Digital Image Processing
Digital image processing is the use of computer algorithms to perform image processing on digital images. Digital image processing has the same advantages over analog image processing as digital signal processing has over analog signal processing: it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing.
Digital Image
A digital image is a representation of a two-dimensional image as a finite set of values, called picture elements or pixels. Typically, the pixels are stored in computer memory as a one- or two-dimensional array of small integers. These values are often transmitted or stored in a compressed form.
Digital images can be created by a variety of input devices and techniques, such as digital cameras, scanners, coordinate-measuring machines, seismographic profiling, airborne radar, and more.
It is an image that was acquired through scanners or captured from digital cameras. The most common kind of digital image processing is digital image editing.
History
Because of the computational load of dealing with images containing millions of pixels, digital image processing was largely of academic interest until the 1970s, when dedicated hardware became available that could process images in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations.
With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.
Digital Processing Of Camera Images
Images taken by popular digital cameras often need processing to improve their quality, and the ability to process them is a distinct advantage digital cameras have over film cameras. The digital image processing is done by special software programs that manipulate the images in many ways. This process is performed in a "digital darkroom", which is not really a darkroom at all, as the work is accomplished via a computer and keyboard.
Reasons for Introducing Digital Image Processing
Figure 1: Polarization by filters
Few types of evidence are more incriminating than a photograph or videotape that places a suspect at a crime scene, whether or not it actually depicts the suspect committing a criminal act. Ideally, the image will be clear, with all persons, settings, and objects reliably identifiable. Unfortunately, though, that is not always the case, and the photograph or video image may be grainy, blurry, of poor contrast, or even damaged in some way.
In such cases, investigators may rely on computerized technology that enables digital processing and enhancement of an image. The U.S. government, and in particular, the military, the FBI, and the National Aeronautics and Space Agency (NASA), and more recently, private technology firms, have developed advanced computer software that can dramatically improve the clarity of and amount of detail visible in still and video images. NASA, for example, used digital processing to analyze the video of the Challenger incident.
How Can We Process An Image?
The first step in digital image processing is to transfer an image to a computer, digitizing the image and turning it into a computer image file that can be stored in a computer's memory or on a storage medium such as a hard disk or CD-ROM. Digitization involves translating the image into a numerical code that can be understood by a computer. It can be accomplished using a scanner or a video camera linked to a frame grabber board in the computer.
The computer breaks down the image into thousands of pixels. Pixels are the smallest components of an image; they are the small dots in the horizontal lines across a television screen. Each pixel is converted into a number that represents the brightness of the dot. For a black-and-white image, the pixel represents a shade between total black and full white. The computer can then adjust the pixels to enhance image quality.
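As a small illustration of this digitization step, the following Python sketch (using the Pillow and NumPy libraries; the filename is hypothetical) reads an image and inspects its grid of brightness numbers:

import numpy as np
from PIL import Image

img = np.asarray(Image.open("scene.png").convert("L"))  # "L" = 8-bit grayscale
print(img.shape)      # (rows, columns) of pixels
print(img[0, 0])      # brightness of the top-left pixel, 0 (black) .. 255 (white)
print(img.mean())     # average brightness over the whole grid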
Categories Of Digital Image Processing:
The three main categories of digital image processing are:
Image Compression is a mathematical technique used to reduce the amount of computer memory needed to store a digital image. The computer discards (rejects) some information, while retaining sufficient information to make the image pleasing to the human eye.
Enhancement: Image enhancement techniques can be used to modify the brightness and contrast of an image, to remove blurriness, and to filter out some of the noise. Using mathematical equations called algorithms, the computer applies each change to either the whole image or a particular portion of the image.
For example, global contrast enhancement would affect the entire image, whereas local contrast enhancement would improve the contrast of small details, such as a face or a license plate on a vehicle. Some algorithms can remove background noise without disturbing the key components of the image.
Measurement Extraction is used to gather useful information from an enhanced image.
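As an illustration of the enhancement category, here is a minimal pure-NumPy sketch of one common global contrast algorithm, histogram equalization; a local variant would apply the same remapping inside small tiles of the image:

import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)   # count pixels per gray level
    cdf = hist.cumsum().astype(np.float64)           # cumulative distribution
    cdf_min = cdf[np.nonzero(hist)[0][0]]            # first occupied gray level
    if cdf[-1] == cdf_min:                           # constant image: nothing to do
        return img.copy()
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]   # remap every pixel

# Example: a narrow band of gray levels is spread over the full range.
img = np.clip(np.random.default_rng(0).normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
out = equalize_hist(img)
print(img.std() < out.std())   # True: the spread of gray levels increases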
Image Viewing
The user can utilize different programs to view an image. GIF, JPEG and PNG images can be viewed simply using a web browser because they are the standard internet image formats. The SVG format is increasingly used on the web and is a standard W3C format.
Image Types
Digital images can be classified according to the number and nature of their samples (binary, grayscale, color, and so on). The term digital image is also applied to data associated with points scattered over a three-dimensional region, such as that produced by tomography equipment. In that case, each datum is called a voxel.
Types Of Images
1. Binary Image: A binary image is a digital image that has only two possible values for each pixel. Binary images are also called bi-level or two-level. A binary image is usually stored in memory as a bitmap, a packed array of bits. (In an unrelated sense, a "binary" is also a compiled version of source code on Linux and Unix systems.)
2. Gray Scale: In computing, a grayscale (or greyscale) digital image is an image in which the value of each pixel is a single sample. Grayscale images are distinct from black-and-white images, which in the context of computer imaging are images with only two colors, black and white; grayscale images have many shades of gray in between. In most contexts other than digital imaging, however, the term "black and white" is used in place of "grayscale";
For example, photography in shades of gray is typically called "black-and-white photography". The term monochromatic in some digital imaging contexts is synonymous with grayscale, and in some contexts synonymous with black-and-white.
3. Color Image: A (digital) color image is a digital image that includes color information for each pixel. For visually acceptable results, it is necessary (and almost sufficient) to provide three samples (color channels) for each pixel, which are interpreted as coordinates in some color space. The RGB color space is commonly used in computer displays, but other spaces such as YUV and HSV are often used in other contexts. Color Image Representation: A color image is usually stored in memory as a raster map, a two-dimensional array of small integer triplets, or (rarely) as three separate raster maps. A short sketch of these in-memory layouts is given below.
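Here is that sketch, showing how the three image types are typically laid out as NumPy arrays (the sizes are arbitrary):

import numpy as np

binary = np.zeros((4, 4), dtype=bool)          # two-level: False/True per pixel
binary[1:3, 1:3] = True

gray = np.arange(16, dtype=np.uint8).reshape(4, 4) * 17   # one sample per pixel

color = np.zeros((4, 4, 3), dtype=np.uint8)    # raster map of (R, G, B) triplets
color[..., 0] = 255                            # a solid red image in RGB space

for name, a in [("binary", binary), ("gray", gray), ("color", color)]:
    print(name, a.shape, a.dtype)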
#19
Presented by
Arunachalam. PL
Nagaraj.K.N

[attachment=12103]
INTRODUCTION
Image processing involves processing or altering an existing image in a desired manner.
The next step is obtaining an image in a readable format.
The Internet and other sources provide countless images in standard formats.
Image processing has two aspects:
improving the visual appearance of images to a human viewer
preparing images for measurement of the features and structures present.
Since the digital image is “invisible”, it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.)
The digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on body part, diagnostic task, viewing preferences, etc.)
It might be possible to analyze the image in the computer and provide cues to the radiologists to help detect important or suspicious structures (e.g. Computer-Aided Diagnosis, CAD)
Scientific instruments commonly produce images to communicate results to the operator, rather than generating an audible tone or emitting a smell.
Space missions to other planets and Comet Halley always include cameras as major components, and we judge the success of those missions by the quality of the images returned.
Image-to-image transformations
Image-to-information transformations
Information-to-image transformations
Enhancement (make image more useful, pleasing)
Restoration
e.g. deblurring, grid line removal
Geometry (scaling, sizing, zooming, morphing one object to another).
Image statistics (histograms)
The histogram is the fundamental tool for image analysis and processing; a tiny computational sketch follows.
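A minimal example, assuming an 8-bit grayscale image held as a NumPy array:

import numpy as np

def gray_histogram(img):
    """256-bin histogram of an 8-bit grayscale image."""
    return np.bincount(img.ravel(), minlength=256)

img = np.clip(np.random.default_rng(1).normal(100, 15, (128, 128)), 0, 255).astype(np.uint8)
h = gray_histogram(img)
print(h.argmax())           # most common gray level (about 100 here)
print(h.sum() == img.size)  # every pixel is counted exactly once -> True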
Image compression
Image analysis (image segmentation, feature extraction, pattern recognition)
computer-aided detection and diagnosis (CAD)
Decompression of compressed image data.
Reconstruction of image slices from CT or MRI raw data.
Computer graphics, animations and virtual reality (synthetic objects).
The process of obtaining a high resolution (HR) image or a sequence of HR images from a set of low resolution (LR) observations.
HR techniques are being applied to a variety of fields, such as obtaining
improved still images
high definition television,
high performance color liquid crystal display (LCD) screens,
video surveillance,
remote sensing, and
medical imaging.
Conversion from RGB (the brightness of the individual red, green, and blue signals at defined wavelengths) to YIQ/YUV and to the other color encoding schemes is straightforward and loses no information.
Y, the “luminance” signal, is just the brightness of a panchromatic monochrome image that would be displayed by a black-and-white television receiver
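A minimal sketch of this RGB-to-YIQ conversion, using one common form of the NTSC matrix (coefficients rounded to three decimals); because the transform is linear and invertible, no information is lost:

import numpy as np

RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luminance (panchromatic brightness)
    [0.596, -0.274, -0.322],   # I: orange-blue chrominance axis
    [0.211, -0.523,  0.312],   # Q: purple-green chrominance axis
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) array of RGB values in [0, 1] to YIQ."""
    return rgb @ RGB_TO_YIQ.T

def yiq_to_rgb(yiq):
    """Invert the linear transform, recovering RGB exactly (up to rounding)."""
    return yiq @ np.linalg.inv(RGB_TO_YIQ).T

pixel = np.array([1.0, 0.5, 0.25])
print(rgb_to_yiq(pixel))                                  # Y is the black-and-white brightness
print(np.allclose(yiq_to_rgb(rgb_to_yiq(pixel)), pixel))  # True: lossless round trip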
COLOR DISPLAYS
• Most computers use color monitors that have much higher resolution than a television set but operate on essentially the same principle.
• Smaller phosphor dots, a higher frequency scan, and a single progressive scan (rather than interlace) produce much greater sharpness and color purity.
MULTIPLE IMAGES
• Multiple images may constitute a series of views of the same area, using different wavelengths of light or other signals.
• Examples include the images produced by satellites, such as
– the various visible and infrared wavelengths recorded by the Landsat Thematic Mapper™, and
– images from the Scanning Electron Microscope (SEM) in which as many as a dozen different elements may be represented by their X-ray intensities.
– These images may each require processing.
HARDWARE REQUIREMENTS
For a general-purpose computer to be useful for image processing, four key demands must be met: high-resolution image display, sufficient memory transfer bandwidth, sufficient storage space, and sufficient computing power.
A 32-bit computer can address up to 4 GB of memory (RAM).
SOFTWARE REQUIREMENTS
• Adobe Photoshop
• Corel Draw
• Serif Photoplus
#20
[attachment=12178]
1.1 Introduction:
A digital image is a representation of a two-dimensional signal using ones and zeros (binary). Depending on whether or not the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images, also called bitmap images.
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
Raster image types:
Each pixel of a raster image is typically associated to a specific 'position' in some 2D region, and has a value consisting of one or more quantities (samples) related to that position. Digital images can be classified according to the number and nature of those samples:
• binary
• grayscale
• color
• false-color
• multi-spectral
• thematic
• picture function
The term digital image is also applied to data associated with points scattered over a three-dimensional region, such as that produced by tomography equipment; each such datum is called a voxel.
1.2 Digital Image Processing:
The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
1.2.1 Fundamental Steps In Digital Image Processing:
Image Enhancement:

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured or simply to highlight certain features of interest in an image.
A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
Image Restoration:
Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
Color Image Processing:
Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the internet.
Wavelets:
Wavelets are the foundation for representing images in various degrees of resolution. In particular, they are used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.
Compression:
As the name implies, compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. Image compression is familiar to most users of computers in the form of image file extensions, such as the .jpg extension used by the JPEG (Joint Photographic Experts Group) standard.
1.2.2 Applications of Digital Image Processing:
• Remote sensing via satellites & spacecraft
• Image transmission & storage for business applications
• Medical processing
• Radar & sonar image processing
• Robotics
Examples of fields that use digital image processing:
• Gamma-ray imaging
• X-ray imaging
• Imaging in the ultraviolet band
• Imaging in the visible & infrared bands
• Imaging in the microwave band
• Imaging in the radio band
1.3 Aim of the project:
The main aim of any encoder scheme is to compress the number of data bits that must be transmitted over the channel. The JPEG encoder mainly tries to remove three types of redundancy that occur in an image, namely coding redundancy, psychovisual redundancy and interpixel redundancy.
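As a sketch of how the interpixel and psychovisual redundancies are attacked, the following Python fragment applies a 2-D DCT and coarse quantization to a single 8x8 block; the uniform quantization step here is a stand-in assumption, not JPEG's standard luminance table:

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):  return dct(dct(block.T, norm='ortho').T, norm='ortho')
def idct2(block): return idct(idct(block.T, norm='ortho').T, norm='ortho')

block = np.tile(np.linspace(50, 200, 8), (8, 1))   # a smooth 8x8 image patch
coeffs = dct2(block - 128.0)                       # level-shift, then 2-D DCT
q = 20.0                                           # stand-in quantization step
quantized = np.round(coeffs / q)                   # most coefficients become 0
print(int(np.count_nonzero(quantized)))            # few nonzeros left to entropy-code
recon = idct2(quantized * q) + 128.0               # decoder side
print(float(np.abs(recon - block).max()))          # small residual distortion (lossy)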
1.4 Problem statement:
The main problem with the JPEG encoder is that it applies to stationary images. However, in certain circumstances we need to apply compression to non-stationary images, where we need to rely upon discrete wavelet transforms.
1.5 ORGANIZATION OF THE PROJECT:
Chapter 1 presents an introduction to image processing, fundamental steps, applications and example areas of image processing. Finally, the aim and problem statement are discussed.
Chapter 2 provides the literature survey on the Fourier transform, the discrete Fourier transform and the fast Fourier transform, and their properties.
Chapter 3 provides the various concepts of image compression, the types of image compression, the proposed model block diagram and an explanation of that block diagram.
Chapter 4 focuses on the implementation of the JPEG encoder module by module: DCT, quantiser, entropy encoding.
Chapter 5 presents some of the results obtained for the input images and the corresponding reconstructed images.
Chapter 6 gives applications and usage of the JPEG encoder.
Chapter 7 gives a summary of the work carried out, including conclusions and performance analysis.
Chapter 8 presents the scope for future work.
Chapter 9 lists the references.
Appendix A focuses on the VLSI Design Flow.
Appendix B presents the source code.
2. LITERATURE SURVEY
2.1 Introduction to Fourier Transform:

In mathematics, the Fourier transform (often abbreviated FT) is an operation that transforms one complex-valued function of a real variable into another. In such applications as signal processing, the domain of the original function is typically time and is accordingly called the time domain. That of the new function is frequency, and so the Fourier transform is often called the frequency domain representation of the original function. It describes which frequencies are present in the original function. This is analogous to describing a chord of music in terms of the notes being played. In effect, the Fourier transform decomposes a function into oscillatory functions. The term Fourier transform refers both to the frequency domain representation of a function, and to the process or formula that "transforms" one function into the other.
The Fourier transform and its generalizations are the subject of Fourier analysis. In this specific case, both the time and frequency domains are unbounded linear continua. It is possible to define the Fourier transform of a function of several variables, which is important for instance in the physical study of wave motion and optics. It is also possible to generalize the Fourier transform on discrete structures such as finite groups, efficient computation of which through a fast Fourier transform is essential for high-speed computing.
Definition:
There are several common conventions for defining the Fourier transform of an integrable function ƒ : R → C (Kaiser 1994). This article will use the definition:

f̂(ξ) = ∫_{−∞}^{+∞} ƒ(x) e^(−2πixξ) dx,   for every real number ξ.

When the independent variable x represents time (with SI unit of seconds), the transform variable ξ represents frequency (in hertz). Under suitable conditions, ƒ can be reconstructed from f̂ by the inverse transform:

ƒ(x) = ∫_{−∞}^{+∞} f̂(ξ) e^(2πixξ) dξ,   for every real number x.
Introduction: The motivation for the Fourier transform comes from the study of Fourier series. In the study of Fourier series, complicated periodic functions are written as the sum of simple waves mathematically represented by sines and cosines. Due to the properties of sine and cosine, it is possible to recover the amount of each wave in the sum by an integral. In many cases it is desirable to use Euler's formula, which states that e^(2πiθ) = cos(2πθ) + i sin(2πθ), to write Fourier series in terms of the basic waves e^(2πiθ). This has the advantage of simplifying many of the formulas involved and providing a formulation for Fourier series that more closely resembles the definition followed in this article. This passage from sines and cosines to complex exponentials makes it necessary for the Fourier coefficients to be complex valued. The usual interpretation of this complex number is that it gives both the amplitude (or size) of the wave present in the function and the phase (or initial angle) of the wave. This passage also introduces the need for negative "frequencies": if θ were measured in seconds, then the waves e^(2πiθ) and e^(−2πiθ) would both complete one cycle per second, but they represent different frequencies in the Fourier transform. Hence, frequency no longer measures the number of cycles per unit time, but is closely related.
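A small numerical illustration of this decomposition, using NumPy's FFT on a signal built from two "notes":

import numpy as np

fs = 1000                                   # sampling rate, Hz
t = np.arange(fs) / fs                      # one second of samples
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(x))           # magnitude of each frequency component
freqs = np.fft.rfftfreq(x.size, d=1 / fs)   # the frequency (Hz) of each bin
print(freqs[spectrum > 100])                # -> [ 50. 120.]: the two notes recovered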
#21
[attachment=12612]
Abstract:
This paper presents a digital image processing based finite element method for the two-dimensional mechanical analysis of geomaterials by actually taking into account their material inhomogeneities and microstructures. The proposed method incorporates the theories and techniques of digital image processing, the principles of geometry vectorization and the techniques of automatic finite element mesh generation into conventional finite element methods. Digital image techniques are used to acquire the inhomogeneous distributions of geomaterials, including soils, rocks, asphalt concrete and cement concrete, in digital format. Digital image processing algorithms are developed to identify and classify the main homogeneous material types and their distribution structures that form the inhomogeneity of a geomaterial in the image. The vectorized digital images are used as inputs for finite element mesh generation using automatic mesh generation techniques. Lastly, conventional finite element methods are employed to carry out the computation and analysis of geomechanical problems by taking into account the actual internal inhomogeneity of the geomaterial. Using asphalt concrete as an example, the paper gives a detailed explanation of the proposed digital image processing based finite element method.
*Introduction
Digital image processing (DIP) is the term applied to convert video pictures into a digital form, and apply various mathematical algorithms to extract significant information from the picture. This information may be characteristics of cracks on a material surface, the microstructure of inhomogeneous soils and rocks and other man-made geo-materials, texture of sea ice or the
angularities and shapes of granular materials. While digital image processing has been widely used in a range of engineering topics in recent years, a literature survey indicates that the incorporation of digital image processing into computational geomechanical methods such as finite element methods (FEM) is very limited. This paper is intended to present an innovative digital image processing based finite element method for the mechanical analysis of geomaterials that takes into account their actual inhomogeneities and microstructures. It is noted that the DIP based FEM proposed in this paper is for two-dimensional finite element analysis. Using the stereoscopic logical alternation principle, it is believed that the proposed method can be extended to three-dimensional finite element analysis.
*Digital images and discrete function
This section illustrates how a digital image captures the microstructure of a geomaterial. A cylindrical asphalt concrete (AC) sample is used for the illustration. In general, field cores or laboratory prepared AC or rock samples can be cut with a circular masonry saw into multiple vertical or horizontal plane cross-sections. The fresh cross-sections are then photographed with either a conventional camera or a digital camera. A scale is placed beneath the section for DIP scaling and calibration. If a conventional camera is used, the photographs can be digitized using a scanner and the digital image stored in a desktop computer. The digital image consists of a rectangular array of image elements or pixels. Each pixel is the intersection area of a horizontal scanning line with a vertical scanning line; these lines all have an equal width h. At each pixel, the image brightness is sensed and assigned an integer value known as the gray level. For the most commonly used 256-level gray images and for binary images, the gray levels span the integer intervals from 0 to 255 and from 0 to 1 respectively. As a result, the digital image can be expressed as a discrete function f(i, j) in the i and j Cartesian coordinate system.
#22
Submitted by:
PC SHERJEEL BIN AAMIR
NS NADEEM KHAN
NS TAUQEER ANJUM SATTI
NS KAMRAN SHAFQAT
NS KHAWAR ALI
NS MEHTAB AHMED

[attachment=14097]
INTRODUCTION:
The purpose of our project is to identify a tumor from a given MRI scan of a brain using digital image processing techniques.
ABSTRACT:
The part of the image that contains the tumor has higher intensity, and we can make assumptions about the radius of the tumor in the image; these are the basic considerations in the algorithm. First, image enhancement and noise reduction techniques are used to improve the image quality; after that, morphological operations are applied to detect the tumor in the image. The morphological operations are applied on the basis of assumptions about the size and shape of the tumor, and in the end the detected tumor is mapped onto the original gray scale image with intensity 255 to make it visible. The algorithm has been tried on a number of different images taken from different angles and has always given the correct desired result.
TUMOR:
A tumor or tumour is the name for a neoplasm or a solid lesion formed by an abnormal growth of cells (termed neoplastic) which looks like a swelling. Tumor is not synonymous with cancer: a tumor can be benign, pre-malignant or malignant, whereas cancer is by definition malignant.
TYPES OF TUMOR:
BENIGN TUMOR :

A benign tumor is a tumor that lacks all three of the malignant properties of a cancer. Thus, by definition, a benign tumor does not grow in an unlimited, aggressive manner, does not invade surrounding tissues, and does not spread to non-adjacent tissues (metastasize). Common examples of benign tumors include moles and uterine fibroids.
MALIGNANT :
Malignancy (from the Latin roots mal- = "bad" and -ignis = "fire") is the tendency of a medical condition, especially tumors, to become progressively worse and to potentially result in death. It is characterized by the properties of anaplasia, invasiveness, and metastasis. Malignant is a corresponding adjectival medical term used to describe a severe and progressively worsening disease. The term is most familiar as a description of cancer.
PREMALIGNANT :
A precancerous condition (or premalignant condition) is a disease, syndrome, or finding that, if left untreated, may lead to cancer. It is a generalized state associated with a significantly increased risk of cancer.
MRI:
Magnetic resonance imaging (MRI), or nuclear magnetic resonance imaging (NMRI), is primarily a medical imaging technique used in radiology to visualize detailed internal structure and limited function of the body. MRI provides much greater contrast between the different soft tissues of the body than computed tomography (CT) does, making it especially useful in neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. Unlike CT, MRI uses no ionizing radiation. Rather, it uses a powerful magnetic field to align the nuclear magnetization of (usually) hydrogen atoms in water in the body. Radio frequency (RF) fields are used to systematically alter the alignment of this magnetization. This causes the hydrogen nuclei to produce a rotating magnetic field detectable by the scanner. This signal can be manipulated by additional magnetic fields to build up enough information to construct an image of the body.
METHODOLOGY:
The part of the image containing the tumor normally has higher intensity than the other portions, and we can make assumptions about the area, shape and radius of the tumor in the image. We have used these basic conditions to detect the tumor in our code, which goes through the following steps:
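The list of steps is truncated in the original post; the following Python fragment is therefore only a sketch of the kind of pipeline the abstract describes, and the threshold, structuring element and minimum area are illustrative assumptions rather than the authors' actual values:

import numpy as np
from scipy import ndimage

def detect_tumor(mri, thresh=200, min_area=50):
    """Return a copy of a grayscale MRI slice with the detected region set to 255."""
    smoothed = ndimage.median_filter(mri, size=3)                    # noise reduction
    mask = smoothed > thresh                                         # tumor assumed brightest
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))   # drop small specks
    labels, n = ndimage.label(mask)                                  # connected components
    out = mri.copy()
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:      # size/shape assumption from the abstract
            out[region] = 255             # map the tumor onto the original image
    return out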
#23
Hi, I am searching for a topic for a computer vision project and can't decide. I want it to be related to biological signal processing. Could anybody help me?
#24
ABSTRACT:
Image processing operations deal with the storage, transmission and restoration of an image using the minimum number of bits, without any noticeable tradeoff in the clarity of the image. Image processing operations are divided into three major categories: image compression, image enhancement and restoration, and measurement extraction. Image compression is familiar to most people; it involves reducing the amount of memory needed to store a digital image, whereas image enhancement and restoration deal with recovering the image.
This paper deals with image enhancement and restoration, which helps restore an image with maximum clarity and enhances image quality. The first section describes what image enhancement and restoration is, the second section describes the techniques used for image enhancement and restoration, and the final section describes the advantages and disadvantages of using these techniques.
#25
[attachment=15447]
INTRODUCTION
Image processing is one of the most powerful technologies that will shape science and engineering in the twenty first century. In the broadest sense, image processing is any form of information processing for which both the input and output are images, such as photographs or frames of video. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it.
SIGNAL PROCESSING
Signal processing is the processing, amplification and interpretation of signals; it deals with the analysis and manipulation of signals.
SOLUTION METHODS
A few decades ago, image processing was done largely in the analog domain, chiefly by optical devices. These optical methods are still essential to applications such as holography because they are inherently parallel; however, due to the significant increase in computer speed, these techniques are increasingly being replaced by digital image processing methods.
Digital image processing techniques are generally more versatile, reliable, and accurate; they have the additional benefit of being easier to implement than their analog counterparts. Today, hardware solutions are commonly used in video processing systems. However, commercial image processing tasks are more commonly done by software running on conventional personal computers.
IMAGE RESOLUTION
Image resolution describes the detail an image holds. The term applies equally to digital images, film images, and other types of images. Higher resolution means more image detail.
Image resolution can be measured in various ways. Basically, resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes or to the overall size of a picture. Furthermore, line pairs are often used instead of lines. A line pair is a pair of adjacent dark and light lines, while lines count both dark lines and light lines.
PIXEL RESOLUTION
The term resolution is often used as a pixel count in digital imaging. Pixel counts are not true resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution.
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction from pixels would be preferred, but for the illustration of pixels, the sharp squares make the point better).
EDGE DETECTION
The goal of edge detection is to mark the points in a digital image at which the luminous intensity changes sharply. Edge detection is a research field within image processing and computer vision, in particular within the area of feature extraction. Edge detection of an image reduces significantly the amount of data and filters out information that may be regarded as less relevant, preserving the important structural properties of an image.
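A minimal sketch of one classical approach, the Sobel operator, which marks pixels where the gradient magnitude of the luminous intensity is large:

import numpy as np
from scipy import ndimage

def sobel_edges(img, thresh=50.0):
    g = img.astype(np.float64)
    gx = ndimage.sobel(g, axis=1)      # horizontal intensity changes
    gy = ndimage.sobel(g, axis=0)      # vertical intensity changes
    magnitude = np.hypot(gx, gy)       # gradient strength per pixel
    return magnitude > thresh          # binary edge map

# Example: a bright square on a dark background yields edges only on its border,
# illustrating how the edge map preserves structure while discarding flat areas.
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 200
print(int(sobel_edges(img).sum()))     # nonzero count comes from the square's outline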
TYPICAL PROBLEMS
The red, green, and blue color channels of a photograph. The fourth image is a composite.
• Geometric transformations such as enlargement, reduction, and rotation
• Color corrections such as brightness and contrast adjustments, quantization, or conversion to a different color space
• Registration (or alignment) of two or more images
• Combination of two or more images, e.g. into an average, blend, difference, or image composite
• Interpolation, demosaicing, and recovery of a full image from a RAW image format.
• Segmentation of the image into regions
• Image editing and digital retouching
• Extending dynamic range by combining differently exposed images.
and many more.
Besides static two-dimensional images, the field also covers the processing of time-varying signals such as video and the output of tomographic equipment. Some techniques, such as morphological image processing, are specific to binary or grayscale images.
IMAGE COMPRESSION
The objective of compression is to reduce the data volume while achieving reproduction of the original data without any perceived loss in quality. In most images the neighbouring pixels are correlated and therefore contain redundant information. The foremost task, then, is to find a less correlated representation of the image. Two fundamental concepts of compression are redundancy reduction and irrelevancy reduction. Redundancy is a characteristic related to such factors as predictability, randomness and smoothness in the data. Redundancy reduction aims at removing duplication from the image, while irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver. In general, three types of redundancy can be identified: spatial redundancy between neighbouring pixels, spectral redundancy between color planes or bands, and temporal redundancy between adjacent frames in a sequence of images. Image compression aims at removing the spatial and spectral redundancies as much as possible.
Digital image data volumes have been continuously going up, due both to the size of images and to their resolution, and storage of picture data has become a growing need in many applications. A simple gray scale image of 512x512 pixels needs a storage array of 256 Kbytes, assuming that the pixel information is 8 bits wide (0-255, representing white to black on a 256-level discrete scale).
A 35mm slide, if digitized at a resolution of about 12 microns, needs 18 megabytes of storage. In general, picture data compression schemes can be separated into:
• Lossy compression
• Lossless compression
LOSSY COMPRESSION
In lossy compression schemes, the compressed image contains degradation relative to the original image, but the schemes achieve much higher compression than lossless compression because they completely discard redundant information. Lossy encoding is based on the concept of compromising the accuracy of the reconstructed image in exchange for increased compression. If the resulting distortion (which may or may not be visually apparent) can be tolerated, the increase in compression can be significant.
Lossy image compression is useful in applications such as broadcast television, video conferencing and facsimile transmission, in which a certain amount of error is an acceptable tradeoff for increased compression performance. For material such as medical or business documents, however, lossy compression is usually prohibited for legal reasons.
LOSSLESS COMPRESSION
In lossless compression schemes, the compressed image is numerically identical to the original image, but only a modest amount of compression can be achieved. In numerous applications, error-free compression is the only acceptable means of data reduction, motivated by the intended use or nature of the image under consideration. Lossless schemes normally provide compression ratios of 2-10, and they are equally applicable to both binary and gray scale images. The technique is generally composed of two relatively independent operations (a small example follows the two operations below):
(1) Devising an alternative representation of the image in which its interpixel redundancies are reduced.
(2) Coding the representation to eliminate coding redundancies.
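As a toy illustration of these two operations on a binary image, the following sketch uses run-length encoding for step (1); a real codec would follow it with an entropy coder (e.g. Huffman) for step (2):

import numpy as np

def rle_encode(row):
    """Encode a 1-D binary array as a list of (value, run_length) pairs,
    reducing the interpixel redundancy of long constant runs."""
    change = np.flatnonzero(np.diff(row)) + 1          # indices where the value flips
    starts = np.concatenate(([0], change))
    lengths = np.diff(np.concatenate((starts, [row.size])))
    return list(zip(row[starts].tolist(), lengths.tolist()))

def rle_decode(pairs, dtype=np.uint8):
    return np.concatenate([np.full(n, v, dtype=dtype) for v, n in pairs])

row = np.array([0]*20 + [1]*12 + [0]*32, dtype=np.uint8)
pairs = rle_encode(row)
print(pairs)                                           # [(0, 20), (1, 12), (0, 32)]
print(np.array_equal(rle_decode(pairs), row))          # True: numerically identical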