Computer Graphics and Image Processing full report
#1

ABSTRACT: COMPUTER GRAPHICS & IMAGE PROCESSING

Authors:
*L.HARSHINI III/IV B-TECH
*V.GOWRY SAILAJA III/IV B-TECH
ABSTRACT
This paper covers broad issues pertaining to Computer Graphics and Image Processing.
The paper is divided into several sections. First, we give a brief introduction to Computer Graphics and its history. The next section covers the branches of Computer Graphics and its applications, one of the most important of which is image processing.
The following section deals with commonly used signal processing techniques, the most important being the one-dimensional and two-dimensional techniques.
The one-dimensional techniques deal with image resolution and the resolution of digital images:
• Pixel Resolution
• Spatial Resolution
• Spectral Resolution
• Temporal Resolution
A further section deals with typical problems and applications:
• Photography and Printing
• Medical Image Processing
• Face Detection
• Microscope Image Processing
Finally, the paper ends with a conclusion and recent research challenges.


Overview
Computer graphics is a subfield of computer science and is concerned with digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing. Graphics is often differentiated from the field of visualization, although the two have many similarities. Entertainment (in the form of animated movies and video games) is perhaps the most well-known application of graphics.
History
William Fetter was credited with coining the term Computer Graphics in 1960, to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and hand, produced by Ed Catmull and Fred Parke at the University of Utah.
The most significant results in computer graphics are published annually in a special edition of ACM Transactions on Graphics and presented at SIGGRAPH.
Branches of Computer Graphics
Modeling: Modeling describes the shape of an object. The two most common sources of 3D models are those created by an artist using some kind of 3D modeling tool, and those scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation.
Because the appearance of an object depends largely on the exterior of the object, boundary representations are most common in computer graphics. Two dimensional surfaces are a good analogy for the objects used in graphics, though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have been gaining some popularity in recent years. Level sets are a useful representation for deforming surfaces which undergo many topological changes such as fluids.
Subfields
• Subdivision surfaces
• Digital geometry processing - surface reconstruction, mesh simplification, mesh repair, parameterization, remeshing, mesh generation, mesh compression, and mesh editing all fall under this heading.
• Discrete differential geometry - DDG is a recent topic which defines geometric quantities for the discrete surfaces used in computer graphics.
• Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces.
Shading
Texturing, or more generally, shading is the process of describing surface appearance. This description can be as simple as the specification of a color in some colorspace or as elaborate as a shader program which describes numerous appearance attributes across the surface. The term is often used to mean texture mapping, which maps a raster image to a surface to give it detail. A more generic description of surface appearance is given by the bidirectional scattering distribution function, which describes the relationship between incoming and outgoing illumination at a given point.
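As a rough illustration of the texture-mapping idea described above, the sketch below samples a raster image at (u, v) surface coordinates with a nearest-neighbor lookup. The checkerboard texture and the function name are invented for the example, not taken from the paper.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbor texture lookup.

    texture: H x W x 3 array of colors.
    u, v:    surface coordinates in [0, 1].
    Returns the color stored at the corresponding texel.
    """
    h, w = texture.shape[:2]
    # Clamp so out-of-range coordinates do not index past the image.
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    x = min(int(u * w), w - 1)   # column index
    y = min(int(v * h), h - 1)   # row index
    return texture[y, x]

# Tiny checkerboard texture and one sample lookup.
tex = np.zeros((64, 64, 3))
tex[::2, ::2] = tex[1::2, 1::2] = 1.0
print(sample_texture(tex, 0.25, 0.75))
```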
Animation
Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. There are numerous ways to describe this motion, many of which are used in conjunction with each other. Popular methods include keyframing, inverse kinematics, and motion capture. As with modeling, physical simulation is another way of specifying motion.
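A minimal sketch of keyframing, one of the methods mentioned above: a parameter is specified at a few key times and values in between are linearly interpolated. The keyframe times and values here are made up purely for illustration.

```python
import numpy as np

# Keyframes: (time in seconds, parameter value). Values are illustrative.
keys = [(0.0, 0.0), (1.0, 2.0), (3.0, 1.0)]

def keyframe_value(t, keys):
    """Linearly interpolate a keyframed parameter at time t."""
    times = [k[0] for k in keys]
    values = [k[1] for k in keys]
    # np.interp clamps t to the first/last keyframe outside the key range.
    return float(np.interp(t, times, values))

for t in (0.0, 0.5, 2.0, 3.0):
    print(t, keyframe_value(t, keys))
```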
Rendering
Rendering converts a model into an image either by simulating light transport to get physically-based photorealistic images, or by applying some kind of style as in non-photorealistic rendering.
Subfields
• Physically-based rendering - concerned with generating images according to the laws of geometric optics (a minimal sketch follows this list)
• Real-time rendering - focuses on rendering for interactive applications, typically using specialized hardware like GPUs
• Non-photorealistic rendering
• Relighting - a recent area concerned with quickly re-rendering scenes
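As a rough, hedged illustration of the rendering idea above (not the authors' method), the sketch below casts one ray per pixel at a single sphere and shades hits with a simple Lambertian term. The scene, light direction, and image size are all invented for the example.

```python
import numpy as np

W, H = 32, 32                                   # image size (illustrative)
center = np.array([0.0, 0.0, 3.0])              # one sphere in front of the camera
radius = 1.0
light = np.array([-1.0, -1.0, -1.0])            # unit vector toward the light
light = light / np.linalg.norm(light)

image = np.zeros((H, W))
for j in range(H):
    for i in range(W):
        # Ray from the origin through a pixel on an image plane at z = 1.
        d = np.array([(i + 0.5) / W - 0.5, (j + 0.5) / H - 0.5, 1.0])
        d = d / np.linalg.norm(d)
        # Ray-sphere intersection: solve |t*d - center|^2 = radius^2 for t.
        b = np.dot(d, center)
        disc = b * b - (np.dot(center, center) - radius * radius)
        if disc >= 0.0:
            t = b - np.sqrt(disc)               # nearest intersection along the ray
            if t > 0.0:
                normal = (t * d - center) / radius           # unit surface normal
                image[j, i] = max(np.dot(normal, light), 0.0)  # Lambertian shading
print(image.max(), image.mean())                # brightest and average pixel values
```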
Applications
• Computer vision
• Image processing: In the broadest sense, image processing is any form of information processing for which both the input and output are images, such as photographs or frames of video. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it.


Example images (UPIICSA IPN): a binary image and an edge-detection result.
A few decades ago, image processing was done largely in the analog domain, chiefly by optical devices. These optical methods are still essential to applications such as holography because they are inherently parallel; however, due to the significant increase in computer speed, these techniques are increasingly being replaced by digital image processing methods.
Digital image processing techniques are generally more versatile, reliable, and accurate; they have the additional benefit of being easier to implement than their analog counterparts. Specialized hardware is still used for digital image processing: computer architectures based on pipelining have been the most commercially successful. There are also many massively parallel architectures that have been developed for the purpose. Today, hardware solutions are commonly used in video processing systems. However, commercial image processing tasks are more commonly done by software running on conventional personal computers.
Commonly-used signal processing techniques
Most of the signal processing concepts that apply to one-dimensional signals also extend to the two-dimensional image signal. Some of these one-dimensional signal processing concepts become significantly more complicated in two-dimensional processing. Image processing brings some new concepts, such as connectivity and rotational invariance, that are meaningful only for two-dimensional signals.
The Fourier transform, implemented either with coherent optics or with the digital fast Fourier transform (FFT), is often used for image processing operations involving large-area correlation.
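As a hedged sketch of the FFT-based correlation mentioned above: cross-correlation of an image with a template can be computed by multiplying the image's FFT with the conjugate FFT of the zero-padded template. The arrays below are synthetic test data, invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))
template = image[40:56, 60:76].copy()       # a patch we want to locate again

# Zero-pad the template to the image size, then correlate via the FFT:
# correlation = IFFT( FFT(image) * conj(FFT(template)) )
padded = np.zeros_like(image)
padded[:template.shape[0], :template.shape[1]] = template
corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print("correlation peak at", peak)          # should be (40, 60), where the patch came from
```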
One-dimensional techniques
Image resolution: Image resolution describes the detail an image holds. The term applies equally to digital images, film images, and other types of images. Higher resolution means more image detail.
Image resolution can be measured in various ways. Basically, resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch) or to the overall size of a picture (lines per picture height, also known simply as lines, or TV lines). Furthermore, line pairs are often used instead of lines. A line pair is a pair of adjacent dark and light lines, while lines counts both dark lines and light lines. A resolution of 10 lines per mm means 5 dark lines alternating with 5 light lines, or 5 line pairs per mm. Photographic lens and film resolution are most often quoted in line pairs per mm.
Below is an illustration of how the same image might appear at different pixel resolutions, if the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction from pixels would be preferred, but for illustration of pixels, the sharp squares make the point better).
Resolution of digital images
The resolution of digital images can be described in many different ways.
Pixel resolution: The term resolution is often used to mean a pixel count in digital imaging, even though American, Japanese, and international standards specify that it should not be so used, at least in the digital camera field. An image that is N pixels high by M pixels wide can have any resolution less than N lines per picture height, or N TV lines. But when pixel counts are referred to as resolution, the convention is to describe the pixel resolution as a pair of positive integers, the first being the number of pixel columns (width) and the second the number of pixel rows (height), for example 640 by 480. Other conventions describe pixels per length unit or per area unit, such as pixels per inch or per square inch. None of these pixel counts are true resolutions, but they are widely referred to as such; they serve as upper bounds on image resolution.
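A small worked sketch of the pixel-count arithmetic described above: columns times rows gives the total pixel count, and dividing a pixel count by a physical print dimension gives pixels per inch. The print size used here is an assumed example value, not from the paper.

```python
# Pixel resolution is conventionally quoted as columns x rows.
width_px, height_px = 640, 480
total_pixels = width_px * height_px
print(total_pixels)                      # 307200, i.e. about 0.3 megapixels

# Pixels per inch for an assumed 8 x 6 inch print of the same image.
print_width_in, print_height_in = 8.0, 6.0
print(width_px / print_width_in, height_px / print_height_in)   # 80.0 80.0
```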
Spatial resolution: The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on properties of the system creating the image, not just the pixel resolution in pixels per inch (ppi). For practical purposes the clarity of the image is decided by its spatial resolution, not the number of pixels in an image.
The spatial resolution of computer monitors is generally 72 to 100 lines per inch, corresponding to pixel resolutions of 72 to 100 ppi.
Spectral resolution: Color images distinguish light of different wavelengths. Multi-spectral images resolve even finer differences of spectrum or wavelength than is needed to reproduce color; that is, they have higher spectral resolution.
Temporal resolution: Movie cameras and high-speed cameras can resolve events at different points in time. The time resolution used for movies is usually 15 to 30 frames per second (fps), while high-speed cameras may resolve 100 to 1000 fps, or even more.
Resolution in various media
• DVDs are 720 by 480 pixels (NTSC) or 720 by 576 pixels (PAL)
• High-definition television is 1920 by 1080 pixels or 1280 by 720 pixels
• 35 mm film is scanned for release on DVD at 1080 or 2000 lines as of 2005
• 35 mm optical camera negative motion picture film can resolve up to 6,000 lines
• 35 mm projection positive motion picture film has about 2,000 lines, which results from the analogue printing from the camera negative of an interpositive, and possibly an internegative, then a projection positive
• Sequences from newer films are scanned at 2,000, 4,000 or even 8,000 columns (lines measured in the other direction), called 2K, 4K and 8K, for quality visual effects editing on computers
Two-dimensional techniques
Connectivity: Properties and parameters based on the idea of connectedness often involve the word connectivity. For example, in graph theory, a connected graph is one from which we must remove at least one vertex to create a disconnected graph. In recognition of this, such graphs are also said to be 1-connected. Similarly, a graph is 2-connected if we must remove at least two vertices from it, to create a disconnected graph. A 3-connected graph requires the removal of at least three vertices, and so on. The connectivity of a graph is the minimum number of vertices that must be removed, to disconnect it. Equivalently, the connectivity of a graph is the greatest integer k for which the graph is k-connected.
While terminology varies, noun forms of connectedness-related properties often include the term connectivity. Thus, when discussing simply connected topological spaces, it is far more common to speak of simple connectivity than simple connectedness. On the other hand, in fields without a formally defined notion of connectivity, the word may be used as a synonym for connectedness.
Common cell adjacency conventions include (the sketch after this list illustrates the two square-tiling cases):
• 3-connectivity in a triangular tiling
• 4-connectivity in a square tiling
• 6-connectivity in a hexagonal tiling
• 8-connectivity in a square tiling
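As a hedged sketch of pixel connectivity on a square grid, the snippet below labels connected components of a tiny binary image with scipy.ndimage.label, once with a 4-connected neighborhood and once with an 8-connected one. The toy image is invented for the example.

```python
import numpy as np
from scipy import ndimage

# Two diagonal pixels: one object under 8-connectivity, two under 4-connectivity.
img = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])

four_conn = ndimage.generate_binary_structure(2, 1)   # 4-connected neighborhood
eight_conn = ndimage.generate_binary_structure(2, 2)  # 8-connected neighborhood

_, n4 = ndimage.label(img, structure=four_conn)
_, n8 = ndimage.label(img, structure=eight_conn)
print(n4, n8)   # 2 components with 4-connectivity, 1 with 8-connectivity
```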
Typical problems
The red, green, and blue color channels of a photograph by Sergei Mikhailovich Prokudin-Gorskii. The fourth image is a composite.
• Geometric transformations such as enlargement, reduction, and rotation
• Color corrections such as brightness and contrast adjustments, quantization, or conversion to a different color space (a minimal sketch follows this list)
• Registration (or alignment) of two or more images
• Combination of two or more images, e.g. into an average, blend, difference, or image composite
• Interpolation, demosaicing, and recovery of a full image from a RAW image format like a Bayer filter pattern
• Segmentation of the image into regions
• Image editing and digital retouching
• Extending dynamic range by combining differently exposed images (generalized signal averaging of Wyckoff sets)
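As a rough sketch of two items from the list above, brightness/contrast adjustment and the blending of two images, treated as plain array arithmetic on 8-bit data. The gain, offset, blend weight, and stand-in images are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # stand-in grayscale images
b = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

def adjust(img, gain=1.2, offset=10):
    """Contrast (gain) and brightness (offset) adjustment, clipped back to 8 bits."""
    out = img.astype(np.float32) * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)

def blend(x, y, alpha=0.5):
    """Weighted average (blend) of two images of the same size."""
    out = alpha * x.astype(np.float32) + (1 - alpha) * y.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

print(adjust(a))
print(blend(a, b))
```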
Besides static two-dimensional images, the field also covers the processing of time-varying signals such as video and the output of tomographic equipment. Some techniques, such as morphological image processing, are specific to binary or grayscale images.
Applications
Photography and printing: The camera or camera obscura is the image-forming device and photographic film or a digital storage card is the recording medium, although other methods are available. For instance, the photocopy or xerography machine forms permanent images but uses the transfer of static electrical charges rather than photographic film, hence the term electrophotography. Rayographs published by Man Ray and others are images produced by the shadows of objects cast on the photographic paper, without the use of a camera. Objects can also be placed directly on the glass of a scanner to produce digital pictures.
Medical image processing: Medical imaging designates the ensemble of techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or for medical science (including the study of normal anatomy and function). Measurement and recording techniques that are not primarily designed to produce images, such as electroencephalography (EEG) and magnetoencephalography (MEG), but which produce data that can be represented as maps (i.e. containing positional information), can also be seen as forms of medical imaging.
Face detection: Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images. It detects facial features and ignores anything else, such as buildings, trees and bodies.
Face detection is used in biometrics, often as a part of (or together with) a facial recognition system. It is also used in video surveillance, human computer interface and image database management.
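A minimal, hedged sketch of face detection using OpenCV's bundled Haar-cascade detector, one common off-the-shelf approach (not necessarily the one the paper has in mind). The input filename is a placeholder.

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade (shipped with the library).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")                 # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) rectangle per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```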
Microscope image processing: Microscope image processing is a broad term that covers the use of digital image processing techniques to process, analyze and present images obtained from a microscope. Such processing is now commonplace in a number of diverse fields such as medicine, biological research, cancer research, drug testing, metallurgy, etc. A number of manufacturers of microscopes now specifically design in features that allow the microscopes to interface to an image processing system.
Conclusion
You have seen a few of the features of a good introductory image-processing program. There are many more complex modifications you can make to the images. For example, you can apply a variety of filters to the image. The filters use mathematical algorithms to modify the image. Some filters are easy to use, while others require a great deal of technical knowledge. The software will also calculate the RA, Dec, and magnitude of all objects in the field if you have a star catalog such as the Hubble Guide Star Catalog (although this feature requires the purchase of an additional CD-ROM).
The standard tricolor images produced by the SDSS are very good images. If you are looking for something specific, you can frequently make a picture that brings out other details. The "best" picture is a very relative term. A picture that is processed to show faint asteroids may be useless to study the bright core of a galaxy in the same field.
Research Challenges
Research Challenge 1. Search for an object that is in both the SDSS database and the 2MASS database. Retrieve the images from 2MASS and the SDSS. Make a tri-color image using the J, H, or K filters for 2MASS data and filters of your choice for SDSS data. Compare and contrast your images. What information do they give you about the object? What interpretations can you make by studying the two images?
Research Challenge 2. Scientists are very interested in distant quasars, objects that have very red colors. Retrieve the i and z images for a field. Use the blink command to look for objects that are visible in the z filter but not visible in the i filter. These objects might be distant quasars or very small, cool stars. Either way, you will be finding something very interesting!
Research Challenge 3. Iris can obtain images from a web cam. If you have access to a web cam and a small telescope, mount the web cam looking into the eyepiece. Click on the Web cam menu in Iris and click Image Acquisition. Obtain and process an image of a bright object such as the Moon or a planet. Although you cannot see very faint objects with a web cam, many amateur astronomers produce nice images of bright objects using webcams.
#2
Computer Graphics


note: dates are approximate
15-463
Paul Heckbert



Big Bang - 1960
300 BC, Euclid: geometry codified
1400, Brunelleschi et al: perspective illustration rediscovered
1600, Rene Descartes: analytic geometry, xyz
1660, Leibniz, Newton: calculus
Gauss, Fourier, Hermite…
1850, Sylvester: matrix notation
1930’s, Schoenberg: B-splines for applied mathematics
1940’s: first computer
1950’s: SAGE air defense system (CRT’s & light pens)
#3

Image Processing: Basics, Challenges & Perspectives


Outline
• Digital Image Processing
• Image Generation
• Image Perception
• Image Acquisition
• Color Images

A. Digital Image Processing
• Digital image: f(x, y)
• More than human perception (or the EM spectrum): ultrasound, electron microscopy, computer-generated images
• Related fields: computer vision, image analysis / understanding
• Better characterization by processing level:
– Low-level: both input and output are images (noise reduction, contrast enhancement, image sharpening); see the sketch below
– Mid-level: input is an image, output is a set of extracted attributes (segmentation, classification)
– High-level: "making sense" of the images (vision-related tasks)
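A hedged sketch of the low-level operations listed above, using a Gaussian blur for noise reduction and an unsharp-mask step for sharpening. The test image and the filter parameters are invented for the example.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
f = np.zeros((64, 64))
f[16:48, 16:48] = 1.0                                    # a bright square as a stand-in image
noisy = f + 0.2 * rng.standard_normal(f.shape)

# Low-level processing: the output is again an image.
denoised = ndimage.gaussian_filter(noisy, sigma=1.5)     # noise reduction
blurred = ndimage.gaussian_filter(denoised, sigma=2.0)
sharpened = denoised + 1.0 * (denoised - blurred)        # unsharp masking

print(noisy.std(), denoised.std(), sharpened.std())
```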


A bit of history
•1960’s: computers powerful enough + space program
•1964: pictures from the Moon (JPL, Pasadena, CA)
•Other fields:
–Remote Earth resources observation
–Medical Image processing: 1970’s CAT / CT (Hounsfield, Cormack–1979 Nobel Prize)
–Aerial and satellite images
–Archeology
–Physics
–Machine perception: inspection, product assembly, character recognition, etc.


B. Image generation
•Electromagnetic (EM) energy spectrum
•Acoustic energy
•Ultrasonic energy
•Electric energy (e.g. the electron microscope)

Gamma ray imaging
•Nuclear medicine: a radioactive isotope is injected and emits gamma rays as it decays
