Fractal Image Compression
#1

Introduction

The subject of this work is image compression with fractals. Today JPEG has become an industry standard in image compression. Further research is being conducted in two areas: wavelet based compression and fractal image compression. The fractal scheme was introduced by Michael F. Barnsley in the 1980s. His idea was that images could be compactly stored as iterated functions, which led to the development of the IFS scheme that forms the basis of fractal image compression. Further work in this area was conducted by A. Jacquin, a student of Barnsley, who published several papers on this subject. He was the first to publish an efficient algorithm based on local (partitioned) iterated function systems.

Fractal image compression has the following features:

" Compression has high complexity.
" Fast image decoding
" High compression ratios can be achieved.
These features enable applications such as computer encyclopedias, like the Microsoft Atlas that came with this technology. The technology is relatively new.

Overview Of Image Compression

Images are stored as pixels, or picture elements. Each pixel requires a certain amount of computer memory to store. Suppose we had to store a family album with, say, a hundred photos. Stored uncompressed, this would consume a large amount of memory, which at one time might have cost a few thousand dollars.

This problem can be solved by image compression. The amount of data needed to represent the picture can be drastically reduced by employing image compression techniques. The human eye is insensitive to a wide variety of information loss: redundancies in images cannot be easily detected, and certain minute details can be eliminated while storing so as to reduce the amount of data, then approximated again when the image is reconstructed so that the error is minimal. This is the basic idea behind image compression. Most image compression techniques are said to be lossy because they reduce the information being stored.

The method presently employed consists of discarding the high frequency coefficients of a transform of the image and storing only the low frequency coefficients. This is the principle behind the discrete cosine transform (DCT), which forms the basis of the JPEG scheme of image compression.
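
As a rough illustration of this principle (not the full JPEG pipeline, which also quantises and entropy-codes the coefficients), the Python sketch below takes the 2-D DCT of a single 8x8 block, keeps only the low frequency coefficients in the top-left corner, and reconstructs an approximation. The block size, the cut-off of 4, and the use of SciPy's dctn/idctn are illustrative assumptions.

```python
# A rough sketch of transform coding on one 8x8 block: take the 2-D DCT,
# keep only the low frequency (top-left) coefficients, and reconstruct.
# Block size, cut-off and use of SciPy are illustrative assumptions;
# real JPEG also quantises and entropy-codes the coefficients.
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8)              # stand-in for an 8x8 pixel block

coeffs = dctn(block, norm="ortho")        # frequency-domain coefficients

keep = 4                                  # keep a 4x4 corner of low frequencies
mask = np.zeros_like(coeffs)
mask[:keep, :keep] = 1.0

approx = idctn(coeffs * mask, norm="ortho")   # lossy reconstruction
print("max reconstruction error:", np.abs(block - approx).max())
```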

Barnsley suggested that storing images as iterated functions of parts of themselves leads to efficient image compression. In the middle of the 1980s this concept of IFS became popular. Barnsley and his colleagues at the Georgia Institute of Technology first observed the use of IFS in computer graphics applications. They found that IFS was able to compress colour images by factors of up to 10,000. The compression consisted of two phases: first the image was partitioned into segments that were as self-similar as possible, then each part was described as an IFS with probabilities.
#2
An Introduction to Image Compression
Compressing an image is significantly different from compressing raw binary data. Of course, general purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image can be sacrificed for the sake of saving a little more bandwidth or storage space. This means that lossy compression techniques can be used in this area.
Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.
There are several different ways in which image files can be compressed. For Internet use, the two most common compressed graphic image formats are the JPEG format and the GIF format. The JPEG method is more often used for photographs, while the GIF method is commonly used for line art and other images in which geometric shapes are relatively simple.
Other techniques for image compression include the use of fractals and wavelets. These methods have not gained widespread acceptance for use on the Internet as of this writing. However, both methods offer promise because they offer higher compression ratios than the JPEG or GIF methods for some types of images. Another new method that may in time replace the GIF format is the PNG format.
Lossless compression involves compressing data which, when decompressed, will be an exact replica of the original data. This is the case when binary data such as executables and documents are compressed: they need to be exactly reproduced when decompressed. On the other hand, images (and music too) need not be reproduced exactly. An approximation of the original image is enough for most purposes, as long as the error between the original and the compressed image is tolerable.
A text file or program can be compressed without the introduction of errors, but only up to a certain extent. This is called lossless compression. Beyond this point, errors are introduced. In text and program files, it is crucial that compression be lossless because a single error can seriously damage the meaning of a text file, or cause a program not to run. In image compression, a small loss in quality is usually not noticeable. There is no "critical point" up to which compression works perfectly, but beyond which it becomes impossible. When there is some tolerance for loss, the compression factor can be greater than it can when there is no loss tolerance. For this reason, graphic images can be compressed more than text files or programs.
Theory
The theoretical background of compression is provided by information theory (which is closely related to algorithmic information theory) for lossless compression, and by rate–distortion theory for lossy compression. These fields of study were essentially created by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Cryptography and coding theory are also closely related. The idea of data compression is deeply connected with statistical inference.
Many lossless data compression systems can be viewed in terms of a four-stage model. Lossy data compression systems typically include even more stages, such as prediction, frequency transformation, and quantization.
Classifying image data
An image is represented as a two-dimensional array of coefficients, each coefficient representing the brightness level at that point. At first glance, no coefficient seems more important than another; thinking more intuitively, though, some clearly are. Most natural images have smooth colour variations, with the fine details represented as sharp edges in between the smooth variations. Technically, the smooth variations in colour can be termed low frequency variations and the sharp variations high frequency variations.
The low frequency components (smooth variations) constitute the base of an image, and the high frequency components (the edges, which give the detail) add upon them to refine the image, thereby giving a detailed image. Hence, the smooth variations demand more importance than the details.
Separating the smooth variations and details of the image can be done in many ways. One such way is the decomposition of the image using a Discrete Wavelet Transform (DWT).
The DWT of an image
The procedure goes like this. A low pass filter and a high pass filter are chosen such that they exactly halve the frequency range between themselves. This filter pair is called the analysis filter pair. First, the low pass filter is applied to each row of data, giving the low frequency components of the row. Since the low pass filter is a half-band filter, its output contains frequencies only in the first half of the original frequency range, so by Shannon's sampling theorem it can be subsampled by two, leaving only half the original number of samples. Next, the high pass filter is applied to the same row of data; the high pass components are similarly subsampled and placed beside the low pass components. This procedure is repeated for all rows.
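
A minimal sketch of this row-wise analysis step, assuming the Haar pair as the half-band low pass / high pass filters (the text does not fix a particular filter pair):

```python
# One level of the row-wise analysis step described above, using the
# Haar filter pair as an illustrative (assumed) choice of half-band
# low pass / high pass filters.
import numpy as np

def analyse_rows(image):
    """Filter each row with the low pass and high pass filters,
    subsample both outputs by two, and place the high pass
    components beside the low pass components."""
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)    # Haar low pass filter
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)   # Haar high pass filter
    rows = []
    for row in image:
        low = np.convolve(row, lo)[1::2]    # low frequency half, subsampled by 2
        high = np.convolve(row, hi)[1::2]   # high frequency half, subsampled by 2
        rows.append(np.concatenate([low, high]))
    return np.array(rows)

image = np.random.rand(8, 8)            # stand-in for image data
print(analyse_rows(image).shape)        # same number of columns, now L | H
```
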
#3
To get more information about the topic image compression (full report and PPT) and related topics, refer to the page links below.

http://studentbank.in/report-fractal-ima...ars-report

http://studentbank.in/report-fractal-image-compression

http://studentbank.in/report-fractal-ima...sion--2354

http://studentbank.in/report-image-compression

http://studentbank.in/report-the-jpeg-im...ull-report

http://studentbank.in/report-image-compr...-transform

http://studentbank.in/report-image-proce...ort?page=2

http://seminarsprojects.in/attachment.php?aid=476
#4

Fractal Image Compression



Introduction

1.1 Compression


With the advent of multimedia, the necessity for the storage of large numbers of high quality images is increasing. One obstacle that has to be overcome is the large size of image files. For example, a single 800- by 600-pixel true-colour image requires three bytes per pixel, plus a header, which amounts to over 1.37 MB of disk space, thus almost filling a 1.4 MB high-density diskette. Clearly, some form of compression is necessary. As well as saving storage space, compressed files take less time to transmit via modem, so money can be saved on both counts.
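
The arithmetic behind that figure is easy to check:

```python
# Uncompressed size of a single 800 x 600 true-colour image,
# at three bytes per pixel (header ignored).
width, height, bytes_per_pixel = 800, 600, 3
size_bytes = width * height * bytes_per_pixel
print(size_bytes, "bytes =", round(size_bytes / (1024 * 1024), 2), "MB")  # ~1.37 MB
```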

The choice of compression algorithm involves several conflicting considerations. These include the degree of compression required and the speed of operation. Obviously, if one is attempting to run programs directly from their compressed state, decompression speed is paramount. The other consideration is the size of the compressed file versus the quality of the decompressed image.

There are essentially two sorts of data compression. 'Lossless' compression works by reducing the redundancy in the data. The decompressed data is an exact copy of the original, with no loss of data. Huffman Encoding and LZW are two examples of lossless compression algorithms. There are times when such methods of compression are unnecessarily exact. 'Lossy' compression sacrifices exact reproduction of data for better compression. It both removes redundancy and creates an approximation of the original. The JPEG standard is currently the most popular method of lossy compression. Obviously, a lossless compression method must be used with programs or text files, while lossy compression is really only suitable for graphics or sound data, where an exact reproduction is not necessary. Fractal image compression is a lossy compression method.
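
As a small illustration of the lossless case, the following sketch round-trips some redundant data through zlib (whose DEFLATE format combines LZ77 dictionary coding with Huffman coding) and checks that the original is recovered exactly; the sample data is arbitrary.

```python
# Lossless round trip with zlib: DEFLATE combines LZ77 dictionary coding
# with Huffman coding, and the decompressed output is an exact copy of
# the input.
import zlib

original = b"AAAAABBBBBCCCCC" * 1000          # highly redundant input
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original                   # bit-for-bit identical
print(f"{len(original)} bytes -> {len(compressed)} bytes")
```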

1.2 Fractals

The French mathematician Benoit B. Mandelbrot first coined the term fractal in 1975. He derived the word from the Latin fractus, which means "broken", or "irregular and fragmented". In fact, the birth of fractal geometry is usually traced to Mandelbrot and the 1977 publication of his seminal book The Fractal Geometry of Nature. Mandelbrot claimed that classical Euclidean geometry was inadequate at describing many natural objects, such as clouds, mountains, coastlines and trees. So he conceived and developed fractal geometry.

There are two main groups of fractals: linear and nonlinear. The latter are typified by the popular Mandelbrot set and Julia sets, which are fractals of the complex plane. However, the fractals used in image compression are linear, and of the real plane. So, the fractals used are not chaotic; in other words, they are not sensitive to initial conditions. They are the fractals from Iterated Function Theory. An Iterated Function System (IFS) is simply a set of contractive affine transformations. IFSs may efficiently produce shapes such as ferns, leaves and trees.
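
As an illustration of how few numbers such a system needs, the sketch below draws Barnsley's fern with the random-iteration ("chaos game") algorithm; the four affine maps and their probabilities are the commonly published fern coefficients, included here only as an example of an IFS, not as part of any compression scheme.

```python
# An IFS is just a set of contractive affine maps; this sketch draws
# Barnsley's fern with the random-iteration ("chaos game") algorithm.
# Each map (x, y) -> (a*x + b*y + e, c*x + d*y + f) is picked with
# probability p. These are the commonly published fern coefficients.
import random

#         a      b      c     d     e    f     p
MAPS = [( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
        ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
        ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
        (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07)]
WEIGHTS = [m[6] for m in MAPS]

def chaos_game(n_points=50_000):
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f, _p = random.choices(MAPS, weights=WEIGHTS)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

print(len(chaos_game()), "points of the fern attractor generated")
```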

This presented an intriguing possibility; since fractal mathematics is good for generating natural looking images, could it not, in the reverse direction, be used to compress images? The possibility of using fractals for image encoding rests on two suppositions:
1. many natural scenes possess this detail within detail structure (e.g. clouds);
2. an IFS can be found that generates a close approximation of a scene using only a few transformations.
The first point is true, but excludes many real world images, while the second leads to the so called "inverse problem", which is explained later.

1.3 A Brief History of Fractal Image Compression

1977 Benoit Mandelbrot finishes the first edition of "The Fractal Geometry of Nature"

1981 John E. Hutchinson publishes "Fractals and Self Similarity"

1983 Revised edition of "The Fractal Geometry of Nature" is published

1985 Michael F. Barnsley and Stephen Demko introduce Iterated Function Theory in their paper "Iterated Function Systems and the Global Construction of Fractals"

1987 Iterated Systems Incorporated is founded in Georgia, US

1988 Michael Barnsley and Alan D. Sloan wrote the article "A Better Way to Compress Images" published in Byte, claiming fractal compression ratios of 10 000 to 1

Barnsley publishes the book "Fractals Everywhere", which presents the mathematics of Iterated Function Systems, and proves a result known as the Collage Theorem

1990 Barnsley's first patent is granted (US Patent # 4 941 193)

1991 M. Barnsley and A. Sloan are granted US Patent # 5 065 447 "Method and Apparatus for Processing Digital Data"

J.M. Beaumont publishes a paper "Image Data Compression Using Fractal Techniques"

1992 Arnaud E. Jacquin publishes an article that describes the first practical fractal image compression method

Iterated Systems Inc finally break even and begin to make money

Microsoft Corporation publish the multimedia encyclopedia "Microsoft Encarta" which uses fractal image compression to great effect

1993 The book "Fractal Image Compression" by Michael Barnsley and Lyman P. Hurd is published

The Iterated Systems' product line matures



The Multiple Reduction Photocopy Machine

The Multiple Reduction Photocopy Machine is an imaginary image duplicator, which may be used to find successive approximations for the attractor of an IFS. It functions like a normal photocopy machine with the following exceptions:

1. The machine has several lenses, to create multiple copies of the original.
2. Each lens reduces the size of the original.
3. The copier operates in a feedback loop; the output of one stage becomes the input to the next.

The lenses can be set for different reduction factors, and the reduced (output) images can be positioned at any desired location. Thus, the image may undergo any contractive affine transformation. The initial input can be anything. When the machine is run, the output tends towards a limiting image, which is independent of the initial image. The limiting image is the fixed point, or the attractor.


The Sierpinski Triangle

A classical example of a self-similar shape is the Sierpinski triangle, or gasket. It was named after the Polish mathematician Waclaw Sierpinski, who first described it in 1916. The Sierpinski triangle may be created using a Multiple Reduction Photocopy Machine in the following manner. An image is placed on the machine, reduced by one half and copied three times, once onto each vertex of a triangle. If the corresponding contractive affine transformations are written in the form w(x, y) = (ax + by + e, cx + dy + f), then each of the three maps has a = d = 1/2 and b = c = 0, with the offsets e and f placing the reduced copy at one of the three vertices.
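
A minimal sketch of this construction, iterating the three maps in a feedback loop just as the Multiple Reduction Photocopy Machine would; the triangle's vertex coordinates and the starting point are arbitrary illustrative choices.

```python
# The photocopy machine as a loop: every pass halves each point and
# copies it once towards each vertex of a triangle. The vertices and
# the starting point below are arbitrary choices; whatever the initial
# image, the output converges to the attractor.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def photocopy(points):
    """Apply the three maps w_i(x, y) = (x/2 + vx/2, y/2 + vy/2)."""
    out = set()
    for x, y in points:
        for vx, vy in VERTICES:
            out.add((x / 2 + vx / 2, y / 2 + vy / 2))
    return out

points = {(0.3, 0.7)}                 # any initial image will do
for _ in range(7):                    # reduce by half, copy three times, repeat
    points = photocopy(points)
print(len(points), "points approximating the Sierpinski triangle")
```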
