Compressive Sensing - Full Report


ABSTRACT

This report presents a novel approach to capturing and representing compressible signals for transmission and storage at a rate below the Nyquist rate. This method, known as compressive sampling or compressive sensing, employs non-adaptive linear projections that preserve the structure of the signal. The signal is then reconstructed from these projections using optimization techniques.



TABLE OF CONTENTS
1 Introduction
2 Data Compression
2.1 Types of Data Compression
2.1.1 Lossy Compression
2.1.2 Lossless Compression
2.2 Different Types of Data Compression
2.2.1 Variable Length Coding (VLC)
2.2.2 Run-Length Encoding (RLE)
2.2.3 Differential Coding
2.2.4 Predictive Coding
2.2.5 Dictionary-Based Coding
2.2.6 Transform Coding
2.2.7 Wavelet Coding
2.3 Transform Coding
3 Compressive Sensing
3.1 Steps in Compressive Sensing
3.2 Sparse Signals
3.3 Compressible Signals
3.4 Designing a Stable Measurement Matrix
3.5 Designing a Signal Reconstruction Algorithm
3.5.1 Different Methods of Reconstruction
3.5.2 Geometrical Interpretation
4 Application
4.1 Single Pixel Camera
4.1.1 Advantages
4.1.2 Disadvantages
4.2 Magnetic Resonance Imaging
4.2.1 Rapid 3D Angiography
4.2.2 Whole Heart Coronary Imaging
4.2.3 Brain Imaging
5 Conclusion
LIST OF FIGURES
3.1 Block diagram representation of compressive sampling [1].
3.2 (a) Compressive sensing measurement process with a random Gaussian measurement matrix Φ and discrete cosine transform (DCT) matrix Ψ. (b) Measurement process with Θ = ΦΨ [1].
3.3 Geometrical interpretation. (a) The subspaces containing two-sparse vectors in R^3 lie close to the coordinate axes. (b) Visualization of the l2 minimization. (c) Visualization of the l1 minimization [1].
4.1 Structure of the single-pixel camera [3].
CHAPTER 1
Introduction
As our modern technology-driven civilization acquires and exploits ever-increasing
amounts of data, ‘everyone’ now knows that most of the data we acquire ‘can be thrown
away’ with almost no perceptual loss - witness the broad success of lossy compression
formats for sounds, images and specialized technical data. The phenomenon of ubiquitous
compressibility raises very natural questions: why go to so much effort to acquire
all the data when most of what we get will be thrown away? Can’t we just directly
measure the part that won’t end up being thrown away?
Compressive sensing, also known as compressive sampling, is a technique for acquiring and reconstructing a signal using the prior knowledge that it is sparse or compressible. In this method a signal of length N is converted to a measurement vector of length M, where M << N, prior to storage or transmission.
The ideas behind compressed sensing came together in or about 2004, when David Donoho, Emmanuel Candes, Justin Romberg and Terence Tao discovered important results on the minimum amount of data needed to reconstruct an image, even though that amount would be deemed insufficient by the Nyquist-Shannon criterion. The main idea behind compressed sensing is to exploit the fact that there is structure and redundancy in the majority of interesting signals - they are not pure noise. In particular, most signals are sparse: they contain many coefficients close to or equal to zero when represented in some domain.
This report presents a complete analysis of compressive sensing, or compressive sampling. Since the basis of compressive sampling is data compression, we start by explaining what data compression is, followed by the different methods of data compression. The most commonly used method is transform coding, so a detailed explanation of transform coding and the inefficiencies associated with it is given. We then describe what exactly compressive sensing is and the different steps involved in it, followed by a detailed mathematical explanation of the method. Finally, we explain two applications of compressive sensing that have been developed recently: the single-pixel camera and magnetic resonance imaging.
CHAPTER 2
Data Compression

Data compression is often referred to as coding, where coding is a very general term encompassing
any special representation of data which satisfies a given need. Information
theory is defined to be the study of efficient coding and its consequences, in the form of
speed of transmission and probability of error. Data compression may be viewed as a
branch of information theory in which the primary objective is to minimize the amount
of data to be transmitted.
A simple characterization of data compression is that it involves transforming a
string of characters in some representation (such as ASCII) into a new string (of bits,
for example) which contains the same information but whose length is as small as possible.
Data compression has important applications in the areas of data transmission and
data storage. Many data processing applications require storage of large volumes of
data, and the number of such applications is constantly increasing as the use of computers
extends to new disciplines. At the same time, the proliferation of computer
communication networks is resulting in massive transfer of data over communication
links. Compressing data to be stored or transmitted reduces storage and/or communication
costs. When the amount of data to be transmitted is reduced, the effect is that of
increasing the capacity of the communication channel. Similarly, compressing a file to
half of its original size is equivalent to doubling the capacity of the storage medium. It
may then become feasible to store the data at a higher, thus faster, level of the storage
hierarchy and reduce the load on the input/output channels of the computer system.
One area in which data compression is of great importance is image representation
and processing. There are two major reasons for this. The first is that digitized images
contain a large amount of local redundancy. An image is usually captured as a raw array of pixels, whereas methods which exploit the tendency of pixels of like color or intensity to cluster together can represent it more efficiently. The second reason for the
abundance of research in this area is volume. Digital images usually require a very large
number of bits, and many uses of digital images involve large collections of images.
Image compression deals with reducing the amount of data required to represent a digital image by removing redundant data. Images can be represented in digital
format in many ways. Encoding the contents of a 2-D image in a raw bitmap (raster)
format is usually not economical and may result in very large files. Since raw image
representations usually require a large amount of storage space (and proportionally
long transmission times in the case of file uploads/ downloads), most image file formats
employ some type of compression. The need to save storage space and shorten transmission
time, as well as the human visual system tolerance to a modest amount of loss,
have been the driving factors behind image compression techniques.
The general problem of image compression is to reduce the amount of data required
to represent a digital image or video and the underlying basis of the reduction process
is the removal of redundant data. Mathematically, visual data compression typically
involves transforming (encoding) a 2-D pixel array into a statistically uncorrelated data
set. This transformation is applied prior to storage or transmission. At some later time,
the compressed image is decompressed to reconstruct the original image information
(preserving or lossless techniques) or an approximation of it (lossy techniques). Several common measures of compression have been suggested: redundancy, average message length, and compression ratio.
Image compression and coding techniques explore three types of redundancies: coding
redundancy, interpixel (spatial) redundancy, and psychovisual redundancy.
i Coding Redundancy
Consists of using variable-length codewords selected so as to match the statistics of the
original source, in this case, the image itself or a processed version of its pixel values.
This type of coding is always reversible and usually implemented using look-up tables
(LUTs). Examples of image coding schemes that explore coding redundancy are the
Huffman codes and the arithmetic coding technique.
ii Interpixel Redundancy
This type of redundancy - sometimes called spatial redundancy, interframe redundancy,
or geometric redundancy - exploits the fact that an image very often contains
strongly correlated pixels, in other words, large regions whose pixel values are the same
or almost the same. This redundancy can be explored in several ways, one of which is
by predicting a pixel value based on the values of its neighboring pixels. Examples of
compression techniques that explore the interpixel redundancy include: Constant Area
Coding (CAC), (1-D or 2-D) Run-Length Encoding (RLE) techniques, and many predictive
coding algorithms such as Differential Pulse Code Modulation (DPCM).
iii Psychovisual Redundancy
Many experiments on the psychophysical aspects of human vision have proven that
the human eye does not respond with equal sensitivity to all incoming visual information;
some pieces of information are more important than others. Knowledge of which particular types of information are more or less relevant to the final human user has led to image and video compression techniques that aim at eliminating or reducing any amount of data that is psychovisually redundant.
The end result of applying these techniques is a compressed image file whose size is smaller and whose quality is lower than that of the original, but whose resulting quality is still acceptable for the application at hand.
2.1 Types of Data Compression
2.1.1 Lossy Compression
Lossy compression, or perceptual coding, is possible if some loss of fidelity is acceptable. Generally,
a lossy data compression will be guided by research on how people perceive the
data in question. For example, the human eye is more sensitive to subtle variations in
luminance than it is to variations in color. JPEG image compression works in part by
"rounding off" some of this less-important information. Lossy data compression provides
a way to obtain the best fidelity for a given amount of compression. In some cases,
transparent (unnoticeable) compression is desired; in other cases, fidelity is sacrificed
to reduce the amount of data as much as possible. Lossy image compression is used
in digital cameras, to increase storage capacities with minimal degradation of picture
quality. Similarly, DVDs use the lossy MPEG-2 video codec for video compression. In
lossy audio compression, methods of psychoacoustics are used to remove non-audible
(or less audible) components of the signal. Compression of human speech is often performed with even more specialized techniques, so that "speech compression" or "voice coding" is sometimes distinguished as a separate discipline from "audio compression".
Different audio and speech compression standards are listed under audio codecs. Voice
compression is used in Internet telephony for example, while audio compression is used
for CD ripping and is decoded by audio players.
A lossy compression is acceptable mainly for three reasons:
• The human visual system is often able to tolerate loss in image quality without compromising the ability to perceive the contents of the scene.
• It happens very often that the digital input of the lossy compression algorithm is, in its turn, an imperfect representation of the real image. It should be considered, indeed, that the image is reconstructed starting from a finite number of samples, which can take only discrete values.
• A lossless compression would never be able to grant the high levels of compression that can be obtained by using a lossy compression and that are necessary for applications in storage and/or distribution of images.
2.1.2 Lossless Compression
Lossless compression algorithms usually exploit statistical redundancy in such a
way as to represent the sender’s data more concisely without error. Lossless compression
is possible because most real-world data has statistical redundancy. For example, in
English text, the letter ‘e’ is much more common than the letter ’z’, and the probability
that the letter ’q’ will be followed by the letter ‘z’ is very small. Lossless compression
schemes are reversible so that the original data can be reconstructed.
Lempel-Ziv (LZ) compression methods are among the most popular algorithms for
lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, though compression itself can be slow. DEFLATE is used in
PKZIP, gzip and PNG. LZW (Lempel-Ziv-Welch) is used in GIF images. Also noteworthy
are the LZR (LZ-Renau) methods, which serve as the basis of the Zip method. LZ
methods utilize a table-based compression model where table entries are substituted for
repeated strings of data. For most LZ methods, this table is generated dynamically from
earlier data in the input. The table itself is often Huffman encoded (e.g. SHRI, LZX). A
current LZ-based coding scheme that performs well is LZX, used in Microsoft’s CAB
format.
Lossless compression is preferable in cases such as the following:
• Medical applications, where it is difficult to choose an acceptable error level for image representation.
• Highly structured images (typically non-natural ones) such as text or graphics, which are usually handled more easily by lossless compression techniques.
• Applications in which images are frequently modified or re-compressed: with a lossy coding, errors would accumulate in the image and make the quality unacceptable.
2.2 Different Types of Data Compression
2.2.1 Variable Length Coding (VLC)
Most entropy-based encoding techniques rely on assigning variable-length codewords to each symbol, where the most likely symbols are assigned shorter codewords.
In the case of image coding, the symbols may be raw pixel values or the numerical values
obtained at the output of the mapper stage (e.g., differences between consecutive
pixels, run-lengths, etc.). The most popular entropy-based encoding technique is the
Huffman code. It provides the least amount of information units (bits) per source symbol.
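As a small illustration of the idea (a hypothetical sketch, not the exact coder used in any image standard), the following Python snippet builds a Huffman code for a toy symbol histogram and shows that the most frequent symbols receive the shortest codewords.

import heapq
from collections import Counter

def huffman_code(frequencies):
    """Build a Huffman code book {symbol: bitstring} from symbol frequencies."""
    # Each heap entry: (frequency, tie_breaker, {symbol: codeword_so_far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Prefix '0' to one subtree's codewords and '1' to the other's.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

pixels = "aaaaaaabbbccd"          # toy "image" with a skewed symbol histogram
code = huffman_code(Counter(pixels))
print(code)                        # the most frequent symbol 'a' gets the shortest codeword
encoded = "".join(code[p] for p in pixels)
print(len(encoded), "bits vs", 8 * len(pixels), "bits uncoded")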
2.2.2 Run-length encoding (RLE)
RLE is one of the simplest data compression techniques. It consists of replacing a
sequence (run) of identical symbols by a pair containing the symbol and the run length.
It is used as the primary compression technique in the 1-D CCITT Group 3 fax standard
and in conjunction with other techniques in the JPEG image compression standard.
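A minimal run-length encoder and decoder in Python (an illustrative sketch, not the CCITT Group 3 or JPEG variant) makes the idea concrete:

from itertools import groupby

def rle_encode(data):
    """Replace each run of identical symbols with a (symbol, run_length) pair."""
    return [(symbol, len(list(run))) for symbol, run in groupby(data)]

def rle_decode(pairs):
    """Expand (symbol, run_length) pairs back into the original sequence."""
    return "".join(symbol * length for symbol, length in pairs)

row = "WWWWWWBBBWWWWWW"            # one scan line of a fax-like binary image
pairs = rle_encode(row)
print(pairs)                        # [('W', 6), ('B', 3), ('W', 6)]
assert rle_decode(pairs) == row     # lossless: the original line is recovered exactly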
2.2.3 Differential coding
Differential coding techniques explore the interpixel redundancy in digital images.
The basic idea consists of applying a simple difference operator to neighboring pixels to calculate a difference image, whose values are likely to fall within a much narrower range than the original gray-level range. As a consequence of this narrower distribution, and consequently reduced entropy, Huffman coding or other VLC schemes will produce shorter codewords for the difference image.
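The narrowing of the value range is easy to see numerically; the following short NumPy sketch (illustrative values only) differences one scan line and reconstructs it exactly:

import numpy as np

row = np.array([100, 102, 101, 103, 104, 104, 106, 105])

# Difference image: store the first sample as-is, then pixel-to-pixel differences.
diff = np.diff(row, prepend=0)
print(diff)        # [100   2  -1   2   1   0   2  -1]: after the first entry, values
                   # cluster near zero, giving a narrower range and lower entropy

# Lossless reconstruction by cumulative sum.
assert np.array_equal(np.cumsum(diff), row)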
2.2.4 Predictive coding
Predictive coding techniques constitute another example of exploration of interpixel
redundancy, in which the basic idea is to encode only the new information in each pixel.
This new information is usually defined as the difference between the actual and the
predicted value of that pixel.
2.2.5 Dictionary based coding
Dictionary-based coding techniques are based on the idea of incrementally building
a dictionary (table) while receiving the data. Unlike VLC techniques, dictionary-based
techniques use fixed-length codewords to represent variable-length strings of symbols
that commonly occur together. Consequently, there is no need to calculate, store, or
transmit the probability distribution of the source, which makes these algorithms extremely
convenient and popular. The best-known variant of dictionary-based coding
algorithms is the LZW (Lempel-Ziv-Welch) encoding scheme, used in popular multimedia
file formats such as GIF, TIFF, and PDF.
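The following compact Python sketch illustrates the LZW idea (dictionary growth and fixed-length output codes), without the bit-packing details of the actual GIF or TIFF formats:

def lzw_encode(data):
    """Encode a string as a list of dictionary indices, growing the dictionary on the fly."""
    # Start with single-character entries (here: all 256 byte values).
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    phrase = ""
    output = []
    for ch in data:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate                 # keep extending the current phrase
        else:
            output.append(dictionary[phrase])
            dictionary[candidate] = next_code  # add the new phrase to the dictionary
            next_code += 1
            phrase = ch
    if phrase:
        output.append(dictionary[phrase])
    return output

print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))  # repeated substrings collapse to single codes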
2.2.6 Transform coding
The techniques discussed so far work directly on the pixel values and are usually
called spatial domain techniques. Transform coding techniques use a reversible, linear
mathematical transform to map the pixel values onto a set of coefficients, which are
then quantized and encoded. The key factor behind the success of transform-based coding schemes is that many of the resulting coefficients for most natural images have small magnitudes and can be quantized (or discarded altogether) without causing significant distortion in the decoded image. Different mathematical transforms, such as Fourier
(DFT), Walsh-Hadamard (WHT), and Karhunen-Loeve (KLT), have been considered
for the task.
2.2.7 Wavelet coding
Wavelet coding techniques are also based on the idea that the coefficients of a transform
that decorrelates the pixels of an image can be coded more efficiently than the
original pixels themselves. The main difference between wavelet coding and DCT-based coding is the omission of the first stage (the subdivision of the image into blocks). Because wavelet transforms are capable
of representing an input signal with multiple levels of resolution, and yet maintain the
useful compaction properties of the DCT, the subdivision of the input image into smaller
subimages is no longer necessary.
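As an illustration of the first step of wavelet coding, the sketch below computes one level of the 2-D Haar transform of a whole (toy) image in NumPy; practical codecs use longer filters, but the principle is the same: smooth regions produce near-zero detail coefficients.

import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    # Transform rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform columns of each result to get the four subbands.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.tile(np.arange(8), (8, 1))       # smooth toy "image"
ll, lh, hl, hh = haar2d_level(img)
print(np.abs(hh).max())                    # detail subband of a smooth image is (near) zero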
2.3 Transform Coding
Transform coding is a type of data compression for "natural" data like audio signals
or photographic images. The transformation is typically lossy, resulting in a lower
quality copy of the original input. In transform coding, knowledge of the application is
used to choose information to discard, thereby lowering its bandwidth. The remaining
information can then be compressed via a variety of methods. When the output is decoded,
the result may not be identical to the original input, but is expected to be close
enough for the purpose of the application.
Transform coding is used to convert spatial image pixel values to transform coefficient
values. Since this is a linear process and no information is lost, the number of
coefficients produced is equal to the number of pixels transformed. The desired effect
is that most of the energy in the image will be contained in a few large transform coefficients.
If it is generally the same few coefficients that contain most of the energy in
most pictures, then the coefficients may be further coded by lossless entropy coding. In
addition, it is likely that the smaller coefficients can be coarsely quantized or deleted
(lossy coding) without doing visible damage to the reproduced image. The energy of
a pixel may be defined as the square of its value times some scaling factor. Similarly,
the energy of a transform coefficient may be defined as the square of its value times
some scaling factor. With the proper scaling factor, the total energy of the pixels in a
picture will always equal the total energy in the transform coefficients. The transform
process simply concentrates the energy into particular coefficients, generally the "low
frequency" ones.
Many types of transforms have been tried for picture coding, including for example Fourier, Karhunen-Loeve, Walsh-Hadamard, lapped orthogonal, discrete cosine (DCT),
and recently, wavelets. The various transforms differ among themselves in three basic
ways that are of interest in picture coding:
1. The degree of concentration of energy in a few coefficients;
2. The region of influence of each coefficient in the reconstructed picture;
3. The appearance and visibility of coding noise due to coarse quantization of the
coefficients.
Fourier transforms (discrete) have good energy concentration characteristics, but become
unwieldy when dealing with large images requiring large numbers of coefficients.
Block transforms, which work on a small portion of the image at a time, are therefore
preferred. The discrete Fourier transform may be applied to a block of pixels. Other
transforms which fall in this category are Walsh-Hadamard, and the DCT. The lapped
orthogonal transforms are a special case in which the coefficients’ influence is confined
to a few adjacent blocks, with a tapering-off influence toward the edges.
Because of ease of hardware computation and generally very good energy concentration
for a wide range of natural images, the DCT has become the transform of choice
for many image coding schemes, including MPEG. The DCT, unlike the Fourier transform,
is spatially variant. A portion of a sine wave coded with a Fourier transform has
all the energy concentrated at the same frequency coefficients regardless of the phase
of the sinusoid (although the energy will be apportioned differently between the sine
and cosine components). The DCT, on the other hand, is sensitive to phase, so that an
object moving across the screen will have different frequency content from frame to
frame. This also means that the visibility of coding artifacts due to coefficient quantization
will vary somewhat depending on the position of an object (edge) in the image.
Also, because the DCT is a strictly bounded block transform, lossy coding will produce
block-edge mismatch which will be visible at some level of quantization even if there
is only low frequency content in that area.
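The energy-concentration property can be checked numerically. The sketch below (illustrative only; it builds an orthonormal DCT-II matrix explicitly rather than calling a library routine) transforms a smooth 8-by-8 block and shows that a handful of coefficients carry almost all of the energy:

import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (row k = k-th basis function)."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

n = 8
C = dct_matrix(n)
block = np.outer(np.linspace(50, 80, n), np.linspace(1.0, 1.2, n))  # smooth 8x8 toy block

coeffs = C @ block @ C.T             # 2-D DCT of the block (transform rows and columns)

energy = coeffs ** 2                 # energy is preserved by the orthonormal transform,
top4 = np.sort(energy.ravel())[-4:]  # but concentrated in a few low-frequency coefficients
print(top4.sum() / energy.sum())     # close to 1: four coefficients carry nearly all the energy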
CHAPTER 3
Compressive Sensing
One of the central tenets of signal processing is the Nyquist/Shannon sampling theory:
the number of samples needed to reconstruct a signal without error is dictated by its
bandwidth - the length of the shortest interval which contains the support of the spectrum
of the signal under study. Compressive sensing shows that super-resolved signals
and images can be reconstructed from far fewer data/measurements than what is usually
considered necessary. Compressed sensing is a new, rapidly growing research field which investigates ways in which we can sample signals at roughly the "information rate" rather than the Nyquist rate. For example, compressed sensing theory
has already enabled a 4-fold reduction in acquisition time for MRI images by allowing
the under-sampling of k-space. It is potentially a very disruptive technology and provides
a new way of thinking about how to acquire and code signals in the most efficient
manner. It has a growing number of applications that are being explored across a range
of disciplines, including:
1. Medical imaging
2. Seismic imaging
3. Distributed and remote sensing
4. Analogue to Information (Digital) Conversion
3.1 Steps in Compressive Sensing
There are two main components to compressed sensing: the sampling strategy and
the reconstruction algorithm.
i Sampling
While conventional sampling involves measuring a quantity at regular intervals (so as to satisfy Nyquist), the concept of sampling in compressed sensing is much more general. Sampling in compressed sensing consists of making a random linear projection of the signal into a low-dimensional space. While this is essentially what is required by the theory, researchers have found empirically that the same ideas can often be used in much more conventional sampling scenarios. For example, MRI scanners sample
lines within the spatial Fourier domain of the image and there are already initial examples
of compressed sensing techniques for MRI using randomized trajectories and even
deterministic trajectories (sampling a small number of radial lines) in the Fourier space.
ii Reconstruction
The key difference between conventional sampling and compressed sensing is that
the reconstruction operator is nonlinear. Essentially this selects the significant coefficients for the data in some sparse representation and then calculates a least-squares approximation using the associated basis functions. While this sounds relatively easy, it should be noted that finding the significant coefficients is a combinatorial search problem and in practice cannot be solved directly. Instead, much of the theory of compressed
sensing has concentrated on proving that near optimal performance is possible by using
either a convex relaxation that boils down to solving a linear or quadratic program or
greedy algorithms that iteratively select the coefficients in a greedy way one at a time
or in groups.
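Both components can be sketched together in a few lines of NumPy. The example below is an illustrative toy, not a production solver: it samples a K-sparse signal with a random Gaussian matrix and recovers it with orthogonal matching pursuit, one of the greedy methods mentioned above that selects significant coefficients one at a time.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5                     # signal length, number of measurements, sparsity

# K-sparse signal (sparse directly in the canonical basis for simplicity).
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sampling matrix
y = Phi @ x                                      # M << N linear measurements

def omp(Phi, y, K):
    """Greedy recovery: pick the column most correlated with the residual, K times."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print(np.max(np.abs(x_hat - x)))          # typically near machine precision: exact recovery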
3.2 Sparse Signals
Since the 1990s, modeling signals through sparsity has emerged as an important
and widely applicable technique in signal processing. Its most well-known success is in
image processing, where great advances in compression and estimation have come from
modeling images as sparse in a wavelet domain. Consider an N-dimensional vector x that can be represented as x = Vu, where V is some orthogonal N-by-N matrix and u ∈ R^N has only K nonzero entries. In this case, we say that u is K-sparse and that x is K-sparse with respect to V. The set of positions of the nonzero coefficients in u is called the sparsity pattern, and we call the ratio K/N the sparsity ratio. Knowing that x is K-sparse with respect to a given basis V can be extremely valuable for signal processing. For example, in compression, x can be represented by the K positions and values of the nonzero elements in u, as opposed to the N elements of x.
When the sparsity ratio is small, the compression gain can be significant. Similarly, in estimating x in the presence of noise, one only has to estimate K as opposed to N real parameters. Another important property of sparse signals has recently been uncovered: they can be recovered in a computationally tractable manner from a relatively small number of random samples, by the method known as compressive sampling. A basic model for compressive sampling is shown in Figure 3.1.

Figure 3.1: Block diagram representation of compressive sampling [1].
The N-dimensional signal x is assumed to be K-sparse with respect to some orthogonal matrix V. The "sampling" of x is represented as a linear transformation by a matrix Φ, yielding a sample vector y = Φx. Let the size of Φ be M-by-N, so y has M elements; we call each element of y a measurement of x. A decoder must recover the signal x from y knowing V and Φ, but not necessarily the sparsity pattern of the unknown signal u. Since u is K-sparse, x must belong to one of NCK (N choose K) subspaces in R^N. Similarly, y must belong to one of NCK subspaces in R^M. For almost all Φ's with M = K + 1, an exhaustive search through the subspaces can determine which subspace x belongs to and thereby recover the signal's sparsity pattern and values. Therefore, in principle, a K-sparse signal can be recovered from as few as M = K + 1 random samples. Unfortunately, the exhaustive search described above is not tractable for interesting sizes of problems since the number of subspaces to search, NCK, can be enormous; if the sparsity ratio K/N is held constant as N is increased, the number of subspaces grows exponentially with N. The remarkable main result of compressive sampling is to exhibit recovery methods that are computationally feasible, numerically stable, and robust against noise while requiring a number of measurements not much larger than K.
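For very small problems the exhaustive search described above can actually be carried out, which makes the M = K + 1 claim concrete. The toy sketch below (taking V = I for simplicity, so the signal is sparse in the canonical basis) tries every possible sparsity pattern and keeps the one that explains the measurements:

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
N, K = 10, 2
M = K + 1                                   # just K + 1 random measurements

u = np.zeros(N)
u[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse signal
Phi = rng.standard_normal((M, N))
y = Phi @ u

best = None
for pattern in combinations(range(N), K):   # search all "N choose K" subspaces
    A = Phi[:, pattern]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    if np.linalg.norm(A @ coef - y) < 1e-9: # this subspace explains the measurements
        best = (pattern, coef)
        break

print(best[0], np.nonzero(u)[0])            # recovered sparsity pattern matches the true one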
3.3 Compressible Signals
Consider a real-valued, finite-length, one-dimensional, discrete-time signal x, which can be viewed as an N by 1 column vector in R^N with elements x[n], n = 1, 2, ..., N. (We treat an image or higher-dimensional data by vectorizing it into a long one-dimensional vector.) Any signal in R^N can be represented in terms of a basis of N × 1 vectors {ψ_i}, i = 1, ..., N. For simplicity, assume that the basis is orthonormal. Using the N by N basis matrix Ψ = [ψ_1 | ψ_2 | ... | ψ_N] with the vectors ψ_i as columns, a signal x can be expressed as

x = Σ_{i=1}^{N} s_i ψ_i,   i.e.,   x = Ψs        (3.1)

where s is the N by 1 column vector of weighting coefficients s_i = <x, ψ_i> = ψ_i^T x and ^T denotes transposition. Clearly, x and s are equivalent representations of the signal, with x in the time or space domain and s in the Ψ domain. The signal x is K-sparse if it is a linear combination of only K basis vectors; that is, only K of the s_i coefficients in (3.1) are nonzero and the remaining (N - K) are zero.
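Equation (3.1) and the idea of sparsity or compressibility in a basis can be checked numerically. The sketch below (illustrative; it uses the orthonormal DCT as the basis Ψ) computes s = Ψ^T x for a smooth signal, keeps only the K largest coefficients, and measures the resulting approximation error:

import numpy as np

def dct_basis(n):
    """Return Psi (n x n, orthonormal) whose columns are DCT-II basis vectors psi_i."""
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c.T                                   # basis vectors as columns

N, K = 128, 15
Psi = dct_basis(N)
t = np.linspace(0.0, 1.0, N)
x = np.exp(-((t - 0.5) ** 2) / 0.02)             # smooth test signal

s = Psi.T @ x                                    # s_i = <x, psi_i>, i.e. equation (3.1)
threshold = np.sort(np.abs(s))[-K]
s_K = np.where(np.abs(s) >= threshold, s, 0.0)   # keep only the K largest coefficients
x_K = Psi @ s_K                                  # K-term approximation of x

# Small relative error: most of the energy sits in a few DCT coefficients,
# so x is compressible in this basis even though it is not exactly sparse.
print(np.linalg.norm(x - x_K) / np.linalg.norm(x))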
CHAPTER 4
Application
4.1 Single Pixel Camera
A conventional digital camera works by capturing light from an object and focusing
it through a lens onto an array of sensors. Cameras used by the general public operate in
the visible range, 400 to 750 nm, with CMOS sensors. They can be modified to image
in a slightly wider range, from 280 to 1200 nm, if their infrared (IR) filters are removed.
The bulk of the cost of an IR camera is in its sensor array and the cooling system
that regulates the array’s temperature. The array alone accounts for at least one-third of
any IR camera’s price. The cooling system that helps reduce background noise and the
effects of pixel bleeding is similarly expensive. Unfortunately, there is no Moore’s law
for these sort of components. Unlike transistors, IR sensors and their cooling systems
are not likely to decrease in size or cost anytime in the near future. Thus, while the
camera’s image processing chips will probably become cheaper in the next few years,
no such reduction in cost can be predicted for the sensor components. In fact, as the
general trend in the camera industry is toward more pixels, it is likely that soon the cost
of an IR camera will be almost solely determined by the price of its sensor array and this
array’s cooling system. Therefore, to create an economical IR camera, the number of
sensors and cost of their cooling system would have to be dramatically reduced, without
severely damaging the camera's resolution. In principle, a compressive sensing device needs only a single sensor - a single pixel. This camera captures several thousand points of light in rapid succession; we can take a picture with potentially millions of pixels using just a single detector element. Not only does the use of a single pixel
remove the price of a sensor array from the picture, but also it has the potential to reduce
the cost of the sensor cooling system. Since a single pixel device does not suffer from
the pixel-bleeding problem that necessitates a high-quality cooling system in regular IR
cameras, in a single-pixel camera an inexpensive cooling alternative just for removing
background noise could be used.

Figure 4.1: Structure of the single-pixel camera [3].

The basic operation of a compressive sensing camera is relatively simple. Light from an object is focused by a lens onto a digital micromirror device (DMD), shown in Figure 4.1. A DMD is an array of tiny mirrors that can be tilted 10° to 12°, from
"on" to "off" positions. In the "on" state, these mirrors reflect light from the object to a
secondary lens, which focuses this light to the sensor. A spatial light modulator (SLM)
modulates the intensity of a light beam according to a control signal. A simple example
of a transmissive SLM that either passes or blocks parts of the beam is an overhead
transparency. The DMD consists of an array of bacterium-sized, electrostatically actuated
micromirrors, where each mirror in the array is suspended above an individual
static random access memory (SRAM) cell. Each mirror rotates about a hinge and can be positioned in one of two states (+10° or -10° from horizontal) according to which bit is loaded into the SRAM cell; thus light falling on the DMD can be reflected in two
directions depending on the orientation of the mirrors. Data from this sensor is sent
to a signal processing unit, which manipulates data points to construct an image of the
original object. To generate a set of useful data points for image reconstruction, the
mirrors of the DMD are flipped into a series of random configurations, in each of which
approximately half of the mirrors are in their on position.
4.1.1 Advantages
The key benefit of the camera is that it needs much less information to assemble an
image. Massive CCD arrays collect millions of pixels worth of data, which are typically
compressed to keep file sizes manageable. The single pixel camera, on the other
hand, compresses the image data via its hardware before the pixels are recorded. As a
result, it is able to capture an image with only thousands of pieces of information rather than millions. In addition to requiring fewer measurements, this camera can image at wavelengths where it is difficult or expensive to create a large array of sensors. It can also acquire data over time to enable video reconstruction. Another promise of this sort of research is that it may make conventional digital cameras much better. If a single pixel can do the job of an array of pixels, then we could potentially get each of the pixels in a megapixel camera
to do extra duty as well. Effectively, the resolution of a camera can be multiplied using the techniques developed for the single-pixel camera. The technology could make cameras
much cheaper by reducing the number of pixels needed, or perhaps lead (some
day) to gigapixel resolution from megapixel cameras. In addition, the researchers say,
the single pixel detector can be replaced with devices that register other wavelengths of
light, potentially leading to images collected with light outside the ranges that CCD and
CMOS detectors can handle.
The single-pixel camera is an optical computer that sequentially measures the inner products y[m] = <x, φ_m> between an N-pixel sampled version x of the incident light field from the scene under view and a set of two-dimensional (2-D) test functions {φ_m}. Each mirror corresponds to a particular pixel in x and φ_m and can be independently oriented either towards Lens 2 (corresponding to a one at that pixel in φ_m) or away from Lens 2 (corresponding to a zero at that pixel in φ_m). The reflected light is then collected by biconvex Lens 2 and focused onto a single photon detector (the single pixel) that integrates the product x[n]φ_m[n] to compute the measurement y[m] = <x, φ_m> as its output voltage. This voltage is then digitized by an A/D converter. Values of φ_m between zero and one can be obtained by dithering the mirrors back and forth during the photodiode integration time. To obtain φ_m with both positive and negative values, we estimate and subtract the mean light intensity from each measurement; the mean is easily measured by setting all mirrors to the full-on one position. To compute CS randomized measurements y = Φx, we set the mirror orientations φ_m randomly using a pseudorandom number generator, measure y[m], and then repeat the process M times to obtain the measurement vector y. Since the DMD array is programmable, we can also employ test functions φ_m drawn randomly from a fast transform such as a Walsh, Hadamard, or noiselet transform.
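The measurement process just described is easy to simulate (an illustrative sketch with random 0/1 DMD patterns and no optics or noise; the mean-subtraction step emulates test functions with ±1 entries):

import numpy as np

rng = np.random.default_rng(0)
n = 16                                     # 16 x 16 "scene", so N = 256 pixels
N, M = n * n, 300
scene = np.zeros((n, n))
scene[4:12, 6:10] = 1.0                    # a simple bright rectangle
x = scene.ravel()                          # vectorized light field

# Each row of Phi is one random DMD configuration: mirrors "on" (1) or "off" (0).
Phi = (rng.random((M, N)) < 0.5).astype(float)
y = Phi @ x                                # photodiode voltages y[m] = <x, phi_m>

# Subtract the mean light level to emulate +/-1 test functions, as described above.
full_on = np.ones(N) @ x                   # measurement with all mirrors "on"
y_pm = 2 * y - full_on                     # corresponds to patterns with entries +1 / -1
Phi_pm = 2 * Phi - 1

# Sanity check: the adjusted measurements match the +/-1 patterns applied to the scene.
print(np.max(np.abs(y_pm - Phi_pm @ x)))   # ~0

Recovering the image from y would then use a sparse reconstruction algorithm such as the greedy method sketched in Chapter 3.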
4.1.2 Disadvantages
The prototype model requires about five minutes to take a single picture. That is
because the camera must blink several thousand times to capture a complete image.
4.2 Magnetic Resonance Imaging
Magnetic Resonance Imaging (MRI) is an essential medical imaging tool burdened
by an inherently slow data acquisition process. The application of CS to MRI has
the potential for significant scan time reductions, with benefits for patients and health
care economics. MRI obeys two key requirements for successful application of CS: (1)
medical imagery is naturally compressible by sparse coding in an appropriate transform
domain (e.g., by wavelet transform); (2) MRI scanners naturally acquire samples of the
encoded image in spatial frequency, rather than direct pixel samples. The transform
sparsity of MR images and the coded nature of MR acquisition are two key properties
enabling CS in MRI. In MRI, we look at a special case of CS where the sampled linear
combinations are simply individual Fourier coefficients (k-space samples). CS has been
able to make accurate reconstructions from a small subset of k-space, rather than an entire k-space grid; a small numerical sketch of this undersampling idea follows the list below. Three potential applications of CS in MRI are:
1. Rapid Angiography
2. Whole-Heart Coronary Imaging
3. Enhanced Brain Imaging
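The following toy sketch illustrates the k-space idea only (a 1-D piecewise-constant "image", a randomly undersampled discrete Fourier transform as the encoding, and a naive zero-filled linear reconstruction); it contains no MR physics and no CS solver, but it shows the artifacts that a CS reconstruction would suppress by enforcing sparsity of the finite differences:

import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 80                              # image size and number of k-space samples kept

# Piecewise-constant 1-D "image": sparse under finite differences.
x = np.zeros(N)
x[60:90] = 1.0
x[150:170] = 0.6

k_full = np.fft.fft(x) / np.sqrt(N)         # full k-space (Fourier coefficients)
keep = np.sort(rng.choice(N, M, replace=False))
y = k_full[keep]                            # the scanner acquires only a subset of k-space

# Naive linear reconstruction: zero-fill the missing k-space samples.
k_zf = np.zeros(N, dtype=complex)
k_zf[keep] = y
x_zf = np.real(np.fft.ifft(k_zf * np.sqrt(N)))

print(np.linalg.norm(x_zf - x) / np.linalg.norm(x))   # noticeable aliasing-like error;
# a CS reconstruction would instead search for the signal with the sparsest finite
# differences that remains consistent with the measured samples y.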
4.2.1 Rapid 3D Angiography
Angiography is increasingly popular for diagnosis of vascular disease. It attempts
to image blood vessels in the body and helps to detect aneurysms, vascular occlusions,
stenotic disease and tumor feeder vessels. It also serves to guide surgical procedures and
to monitor the treatment of vascular disease . Often, a contrast agent is injected, significantly
increasing the blood signal and enabling rapid data acquisition. In angiography,
a significant portion of the diagnostic information comes from imaging the dynamics
of the contrast agent bolus. This requires high spatial and temporal resolution with a
very large FOV - obviously a very difficult task. Today MR angiography scans are often undersampled, obtaining improved spatial and temporal resolution at the expense of undersampling artifacts. CS is particularly suitable for angiography. Angiograms are inherently sparse images - already sparse in the pixel representation, and sparsified even better by spatial finite differences. The need for rapid, high temporal and spatial resolution imaging means that undersampling is almost inevitable. CS offers
to improve current strategies by significantly reducing the artifacts that result from
undersampling. CS is able to significantly accelerate angiography imaging, enabling
better temporal resolution or alternatively improving the resolution of current imagery
without compromising scan time. The nonlinear reconstruction in CS avoids most of
the artifacts that appear in linear reconstruction from undersampled data.
4.2.2 Whole Heart Coronary Imaging
X-ray coronary angiography is the gold standard for evaluating coronary artery disease,
but it is invasive. Multi-slice x-ray CT is a non-invasive alternative, but generates
high doses of ionizing radiation. MRI is emerging as a non-invasive, non-ionizing alternative. Coronary arteries are constantly subject to heart and respiratory motion;
high-resolution imaging is therefore a challenging task. Heart motion can be handled
by synchronizing acquisitions to the cardiac cycle (cardiac gating). Respiratory motion
can be mitigated by long scans with navigated breathing compensation, or simply by short breath-held acquisitions. However, breath-held cardiac-triggered collection
schemes face strict timing constraints and very short imaging windows. The number of
acquisitions is limited to the number of cardiac cycles in the breath-hold period. The
number of heartbeats per period is itself limited - patients in need of coronary diagnosis
cannot be expected to hold their breath for long! Also, each acquisition must be very
short to avoid motion blurring. On top of this, many slices need to be collected to cover
the whole volume of the heart. Because of these constraints, traditionally breath-held
cardiac-triggered acquisitions have limited spatial resolution and only partial coverage of the heart. Compressed sensing can accelerate data acquisition, allowing the entire heart to be imaged in a single breath-hold. To meet the strict timing requirements, the
hardware efficient spiral k-space trajectory is used. For each cardiac trigger, a single
spiral in k-space is acquired for each slice. The heart does move considerably during
the imaging period, but because each acquisition is very short, each slice is relatively
immune to motion and inter-slice motion is manifested as geometric distortion across
the slices rather than blurring. Geometric distortion has little effect on the clinical diagnostic
value of the image. Even though spirals are very efficient, the strict timing limitations make it necessary to undersample k-space 2-fold. To do so, undersampled
variable density spirals are used. Such spirals have an incoherent PSF. When used with
linear gridding reconstruction, undersampling artifacts are incoherent and appear simply
as added noise. Coronary images are generally piece-wise smooth and are sparsified
well by finite-differences. CS reconstruction can suppress undersampling-induced interference
without degrading the image quality.
4.2.3 Brain Imaging
Brain scans are the most common clinical application of MRI; most such scans
are 2D Cartesian multi-slice. For scan time reasons and SNR reasons, the slices are
often quite thick, often with large gaps between slices. The ideas of CS promise to
reduce collection time while improving the resolution of current imagery. Indeed, by
significantly undersampling the existing k-space, some of the saved collection time
could be used to collect data from the missing slices, and still leave a shorter collection
overall. Brain images exhibit transform sparsity in the wavelet domain. If incoherence
can be obtained, the application may succeed.
CHAPTER 5
Conclusion
Signal acquisition based on compressive sensing can be far more efficient than traditional sampling for sparse or compressible signals. Moreover, compressive sensing measurements are universal, in the sense that the same random measurement matrix works simultaneously with many sparsifying bases with high probability. The approach is also robust in that the measurements all have equal priority; one or more measurements can be lost without corrupting the entire reconstruction.
REFERENCES
[1] R.G. Baraniuk, "Compressive sensing," IEEE Signal Processing Magazine, July 2007.
[2] D. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[3] D. Takhar, V. Bansal, M. Wakin, M. Duarte, D. Baron, J. Laska, K.F. Kelly, and R.G. Baraniuk, "A compressed sensing camera: New theory and an implementation using digital micromirrors," in Proc. Comput. Imaging IV at SPIE Electronic Imaging, San Jose, Jan. 2006.
[4] R.G. Baraniuk, M. Davenport, R. DeVore, and M.B. Wakin, "A simple proof of the restricted isometry property for random matrices (aka the Johnson-Lindenstrauss lemma meets compressed sensing)," Constructive Approximation, 2007.
[5] M. Lustig, D.L. Donoho, J.M. Santos, and J.M. Pauly, "Compressed sensing MRI," IEEE Signal Processing Magazine, 2008.