MPEG seminars report
#1

MPEG-VIDEO COMPRESSION TECHNIQUE
ABSTRACT
MPEG is a famous four-letter word which stands for the Moving Picture Experts Group.
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The basic idea is to transform a stream of discrete samples into a bitstream of tokens which takes less space, but is just as filling to the eye or ear. This transformation, or rather representation, exploits perceptual and even some actual statistical redundancies. The orthogonal dimensions of video and audio streams can be further linked with the systems layer, MPEG's own means of keeping the data multiplexed in a common serial bitstream.


Staff in charge: Asst. Prof. Mr. Gylson Thomas
Presented by: Vinod V. P. (Roll No: 37, S7 EEE)
1. MPEG INTRODUCTION
MPEG is the famous four-letter word which stands for the "Moving Picture Experts Group."
To the real world, MPEG is a generic means of compactly representing digital video and audio signals for consumer distribution. The essence of MPEG is its syntax: the little tokens that make up the bitstream. MPEG's semantics then tell you (if you happen to be a decoder, that is) how to inverse represent the compact tokens back into something resembling the original stream of samples. These semantics are merely a collection of rules (which people like to call algorithms, but that would imply there is a mathematical coherency to a scheme cooked up by trial and error...). These rules are highly reactive to combinations of bitstream elements set in headers and so forth.
MPEG is an institution unto itself as seen from within its own universe. When (unadvisedly) placed in the same room, a blood-letting debate can spontaneously erupt among its inhabitants, triggered by mere anxiety over the most subtle juxtaposition of words buried in the most obscure documents. Such stimulus comes readily from transparencies flashed on an overhead projector. Yet at the same time, this gestalt will appear to remain totally indifferent to critical issues set before them for many months. It should therefore be no surprise that MPEG's dualistic chemistry reflects the extreme contrasts of its two founding fathers: the fiery Leonardo Chiariglione (CSELT, Italy) and the peaceful Hiroshi Yasuda (JVC, Japan). The excellent byproduct of the successful MPEG process became an International Standards document safely administered to the public in three parts: Systems (Part 1), Video (Part 2), and Audio (Part 3).
Pre MPEG
Before providence gave us MPEG, there was the looming threat of world domination by proprietary standards cloaked in syntactic mystery. With lossy compression being such an inexact science (which always boils down to visual tweaking and implementation tradeoffs), you never know what's really behind any such scheme (other than a lot of the marketing hype).
Seeing this threat... that is, the need for world interoperability, the Fathers of MPEG sought the help of their colleagues to form a committee to standardize a common means of representing video and audio (a la DVI) on compact discs... and maybe it would be useful for other things too.
MPEG borrowed significantly from JPEG and, more directly, H.261. By the end of the third year (1990), a syntax emerged which, when applied to represent SIF-rate video and compact disc-rate audio at a combined bitrate of 1.5 Mbit/sec, approximated the pleasure-filled viewing experience offered by the standard VHS format.
After demonstrations proved that the syntax was generic enough to be applied to bit rates and sample rates far higher than the original primary target application ("Hey, it actually works!"), a second phase (MPEG-2) was initiated within the committee to define a syntax for efficient representation of broadcast video, or SDTV as it is now known (Standard Definition Television), not to mention the side benefits: frequent flier miles, impressed friends, job security, and obnoxious party conversations.
Yet efficient representation of interlaced (broadcast) video signals was more challenging than the progressive (non-interlaced) signals thrown at MPEG-1. Similarly, MPEG-1 audio was capable of only directly representing two channels of sound (although Dolby Surround Sound can be mixed into the two channels like any other two channel system).
MPEG-2 would therefore introduce a scheme to decorrelate multichannel discrete surround-sound audio signals, exploiting the moderately higher redundancy in such a scenario. Of course, proprietary schemes such as Dolby AC-3 have become more popular in practice.
The need for a third phase (MPEG-3) was anticipated back in 1991 for High Definition Television, although it was discovered during late 1992 and 1993 that the MPEG-2 syntax simply scaled with the bit rate, obviating the third phase. MPEG-4 was launched in late 1992 to explore the requirements of a more diverse set of applications (although originally its goal seemed very much like that of the ITU-T SG15 group, which produced the low-bitrate videophone standard H.263).
Today, MPEG (video and systems) is the exclusive syntax of the United States Grand Alliance HDTV specification, the European Digital Video Broadcasting group, and the Digital Versatile Disc (DVD).


2. MPEG VIDEO SYNTAX
MPEG video syntax provides an efficient way to represent image sequences in the form of more compact coded data. The language of the coded bits is the "syntax." For example, a few tokens amounting to only, say, 100 bits can represent an entire block of 64 samples rather transparently ("you can't tell the difference"), samples which would otherwise consume 64 x 8, or 512, bits. MPEG also describes a decoding (reconstruction) process where the coded bits are mapped from the compact representation into the original, "raw" format of the image sequence. For example, a flag in the coded bitstream signals whether the following bits are to be decoded with a DCT algorithm or with a prediction algorithm. The algorithms comprising the decoding process are regulated by the semantics defined by MPEG. This syntax can be applied to exploit common video characteristics such as spatial redundancy, temporal redundancy, uniform motion, spatial masking, etc.
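To make the arithmetic concrete, here is the block-level saving from the paragraph above as a short, runnable Python snippet (the 100-bit figure is the illustrative token budget quoted in the text, not a fixed property of MPEG):

[code]
# Worked numbers from the paragraph above: a raw 8x8 block of 8-bit
# samples versus an illustrative ~100-bit coded representation.
raw_bits = 64 * 8      # 512 bits for the uncoded block
coded_bits = 100       # token budget quoted in the text
print(f"raw: {raw_bits} bits, coded: {coded_bits} bits, "
      f"ratio: {raw_bits / coded_bits:.1f}:1")   # ~5.1:1 for this block
[/code]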

3. MPEG MYTHS
Because it is new and sometimes hard to understand, many myths plague perceptions of MPEG.
1. Compression Ratios over 100:1
As discussed elsewhere, articles in the press and marketing literature will often make the claim that MPEG can achieve high quality video with compression ratios over 100:1. These figures often include the oversampling factors in the source video. In reality, the coded sample rate specified in an MPEG image sequence is usually not much larger than 30 times the specified bit rate. Pre-compression through subsampling is chiefly responsible for 3-digit ratios for all video coding methods, including those of the non-MPEG variety ("yuck, blech!").
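To see how the oversampling factor inflates the quoted figures, here is a back-of-envelope sketch in Python. It assumes a White Book style setup (SIF 4:2:0 coded video at 1.15 Mbit/sec) measured against a CCIR-601 4:2:2 source; the exact rates vary by application:

[code]
# Compression ratio measured at the coded sample rate versus the ratio
# quoted against the oversampled source (all values illustrative).
coded_rate  = 352 * 240 * 30 * 1.5 * 8   # SIF 4:2:0, 8-bit: ~30.4 Mbit/s
source_rate = 720 * 480 * 30 * 2 * 8     # CCIR-601 4:2:2, 8-bit: ~165.9 Mbit/s
bitrate = 1.15e6                         # White Book video bitrate

print(f"honest ratio:    {coded_rate / bitrate:.0f}:1")    # ~26:1
print(f"marketing ratio: {source_rate / bitrate:.0f}:1")   # ~144:1
[/code]

The three-digit figure appears only when the subsampling done before encoding is credited to the codec.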
2. MPEG-1 is 352x240
Both MPEG-1 and MPEG-2 video syntax can be applied at a wide range of bitrates and sample rates. The MPEG-1 that most people are familiar with has parameters of 30 SIF pictures (352 pixels x 240 lines) per second and a coded bitrate less than 1.86 Mbit/sec, a combination known as "Constrained Parameters Bitstreams". This popular interoperability point is promoted by Compact Disc Video (White Book).
In fact, it is syntactically possible to encode picture dimensions as high as 4095 x 4095 and bitrates up to 100 Mbit/sec. This number would be orders of magnitude higher, maybe even infinite, if not for the need to conserve bits in the headers!
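A minimal sketch of a Constrained Parameters check, using only the limits quoted in this text (the real definition adds further terms, such as a macroblocks-per-second bound and a VBV buffer size, omitted here as assumptions of this sketch):

[code]
# Rough Constrained Parameters Bitstream test (limits as quoted above).
def is_constrained_parameters(width, height, fps, bitrate):
    return (width <= 352 and height <= 240
            and fps <= 30 and bitrate <= 1.86e6)

print(is_constrained_parameters(352, 240, 30, 1.15e6))  # True: White Book point
print(is_constrained_parameters(720, 480, 30, 6.0e6))   # False: Main Level territory
[/code]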
With the advent of the MPEG-2 specification, the most popular combinations have coagulated into "Levels," which are described later in this text. The two most common levels are affectionately known as:
• Source Input Format (SIF), with 352 pixels x 240 lines x 30 frames/sec, also known as Low Level (LL), and
• "CCIR 601" (e.g. 720 pixels/line x 480 lines x 30 frames/sec), or Main Level.
3. Motion Compensation displaces macroblocks from previous pictures
Macroblock predictions are formed out of arbitrary 16x16 pixel (or 16x8 in MPEG-2) areas from previously reconstructed pictures. There are no boundaries which limit the location of a macroblock prediction within the previous picture, other than the edges of the picture of course (but that doesn't always stop some people).

Reference pictures (from which you form predictions) are, for conceptual purposes, a grid of samples with no resemblance to their coded form. Once a frame has been reconstructed, it is important, psychologically speaking, that you let go of your original understanding of these frames as a collection of coded macroblocks and regard them like any other big collection of coplanar samples.

4. Display picture size is the same as the coded picture size
In MPEG, the display picture size and frame rate may differ from the size ("resolution") and frame rate encoded into the bitstream. For example, a regular pattern of pictures in a source image sequence may be dropped (decimated), and then each picture may itself be filtered and subsampled prior to encoding. Upon reconstruction, the picture may be interpolated and upsampled back to the source size and frame rate.
In fact, the three fundamental rates (Source Rate, Coded Rate, and Display Rate) may differ by several parameters. The MPEG syntax can separately describe Coded and Display Rates through sequence headers, but the actual Source Rate is a secret known only to the encoder. This is why MPEG-2 introduced the display_horizontal_size and display_vertical_size header elements, the display-domain companions to the coded-domain horizontal_size and vertical_size elements from the old MPEG-1 days.
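The relationship between the two domains can be pictured with the header elements named above; the element names are from the text, the values are invented for illustration:

[code]
# Coded-domain versus display-domain sizes carried in MPEG-2 headers.
sequence_header = {
    "horizontal_size": 352, "vertical_size": 240,                  # coded size
    "display_horizontal_size": 704, "display_vertical_size": 480,  # display size
}
scale = (sequence_header["display_horizontal_size"]
         / sequence_header["horizontal_size"])
print(f"decoder upsamples by {scale:g}x horizontally for display")  # 2x
[/code]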
5. Picture coding types (I, P, B) all consist of the same macroblock types ("Ha!").
All (non-scalable) macroblocks within an I picture must be coded Intra (like a baseline JPEG picture). However, macroblocks within a P picture may be coded as either Intra or Non-intra (temporally predicted from a previously reconstructed picture). Finally, macroblocks within a B picture can be independently selected as either Intra, Forward predicted, Backward predicted, or both forward and backward (Interpolated) predicted. The macroblock header contains an element, called macroblock_type, which can flip these modes on and off like switches.

macroblock_type is possibly the single most powerful element in the whole of the video syntax. Its buddy motion_type, introduced in MPEG-2, is perhaps the second most powerful element. Picture types (I, P, and B) merely enable macroblock modes by widening the scope of the semantics. The component switches are:
1. Intra or Non-intra
2. Forward temporally predicted (motion_forward)
3. Backward temporally predicted (motion_backward) (switches 2+3 in combination represent "Interpolated", i.e. "Bi-Directionally Predicted.")
4. conditional replenishment (macroblock_pattern), affectionately known as "digital spackle for your prediction"
5. adaptation in quantization (macroblock_quantizer_code).
6. temporally predicted without motion compensation
The first 5 switches are mostly orthogonal; the 6th is a special case in P pictures, marked by the 1st and 2nd switches set to off ("predicted, but not motion compensated").
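A minimal sketch of those switches as independent flags (the names follow the text; in the real bitstream, common combinations are packed into short variable-length codes whose tables differ per picture type):

[code]
# The macroblock_type switches modeled as independent booleans.
from dataclasses import dataclass

@dataclass
class MacroblockType:
    intra: bool = False               # switch 1
    motion_forward: bool = False      # switch 2
    motion_backward: bool = False     # switch 3 (2+3 together = interpolated)
    macroblock_pattern: bool = False  # switch 4: coded prediction error present
    macroblock_quant: bool = False    # switch 5: new quantizer scale follows

# A B-picture interpolated macroblock with some "digital spackle":
mb = MacroblockType(motion_forward=True, motion_backward=True,
                    macroblock_pattern=True)
print(mb)
[/code]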
[Figure: macroblock prediction without motion compensation]

[Figure: macroblock prediction with motion compensation]
Naturally, some switches are non-applicable in the presence of others. For example, in an Intra macroblock, all 6 blocks by definition contain DCT data, therefore there is no need to signal either the macroblock_pattern or any of the temporal prediction switches. Likewise, when there is no coded prediction error information in a Non-intra macroblock, the macroblock_quantizer signal would have no meaning. This proves once again that MPEG requires the reader to interpret things closely.
[Figure: skipped macroblocks in P pictures]

[Figure: skipped macroblocks in B pictures]
6. Sequence structure is fixed to a specific I,P,B frame pattern.
A sequence may consist of almost any pattern of I, P, and B pictures (there are a few minor semantic restrictions on their placement). It is common in industrial practice to have a fixed pattern (e.g. IBBPBBPBBPBBPBB); however, more advanced encoders will attempt to optimize the placement of the three picture types according to local sequence characteristics in the context of more global characteristics (or at least they claim to, because it makes them sound more advanced).
Naturally, each picture type carries a rate penalty when coupled with the statistics of a particular picture (temporal masking, occlusion, motion activity, etc.). This is when your friends start to drop the phrase "constrained entropy" at parties.
The variable length codes of the macroblock_type switch provide a direct clue, but it is the full scope of the semantics of each picture type that spells out the real overall costs and benefits. For example, if the image sequence changes little from frame to frame, it is sensible to code more B pictures than P. Since B pictures by definition are never fed back into the prediction loop (i.e. not used as prediction for future pictures), bits spent on them are wasted in a sense (B pictures are like temporal spackle at the frame granularity, not the macroblock granularity or layer).
Application requirements also have their say in the temporal placement of picture coding types: random access points, mismatch/drift reduction, channel hopping, and so on. (And remember: if you can achieve compression through subsampling alone, it means you never really needed the extra samples in the first place.)
7. Don't forget 3:2 pulldown!
A majority of high-budget programs originate from film, not video. Most of the movies encoded onto Compact Disc Video were in fact captured and edited at 24 frames/sec. So, in such an image sequence, 6 out of the 30 frames displayed on a television monitor (30 frames/sec, or 60 fields/sec, is the standard NTSC rate in North America and Japan) are in fact redundant repeats introduced by the 3:2 pulldown process.
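The pulldown arithmetic, as a quick sanity check:

[code]
# 3:2 pulldown: 24 film frames fill 60 video fields by alternating
# 3-field and 2-field repeats, so 6 of every 30 video frames repeat.
film_fps, field_rate, video_fps = 24, 60, 30
print(field_rate / film_fps, "fields per film frame on average")          # 2.5
print(video_fps - film_fps, "of every 30 displayed frames are redundant")  # 6
[/code]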

4. THE MPEG DOCUMENT
The MPEG-1 specification (official title: ISO/IEC 11172 "Information technology - Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s", Copyright 1993.) consists of five parts. Each document is a part of the ISO/IEC standard number 11172. The first three parts reached International Standard status in early 1993 (no coincidence to the nuclear weapons reduction treaty signed back then). Part 4 reached IS in 1994. In mid 1995, Part 5 will go IS.
Part 1---Systems: The first part of the MPEG standard has two primary purposes: 1) a syntax for transporting packets of audio and video bitstreams over digital channels and storage media (DSM), and 2) a syntax for synchronizing video and audio streams.
Part 2---Video: describes syntax (header and bitstream elements) and semantics (algorithms telling what to do with the bits). Video breaks the image sequence into a series of nested layers, each containing a finer granularity of sample clusters (sequence, picture, slice, macroblock, block, sample/coefficient). At each layer, algorithms are made available which can be used in combination to achieve efficient compression. The syntax also provides a number of different means for assisting decoders in synchronization, random access, buffer regulation, and error recovery. The highest layer, sequence, defines the frame rate and picture pixel dimensions for the encoded image sequence. (A containment sketch of these layers follows Part 5 below.)
Part 3---Audio: describes syntax and semantics for three classes of compression methods. Known as Layers I, II, and III, the classes trade increased syntax and coding complexity for improved coding efficiency at lower bitrates. Layer II is the industrial favorite, applied almost exclusively in satellite broadcasting (Hughes DSS) and compact disc video (White Book). Layer I is similar in complexity, efficiency, and syntax to the schemes of the Sony MiniDisc and the Philips Digital Compact Cassette (DCC). Layer III has found a home in ISDN, satellite, and Internet audio applications. The sweet spots for the three layers are 384 kbit/sec (DCC), 224 kbit/sec (CD Video, DSS), and 128 kbit/sec (ISDN/Internet), respectively.
Part 4---Conformance: (circa 1992) defines the meaning of MPEG conformance for all three parts (Systems, Video, and Audio), and provides two sets of test guidelines for determining compliance in bitstreams and decoders. MPEG does not directly address encoder compliance.
Part 5---Software Simulation: Contains an example ANSI C language software encoder and compliant decoder for video and audio. An example systems codec is also provided which can multiplex and demultiplex separate video and audio elementary streams contained in computer data files.
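Before moving on to MPEG-2, the nested layers defined in Part 2 can be pictured as a simple containment hierarchy (a sketch: the roles paraphrase the text, and the SIF size is illustrative):

[code]
# The nested layers of MPEG video syntax, sequence down to block.
layers = {
    "sequence":   "frame rate and picture dimensions (sequence header)",
    "picture":    "one coded frame, e.g. 352x240",
    "slice":      "a run of macroblocks, a resynchronization point",
    "macroblock": "16x16 luminance samples plus chrominance",
    "block":      "8x8 samples, the unit of the DCT",
}
for name, role in layers.items():
    print(f"{name:>10}: {role}")
[/code]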
As of March 1995, the MPEG-2 volume consists of a total of 9 parts under ISO/IEC 13818. Part 2 was jointly developed with the ITU-T, where it is known as Recommendation H.262. The full title is "Information Technology - Generic Coding of Moving Pictures and Associated Audio" (ISO/IEC 13818). The first five parts are organized in the same fashion as MPEG-1 (Systems, Video, Audio, Conformance, and Software). The four additional parts are listed below:
Part 6 Digital Storage Medium Command and Control (DSM-CC): provides a syntax for controlling VCR-style playback and random-access of bitstreams encoded onto digital storage mediums such as compact disc. Playback commands include Still frame, Fast Forward, Advance, Goto.
Part 7 Non-Backwards Compatible Audio (NBC): addresses the need for a new syntax to efficiently de-correlate discrete multichannel surround sound audio. By contrast, MPEG-2 audio (13818-3) attempts to code the surround channels as ancillary data to the MPEG-1 backwards-compatible Left and Right channels. This allows existing MPEG-1 decoders to parse and decode only the two primary channels while ignoring the side channels (parse to /dev/null). This is analogous to the Base Layer concept in MPEG-2 Scalable video ("decode the base layer, and hope the enhancement layer will be a fad that goes away."). NBC candidates included non-compatible syntaxes such as Dolby AC-3. The final NBC document is not expected until 1996.
Part 8 10-bit video extension: introduced in late 1994, this extension to the video part (13818-2) describes the syntax and semantics for coded representation of video with 10 bits of sample precision. The primary application is studio video (distribution, editing, archiving). Methods have been investigated by Kodak and Tektronix which employ Spatial scalability, where the 8-bit signal becomes the Base Layer, and the 2-bit differential signal is coded as an Enhancement Layer. The final document is not expected until 1997 or 1998.
[Part 8 has been withdrawn due to lack of interest by industry]
Part 9 Real-time Interface (RTI): defines a syntax for video on demand control signals between set-top boxes and head-end servers.
5. CONSTANT AND VARIABLE BITRATE STREAMS

Constant bitrate streams are buffer-regulated to allow continuous transfer of coded data across a constant-rate channel without causing an overflow or underflow of the buffer on the receiving end. It is the responsibility of the encoder's rate control stage to generate bitstreams which prevent buffer overflow and underflow. Constant bit rate encoding can be modeled as a reservoir: variable-sized coded pictures flow into the bit reservoir, but the reservoir is drained at a constant rate into the communications channel.
The most challenging aspect of a constant rate encoder is, yes, to maintain a constant channel rate (without overflowing or underflowing a buffer of fixed depth) while maintaining constant perceptual picture quality.
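The reservoir model fits in a few lines of code. Below is a minimal leaky-bucket sketch assuming White Book style numbers (1.15 Mbit/sec channel, 30 pictures/sec, and the 40 KB VBV buffer of the Constrained Parameters point); the picture sizes are invented:

[code]
# Encoder-side bit reservoir: variable-size coded pictures flow in, the
# channel drains at a constant rate; an overflow means rate control
# must quantize more coarsely.
def reservoir_ok(picture_bits, channel_rate=1.15e6, fps=30,
                 buffer_bits=327_680):
    fill = 0.0
    drain_per_picture = channel_rate / fps
    for bits in picture_bits:
        fill += bits                                # picture enters reservoir
        if fill > buffer_bits:
            return False                            # overflow
        fill = max(fill - drain_per_picture, 0.0)   # constant-rate drain
    return True

print(reservoir_ok([40_000, 30_000, 35_000, 45_000] * 10))  # True: bounded
print(reservoir_ok([400_000]))                              # False: overflow
[/code]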
In the simplest form, variable rate bitstreams do not obey any buffer rules, but will maintain constant picture quality. Constant picture quality is easiest to achieve by holding the macroblock quantizer step size constant, e.g. a quantiser_scale_code of 8 (linear) or 12 (non-linear, MPEG-2). In its most advanced form, a variable bitrate stream may be more difficult to generate than a constant bitrate stream. In "advanced" variable bitrate streams, the instantaneous (piece-wise) bit rate may be controlled by factors such as:
1. local activity measured against activity over large time intervals (e.g. the full span of a movie, as is the case for DVD), or...
2. instantaneous bandwidth availability of a communications channel (as is the case of Direct Broadcast Satellite).
Summary of bitstream types:

• Constant-rate: fixed-rate communications channels such as the original Compact Disc, digital video tape, single-channel-per-carrier broadcast signals, hard disk storage.
• Simple variable-rate: software decoders where the bitstream buffer (VBV) is the storage medium itself (very large); the macroblock quantization scale is typically held constant over a large number of macroblocks.
• Complex variable-rate: statistical multiplexing (multiple-channel-per-carrier broadcast signals); compact discs and hard disks where the servo mechanism can be controlled to increase or decrease the channel delivery rate; networked video where the overall channel rate is constant but demand is variably shared by multiple users; bitstreams which achieve average rates over very long time spans.
6. STATISTICAL MULTIPLEXING
In the simplest coded bitstream, a PCM (pulse code modulated) digital signal, all samples have an equal number of bits. Bit distribution in a PCM image sequence is therefore not only uniform within a picture (bits distributed along zero dimensions), but is also uniform across the full sequence of pictures.
Audio coding algorithms such as MPEG-1's Layer I and II are capable of distributing bits over a one dimensional space, spanned by a "frame." In block-based still image compression methods which employ 2-D transform coding methods, bits are distributed over a 2 dimensional space (horizontal and vertical) within the block. Further, blocks throughout the picture may contain a varying number of bits as a result, for example, of adaptive quantization. For example, background sky may contain an average of only 50 bits per block, whereas complex areas containing flowers or text may contain more than 200 bits per block. In the typical adaptive quantization scheme, more bits are allocated to perceptually more complex areas in the picture. The quantization stepsizes can be selected against an overall picture normalization constant, to achieve a target bit rate for the whole picture. An encoder which generates coded image sequences comprised of independently coded still pictures, such as Motion JPEG video or MPEG Intra picture sequences, will typically generate coded pictures of equal bit size.
MPEG non-intra coding introduces the concept of the distribution of bits across multiple pictures, augmenting the distribution space to 3 dimensions. Bits are now allocated to more complex pictures in the image sequence, normalized by the target bit size of the group of pictures, while at a lower layer, bits within a picture are still distributed according to more complex areas within the picture. Yet in most applications, especially those of the Constant Bitrate class, a restriction is placed in the encoder which guarantees that after a period of time, e.g. 0.25 seconds, the coded bitstream achieves a constant rate (in MPEG, the Video Buffer Verifier regulates the variable-to-constant rate mapping). The mapping of an inherently variable bitrate coded signal to a constant rate allows consistent delivery of the program over a fixed-rate communications channel.
Statistical multiplexing takes the bit distribution model to 4 dimensions: horizontal, vertical, temporal, and the program axis. The 4th dimension is enabled by the practice of multiplexing multiple programs (each, for example, with respective video and audio bitstreams) on a common data carrier. In the Hughes DSS system, a single data carrier is modulated with a payload capacity of 23 Mbit/sec, but a typical program will be transported at an average bit rate of 6 Mbit/sec. In the 4-D model, bits may be distributed according to the relative complexity of each program against the complexities of the other programs on the common data carrier. For example, a program undergoing a rapid scene change will be assigned the highest bit allocation priority, whereas a program with a near-motionless scene will receive the lowest priority, or fewest bits.
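A toy version of that fourth-dimension allocation: share a fixed carrier payload among programs in proportion to their momentary complexity. The 23 Mbit/sec payload is from the DSS example above; the complexity numbers are invented:

[code]
# Statistical multiplexing: split a fixed carrier payload among programs
# according to their relative instantaneous complexity.
def allocate(carrier_mbps, complexities):
    total = sum(complexities)
    return [carrier_mbps * c / total for c in complexities]

# Program 0 hits a rapid scene change; program 2 is near motionless.
shares = allocate(23.0, [9.0, 3.0, 0.5, 3.5])
print([f"{s:.1f} Mbit/s" for s in shares])   # the scene change gets the most
[/code]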

7. MPEG COMPRESSION
Here are some typical statistical conditions addressed by specific syntax and semantic tools:
1. Spatial correlation: transform coding with the 8x8 DCT (see the runnable sketch after this list).

2. Human Visual Response---less acuity for higher spatial frequencies: lossy scalar quantization of the DCT coefficients.

3. Correlation across wide areas of the picture: prediction of the DC coefficient in the 8x8 DCT block.
4. Statistically more likely coded bitstream elements/tokens: variable length coding of macroblock_address_increment, macroblock_type, coded_block_pattern, motion vector prediction error magnitude, DC coefficient prediction error magnitude.
5. Quantized blocks with sparse matrices of quantized DCT coefficients: end_of_block token (variable-length symbol).
6. Spatial masking: macroblock quantization scale factor.
7. Local coding adapted to overall picture perception (content dependent coding): macroblock quantization scale factor.
8. Adaptation to local picture characteristics: block based coding, macroblock_type, adaptive quantization.
9. Constant stepsizes in adaptive quantization: new quantization scale factor signaled only by special macroblock_type codes. (adaptive quantization scale not transmitted by default).
10. Temporal redundancy: forward, backwards macroblock_type and motion vectors at macroblock (16x16) granularity.
11. Perceptual coding of macroblock temporal prediction error: adaptive quantization and quantization of DCT transform coefficients (same mechanism as Intra blocks).
12. Low quantized macroblock prediction error: "No prediction error" for the macroblock may be signaled within macroblock_type. This is the macroblock_pattern switch.
13. Finer granularity coding of macroblock prediction error: each of the blocks within a macroblock may be coded or not coded. Selective on/off coding of each block is achieved with the separate coded_block_pattern variable-length symbol, which is present in the macroblock only if the macroblock_pattern switch has been set.
14. Uniform motion vector fields (smooth optical flow fields): prediction of motion vectors.
15. Occlusion: forwards or backwards temporal prediction in B pictures. Example: an object becomes temporarily obscured by another object within an image sequence. As a result, there may be an area of samples in a previous picture (forward reference/prediction picture) which has similar energy to a macroblock in the current picture (thus it is a good prediction), but no areas within a future picture (backward reference) are similar enough. Therefore only forwards prediction would be selected by macroblock type of the current macroblock. Likewise, a good prediction may only be found in a future picture, but not in the past. In most cases, the object, or correlation area, will be present in both forward and backward references. macroblock_type can select the best of the three combinations.
16. Sub-sample temporal prediction accuracy: bi-linearly interpolated (filtered) "half-pel" block predictions. Real-world motion displacements of objects (correlation areas) from picture to picture do not fall on integer pel boundaries, but somewhere in between. Half-pel interpolation attempts to track the true object displacement to within one order of approximation, often improving compression efficiency by at least 1 dB.

17. Limited motion activity in P pictures: skipped macroblocks. A macroblock may be skipped when the motion vector is zero for both the horizontal and vertical components and no quantized prediction error for the current macroblock is present. Skipped macroblocks are the most desirable element in the bitstream since they consume no bits, except for a slight increase in the bits of the next non-skipped macroblock.
18. Co-planar motion within B pictures: skipped macroblocks. A macroblock may be skipped when the motion vector is the same as the previous macroblock's and no quantized prediction error for the current macroblock is present.
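As promised at item 1, here is a runnable sketch of the first two tools, together with the sparsity that end_of_block (item 5) exploits: an 8x8 DCT followed by uniform scalar quantization of a nearly flat block. The sample values and the stepsize of 16 are assumptions for illustration only:

[code]
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix: C[k, n] = s(k) * cos(pi*(2n+1)*k/(2N)).
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

block = np.full((N, N), 128.0)        # flat area ("background sky")
block[:, 4:] += 40.0                  # a single vertical edge
coeffs = C @ block @ C.T              # separable 2-D DCT
quantized = np.round(coeffs / 16.0)   # uniform stepsize 16 (illustrative)
print(int(np.count_nonzero(quantized)), "of 64 coefficients survive")
[/code]

Most of the 64 coefficients quantize to zero, which is exactly the condition the end_of_block token is designed to cut short.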

CONCLUSION
The importance of a widely accepted standard for video compression is apparent from the fact that manufacturers of computer games, CD-ROM movies, digital television, and digital recorders (among others) implemented and began using MPEG-1 even before it was finally approved by the international committee.
The MPEG standards have since gained international acceptance and created a revolution in the field of digital video, a position they still maintain.

CONTENTS

1 INTRODUCTION
2 MPEG VIDEO SYNTAX
3 MPEG MYTHS
4 THE MPEG DOCUMENT
5 CONSTANT AND VARIABLE BITRATE STREAMS
6 STATISTICAL MULTIPLEXING
7 MPEG COMPRESSION
8 CONCLUSION


Submitted by
ABINS ABBAS


ACKNOWLEDGEMENT
I express my sincere gratitude to Reenu Joseph, Prof. and Head, Department of Computer Engineering, Government Polytechnic College, Muttom, for his cooperation and encouragement.
I would also like to thank my seminar guide, Asst. Prof. Jose James (Department of CTE), for his invaluable advice and wholehearted cooperation, without which this seminar would not have seen the light of day.
Gracious gratitude to all the faculty of the department and friends for their valuable advice and encouragement.
#2
MPEG Encoding Basics

ABSTRACT

A compression encoder works by identifying the useful part of a signal, which is called the entropy, and sending this to the decoder. The remainder of the signal is called the redundancy because it can be worked out at the decoder from what is sent. Video compression relies on two basic assumptions. The first is that human sensitivity to noise in the picture is highly dependent on the frequency of the noise. The second is that even in moving pictures there is a great deal of commonality between one picture and the next. Data can be conserved both by raising the noise level where it is less visible and by sending only the difference between one picture and the next.

In a typical picture, large objects result in low spatial frequencies whereas small objects result in high spatial frequencies. Human vision detects noise at low spatial frequencies much more readily than at high frequencies. The phenomenon of large-area flicker is an example of this. Spatial frequency analysis also reveals that in many areas of the picture, only a few frequencies dominate and the remainder are largely absent. For example, if the picture contains a large, plain object, high frequencies will only be present at the edges. In the body of a plain object, high spatial frequencies are absent and need not be transmitted at all.

In MPEG, two-dimensional spatial frequency analysis is performed using the Discrete Cosine Transform (DCT). An array of pixels, typically 8 x 8, is converted into an array of coefficients. The magnitude of each coefficient represents the amount of a particular spatial frequency which is present. Fig. 1 shows that in the resulting coefficient block, the coefficient in the top left corner represents the DC component, or average brightness, of the pixel block. Moving to the right, the coefficients represent increasing horizontal spatial frequency. Moving down, the coefficients represent increasing vertical spatial frequency. The coefficient in the bottom right hand corner represents the highest diagonal frequency.
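A quick numerical check of the plain-object claim above, using NumPy and SciPy (a sketch; the 8x8 size and the sample value of 100 are arbitrary):

[code]
import numpy as np
from scipy.fft import dctn

# For a perfectly plain 8x8 block, only the DC coefficient in the top
# left corner of the coefficient block is nonzero.
plain = np.full((8, 8), 100.0)
coeffs = dctn(plain, norm="ortho")              # 2-D DCT-II
print(int(np.count_nonzero(np.round(coeffs))))  # 1: just the DC term
print(round(coeffs[0, 0]))                      # 800 = average brightness x 8
[/code]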
