Introduction to Data Compression, Fourth Edition (The Morgan Kaufmann Series in Multimedia Information and Systems)
Format: PDF / Kindle (mobi) / ePub
Each edition of Introduction to Data Compression has been widely considered the best introduction to and reference text on the art and science of data compression, and the fourth edition continues in this tradition. Data compression techniques and technology are ever-evolving, with new applications in image, speech, text, audio, and video. The fourth edition includes all the cutting-edge updates the reader will need in the workplace and in the classroom.
Khalid Sayood provides an extensive introduction to the theory underlying today’s compression techniques, with detailed instruction for their application and several examples to explain the concepts. Encompassing the entire field of data compression, Introduction to Data Compression covers lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context-based compression, and scalar and vector quantization. The book imparts a working knowledge of data compression, giving the reader the tools to develop a complete and concise compression package upon its completion.
- New content added to include a more detailed description of the JPEG 2000 standard
- New content includes speech coding for internet applications
- Explains established and emerging standards in depth, including JPEG 2000, JPEG-LS, MPEG-2, H.264, JBIG2, ADPCM, LPC, CELP, MELP, and iLBC
- Source code is provided via a companion web site, giving readers the opportunity to build their own algorithms and to choose and implement techniques in their own applications
Simplified form of the LMS algorithm: Equations (67)–(70), where the quantities in (69) and (70) are defined. The coefficients are updated using Equation (71). Notice that in the adaptive algorithms we have replaced products of reconstructed values, and products of quantizer outputs, with products of their signs. This is computationally much simpler and does not lead to any significant degradation of the adaptation process. Furthermore, the values of the coefficients are selected such that multiplication with these values can be carried out using simple shift operations.
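The sign-sign simplification described above can be sketched as follows. This is a minimal illustration, not the book's implementation: it assumes a short FIR predictor, omits the quantizer (so the reconstruction equals the input), and all names and the step size are illustrative.

```python
import numpy as np

def sign_sign_lms_predict(x, order=2, alpha=1 / 64):
    """Adaptive prediction with a sign-sign LMS update (sketch).

    Instead of a_i += alpha * d_n * xhat_{n-i}, the product of the
    prediction error and the past reconstructed value is replaced by
    the product of their signs, so each update needs no multiplies;
    with alpha a power of two, the scaling is a bit shift.
    """
    a = np.zeros(order)        # predictor coefficients
    xhat = np.zeros(order)     # past reconstructed values (no quantizer here)
    errors = []
    for xn in x:
        p = a @ xhat           # prediction
        d = xn - p             # prediction error
        errors.append(d)
        # sign-sign update: products replaced by products of signs
        a += alpha * np.sign(d) * np.sign(xhat)
        xhat = np.roll(xhat, 1)
        xhat[0] = xn           # lossless loop: reconstruction == input
    return np.array(errors)
```

On a stationary input the coefficients drift toward values that drive the prediction error to zero, one small step per sample, which is why the adaptation is slower but far cheaper than full LMS.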
Filtering all four rows, we get the block shown in Table 19.3 (After filtering the rows). Now repeat the filtering operation along the columns; the final block is shown in Table 19.4 (Final). Notice how much more homogeneous this last block is compared to the original block. This means that it will most likely not introduce any sharp variations into the difference block, and the high-frequency coefficients in the transform will be closer to zero, leading to better compression. ♦
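The row-then-column filtering can be sketched as below. This assumes the separable 1-2-1 low-pass kernel of the H.261 loop filter, with border pixels left unfiltered; the actual table values from the text are not reproduced, and the function names are illustrative.

```python
import numpy as np

def smooth_rows(block):
    """Apply the 1-2-1 low-pass filter along each row.

    Border pixels are copied unchanged, as in the H.261 loop filter.
    """
    out = block.astype(float).copy()
    out[:, 1:-1] = (block[:, :-2] + 2 * block[:, 1:-1] + block[:, 2:]) / 4.0
    return out

def loop_filter(block):
    """Filter the rows, then the columns (rows of the transpose)."""
    return smooth_rows(smooth_rows(block).T).T
```

Because the kernel is separable, filtering rows and then columns is equivalent to one 3x3 low-pass filter, which is what makes the filtered block more homogeneous.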
sequence starting at, or close to, some arbitrary point in the sequence. A similar situation exists in broadcast applications: viewers do not necessarily tune into a program at the beginning; they may do so at any random point in time. In H.261, each frame after the first may contain blocks that are coded using predictions from the previous frame. Therefore, to decode a particular frame in the sequence, we may have to decode the sequence starting at the first frame. One
prediction, 593
Mixed Raster Content (MRC), 211
MMR. See Modified modified READ
Model-based coding, 649
    AU, 650
    global motion and local motion, 650
    three-dimensional, 650
Modified discrete cosine transform (MDCT), 439, 577, 587
    frames, 580
    reconstructed sequence from 10 DCT coefficients, 579
    source output sequence, 578
    transformed sequence, 578
    window function, 577–579
Modified Huffman (MH), 200
Modified modified READ (MMR), 203
Modified quantization mode, 663
Modified READ (MR),
picking the best result out of eight for the old JPEG. In practice, this would mean trying all eight JPEG predictors and picking the best. On the other hand, both CALIC and the new JPEG standard are single-pass algorithms. Furthermore, because both CALIC and the new standard are able to function in multiple modes, both perform very well on compound documents, which may contain text along with images.

7.5 Prediction Using Conditional Averages

Both of the predictive schemes we have