Data compression substitutes repetitive data in a bit stream with fewer bits that are then interpreted,
or decompressed, by the receiving device. Later in this book, we will present a more detailed
example of data compression; for this introduction, it is sufficient to know that compression
allows fewer transmitted bits to represent all of the bits needed to reconstruct the message
accurately. One of the more common compression systems today is V.42bis, which is based
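The idea of replacing repetitive data with fewer bits can be illustrated with a much simpler scheme than V.42bis: run-length encoding, which replaces each run of a repeated byte with a count and the byte value. This is only an illustrative sketch, not the method used by any of the standards discussed here; the function name is ours.

```python
def run_length_encode(data: bytes) -> list[tuple[int, int]]:
    """Replace each run of a repeated byte with a (count, byte) pair."""
    runs: list[tuple[int, int]] = []
    for b in data:
        # Extend the current run if this byte repeats it (cap count at 255
        # so each pair still fits in two bytes).
        if runs and runs[-1][1] == b and runs[-1][0] < 255:
            runs[-1] = (runs[-1][0] + 1, b)
        else:
            runs.append((1, b))
    return runs
```

For example, `run_length_encode(b"AAAABBBC")` yields `[(4, 65), (3, 66), (1, 67)]`: eight input bytes become three pairs, and the receiver can reconstruct the original stream exactly by expanding each pair.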
on the theoretical works of Professors Jacob Ziv and Abraham Lempel at Technion University
in Israel. We visited Technion in 1984 and were extremely impressed with their facilities and the
technical capabilities of their students. At that time, they had perfected systems that could convert
English text to Hebrew text, and they could integrate both texts into a single document. To
better understand how impressive this was, consider that this was happening the same year as
the first Apple Macintosh release.
The work of Ziv and Lempel was used by Terry Welch to develop the LZW (Lempel-Ziv-Welch)
algorithm, named to honor the three men. The LZW process parses the input into a table of
strings: the table starts with 256 entries, one for each possible single character, and the
encoder repeatedly finds the longest sequence already in the table, outputs that sequence's
code, and adds the sequence extended by one more character to the table as a new entry. As the
table fills with longer and longer strings, each output code stands for more of the input,
which subsequently increases the benefits of the compression.
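The LZW encoding step described above can be sketched as follows. This is a minimal illustration of the core table-building loop, not the exact variant used in V.42bis (which adds details such as code-width management and table-size limits); the function name and output format are ours.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Encode a byte string as a list of LZW codes."""
    # The table starts with the 256 single-byte strings (codes 0-255).
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    out: list[int] = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            # Keep extending the current match while it is still in the table.
            current = candidate
        else:
            # Emit the code for the longest known string, then add the
            # longer string to the table under the next free code.
            out.append(table[current])
            table[candidate] = next_code
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out
```

Running `lzw_compress(b"ABABABA")` produces `[65, 66, 256, 258]`: seven input characters become four codes, because the strings "AB" and "ABA" enter the table and are each later emitted as a single code.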
V.44 is the latest compression standard approved by the ITU and is included with the V.92
standard. V.42bis was created about 10 years ago, so it wasn’t designed with the Internet in
mind. V.44 was, and it is therefore much more efficient at compressing web pages—up to
100 percent more efficient in some cases.