
Abstract

The use of computer technology in various fields has simplified human work, but it has also produced large amounts of
digital data. The challenge is managing that data, i.e. storing
and retrieving it. People are sharing, transmitting, and storing millions of
images every moment. Image compression reduces the number of bits used to
represent an image without substantially changing its visual appearance.
The procedure of reducing data size without losing the crucial information is
known as data compression; it is done chiefly to occupy less memory and to
increase the effective capacity of storage devices. There are various data
compression techniques, which can be classified into two types: lossy and
lossless compression. In this paper some of the lossless image compression
techniques are discussed in detail.


Introduction

Digital images have become popular for transferring,
sharing, and storing visual information, and hence high-speed compression
techniques are needed. The most important goal is to reduce the time taken to
transmit images. Data compression, and image compression in particular, plays a
very crucial role in the field of multimedia computer services and other
telecommunication applications. The field of image compression has a wide
spectrum of methods, ranging from classical lossless techniques and popular
transform approaches to the more recent segmentation-based coding methods.

Lossless Compression Techniques

In lossless compression there is no loss of data, i.e. after decompression the
image is retrieved without any loss of information. The techniques below fall
under lossless compression:

1. Run length encoding
2. Huffman encoding
3. Arithmetic coding
4. Area coding
5. SCZ coding (Simple Compression Utilities and Library)
6. Entropy encoding
7. Delta encoding
8. Dictionary techniques, e.g. LZW (Lempel-Ziv-Welch), a dictionary-based coding:
   a) LZ77
   b) LZ78
   c) LZW
9. Bit plane coding

Run Length Encoding

Run-length encoding (RLE) is a very simple form of data compression in which runs of data
(that is, sequences in which the same data value occurs in many consecutive
data elements) are stored as a single data value and count, rather than as the
original run. This is most useful on data that contains many such runs: for
example, simple graphic images such as icons, line drawings, and animations. It
is not useful on files that don't have many runs, as it could greatly increase
the file size. Run-length encoding performs lossless data compression and is
well suited to palette-based iconic images. It does not work well at all on
continuous-tone images such as photographs, although JPEG uses it quite
effectively on the coefficients that remain after transforming and quantizing
image blocks.
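To make the idea concrete, here is a minimal Python sketch of RLE on one row of pixel values (the function names are ours, for illustration only):

```python
def rle_encode(pixels):
    """Encode a sequence of pixel values as (value, run_length) pairs."""
    encoded = []
    i = 0
    while i < len(pixels):
        run = 1
        while i + run < len(pixels) and pixels[i + run] == pixels[i]:
            run += 1
        encoded.append((pixels[i], run))
        i += run
    return encoded

def rle_decode(pairs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for value, run in pairs:
        out.extend([value] * run)
    return out

row = [255, 255, 255, 0, 0, 255, 255, 255, 255]
print(rle_encode(row))                      # [(255, 3), (0, 2), (255, 4)]
assert rle_decode(rle_encode(row)) == row   # lossless round trip
```

Note how a row with long runs shrinks to three pairs, while a row with no repeats would roughly double in size, which is exactly the failure mode described above.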
Huffman Algorithm

The general idea of the Huffman encoding algorithm is to allocate very short
code-words to the input blocks with high probabilities and long code-words to
those with low probabilities. The Huffman code construction rests on the two
observations below:

a. Symbols that occur more frequently have shorter code-words than symbols that occur less frequently.

b. The two symbols that occur least frequently have code-words of equal length.

The Huffman code is built by repeatedly merging the two least probable symbols,
a process that continues until only one composite symbol remains. A code tree
is thereby obtained, and the Huffman code is generated by labeling that tree.
It is the optimal prefix code for a given set of probabilities and has been
used in many different compression applications. The generated code-words have
varying lengths, each an integral number of bits. This reduces the average
code length, so the compressed data as a whole becomes smaller than the
original. Huffman's algorithm was the first to provide an optimal solution to
the problem of constructing codes with minimum redundancy.
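A compact Python sketch of this construction, using a min-heap to repeatedly merge the two least probable symbols (the identifiers are ours, for illustration):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table {symbol: bit string} from symbol frequencies."""
    freq = Counter(data)
    # Heap entries: (frequency, tie_breaker, {symbol: code_so_far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        # Degenerate case: the input contains a single distinct symbol.
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # least probable subtree
        f2, _, t2 = heapq.heappop(heap)  # second least probable subtree
        merged = {s: "0" + c for s, c in t1.items()}        # label one branch 0
        merged.update({s: "1" + c for s, c in t2.items()})  # and the other 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

print(huffman_codes("abracadabra"))
# e.g. {'a': '0', 'c': '100', 'd': '101', 'b': '110', 'r': '111'}
```

The frequent symbol 'a' receives a 1-bit code while the rare 'c' and 'd' receive 3-bit codes, matching observation (a), and the two rarest symbols end up with equal-length codes, matching observation (b).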
Entropy Based Encoding
In this compression process, the algorithm first counts the frequency of
occurrence of each pixel value in the image. The compression technique then
replaces each pixel value with an algorithm-generated code. These generated
codes are fixed for a given pixel value of the original image and do not
depend on the image content. The generated codes are of variable length, and
each code's length depends on the frequency of the corresponding pixel value
in the original image.
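Since the scheme is driven entirely by the frequency table, a natural companion calculation is the Shannon entropy of that table, which lower-bounds the average code length any such variable-length scheme can reach; a small illustrative sketch (ours, not from the paper):

```python
import math
from collections import Counter

def entropy_bits_per_pixel(pixels):
    """Shannon entropy of the pixel-value distribution, in bits per pixel."""
    freq = Counter(pixels)
    n = len(pixels)
    return -sum((f / n) * math.log2(f / n) for f in freq.values())

row = [0, 0, 0, 0, 255, 255, 128, 0]
print(f"{entropy_bits_per_pixel(row):.3f} bits/pixel")  # about 1.3, far below 8
```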
Arithmetic Coding

Arithmetic
coding is a form of entropy encoding used in lossless data compression.
Normally, a string of characters such as the words “hello there” is
represented using a fixed number of bits per character, as in the ASCII code.
When a string is converted to arithmetic encoding, frequently used characters
are stored with fewer bits and not-so-frequently occurring characters are
stored with more bits, resulting in fewer bits used in total. Arithmetic
coding differs from other forms of entropy encoding such as Huffman coding in
that rather than separating the input into component symbols and replacing each
with a code, arithmetic coding encodes the entire message into a single number.
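A toy float-based encoder and decoder showing the interval-narrowing idea (the structure and names are ours; production coders use integer arithmetic with renormalization to avoid precision loss):

```python
from collections import Counter

def arithmetic_encode(message):
    """Encode the whole message as one number in [0, 1) by narrowing an interval."""
    freq = Counter(message)
    n = len(message)
    ranges, cum = {}, 0.0
    for sym in sorted(freq):            # give each symbol a probability interval
        p = freq[sym] / n
        ranges[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for sym in message:                 # shrink [low, high) symbol by symbol
        s_lo, s_hi = ranges[sym]
        width = high - low
        low, high = low + width * s_lo, low + width * s_hi
    return (low + high) / 2, ranges, n  # any number inside the interval works

def arithmetic_decode(code, ranges, n):
    """Recover the message by locating the code in successive intervals."""
    out = []
    for _ in range(n):
        for sym, (s_lo, s_hi) in ranges.items():
            if s_lo <= code < s_hi:
                out.append(sym)
                code = (code - s_lo) / (s_hi - s_lo)  # rescale and repeat
                break
    return "".join(out)

code, ranges, n = arithmetic_encode("hello there")
assert arithmetic_decode(code, ranges, n) == "hello there"
```

Note that the entire string "hello there" is carried by the single float `code`, together with the shared symbol ranges, rather than by one code per character.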
Delta Encoding

Delta encoding represents a stream of pixels as the difference between the
current pixel and the previous pixel. The first pixel in
the delta encoded file is the same as the first pixel in the original image.
All the following pixels in the encoded file are equal to the difference
(delta) between the corresponding value in the input image, and the previous
value in the input image. In other words, delta encoding increases the
probability that each encoded value is near zero and decreases the
probability that it is far from zero. This skewed distribution is exactly
what Huffman encoding needs in order to work well. If the original signal is
not changing, or is changing in a straight line, delta encoding results in
runs of samples having the same value.
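A minimal sketch of the transform and its inverse (ours, for illustration):

```python
def delta_encode(pixels):
    """Keep the first pixel; every later entry is the difference from its predecessor."""
    return [pixels[0]] + [pixels[i] - pixels[i - 1] for i in range(1, len(pixels))]

def delta_decode(deltas):
    """Rebuild the original by running-summing the differences."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

scanline = [100, 101, 101, 102, 104, 104, 104]
print(delta_encode(scanline))   # [100, 1, 0, 1, 2, 0, 0] -- values cluster near zero
assert delta_decode(delta_encode(scanline)) == scanline
```

The slowly varying scanline turns into small values and repeated zeros, which a follow-up Huffman or run-length pass can compress effectively.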
Area Encoding

This method is a refined form of run-length encoding, and it has some major
advantages over other lossless methods. In constant area coding, special code
words are used to identify large areas of contiguous 1s and 0s. The image is
segmented into blocks, and the blocks are classified as containing only black
pixels, only white pixels, or pixels of mixed intensity. Another variant of
constant area coding uses an iterative approach in which the binary image is
decomposed into successively smaller blocks, from which a hierarchical tree is
built. The subdivision stops when a block reaches a certain predefined size or
when all pixels of a block have the same value; the nodes of this tree are
then coded. For compressing mostly white text, a simpler approach known as
white block skipping is used: blocks containing solid white areas are coded as
0, and all other blocks are coded as 1 followed by their actual bit pattern.
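A small sketch of the white block skipping variant on one binarized row, where 0 means white and 1 means black (the block size and names are our own choices):

```python
def wbs_encode(bits, block=4):
    """White block skipping: '0' for an all-white block,
    '1' followed by the raw bits for any other block."""
    out = []
    for i in range(0, len(bits), block):
        chunk = bits[i:i + block]
        if any(chunk):                          # at least one black pixel
            out.append("1" + "".join(map(str, chunk)))
        else:                                   # solid white: one bit total
            out.append("0")
    return "".join(out)

row = [0, 0, 0, 0,  0, 1, 1, 0,  0, 0, 0, 0]
print(wbs_encode(row))  # "0" + "10110" + "0" = "0101100"
```

The two all-white blocks cost one bit each instead of four, which is where the savings on mostly white pages come from.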
Lempel-Ziv-Welch Coding

Lempel-Ziv-Welch (LZW) is a universal
lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and
Terry Welch. It was published by Welch in 1984 as an improved implementation of
the LZ78 algorithm published by Lempel and Ziv in 1978.

Dictionary-based coding can be static or dynamic. In static dictionary coding,
the dictionary is fixed throughout the encoding and decoding processes; in
dynamic dictionary coding, the dictionary is updated on the fly. The LZW
algorithm is simple to implement and has the potential for very high
throughput in hardware implementations. It was the algorithm behind the widely
used UNIX file compression utility compress. A large English text file can
typically be compressed via LZW to about half its original size.
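A bare-bones sketch of the encoder side (illustrative only; a real implementation also packs the emitted codes into variable-width bit fields):

```python
def lzw_encode(data):
    """LZW compression: grow a dictionary of seen byte strings, emit their codes."""
    dictionary = {bytes([i]): i for i in range(256)}  # start with all single bytes
    next_code = 256
    current = b""
    out = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            out.append(dictionary[current])     # emit code for the longest match
            dictionary[candidate] = next_code   # remember the new string
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(dictionary[current])
    return out

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "codes for 24 input bytes")  # 16: repeated substrings collapse
```

Because the dictionary is rebuilt identically on the decoder side as codes arrive, only the codes themselves need to be transmitted, which is what makes the dynamic dictionary scheme practical.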