The Wavelet Digest, Volume 7, Issue 6


Thesis: Lossless Image Compression using Wavelet Decomposition
Veerarag Ramaswamy (CS) (ramaswam@csee.usf.edu)





Posted: Tue Jun 16, 1998 9:09 pm
Subject: Thesis: Lossless Image Compression using Wavelet Decomposition


Hi, this is the abstract of my Ph.D. dissertation. The complete
dissertation can be downloaded from my home page:
http://figaro.csee.usf.edu/~ramaswam. Some of the published papers are
also available there, in PostScript format.

Title: Lossless Image Compression using Wavelet Decomposition

Abstract: Research advances in wavelet theory and subband coding have
created a surge of interest in wavelet-based applications during the
past decade. Image coding (or compression) is an important application
that has benefited significantly from wavelet theory. Lossless image
coding using wavelet decomposition is the main focus of this
dissertation. Specific contributions include the design of algorithms,
the development of criteria for selecting appropriate wavelets, and
new context models for entropy coding of wavelet coefficients. A
wavelet-decomposed image has intraband and interband correlations that
can be exploited to obtain higher compression. To exploit the
intraband correlation, four prediction-based lossless image
compression schemes are proposed. The schemes combine wavelet
decomposition with variable block size segmentation (VBSS) entropy
encoding. The proposed schemes are evaluated and compared with other
schemes in the literature.
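As background for the prediction-based schemes above: any lossless wavelet scheme needs an integer-to-integer decomposition, and a standard minimal example from the lossless coding literature is the S-transform (integer Haar via lifting). The sketch below is illustrative only; the dissertation's actual decomposition and predictors may differ.

```python
# Minimal sketch of one lossless 1-D wavelet step: the S-transform
# (integer Haar lifting). All arithmetic is exact integer arithmetic,
# so the step is perfectly invertible, as lossless coding requires.
# Function names are illustrative, not taken from the dissertation.

def s_transform_forward(x):
    """Split x (even length) into lowpass s and highpass d, all integers."""
    s, d = [], []
    for i in range(0, len(x), 2):
        a, b = x[i], x[i + 1]
        diff = b - a               # detail (highpass) coefficient
        avg = a + (diff >> 1)      # truncated mean (lowpass) coefficient
        s.append(avg)
        d.append(diff)
    return s, d

def s_transform_inverse(s, d):
    """Exactly invert s_transform_forward."""
    x = []
    for avg, diff in zip(s, d):
        a = avg - (diff >> 1)
        b = a + diff
        x.extend([a, b])
    return x
```

Applying the forward step recursively to the lowpass output (and, for images, along rows and then columns) yields the usual dyadic decomposition, losslessly.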

To exploit the interband correlation, an appropriate data structure is
needed, such as the embedded zerotree proposed by Shapiro. The
embedded zerotree wavelet (EZW) framework for image coding consists of
three stages: (i) a wavelet transform, (ii) embedded zerotree
encoding, and (iii) adaptive arithmetic entropy encoding. In this
framework, the selection of an appropriate wavelet filter plays an
important role in obtaining good compression efficiency. Two new
criteria are proposed for evaluating the performance of wavelets in
lossless image compression: the zerotree count and the monotone
spectral ordering of the subbands produced by the wavelet transform.
Several wavelet filters are evaluated against these criteria, and
experimental results are presented to justify them.
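To make the zerotree-count criterion concrete, here is a hedged sketch of counting zerotree roots at a threshold T. In the EZW sense, a coefficient heads a zerotree when it and all of its descendants across finer scales are insignificant (|c| < T); the dictionary-based pyramid layout and helper names below are assumptions made for illustration.

```python
# Hedged sketch: count maximal zerotree roots in a coefficient pyramid.
# coeff: dict node -> coefficient value; children: dict node -> list of
# child nodes at the next finer scale. Both layouts are illustrative.

def subtree_insignificant(coeff, children, T, node):
    """True if node and its entire descendant subtree satisfy |c| < T."""
    if abs(coeff[node]) >= T:
        return False
    return all(subtree_insignificant(coeff, children, T, c)
               for c in children.get(node, []))

def zerotree_count(coeff, children, T):
    """Count zerotree roots: insignificant subtrees whose parent is not
    itself already inside a larger insignificant subtree."""
    parent = {c: p for p, cs in children.items() for c in cs}
    roots = 0
    for node in coeff:
        if subtree_insignificant(coeff, children, T, node):
            p = parent.get(node)
            if p is None or not subtree_insignificant(coeff, children, T, p):
                roots += 1
    return roots
```

A filter that produces more zerotrees at a given threshold concentrates energy better in the coarse subbands, which is the intuition behind using the count as a selection criterion.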

It is shown that replacing the regular raster scan used in most EZW
algorithms with a z-scan ordering achieves better compression
efficiency: the z-scan exploits the correlation among the transformed
coefficients in a 2 x 2 local neighborhood.

In the three-stage framework, the zerotree coding in the second stage
and the context-modeling-based arithmetic coding in the third stage
play an important role in obtaining good compression efficiency, apart
from the proper choice of wavelet filter. The rest of the dissertation
investigates several approaches to grouping and context modeling. In
the proposed approaches, the set-partitioning-based zerotree coding of
Said and Pearlman is used to split the wavelet coefficients into (i) a
significance map and (ii) a residue map. The significance information
can be coded either bitwise (without any modeling) or as a 4-bit
symbol. The residue map and the symbols corresponding to the
significance map are then encoded using context-based arithmetic
coding. Several experiments on context modeling of the significance
and residue maps, conducted to maximize the compression efficiency of
the EZW-based lossless image coding scheme, are discussed. It was
observed that while context modeling of the residue improves
compression, context modeling of the significance map does not yield
better compression efficiency.
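The z-scan ordering mentioned above, which visits each 2 x 2 neighborhood contiguously, is conventionally obtained by interleaving the bits of the row and column indices (Morton order). The sketch below assumes a square power-of-two subband; it is illustrative, and the dissertation's exact scan may differ in detail.

```python
# Hedged sketch of a z-scan (Morton order) traversal of an n x n block,
# n a power of two: bit-interleaving the row and column indices produces
# keys that follow the Z-shaped space-filling curve, so every 2 x 2
# neighborhood is visited as a contiguous run of four positions.

def morton_key(row, col, bits=16):
    """Interleave the low `bits` bits of row and col into one key."""
    key = 0
    for b in range(bits):
        key |= ((row >> b) & 1) << (2 * b + 1)
        key |= ((col >> b) & 1) << (2 * b)
    return key

def z_scan(n):
    """All (row, col) positions of an n x n block in z-scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: morton_key(*rc))
```

For a 4 x 4 block the scan visits (0,0), (0,1), (1,0), (1,1), then the next 2 x 2 quadrant, and so on, which is exactly the locality the abstract says the z-scan exploits.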
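The 4-bit significance symbols mentioned above can be sketched as follows. That the four bits correspond to one bit per coefficient of a 2 x 2 group (matching the z-scan neighborhoods) is an assumption made here for illustration; the dissertation defines the exact grouping.

```python
# Hedged sketch: pack the significance bits of a 2 x 2 coefficient
# group into one 4-bit symbol (0..15) for context-based entropy coding.
# The one-bit-per-coefficient grouping is an illustrative assumption.

def significance_symbol(group, T):
    """group: the 4 coefficients of a 2 x 2 neighborhood, in scan
    order; T: the current significance threshold."""
    sym = 0
    for bit, c in enumerate(group):
        if abs(c) >= T:          # coefficient is significant at T
            sym |= 1 << bit
    return sym
```

Coding the group as one symbol lets the arithmetic coder learn the joint distribution of the four bits instead of treating them independently, which is the point of the symbol-based option in the abstract.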

If you have any problems downloading, feel free to send me an e-mail
at ramaswam@figaro.csee.usf.edu.

Regards,
Veeru Ramaswamy
USF 30840
4202 E Fowler Avenue
Tampa, FL 33620-3084
E-mail: ramaswam@figaro.csee.usf.edu
http://figaro.csee.usf.edu/~ramaswam