The Wavelet Digest
Volume 4, Issue 4

Question: Denoising Via Wavelet Techniques

Posted: Mon Dec 02, 2002 6:07 pm
Subject: Question: Denoising Via Wavelet Techniques


I am an avid user of Wavelet Packet Laboratory for Windows, a commercially
available package sold by Digital Diagnostic Corporation in Hamden, CT. The
software has allowed me to do extensive research in signal processing of mass
spectrometry data.


I need resources on denoising techniques. The following is my current
perception of wavelet techniques for denoising:

I. What is Denoising?
A. Coherent structure extraction
1. Let's view our signal of interest as being composed of a noisy,
incoherent signal plus an information signal. We would really like to
capture or extract the information signal and discard the noise.

2. The task is to choose a basis in which to represent the information
signal. The basis functions must be chosen so that their correlation with
the information signal is maximized. The noise is then left poorly
correlated with the basis, and of course this leaves it susceptible to
elimination via some nonlinear statistical filter.
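As a tiny illustration of point 2 (my own toy example: the step signal, the
perturbation values, and the matched basis vector are all made up for
demonstration): a basis vector that matches the coherent structure captures
almost all of the signal's energy in a single coefficient, while an
incoherent perturbation correlates only weakly with it.

```python
import numpy as np

# A "coherent" step signal plus a small, fixed incoherent perturbation
# (illustrative values only).
clean = np.concatenate([np.ones(4), -np.ones(4)])
noise = np.array([0.10, -0.07, 0.03, -0.12, 0.05, 0.02, -0.08, 0.11])
noisy = clean + noise

# Normalized Haar-like basis vector matched to the step.
b = np.concatenate([np.ones(4), -np.ones(4)]) / np.sqrt(8.0)

print(abs(noisy @ b))   # large: the coherent part is captured
print(abs(noise @ b))   # small: the noise is poorly correlated with b
```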

3. There is a library of wavelet bases that one can choose from. How does
one determine which basis to choose? One method is to perform an exhaustive
search of the library, selecting the basis with the least cost. By cost we
mean a measure of how efficiently a basis represents our signal, so we
simply select the basis in which our signal has minimum entropy. Now, the
work has been done for us in commercial software packages; all we have to
do is choose the best basis from the set of bases derived from the library.
The best basis gives the smallest-entropy expansion of our signal. We can
also view this as minimizing the distance between our signal and its
orthogonal decomposition on some subspace.
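A minimal sketch of the best-basis search described above, assuming an
orthonormal Haar filter, power-of-two signal lengths, and the additive
Shannon-entropy cost of Coifman and Wickerhauser (the function names are my
own; packages like Wavelet Packet Laboratory do this internally):

```python
import numpy as np

def haar_split(c):
    """One level of the orthonormal Haar transform: smooths and details."""
    s = (c[0::2] + c[1::2]) / np.sqrt(2.0)
    d = (c[0::2] - c[1::2]) / np.sqrt(2.0)
    return s, d

def entropy_cost(c):
    """Additive Shannon-entropy cost; assumes the full signal has unit norm."""
    p = c ** 2
    p = p[p > 1e-12]          # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log(p)))

def best_basis_cost(c, max_level):
    """Minimum entropy over all wavelet-packet bases, compared bottom-up:
    keep a node whole, or split it, whichever costs less."""
    if max_level == 0 or len(c) < 2:
        return entropy_cost(c)
    s, d = haar_split(c)
    child_cost = (best_basis_cost(s, max_level - 1)
                  + best_basis_cost(d, max_level - 1))
    return min(entropy_cost(c), child_cost)

# A constant signal concentrates into a single coefficient deep in the
# packet tree, so its best-basis cost drops to (nearly) zero.
x = np.ones(8) / np.sqrt(8.0)
print(entropy_cost(x), best_basis_cost(x, 3))
```

The real search also records *which* split was cheaper at each node, so the
winning basis can be used for reconstruction; the sketch only returns the cost.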

4. Now that we have a nice representation of our signal in the best basis,
select the coefficients above some threshold t (say, above 0.1%) and
eliminate the others. We then reconstruct the signal from these
coefficients and save it. Ah ha! There may be coherent structures still
hiding in the residual (i.e. buried in the noise, if you will). How do we
know when to stop looking for hidden structures in the residual? Once again
we need some sort of threshold to compare against: we keep extracting
information from the residual as long as the presumed coherent part exceeds
that threshold. We can also just choose to iterate a fixed number of times
(i.e. the old trial-and-error method). Unless you have developed a model
with which you think you understand the nature of the noise, it is then
time for human interpretation (i.e. are you happy with the results that you
see? Do the results make sense? Are the results useful?). I guess what I am
trying to emphasize is that in order to calculate "signal-to-noise" ratios
one must be able to measure the noise. Everyone likes to assume that the
noise is white or Gaussian; sometimes that assumption is a good
approximation. Anyway, enough babbling; just be careful how you interpret
your results. This is not magic.
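The loop in point 4 can be sketched like this (again my own toy
implementation, not any package's API: a plain orthonormal Haar transform on
power-of-two-length signals, a threshold taken as a fraction of the largest
coefficient, and a fixed iteration count rather than a data-driven stopping
rule):

```python
import numpy as np

def haar_fwd(x, levels):
    """Multilevel orthonormal Haar transform; returns [a_L, d_L, ..., d_1]."""
    coeffs, a = [], np.asarray(x, float)
    for _ in range(levels):
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
        a = s
    coeffs.append(a)
    return coeffs[::-1]

def haar_inv(coeffs):
    """Invert haar_fwd exactly (the transform is orthonormal)."""
    a = coeffs[0]
    for d in coeffs[1:]:
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def extract_coherent(x, levels=3, frac=0.001, n_iter=3):
    """Iteratively pull coherent structure out of x, leaving a residual.

    Each pass keeps coefficients whose magnitude exceeds `frac` of the
    largest coefficient, reconstructs that coherent piece, and repeats on
    what is left over."""
    residual = np.asarray(x, float)
    coherent = np.zeros_like(residual)
    for _ in range(n_iter):
        coeffs = haar_fwd(residual, levels)
        tmax = max(np.max(np.abs(c)) for c in coeffs)
        kept = [np.where(np.abs(c) >= frac * tmax, c, 0.0) for c in coeffs]
        piece = haar_inv(kept)
        coherent += piece
        residual = residual - piece
    return coherent, residual
```

By construction, coherent + residual always reproduces the input exactly;
denoising amounts to deciding to keep only the coherent part.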

5. Finally, I would like to describe another denoising technique using
wavelets. This one comes to us via Stanford: a technique known as wavelet
shrinkage, developed by Donoho, Johnstone, and colleagues at Stanford
University. Those guys are very good statisticians, and you know
statisticians are always looking for robustness in both parametric and
non-parametric data analysis. They view denoising as non-parametric
statistical estimation and smoothing, whereas I view these operations as
nonlinear filter theory. Same thing, right? This method starts with the
idea that we want to estimate our signal of interest, which is contaminated
by iid zero-mean Gaussian noise, so they use the additive noise model. The
next step is to model your signal of interest if you have a lot of a priori
information (or any at all) or a nice phenomenological model of your signal
(i.e. you know what's going on with the physics/chemistry of your process).
That is a parametric approach. If, on the other hand, you don't know much
about your signal, then nonparametric methods should be employed. Let's get
back to wavelet shrinkage. Conceptually it is very simple: since noise
affects all wavelet coefficients, by shrinking the coefficients toward zero
we can reduce the noise and simultaneously preserve the features of our
signal (i.e. blurring of any sharp feature is minimized). Now we simply do
the inverse transform and voila! We have a denoised signal. I hope I didn't
lead you on by saying that this was easy. The hard part is to select a
wavelet basis and come up with a statistically based thresholding rule. The
thresholding rule is the means by which you decide how much to shrink the
wavelet coefficients. There are a few thresholding rules developed by
Donoho and his crew, such as thresholding based on SURE (Stein's Unbiased
Risk Estimator), minimax criteria, etc. There are also details concerning
the type of shrinkage function (i.e. hard shrinkage vs. soft shrinkage).
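As a sketch of the shrinkage idea in point 5 (my own toy code, not Donoho's
implementation: one level of a Haar transform, the noise level estimated
from the detail coefficients by the median absolute deviation, and the
"universal" threshold t = sigma * sqrt(2 ln n) applied with soft
shrinkage):

```python
import numpy as np

def soft_shrink(c, t):
    """Soft shrinkage: move every coefficient toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def visu_denoise(x):
    """Universal-threshold shrinkage on one Haar level (even-length input)."""
    x = np.asarray(x, float)
    n = len(x)
    # One level of the orthonormal Haar transform.
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    # Noise scale from the detail coefficients (MAD estimator).
    sigma = np.median(np.abs(d)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(n))   # universal threshold
    d = soft_shrink(d, t)
    # Inverse Haar transform.
    out = np.empty(n)
    out[0::2] = (s + d) / np.sqrt(2.0)
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out
```

Hard shrinkage would instead keep coefficients above t untouched and zero
the rest; a serious implementation would also use a smoother wavelet than
Haar and apply the rule across several decomposition levels.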

Thank You for your help in advance,
Ernest L. Williams Jr.
Battelle Pacific Northwest Labs
Applied Physics Center
(509) 375-3930