
I love YouTube. Every day, more or less on average, I end up spending (or at times wasting) some time there. Today was no exception, yet I was pleasantly surprised to hit upon some videotaped lectures of Richard Hamming. These were apparently recorded in 1995 and cover a wide variety of topics. I didn't get to go through all of them, which hopefully I will do sometime. I particularly liked this one on Discrete Evolution. The depth of knowledge these folks have is immense. What is astonishing is their ability and skill in connecting their area of expertise to a vast range of physical analogies. Long live the internet!

The SODA 2012 paper by Dina Katabi, Piotr Indyk et al. promises a faster way to compute the DFT of sparse signals. This is getting traction from outside the community too, after the MIT broadcast. The well-known FFT, originated by Gauss and largely resurrected by Cooley and Tukey, has time complexity $\mathcal{O}(n\log n)$, which offers significant gains over the conventional DFT complexity of $\mathcal{O}\left(n^2\right)$ when the size $n$ is large.

If we really dissect the FFT complexity, it is already pretty good: with $n$ points to compute, the total cost is proportional to $n$, and the per-point cost is roughly $\log n$.
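As a quick sanity check on that comparison, here is a minimal Python sketch (my own illustration, not from the paper) of the direct $\mathcal{O}(n^2)$ DFT as a matrix-vector product, verified against NumPy's $\mathcal{O}(n \log n)$ FFT:

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) DFT: one length-n inner product per output bin."""
    n = len(x)
    k = np.arange(n)
    # n-by-n matrix of twiddle factors exp(-2*pi*i*j*k/n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
# Both compute the same transform; only the operation count differs.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```

Both routines produce identical results; the FFT simply reorganizes the same computation to reuse partial sums.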

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing the cost by exploiting the signal's inherent sparsity. When the signal is $k$-sparse (i.e., only $k$ of the $n$ coefficients are significantly different from zero), it is natural to ask whether we can get to complexity $\mathcal{O}(k \log n)$, and Katabi, Indyk et al. have indeed reached there. This is quite a remarkable achievement, considering that the gain could be as good as the compressibility limit of most of the real-world signals we deal with today. Signals such as audio and video are the leading candidates in the practical signal processing world, and both are sparse in some transform basis. Recall that the recent compressed sensing results for $k$-sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help realize many of those ideas in a more meaningful way. One good thing is that this generalizes the conventional FFT: it is not just for sparse signals, but holds for any $k$, and in the limit $k \to n$ the complexity is as good as the old mate FFT!
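To make the $k$-sparse notion concrete, here is a small Python sketch (illustrative only; it uses the ordinary FFT, not the sFFT algorithm from the paper) that builds a signal whose spectrum has exactly $k$ nonzero coefficients and then recovers that support:

```python
import numpy as np

n, k = 1024, 4
rng = np.random.default_rng(1)
bins = rng.choice(n, size=k, replace=False)

# Exactly k-sparse spectrum: k nonzero Fourier coefficients, rest zero
X = np.zeros(n, dtype=complex)
X[bins] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Time-domain signal: a sum of k complex exponentials
x = np.fft.ifft(X)

# An ordinary FFT touches all n bins, but only k carry energy --
# this is the structure the sFFT exploits to get O(k log n).
recovered = np.flatnonzero(np.abs(np.fft.fft(x)) > 1e-6)
assert set(recovered) == set(bins)
```

The sketch only shows what sparsity means; the clever part of the new algorithm is locating those $k$ bins without computing all $n$ of them.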

I want to try this out sometime for some communication problem that  I have in mind. At the moment, I am hopeful!

Windowing techniques have always confused me. The hardest part was remembering which is which and how each looks in the frequency domain (the time-domain view, thankfully, I can recollect from the names). I hit upon this yet again while trying to map the theoretical spectrum to the spectrum computed from discrete samples (for an OFDM-modulated signal), and then to the analog spectrum measured by an instrument. (By the way, I figured out that the discrete technique closest to the spectrum computation in a spectrum analyzer is the Daniell method, which for some reason is not in Matlab!)

For ages, I was using the pwelch (now spectrum.pwelch) function, i.e., the Welch method of spectrum estimation, to compute/plot the spectrum in Matlab. Usually some windowing is used as well, to trade variance against resolution. Which window function is more suitable is something I've never mastered. These days, doing a Wikipedia search to find a temporary fix and then moving on is the adopted, yet not entirely satisfying, strategy. The frequency-domain characteristics of the popular window functions are now here for reference, thanks to Marcel Müller. Since I may need it once in a while, I'll keep a copy here for my own selfish quick reference. If you ever need to grab a copy, please be aware that the ownership rests with them.
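For anyone outside Matlab, SciPy's `scipy.signal.welch` plays the same role as pwelch, with the window selectable by name. A minimal sketch (my own toy signal, not from any of the above): two tones, one strong at 100 Hz and one weak nearby, estimated with three different windows; each window trades leakage against main-lobe width differently, but all should locate the dominant tone.

```python
import numpy as np
from scipy import signal

fs = 1000.0                       # sample rate in Hz
t = np.arange(0, 2.0, 1 / fs)

# Strong tone at 100 Hz, weak tone at 120 Hz, plus a little noise
rng = np.random.default_rng(2)
x = (np.sin(2 * np.pi * 100 * t)
     + 0.01 * np.sin(2 * np.pi * 120 * t)
     + 0.001 * rng.standard_normal(t.size))

# Welch PSD estimate with three windows of increasing sidelobe suppression
for window in ("boxcar", "hann", "blackmanharris"):
    f, pxx = signal.welch(x, fs=fs, window=window, nperseg=512)
    peak_hz = f[np.argmax(pxx)]
    print(f"{window:15s} peak near {peak_hz:.1f} Hz")
```

The rectangular (boxcar) window has the narrowest main lobe but the worst leakage; blackmanharris is the opposite, which is why the weak 120 Hz tone is easier to see with it when the level difference is large.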