The SODA 2012 paper by Dina Katabi, Piotr Indyk et al promises a relatively faster way to compute the DFT of sparse signals. It is getting traction from outside the community too, after the MIT broadcast. The well-known FFT, which originated with Gauss and was largely resurrected by Cooley and Tukey, has a time complexity of $\mathcal{O}(n\log n)$, which offers significant gains over the conventional DFT complexity of $\mathcal{O}\left(n^2\right)$ when the size $n$ is large.

If we really dissect the FFT complexity limits, it is already pretty good: with $n$ points to compute, the total cost is proportional to $n$, and the per-point cost is roughly $\log n$.
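The $\mathcal{O}(n^2)$ versus $\mathcal{O}(n\log n)$ gap is easy to see concretely. Here is a minimal sketch (using numpy, my choice for illustration) of the direct DFT, checked against the built-in FFT; both compute the same transform, only the operation counts differ:

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) DFT: one length-n inner product per output bin."""
    n = len(x)
    m = np.arange(n)
    # DFT matrix W[j, l] = exp(-2*pi*i*j*l/n); building and applying it
    # costs O(n^2), in contrast to the FFT's O(n log n).
    W = np.exp(-2j * np.pi * np.outer(m, m) / n)
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# The O(n log n) FFT agrees with the O(n^2) direct computation.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```

For $n$ in the millions, that quadratic matrix product is hopeless while the FFT remains routine, which is exactly why the FFT is the baseline any new scheme must beat.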

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of exploiting the inherent sparsity of the signal. When the signal is $k$-sparse (i.e., only $k$ of the $n$ coefficients are significantly different from zero), it is tempting to ask whether we can indeed get to a complexity of $\mathcal{O}(k \log n)$, and Katabi, Indyk et al have indeed reached there. This is quite a remarkable achievement, considering that the gain could be as good as the compressibility limit of most of the real-world signals we deal with today. Audio and video, the leading candidates in the practical signal processing world, are both sparse in some transform basis. Recall that the recent compressed sensing results for $k$-sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help to realize many of those ideas in a more meaningful way. One good thing is that it generalizes the conventional FFT: it is not just for sparse signals, but holds for any $k$, and in the limit $k \to n$ the complexity is as good as the old mate FFT!
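To make the $k$-sparse setting concrete, here is a small sketch (ordinary numpy, not the sparse FFT algorithm itself) that builds a signal whose spectrum has exactly $k$ nonzero bins; the time-domain samples look dense, yet all the information sits in $k$ frequency coefficients, which is the structure the new scheme exploits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1024, 5

# Construct a spectrum supported on only k of the n bins.
X = np.zeros(n, dtype=complex)
support = rng.choice(n, size=k, replace=False)
X[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# The corresponding time-domain signal looks dense (all samples nonzero),
# but its Fourier transform is exactly k-sparse.
x = np.fft.ifft(X)
X_hat = np.fft.fft(x)

significant = np.flatnonzero(np.abs(X_hat) > 1e-8)
assert set(significant.tolist()) == set(support.tolist())
```

A full FFT spends $\mathcal{O}(n \log n)$ work recovering $n - k$ bins that are (numerically) zero; the sparse transform's promise is to locate and estimate just the $k$ significant ones.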

I want to try this out sometime for a communication problem that I have in mind. At the moment, I am hopeful!