
Through Anand Sarwate’s blog and this piece from Sergio Verdu, I came to know that the well known Information and Coding theorist Jim Massey has passed away. I don’t have any direct personal recollection of Massey, other than seeing him once at an Information theory workshop and again last year at the Marconi award ceremony at Irvine. The thing I always remember (besides the Berlekamp-Massey algorithm and the transform decoding paper) is his lecture notes at ETH. I benefited enormously from his notes on Cryptography when I was trying to learn the topic at EPFL. So lucid, crisp and intuitive was his writing. How I always wished to sit in one of his live lectures! RIP!

I am sure detailed writing on his life and work will appear at some point. I recall Rudi Urbanke once mentioning the early impact of Massey’s work (as a graduate student?) on threshold decoding of convolutional codes, and how it spurred interest in industry. Codex Corporation (of which, I recently learned, he was a co-founder) once wanted to implement it in their line of modems. I am not sure I have all the details intact here, but prior to the Viterbi algorithm, his threshold decoding scheme must have been a big hit in communication! To have industry interested in a graduate student’s work must be special, any day, anywhere!

In his blog, Sergio Verdu has pointed to the IEEE oral history interview archive, which I happened to read around this time last year.

The Information theory society website has further details, including the funeral information.

If you have not seen it yet, a fascinating talk (Cryptography: Science or Magic?) he gave at MIT several years ago is archived here. And who did the speaker introduction? Another true connoisseur, Peter Elias! That was the first time I saw a video of Elias.

This year’s Marconi foundation prize is being awarded to our company founder Henry Samueli. With last year’s prize awarded to another connoisseur, Irwin Jacobs (jointly with the stalwart Jack Wolf), we now have two stellar communication company founders getting the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to the champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer, Synopsys) talks too. Congratulations Dr. Samueli, we are proud of you.

The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant in this field who contributed immensely to many flavours of Information theory will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, all the more, depth.

A tremendous loss! His legacy will continue.

Yesterday evening, during dinner at a restaurant in Hawaii, my colleagues (Eric, Jun and Nihar), a fellow colleague from Motorola (Neycer) and I were having some random ramblings. Somewhere along the way came the topic of the history of OFDM. It was indeed fascinating to trace that history. I did a bit of Googling later on and also dug up some old notes from a discussion with Emre Telatar (who to me is a walking encyclopedia on several things). My information may not be entirely accurate, but roughly this is what I gathered after piecing it all together.

The origin of the OFDM idea as such is largely attributed to Chang (his 1966 paper; the associated patent was granted in 1970). Saltzberg then identified the problem of ISI, and in came the notion of the guard interval. The idea of using the FFT to implement OFDM came from Weinstein and Ebert, back in the summer of 1971. The cyclic prefix, the killer beauty which made OFDM ridiculously easy to equalize, was brought in by Peled and Ruiz in 1980.
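As an aside, the cyclic prefix trick is easy to see in a few lines of toy Python (my own illustrative sketch, with made-up parameter values): prepending the tail of the IFFT output makes the channel’s linear convolution act as a circular one on the payload, so equalization reduces to one complex division per subcarrier.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64     # number of subcarriers (toy value)
L = 4      # channel length; the CP must be at least L - 1 samples
cp = 8     # cyclic prefix length

# Random QPSK-like data, one symbol per subcarrier
data = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)

# Transmitter: IFFT, then prepend the last `cp` samples (the cyclic prefix)
x = np.fft.ifft(data)
tx = np.concatenate([x[-cp:], x])

# Multipath channel: linear convolution with an L-tap impulse response
h = np.array([1.0, 0.5, 0.25, 0.125])
rx = np.convolve(tx, h)

# Receiver: drop the CP (and the convolution tail), then FFT
y = rx[cp:cp + N]
Y = np.fft.fft(y)

# One-tap equalization: divide by the channel's frequency response
H = np.fft.fft(h, N)
data_hat = Y / H

print(np.allclose(data_hat, data))  # True: the CP made the channel circular
```

The point of the sketch is the last two lines: without the cyclic prefix, the channel would mix adjacent OFDM symbols and the per-subcarrier division would not be exact.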

There is some interesting precursor work prior to these official OFDM milestones. Even though it is not directly related, in hindsight we can still see similarities in how the ideas took shape across different eras. For instance, the concept of parallel transmission was realized, even in product form, in 1957 by the Collins Radio Company. It was known as the Kineplex system. And the very idea of splitting into multiple carriers and allocating power has signs of Gallager’s work and even waterfilling :-)

There is a Globecom paper which neatly discusses all of this history, and perhaps more.

 

The 2012 Shannon award will go to Abbas El Gamal.

It came as a shock when I heard of the demise of Rudi Ahlswede, first from here. He passed away in December 2010, just a couple of months after the ITW in Dublin, where he looked as jovial as ever. Interestingly, I had lunch with him on one of those ITW days and it was fun conversing with him. A great loss to Applied math, and to Information theory in particular; with his demise a massive figure has been lost. Very sad indeed. Besides his large contribution to the field, the special things that stand out are his humility, his sense of respect for fellow researchers, and the occasional humour in light conversations. His lectures were jovial and filled with the innocence of an enthused child.

On Christmas day, out of the blue, I stumbled upon an old archive of Robert Fano’s interview (oral history). A beautiful one.

I hadn’t seen this book before. While trying to dig a little deeper into sigma-delta modulation theory, I came across this book by Robert Gray. The book, first published in 1990, hasn’t really become a mainstream reference on source coding, but it is awesome. I haven’t read the whole book, but the chapter on uniform quantization noise is simply a treat for someone who loves the theory. Among other things, it discusses Bennett’s conditions under which the uniform quantization error model is exact. I am now going through the noise analysis of the delta modulation and sigma-delta modulation schemes.
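To give a flavour of the kind of result discussed there (this is my own toy numerical sketch, not taken from the book): under high-resolution, Bennett-style conditions the uniform quantization error behaves as if it were uniform on [-Δ/2, Δ/2], with variance Δ²/12, which is easy to check empirically.

```python
import numpy as np

rng = np.random.default_rng(1)

delta = 0.01                          # quantizer step size (toy value)
x = rng.uniform(-1, 1, size=200_000)  # smooth, bounded input signal

# Uniform (mid-tread) quantizer and its error
q = delta * np.round(x / delta)
e = x - q

# High-resolution prediction: error ~ uniform on [-delta/2, delta/2],
# so its variance should be close to delta^2 / 12
print(e.var(), delta**2 / 12)
```

With a step size this small relative to the input range, the empirical variance lands within a fraction of a percent of the Δ²/12 prediction.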

Initially, I just managed to read nearly a full chapter at books.google.com, but later I convinced myself to get the book from Amazon. After losing out on the 7.5 USD used-book offer, I finally had to content myself with ordering the next cheapest option at 40 USD. I am expecting it to arrive in a few days. A brand new copy would cost 130 bucks anyway!

I am very thrilled to learn that Ruediger Urbanke has won the (Koji) Kobayashi award. He and Tom Richardson are named the 2011 recipients of the famous Kobayashi award, “for developing the theory and practice of transmitting data reliably at rates approaching channel capacity.” They truly deserve this. Looking at the list of earlier Kobayashi award winners, it really is a pantheon of greats: Gottfried Ungerboeck, Don Coppersmith, Rivest, Shamir, Adleman, Jack Wolf, Berlekamp and so on are among the famous awardees of the past.

When I pointed this out to Rudi, he was, as usual, ever modest about it. I am sure I will get a coffee treat from him in Lausanne! Place de la Palud or Ouchy?

The memorial service for Ralf Koetter held at UCSD is archived on video. Quite a few of the stalwarts of the Information theory field found it difficult to control their emotions when they spoke about him. Such was the level of closeness many people had with him. I never got to interact with Ralf directly, but was aware of his stellar contributions to many areas in and around coding. The most notable thing, other than his well known research contributions, was his amazing teaching skill. The two guest lectures he gave during David Forney’s MIT class in 2005 were simply stunning. He talked about Reed-Solomon codes, and that is by far the most lucid presentation of such a difficult topic that I have ever seen. His sudden and untimely demise leaves an irreplaceable void on this planet. He was that good. So woefully cut short by cancer.

Alex Vardy penned a fitting tribute to his friend and colleague.

I had earlier promised an update on Xitip when a Windows setup was ready. Though delayed, I have something to report now. I have finally made a Windows installer for the Xitip (Information theoretic inequality prover) software, which has been working pretty smoothly on Linux, Cygwin and Mac for a while. I was not too keen on making a Windows installer since a few DLL files are involved. Besides, it was a bit painful to include these nasty DLL files, which unnecessarily increase the bundle size. Some of them may not be required if Gtk is already installed on the machine, but anyway I made a double-click style version to suit the layman Windows users in the information theory community.

Vaneet Aggarwal is the one who motivated me to make this, since he uses Windows. He showed interest in using it, should a Windows version be available. If at least one user benefits from it, why not make it? In the process, I got to learn an easy way to produce a Windows setup program. I used the freeware Install Creator to produce it.

I will make the installer available at the Xitip website, but for the time being you can access it from here. A lot of people have suggested revamping the Xitip webpage, which is pretty unclean at the moment. Maybe a short tutorial is in order too. That will take a while; the next two and a half months are out of the equation, since I am pretty busy till then.

It was today. I’ve just come back to the office after the dinner party hosted as part of the I&C anniversary celebrations at EPFL. Andrew Viterbi was the guest of honour, and largely because of his fame there was a considerable crowd attending the function. Martin Vetterli made a nice, colourful, flashy presentation illustrating the history of I&C at EPFL as well as scientific progress in Switzerland. He mentioned names including Jim Massey and Ungerboeck, who are undoubtedly pioneers of modern communication theory and practice. He began by saying “…Ungerboeck is our friend, and now not quite.. I will come to that in a minute…”, and of course he never came back to fill in the circumstance in which the friendship derailed. But I reckon it was a casual remark, perhaps to indicate that Ungerboeck, now with Broadcom, is a bitter rival to Qualcomm. Since Qualcomm recently established a scientific partnership with EPFL, and Viterbi is a Qualcomm founder and associate, he perhaps just dropped that remark. It was a nice, as usual interesting, presentation by Martin.

He also told a nice story about the current EPFL president Patrick Aebischer. Interestingly, Patrick Aebischer, after an MD (medical science) degree, was fond of computer science and decided to venture into an MS degree in CS. He decided to test his luck at EPFL and approached the admission committee with a formal application; CS was affiliated with the Math department in those days. EPFL politely rejected his application, and in due course that ended Patrick’s quest for an EPFL CS degree. He then moved to the US as a successful surgeon and took an entirely different career path. Years later, as one would say, due to the uncertain turn of things in the great cycle of life, he became the EPFL president, now ruling not only the CS department but the whole school.

Viterbi talked about the history of Digital Communication. He gave a perspective of the field starting from the days of Maxwell, Rao, Cramér, Wiener and Nyquist, and then discussed the impact of Shannon’s work. He said the three driving forces that made this digital mobile revolution possible were

1) Shannon’s framework (1948)

2) Satellite (Sparked by the Sputnik success in 1957)

3) Moore’s law, which is more of a socio-economic law, and which dramatically kept driving the industry so successfully.

The talk as such wasn’t too attention-grabbing, but he made a rather comprehensive presentation discussing the impact of the digital communication evolution spurred since Shannon’s days (and even earlier), knitting the dramatic success story of a digital wireless world with millions of cell phones and similar devices, which literally showcased the realization of the theoretical promise Shannon made in 1948. He himself has his name etched in part of that success story, at least in the form of the Viterbi algorithm, which (in one of its instances) is an algorithm used to detect sequences perturbed by a medium.

Quite a lot of fun activities were organized by the committee. Since many programs (especially the fun part) were in French, the appeal was considerably lost on non-French speakers. But then the rationale given was that a good percentage of the alumni are French! I found it fun-filled, mainly for seeing successful people like Viterbi share their views in person. After all, we can learn from history. Not many people can claim to have done so well in everything they touched. Viterbi is an academician, researcher, successful entrepreneur and now a venture capitalist, all scaled to the possible limits. An incredible role model, whichever way we look.

Here is an interesting riddle on random matrices.  

(Rank of a Random Binary Matrix). Let R(l,m,k) denote the number of binary matrices of dimension l \times m and rank k, so that by symmetry R(l,m,k)=R(m,l,k). This is a repost of a solution I arrived at (certainly not the first to!) and submitted as part of a homework (9) problem in the doctoral course Modern coding theory (by Rudiger Urbanke) at EPFL. The submitted solution in PDF is available here.

The rank of a matrix G is essentially the number of nonzero rows when G is expressed in echelon form. So we just need to count the ways such matrices can be created with k nonzero rows. Since the elements of the matrix are binary (from \mathbb{F}_{q=2}), we can simply count.

It is trivial to compute R(l,m,k) for k=0 and k>\min(l,m). For k=0, only the all-zero matrix is possible, and there is exactly one such matrix. Hence R(l,m,0)=1. For k>\min(l,m), no such matrix exists, which means R(l,m,k)=0.

Now we consider l=k>0. How many ways? We have l=k nonzero rows of the l\times m matrix, which means all rows must be linearly independent. Without loss of generality, for counting, we can assume the rows are ordered. The last (l-th) row can be chosen in 2^{m}-1 ways, since anything other than the all-zero vector (of size m) is allowed. For the (l-1)-th row, anything other than the all-zero vector and row l itself is allowed: 2^{m}-2 ways. Row l-2 can be anything except a linear combination of rows l and l-1, which leaves 2^m-\left(\binom{2}{0}+\binom{2}{1}+\binom{2}{2}\right)=2^m-2^2 choices. Row l-3 then has 2^m-\left(\binom{3}{0}+\binom{3}{1}+\binom{3}{2}+\binom{3}{3}\right)=2^m-2^3 choices, and so on. In all, following this procedure, we get a total of

= \left(2^m-1\right) \left(2^m-2^1\right)\left(2^m-2^2\right)\ldots \left(2^m-2^{l-1}\right)

=\left(2^m-1\right) 2^{1} \left(2^{m-1}-1\right) 2^{2} \left(2^{m-2}-1\right) \ldots 2^{l-1}\left(2^{m-l+1}-1\right)

=2^{0} 2^{1} 2^{2} \ldots 2^{l-1}\left(2^m-1\right)\left(2^{m-1}-1\right)\left(2^{m-2}-1\right)\ldots\left(2^{m-l+1}-1\right)

=\prod_{i=0}^{l-1}{{2^i}\left(2^{m-i}-1\right)}

=\prod_{i=0}^{l-1}{\left(2^{m}-2^{i}\right)}

=\prod_{i=0}^{l-1}{2^m \left(1-2^{i-m}\right)}

=2^{ml} \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)}

ways. For l>k>0, we can construct a rank k matrix of size l \times m in either of the following ways:

  1.  Take a rank k-1 matrix of size (l-1) \times m and add an independent row.
  2.  Take a rank k matrix of size (l-1) \times m and add a dependent row.

For every (l-1) \times m matrix of rank k-1, an independent new row can be appended in

2^{m}-\left(1+\binom{k-1}{1}+\binom{k-1}{2}+\ldots +\binom{k-1}{k-1}\right)=2^m-2^{k-1}

ways (essentially, we avoid all 2^{k-1} possible linear combinations of the existing k-1 independent rows). Hence the first method gives

R_{1}(l,m,k)= \left(2^m-2^{k-1}\right) R(l-1,m,k-1)

ways. Using the second method (item 2 above), a dependent new row can be chosen in 1+\binom{k}{1}+\binom{k}{2}+\ldots +\binom{k}{k} = 2^k ways, and so

R_{2}(l,m,k)= 2^k R(l-1,m,k)

different rank k matrices can be formed this way. Here the first term (=1) corresponds to picking the all-zero row as the new row. In \binom{k}{1} ways we can pick any one of the k existing independent rows as the dependent (new) row, and in general, for 0\le j\le k, we can combine j existing rows out of k in \binom{k}{j} different ways to form a dependent (new) row.

So, using methods (1) and (2), we get

R(l,m,k)=2^k R(l-1,m,k)+\left(2^m-2^{k-1}\right)R(l-1,m,k-1)

Putting everything together,

R(l,m,k) = \begin{cases} 1, & k=0, \\ 2^{ml} \displaystyle \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)}, & l=k>0, \\ 2^k R(l-1,m,k) + \left(2^m-2^{k-1}\right) R(l-1,m,k-1), & l>k>0, \\ 0, & k>\min(l,m). \end{cases}
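The recursion and the boundary cases are easy to sanity-check numerically. Here is a small Python sketch (my own, with illustrative values of l and m) that also verifies that the ranks partition the set of all 2^{lm} binary matrices and that R(l,m,k)=R(m,l,k):

```python
from functools import lru_cache
from math import prod

@lru_cache(maxsize=None)
def R(l, m, k):
    """Number of l x m binary matrices of rank k, via the recursion above."""
    if k == 0:
        return 1                       # only the all-zero matrix
    if k > min(l, m):
        return 0                       # rank cannot exceed either dimension
    if k == l:
        # full row rank: (2^m - 1)(2^m - 2)...(2^m - 2^(l-1))
        return prod(2**m - 2**i for i in range(l))
    # l > k > 0: extend an (l-1) x m matrix by one row
    return 2**k * R(l-1, m, k) + (2**m - 2**(k-1)) * R(l-1, m, k-1)

# Sanity checks for toy dimensions
l, m = 4, 6
print(sum(R(l, m, k) for k in range(min(l, m) + 1)) == 2**(l*m))   # True
print(all(R(l, m, k) == R(m, l, k) for k in range(min(l, m) + 1)))  # True
```

The first check works because every one of the 2^{lm} matrices has exactly one rank; the second confirms the transpose symmetry claimed at the start.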


While searching for the book Information Theory: Coding Theorems for Discrete Memoryless Systems by Imre Csiszar and Janos Korner, I came across several sites which echo the fact that this book is one of the rarest specimens on earth. In the process, I found a blog forum which lists a whole lot of out-of-print books. This book, as expected, is in demand there already. We can even vote to request that out-of-print books be brought back for a possible reprint. I am not sure how effective this is, but there is no harm in trying! You can suggest any books you may want to see reprinted. To me, this is a whole new and welcome idea; for learning, we should have access to the good books. There are already quite a few requests (see this blog for instance) for the Csiszar and Korner book. Man, the world sometimes thinks alike!


Today, as part of the EPFL annual research day, there were three interesting talks. In the morning, Prakash Narayan gave a very interesting talk titled “Common randomness, multiuser secrecy and tree packing”. Essentially it covered three distinct problems, and he showed a connection among them. The first problem setup is the following: a set of terminals observe separate but correlated signals. The classical Slepian-Wolf formulation of data compression is then essentially the problem in which a subset of the given terminals seeks to acquire the signals observed by all the terminals, by means of efficiently compressed inter-terminal communication. This is a problem of generating common randomness, and it does not involve any secrecy constraints. Now consider a secret key generation problem, where the same subset of terminals seeks to devise “secret” common randomness, a secret key, through public communication which an eavesdropper can observe. The setup is such that the key is concealed from the eavesdropper, and such a secret key can be used for subsequent encryption. Prakash’s talk explained the connection between these two problems, and he went on to establish a connection to a problem in computer science, namely the maximal packing of Steiner trees in an associated multigraph. I don’t think I figured out the details that well, but it triggered some curiosity to read the work in a little more detail. I hope to do that sometime soon.

The afternoon session had two talks. One was by Shamai, who talked about the broadcast approach in communication systems. It went over time. I thought I focused well in the beginning to follow him, but partly because of the post-lunch effect and partly because of tiredness, I lost the flow. From what I understood, he outlined a lot of communication scenarios incorporating the broadcast strategy. Some examples were the MIMO rate-diversity trade-off, ARQ, multilayer schemes etc. A lot of work seems to have gone in this direction, especially by Suhas, Sanket and others (from the citations), as well as David Tse, L. Zheng, Al-Dhahir and Shamai himself. I am somewhat amazed by the areas Shamai has worked on. He seems to have covered a broad spectrum of research and yet produced some stellar work.

After Shamai, there was an interesting talk by Amos Lapidoth. He presented handsomely, and I was attentive enough to follow it. It also happened to be a talk of a different kind. He talked about the well known matched filter used in communication, and he sort of started with a little story: the story of a man from a village, venturing out on a mission to find the meaning of life. He goes to the mountains with a resolve not to come back until he finds it. Days passed, months passed and years passed; even after 10 years, no sign of him. Finally he comes back after 11 years or so. The whole village is curious: aha, he has come back. They ask him: wow, you have figured out the meaning of life; please share with us, what is it? He says, with a pause: Life is (he pauses again)…. The villagers, out of patience, ask him: “Please go on… life is…”. The man completes it: “Life is like a train!”. They ask what he means by “life is like a train”. Then, to the surprise of the entire village, he says, “maybe not!”.

That was simply an amazing prelude for the talk. The talk abstract is the following:
One of the key results of Digital Communications can be paraphrased very roughly as follows: “in guessing which of two deterministic signals is being observed in white Gaussian noise, the inner products between the observed waveform and each of the signals form a sufficient statistic. Consequently, it is optimal to base one’s decision on these two inner products.” It is surprising that this basic result is never formulated as a theorem in any of the textbooks on the subject. This may be because of the difficulties in defining white Gaussian noise, in defining sufficient statistics for waveform observations, and in relating sufficiency to optimal detection. In this talk I shall describe a number of approaches to formulating the above statement as a theorem and point out some of their shortcomings. I will finally describe my proposed approach, formulate the theorem, and prove it from first principles. The proposed approach does not rely on the Ito Calculus, on Brownian Motion, or on generalized stochastic processes. It does not introduce non-physical infinite-power noise processes. Moreover, it is suitable for rigorously treating colored noise.
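The paraphrased result is easy to illustrate with a toy simulation (my own sketch, not from the talk, and with arbitrary signal choices): the receiver bases its decision on the two inner products, with the usual energy-bias correction when the signals have unequal energies.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 256
t = np.linspace(0, 1, n)
s0 = np.sin(2 * np.pi * 5 * t)   # two deterministic candidate signals (toy choice)
s1 = np.sin(2 * np.pi * 9 * t)

def detect(y, s0, s1):
    # Correlator / matched-filter receiver: decide from the two inner
    # products, each corrected by half the signal energy (ML rule in AWGN)
    m0 = y @ s0 - 0.5 * (s0 @ s0)
    m1 = y @ s1 - 0.5 * (s1 @ s1)
    return 0 if m0 > m1 else 1

# Simulate transmissions in white Gaussian noise and count errors
errors, trials = 0, 1000
for _ in range(trials):
    sent = int(rng.integers(2))
    y = (s0 if sent == 0 else s1) + 0.5 * rng.standard_normal(n)
    errors += detect(y, s0, s1) != sent

print(errors / trials)  # small error rate at this noise level
```

Nothing about the raw waveform beyond those two inner products is used, which is exactly the sufficiency claim the abstract wants to turn into a theorem.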

He gave a counterexample where we can do better than the matched filter. He takes Gaussian noise, but chooses a point at random where the noise is forced to zero. Since the null point is randomly chosen, he claims the process is still Gaussian. To me, that would make the SNR blow up to infinity, so are we missing something? I can’t wait to read the full paper version of this. Otherwise, it seems a very, very interesting way to look at the matched filter without needing the usual heavy mathematical machinery.

Anyway, all these talks are available (only the schedule, at the moment) at [1]
[1]http://ic.epfl.ch/page65253-fr.html

Last winter, Etienne Perron, Suhas Diggavi and I together developed a tool suite to prove inequalities in information theory. The tool is adapted from the earlier work of Raymond Yeung and Ying-On Yan at Cornell. We have made it a complete C-based software and removed the Matlab dependency in the back end. There is also a pre-parser (using lex and yacc) built in, to allow flexibility in choosing random variable names. More importantly, a graphical front end was developed (using Gtk), which works well across platforms. Even though the beta version was ready in late 2007, for many reasons, including exhaustive testing (we always find scope for improvement), the release was delayed. Last month, we finally made an official release. The original Xitip project page at IPG has a short description and a pointer to the dedicated Xitip page at EPFL (http://xitip.epfl.ch). A lot of things still need to be done before we can say it is satisfactory. One of the main things pending is the user guide and some kind of exemplified documentation. There is a technical report I have prepared, but it is a bit too technical at the moment. Of course, Raymond Yeung’s amazing papers introducing the theoretical idea behind this prover, and his book, are valuable resources. I have tried to provide a somewhat easier understanding of the concept using some illustrations and toy examples. I hope to put this report in the EPFL repository sometime. The first version of the report discussing the background is available here in PDF form.

Xitip screenshot, the French version


The software is open source. If you would rather not compile and build an executable yourself, then please download the binary executable and just run it; in that case it is just a matter of a double click. We have Linux, Windows, Windows (Cygwin) and Mac versions available. Two different linear programming solvers are used: one is the GNU open source GLPK, and the other is Qsopt (developed at Gatech). The Qsopt version is faster than the GLPK one. Just in case you are keen on a fully open source model, you can use the GLPK [5] version.

Hopefully during this summer we will get to complete the pending work on this project. If any of you happen to find it interesting, please don’t forget to tell us what you thought about the software (comments can be good, bad or ugly!).

As an aside, I had better mention this: Xitip is software useful only for proving (verifying) information theoretic inequalities [7]. Such inequalities contain expressions involving measures such as entropy, mutual information etc. It is a pretty handy tool if you are trying to prove some limiting bounds in information theory. There is a broad classification into Shannon type and non-Shannon type inequalities. Non-Shannon type inequalities are not many, but they exist. Xitip at the moment is equipped to handle only the Shannon type inequalities. You can find more information on this at the Xitip home page [2]
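Just to give a flavour of what Shannon-type inequalities look like, here is a toy numerical check in Python (my own illustration; Xitip itself proves these symbolically, for all distributions at once, via linear programming):

```python
import numpy as np

rng = np.random.default_rng(3)

def H(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A random joint distribution p(x, y) on a 4 x 4 alphabet
pxy = rng.random((4, 4))
pxy /= pxy.sum()
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

# Two basic Shannon-type facts, checked for this particular distribution:
I_xy = H(px) + H(py) - H(pxy.ravel())       # mutual information I(X;Y)
print(I_xy >= -1e-12)                        # I(X;Y) >= 0
print(H(pxy.ravel()) <= H(px) + H(py) + 1e-12)  # H(X,Y) <= H(X) + H(Y)
```

A single random distribution can only confirm, never prove, such an inequality; the point of a prover like Xitip is that it certifies the inequality for every distribution.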

[1]http://ipg.epfl.ch/doku.php?id=en:research:xitip
[2]http://xitip.epfl.ch
[3]http://www2.isye.gatech.edu/~wcook/qsopt/
[4]http://user-www.ie.cuhk.edu.hk/~ITIP/
[5]http://www.gnu.org/software/glpk/
[6]http://en.wikipedia.org/wiki/Information_theory
[7]http://en.wikipedia.org/wiki/Inequalities_in_information_theory
