
Through Anand Sarwate's blog and this piece from Sergio Verdu, I came to know that the well-known information and coding theorist Jim Massey has passed away. I don't have any direct personal recollection of Massey, other than seeing him once at an information theory workshop and again last year at the Marconi award ceremony at Irvine. The one thing I always remember (besides the Berlekamp-Massey algorithm and the transform decoding paper) is his notes at ETH. I benefited enormously from his lecture notes on cryptography when I was trying to learn the topic at EPFL. So lucid, crisp and intuitive were those notes. How I always wished to sit in one of his live lectures! RIP!

I am sure detailed writing on his life and work will appear at some point. I recall Rudi Urbanke once mentioning the early impact of Massey's work (as a graduate student?) on threshold decoding of convolutional codes, which spurred interest in industry. Codex Corporation (of which, I recently learned, he was a co-founder) once wanted to implement it in their line of modems. I am not sure whether I have all the details intact here, but prior to the Viterbi algorithm, his threshold decoding scheme must have been a big hit in communication! To have industry interested in a graduate student's work must be special, any day, anywhere!

In his blog, Sergio Verdu has pointed to the IEEE oral history interview archive, which I happened to read around this same time last year.

The information theory society website has further details, including the funeral information.

If you have not seen it yet, a fascinating talk (Cryptography: Science or Magic?) that he gave at MIT several years ago is archived here. Boy! Who did the speaker introduction? Another true connoisseur, Peter Elias! It was the first time I saw a video of Elias.

I am very thrilled to learn that Ruediger Urbanke has won the (Koji) Kobayashi award. He and Tom Richardson are named the 2010 recipients of the famous Kobayashi award, "for developing the theory and practice of transmitting data reliably at rates approaching channel capacity." They truly deserve this. Looking at the list of earlier Kobayashi award winners, it really reads like a pantheon of greats: Gottfried Ungerboeck, Don Coppersmith, Rivest, Shamir, Adleman, Jack Wolf, Berlekamp and so on are among the famous awardees of the past.

When I pointed this out to Rudi, he was, as usual, very modest about it. I am sure I will get a coffee treat from him in Lausanne! Place de la Palud or Ouchy?

Glanced upon a paper from the 2009 ISIT, titled "LT Codes Decoding: Design and Analysis" by Feng Lu et al. The authors discuss an LT decoding strategy: the peeling decoder (conventional simple LT decoding) followed by an extension based on the well-known Wiedemann algorithm for solving a sparse homogeneous linear system. Here is a brief summary as I understood it.

On packet erasure channels (the internet being one close realization of such a model where, within the TCP/IP suite, packets either get delivered correctly or are lost in the wild), Luby Transform (LT) codes form an easy way to transfer information. It is well known that, asymptotically (when the number of information symbols (or packets) k and the number of received packets r are large), near full recovery is made possible by the simple peeling decoder, whose complexity grows as \mathcal{O}(k). The claim is that, for k > 10000 or so, with r= k(1+\delta), on average we can, with high probability, recover all of the k information packets by simple unwrapping of xor-ed packets. A more precise statement reads: if each coded packet is constructed independently from the k information packets by an xor operation (the number of packets taken for the xor follows a distribution; the plain vanilla version of LT codes uses the Soliton distribution), then the content of the original information source can be recovered from any k+\mathcal{O} \left(\sqrt{k} \ln^{2} \left(\frac{k}{\epsilon}\right)\right) packets with probability 1-\epsilon, using on average \mathcal{O} \left(k \ln \frac{k}{\epsilon}\right) symbol operations. However, when the number of input symbols is not large (say k < 1000), the overhead term \delta is not small. This is not too surprising, since we need sufficiently large k and r for the performance to converge to the expectation. One could instead solve the system of linear equations using Gaussian elimination and get the best figures, but the question mark stays on its \mathcal{O}(k^{3}) complexity compared to the cheaper \mathcal{O}(k) peeling decoder.

Conventional LT decoding based on the peeling rule terminates once the set of degree-1 packets (the ripple, as it is called in LT parlance) is empty. One can hope to continue from there by using Gaussian elimination and decode a few more packets (if the rank of the system permits it); if the original system is full rank, one can technically decode all of them. Gaussian elimination being heavy, one hopes for an easy, preferably recursive method, and in comes Wiedemann: that is what the authors use. They propose what they call full rank decoding, which is essentially LT peeling until there are no more degree-1 packets; from there, they borrow a packet and then use the Wiedemann algorithm, which is somewhat light on computational complexity.
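To make the peeling step concrete, here is a minimal sketch of a peeling decoder (my own illustration, not the authors' code): each received packet carries the xor of a subset of source packets; whenever a degree-1 packet appears, the corresponding source packet is resolved and xor-ed out of the remaining equations.

def peel(received):
    # received: list of (set_of_source_indices, xor_of_those_packets)
    eqs = [(set(idx), val) for idx, val in received]
    recovered = {}
    progress = True
    while progress:
        progress = False
        # harvest the ripple: all current degree-1 equations
        for idx, val in eqs:
            if len(idx) == 1:
                j = next(iter(idx))
                if j not in recovered:
                    recovered[j] = val
                    progress = True
        # peel the recovered packets off the remaining equations
        remaining = []
        for idx, val in eqs:
            for j in idx & recovered.keys():
                val ^= recovered[j]
            idx = idx - recovered.keys()
            if idx:
                remaining.append((idx, val))
        eqs = remaining
    return recovered, eqs   # nonempty eqs means the ripple died out early

# toy example with 3 source packets (single bits here, for simplicity)
rx = [({0}, 1), ({0, 1}, 0), ({1, 2}, 1), ({0, 2}, 1)]
print(peel(rx))   # ({0: 1, 1: 1, 2: 0}, [])

In the paper's full rank decoding, it is exactly at the point where this gets stuck (nonempty leftover equations) that one switches to solving the residual sparse system, e.g. with Wiedemann, instead of giving up or paying the full Gaussian elimination cost.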

The larger question they answer is whether this scheme does any better than the conventional LT peeling decoder. To achieve full recovery we need a full rank matrix. Now, the matrix is a sparse matrix over \mathbb{F}_{2}^{k \times r}, where r is the number of received packets. The probability of the matrix having full rank directly helps us infer the decoding merits. This is an interesting piece, since I had looked into the computation of the rank of a binary matrix and also the probability of a random matrix's rank equalling an arbitrary value; all this was done during the Modern Coding Theory doctoral course by Ruediger Urbanke at EPFL. For a binary random matrix, we can also set up a recursive formula, and thereby a good inference on the probability of decoding can be arrived at.
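As a quick illustration of that inference (a sketch of my own, and for the idealized case of a dense uniform random matrix rather than the sparse LT one): the probability that a random k \times r binary matrix with k \le r has full rank k is \prod_{i=0}^{k-1}\left(1-2^{i-r}\right), and one can see numerically how a few packets of overhead push this probability towards 1.

def p_full_rank(k, r):
    # P(rank = k) for a k x r matrix with i.i.d. uniform bits, k <= r
    p = 1.0
    for i in range(k):
        p *= 1.0 - 2.0 ** (i - r)
    return p

k = 100
for overhead in range(6):
    print(overhead, round(p_full_rank(k, k + overhead), 4))
# climbs from about 0.29 at zero overhead quickly towards 1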

Shrini Kudekar yet again showcased his creativity and acting skills on the eve of Dinkar's public defense. Here is the video.

Today, I attended a very good talk given by Emo Welzl of ETHZ. I could not quite appreciate the drinks and snacks prior to the event, since the organizers kept too few of them and, by the time I arrived, smart guys had grabbed hold of almost all of them. I had to content myself with a glass of orange juice! Anyway, nothing comes free in this country, so getting an orange juice is itself a luxury, one would say! Nevertheless, I am glad that I attended this talk. Monika Henzinger did the speaker introduction, which she did very well. She mentioned that Emo comes from the same village as her husband (Thomas Henzinger). That is not really relevant, but I like such personal, less formal introductions; they make the audience a touch more curious and bring them closer. He indeed proved her right (Monika promised us that we were in for a great talk) with a truly nice lecture: calm, composed and thoughtful, words precisely chosen, well articulated throughout. He gave some insights into a problem that I had never known about. My field is not quite SAT or algorithms, but by the end of this talk I had learned something. Moreover, he instigated me to learn a little more about these nice problems.

Here is a gist of what I understood. If you are interested in the talk's subject, perhaps you should visit his homepage. What I state below is what my little brain, never once trained on this topic, digested. Suppose we are given a Boolean formula (that is, a logic function whose result is either true or false, equivalently 1 or 0). Deciding satisfiability (the SAT problem) of such a formula in conjunctive normal form is known to be an NP-complete problem. He discussed some nice (surprisingly simple) combinatorial bounds on the number of clauses (equivalently, constraints) for unsatisfiability. As usual in talks, I could hardly grasp the proof in total, but he began by quoting the Lovász local lemma as an essential ingredient. I got to learn a little bit about this rather nice and cute lemma. Loosely, the lemma has the following setting.
Consider a sequence of events s_1,s_2,\ldots, s_k, where each event occurs with probability at most p, and each event is independent of all the other events except at most d of them. Then, provided ep(d+1) \le 1, where e is the Napier constant (named after the famous Scottish mathematician John Napier), with positive probability none of the events occur. This did not strike me instantly, but pondering a little about it, I realized that this is really a cute bound. I can think of a nice little example scenario where it applies (see the postscript below). Let me figure out another cute one; you can expect me to post it. Now let me get back to that optimization problem on compound sets of channels that I have been stuck on for the last four days.
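Postscript: the little example I had in mind is the standard k-SAT application of the lemma, which also fits the theme of the talk. Take a CNF formula in which every clause contains exactly k literals, and assign each variable true or false uniformly at random. The event s_i that clause i is violated has probability p = 2^{-k}, and s_i is independent of the events of all clauses sharing no variable with clause i. So if every clause shares variables with at most d \le \frac{2^k}{e}-1 other clauses, then ep(d+1) \le 1, and the lemma says that with positive probability no clause is violated; in other words, such a formula is always satisfiable, no matter how many clauses it has.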

I had earlier promised an update on Xitip when a Windows setup was ready. Though delayed, I have something to say now. I have finally made a Windows installer for the (information theoretic inequality prover) Xitip software, which has been working pretty smoothly on Linux, Cygwin and Mac for a while. I was not too keen on making this Windows installer since a few DLL files are involved. Besides, it was a bit painful to include these nasty DLL files, which unnecessarily increase the bundle size. Some of them may not be required if Gtk is already installed on the machine, but anyway I made one double-click-style version to suit the layman Windows users in the information theory community.

Vaneet Aggarwal is the one who motivated me to make this, since he uses Windows. He showed some interest in using it, should a Windows version be available. If at least one user benefits from it, why not make it? In the process, I got to learn an easy way to produce a Windows setup program. I used the freeware Install Creator to produce it.

I will make this installer available at the Xitip website, but for the time being you can access it from here. A lot of people have suggested revamping the Xitip webpage, which is pretty unclean at the moment. Maybe a short tutorial is impending. That will take a while, though; the next two and a half months are out of the equation, since I am pretty busy till then.

It was today. I've just come back to the office after the dinner party hosted as part of the I&C anniversary celebrations at EPFL. Andrew Viterbi was the guest of honour, and largely because of his fame there was a considerable crowd attending the function. Martin Vetterli made a nice colourful, flashy presentation illustrating the history of I&C at EPFL as well as scientific progress in Switzerland. He mentioned names including Jim Massey and Ungerboeck, who are undoubtedly pioneers of modern communication theory and practice. He began saying "…Ungerboeck is our friend, and now not quite…I will come to that in a minute…", and of course he didn't come back to fill in the circumstance in which the friendship derailed. But I reckon it was a casual remark, perhaps to indicate that Ungerboeck, now with Broadcom, is a bitter rival to Qualcomm. Since Qualcomm recently established a scientific partnership with EPFL, and Viterbi is a Qualcomm founder and associate, he perhaps just dropped that remark. It was a nice, usual, interesting presentation by Martin.

He also mentioned a nice story about the current EPFL president Patrick Aebischer. Interestingly, Patrick Aebischer, after an MD (medical science) degree, was fond of computer science and decided to venture into an MS degree in CS. He decided to test his luck at EPFL and approached the admission committee with a formal application; CS was affiliated with the math department in those days. EPFL politely rejected his application, and in due course that ended Patrick's quest for an EPFL CS degree. He then moved to the US as a successful surgeon and took a career path of an entirely different trace. Years later, as one would say, due to the uncertain turn of things in the great cycle of life, he became the EPFL president, now ruling not only the CS department but the whole school.

Viterbi talked about digital communication history. He started by giving a perspective of this field from the days of Maxwell, Rao, Cramér, Wiener and Nyquist. Then he discussed the impact of Shannon's work. He said the three driving forces which made this digital mobile revolution possible are

1) Shannon’s framework (1948)

2) Satellite (Sparked by the Sputnik success in 1957)

3) Moore's law, which is more of a socio-economic law, and which dramatically kept driving the industry so successfully.

The talk as such wasn't too attention-grabbing, but he made a rather comprehensive presentation discussing the impact of the digital communication evolution spurred since Shannon's days (and even earlier), knitting the dramatic success story of the digital wireless world, with millions of cell phones and similar devices, which showcased literally the realization of the theoretical promise Shannon made in 1948. He himself has his name etched in part of that success story, at least in the form of the Viterbi algorithm, which (in one of its instances) is an algorithm used to detect transmitted sequences perturbed by a medium.

Quite a lot of fun activities were organized by the committee. Since many programs (especially the fun part) were in French, the appeal was considerably lost on non-French speakers; but then the rationale given was that a good percentage of the alumni are French! I found it fun-filled, mainly for seeing successful people like Viterbi sharing their views in person. After all, we can learn from history. Not many people can claim to have done so well in everything they touched. In the case of Viterbi, he is an academician, researcher, successful entrepreneur and now a venture capitalist, all scaled to the possible limits. An incredible role model, whichever way we look.

Today's IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, math department, EPFL) talking about "Diameter of Polyhedra: Limits of Abstraction". I don't think I followed the topic too well, but here is a share of what I understood.

The topic is a convex geometric problem on the diameter of polyhedra. Whether the diameter of a polyhedron is polynomial (in {n} and {d}) or not seems to be a longstanding open problem. The largest diameter {\Delta_{u}(d,n)} of a {d}-dimensional polyhedron with {n} facets has the following known upper and lower bounds:

{n-d+\lfloor d/5 \rfloor \le \Delta_{u}(d,n) \le n^{\log d +1}}.

The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra. What Fritz and his colleagues have done is to look into the gap between these known lower and upper bounds. Apparently the gap is wide, and they have made some progress in the form of a superlinear lower bound when {d} is allowed to grow with {n}.

The way they show this bound is by establishing it for the largest diameter of a graph in a base abstraction family. Let the abstraction family of connected graphs be denoted by {\mathcal{B}_{d,n}}, and let the largest diameter of a graph in {\mathcal{B}_{d,n}} be denoted by {D(d,n)}. They find that {D(d,n) =\Omega\left(n^{3/2}\right)}. Since {\Delta_{u}(d,n) \le D(d,n)}, this superlinear lower bound lives in the abstraction; it does not improve the lower bound for actual polyhedra, but it shows the limits of the abstraction: no argument that applies to the whole abstraction family can prove an upper bound below {n^{3/2}}.

I have not had a chance to see their paper yet. I must say the proof was not all that within my grasp during the talk, but it appeared to be based on some layering and combinatorics. He mentioned some applications to covering problems, in particular to disjoint covering designs, which I didn't follow that well. Sometimes I get the feeling that I am a little dumb to grasp these ideas during a talk; I wonder whether others understand it all that well in a first-shot presentation. I have put it on my agenda (among the millions of other papers to read) to see through this problem and proof one day! His presentation was very clear and legible, though.

Today, during the evening chat, Emmanuel Abbe threw up an interesting question: whether the sum of square roots of consecutive binomial coefficients converges to some closed form. That is, {S(n)=\displaystyle \sum_{k=0}^{n}{\sqrt{\binom{n}{k}}}}. We tried a few known combinatorial tweaks, but no meaningful solution arrived. We also wondered whether this has some asymptotic limit, but that too did not yield anything. A quick check in Mathematica wasn't helpful either. So the question remains: does this sum yield some closed form expression?

While playing with this sum in Mathematica, I found that for the sum of squares of binomial coefficients, there is a nice simple closed form.

{S_{2}(n)=\displaystyle \sum_{k=0}^{n}{{\binom{n}{k}}^{2}}=\binom{2n}{n}}

I was toying with a proof. It turns out that the proof is extremely simple: a one-line tweak of the Vandermonde identity {\binom{p+q}{m}=\displaystyle \sum_{i=0}^{m}{\binom{p}{i}\binom{q}{m-i}}}. Simply substitute {p=q=m=n}, use {\binom{n}{n-i}=\binom{n}{i}}, and we have the result on the table. The natural question then is: is there a generalization of { S_{r}(n)=\displaystyle \sum_{k=0}^{n}{{\binom{n}{k}}^{r}}} for arbitrary {r\in \mathbb{N}_{\ge 1}}? Of course, for {r=1,2} it is trivial.
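For what it is worth, a quick numerical check (my own sketch, in Python rather than Mathematica) confirms the r=2 identity and shows the r=1/2 sum growing with no obvious pattern:

from math import comb, sqrt

def S(n, r):
    # sum of r-th powers of the binomial coefficients C(n,0),...,C(n,n)
    return sum(comb(n, k) ** r for k in range(n + 1))

# r = 2: S_2(n) = C(2n, n), the Vandermonde specialization above
for n in range(1, 12):
    assert S(n, 2) == comb(2 * n, n)

# r = 1/2: just watch the values grow
for n in (5, 10, 20, 40):
    print(n, sum(sqrt(comb(n, k)) for k in range(n + 1)))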

Apparently, it turns out that there is no closed form expression for general {r}. There are, however, some interesting divisibility properties of these sums. An interesting account is given by Neil Calkin (Factors of sums of powers of binomial coefficients).

At the moment, I get the feeling that the sum of fractional powers of binomial coefficients is not trivial. Maybe there is no closed form. Maybe not!

From this blog piece, I came to know that the smart MIT theoretical computer scientist Madhu Sudan is moving from MIT to industry. He is set to take up a research position with Microsoft. In these economically troubled days, lesser mortals would take the conservative route that ensures stability; they would say a move from a tenured professorship to the more volatile industry is risky. But then, one of the smartest minds in the world can have the world revolve around him, if need be. So no surprises here. On the positive side, it is a gain for industry, while it is a big loss for MIT, if Madhu decides to stay away from academia for too long.

Interestingly, on the very same blog, someone commented about other famous moves. Apparently Venkatesan Guruswami, Madhu's celebrated student, is also making a permanent move from UWash to CMU. In industry, we are often associated with frequent hops; academia is not too immune to attrition either. However, I see no harm in making smart moves. It is going to help the world, at least in expectation.

At EPFL too, there is imminent big-fish attrition. Tom Henzinger and his wife Monika Henzinger are about to leave EPFL to take up permanent positions in Austria. The awesome twosome will be missed at EPFL.

Here is an interesting bound (a problem from Rudi's Modern Coding Theory book) regarding the chromatic number of a random graph. I first attempted it during the course. Here is the problem statement:

C.5 (Concentration of the Chromatic Number – Spencer and Shamir). Consider a graph G on n vertices. The chromatic number of a graph G, denoted by \chi(G), is the smallest number of colors needed to color all vertices so that no two vertices which are joined by an edge have the same color. Consider the standard ensemble of random graphs on n vertices with parameter p: to sample from this ensemble, pick n vertices and connect each of the \binom{n}{2} pairs of vertices independently from all other connections with probability p. Show that for this ensemble

\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}\left[\chi(G)\right] \rvert >\lambda \sqrt{n-1}\right) \le 2e^{\frac{-\lambda^2}{2}}.

My solution is as follows (a PDF typeset as a single file is available here; look for problem 3):

Let G be a random graph on n vertices, where p is the probability that a given edge is present; the probability space is G(n,p). We can define a martingale by vertex exposure as follows. Reveal the vertices one at a time, together with all edges joining the newly revealed vertex to previously revealed ones, and set

X_{1}=\mathbb{E}[\chi(G)]

X_{i}=\mathbb{E}[\chi(G)\lvert G_{i}],\quad \forall 1\le i\le n

where

G_{i} is the subgraph induced by the first i vertices, and \chi(G) is the chromatic number of G. Exposing a single vertex reveals no edges, so X_{1}=\mathbb{E}[\chi(G)], while X_{n}=\chi(G). The chromatic number changes by at most one when a new vertex (with all its incident edges) is revealed. Hence \{X_i\} is a martingale with bounded differences \lvert X_{i+1}-X_{i}\rvert \le 1, with n-1 exposure steps in total, and it satisfies the conditions for Azuma's inequality. Let Z_{i}=X_{i}-\mathbb{E}[\chi(G)]. Clearly

\mathbb{E}[Z_i]=\mathbb{E}[X_i]-\mathbb{E}[\chi(G)]=0

Z_n=X_n-\mathbb{E}[\chi(G)]=\chi(G)-\mathbb{E}[\chi(G)]

Now we can use Azuma's inequality on \{Z_i\} over these n-1 bounded-difference steps to get

\mathbb{P}\left(\lvert Z_n-Z_1\rvert \ge \lambda \sqrt{n-1}\right)=\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}[\chi(G)] \rvert \ge \lambda \sqrt{n-1}\right)\le 2e^{\frac{-\lambda^2}{2}}.

Since \left\{\lvert \chi(G)-\mathbb{E}[\chi(G)]\rvert > \lambda \sqrt{n-1}\right\} \subseteq \left\{\lvert \chi(G)-\mathbb{E}[\chi(G)]\rvert \ge \lambda \sqrt{n-1}\right\}, the result

\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}\left[\chi(G)\right] \rvert >\lambda \sqrt{n-1}\right) \le 2e^{\frac{-\lambda^2}{2}} 

follows.
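Out of curiosity, one can also watch this concentration numerically. Here is a small sketch of mine (not part of the solution): sample from G(n,p), compute \chi(G) by brute force (feasible only for very small n), and look at the empirical spread around the mean.

import random
from itertools import product

def chromatic_number(n, edges):
    # smallest c admitting a proper c-coloring (brute force; tiny n only)
    for c in range(1, n + 1):
        for rest in product(range(c), repeat=n - 1):
            coloring = (0,) + rest        # fix vertex 0's color by symmetry
            if all(coloring[u] != coloring[v] for u, v in edges):
                return c

def sample_chi(n, p):
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if random.random() < p]
    return chromatic_number(n, edges)

random.seed(1)
n, p, trials = 8, 0.5, 100
chis = [sample_chi(n, p) for _ in range(trials)]
print("mean:", sum(chis) / trials, "range:", min(chis), max(chis))
# the samples cluster in a very narrow band around the mean,
# in line with the Azuma bound above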

Here is an interesting riddle on random matrices.  

(Rank of a Random Binary Matrix). Let R(l,m,k) denote the number of binary matrices of dimension l \times m and rank k, so that by symmetry R(l,m,k)=R(m,l,k). This is a repost of the solution that I arrived at (certainly not the first to!) and submitted as part of a homework (9) problem in the doctoral course Modern Coding Theory (by Rudiger Urbanke) at EPFL. The submitted solution in PDF is available here.

The rank of a matrix G is essentially the number of nonzero rows when G is expressed in echelon form. So we just need to count the ways these matrices can be created with k independent nonzero rows. Since the elements of the matrix are binary (from \mathbb{F}_{q=2}), we can simply do a counting.

It is trivial to compute R(l,m,k) for k=0 and k>\min(l,m). For k=0, only the all-zero matrix is possible, and only one such matrix exists; hence R(l,m,0)=1. For k>\min(l,m), no such matrix exists, which means R(l,m,k)=0.

Now we consider l=k>0. How many ways? We have l=k nonzero rows in the l\times m matrix, which means all rows must be linearly independent. Without loss of generality, for counting, we can assume the rows are ordered. The first row can be chosen in 2^{m}-1 ways, since anything other than the all-zero vector (of size m) is allowed. For the second row, anything other than a linear combination of the first row is allowed, giving 2^{m}-2 ways. The third row can be anything except a linear combination of the first two rows, which is 2^m-\left({\binom{2}{0}+\binom{2}{1}+\binom{2}{2}}\right)=2^m-2^2 ways. The fourth row then has 2^m-\left(\binom{3}{0}+\binom{3}{1}+\binom{3}{2}+\binom{3}{3}\right)=2^m-2^3 choices, and so on. Following the same procedure, we have a total of

= \left(2^m-1\right) \left(2^m-2^1\right)\left(2^m-2^2\right)\ldots \left(2^m-2^{l-1}\right)

=\left(2^m-1\right) 2^{1} \left(2^{m-1}-1\right) 2^{2} \left(2^{m-2}-1\right) \ldots 2^{l-1}\left(2^{m-l+1}-1\right)

=2^{0} 2^{1} 2^{2} \ldots 2^{l-1}\left(2^m-1\right)\left(2^{m-1}-1\right)\left(2^{m-2}-1\right)\ldots\left(2^{m-l+1}-1\right)

=\prod_{i=0}^{l-1}{{2^i}\left(2^{m-i}-1\right)}

=\prod_{i=0}^{l-1}{\left(2^{m}-2^{i}\right)}

=\prod_{i=0}^{l-1}{2^m \left(1-2^{i-m}\right)}

=2^{ml} \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)}

ways. For l>k>0, we can construct a rank-k matrix of size l \times m in either of the following ways:

  1.  Take a rank k-1 matrix of size (l-1) \times m and add an independent row.
  2.  Take a rank k matrix of size (l-1) \times m and add a dependent row.

For every rank-(k-1) matrix of size (l-1) \times m, an independent new row can be appended in

2^{m}-\left(\binom{k-1}{0}+\binom{k-1}{1}+\binom{k-1}{2}+\ldots +\binom{k-1}{k-1}\right)=2^m-2^{k-1}

ways (essentially, avoid all possible linear combinations of the existing k-1 rows), and hence the first method yields

R_{1}(l,m,k)= \left(2^m-2^{k-1}\right) R(l-1,m,k-1)

matrices. Using the second method (item 2 above), a dependent row can be appended in \binom{k}{0}+\binom{k}{1}+\binom{k}{2}+\ldots +\binom{k}{k} = 2^k ways, and

R_{2}(l,m,k)= 2^k R(l-1,m,k)

different rank-k matrices can be formed this way. Here the first term (\binom{k}{0}=1) corresponds to picking the all-zero row as the new row. In \binom{k}{1} ways we can pick any one of the existing rows as the (dependent) new row, and in general, for 0\le j\le k, we can xor j of the existing k rows in \binom{k}{j} different ways to make a dependent new row.

So using (1) and (2) we get,

R(l,m,k)=2^k R(l-1,m,k)+\left(2^m-2^{k-1}\right)R(l-1,m,k-1)

Putting everything together,

R(l,m,k) = \begin{cases} 1, & k=0, \\2^{ml} \displaystyle \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)} , & l=k>0 \\ 2^k R(l-1,m,k) + \left(2^m-2^{k-1}\right) R(l-1,m,k-1) &l>k>0 \\ 0 & k>l>0 \end{cases}
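As a sanity check (my own sketch, not part of the submitted solution), the recursion can be coded up directly and compared against brute-force enumeration of all binary matrices for tiny sizes:

from itertools import product

def R(l, m, k):
    # number of l x m binary matrices over F_2 with rank k
    if k == 0:
        return 1
    if k > l or k > m:
        return 0
    if k == l:
        out = 1
        for i in range(l):
            out *= 2 ** m - 2 ** i
        return out
    # l > k > 0
    return 2 ** k * R(l - 1, m, k) + (2 ** m - 2 ** (k - 1)) * R(l - 1, m, k - 1)

def rank_f2(rows):
    # rank over F_2, with rows given as integers (bitmasks)
    basis = {}                     # leading-bit position -> reduced row
    for row in rows:
        while row:
            h = row.bit_length() - 1
            if h not in basis:
                basis[h] = row
                break
            row ^= basis[h]
    return len(basis)

l, m = 3, 4
counts = [0] * (min(l, m) + 1)
for rows in product(range(2 ** m), repeat=l):   # all 2^(lm) matrices
    counts[rank_f2(rows)] += 1
assert counts == [R(l, m, k) for k in range(min(l, m) + 1)]
print(counts)   # [1, 105, 1470, 2520], summing to 2^12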

Today, as part of the EPFL annual research day, there were three interesting talks. In the morning, Prakash Narayan gave a very interesting talk titled "Common randomness, multiuser secrecy and tree packing". Essentially it covered three distinct problems and showed a connection among them. The first problem setup is the following: a set of terminals observe separate but correlated signals. The classical Slepian-Wolf formulation of data compression is then essentially the problem where a subset of the given terminals seeks to acquire the signals observed by all the terminals, by means of efficiently compressed inter-terminal communication. This is a problem of generating common randomness, and it does not involve any secrecy constraints. Now consider a secret key generation problem: the same subset of terminals seeks to devise "secret" common randomness, or a secret key, through public communication that an eavesdropper can observe. The setup requires the key to be concealed from the eavesdropper; such a secret key can be used for subsequent encryption. Prakash's talk explained the connection between these two problems, and went on to establish a further connection to a problem in computer science, namely the maximal packing of Steiner trees in an associated multigraph. I don't think I figured out the details that well, but it triggered some curiosity to read the work in more detail. I hope to do that sometime soon.

The afternoon session had two talks. One was by Shamai, who talked about the broadcast approach in communication systems. It went over time. I thought I focused well in the beginning to follow him, but partly because of the post-lunch effect and partly because of tiredness, I lost the flow. From what I understood, he outlined a lot of communication scenarios incorporating the broadcast strategy. Some examples were MIMO rate-diversity trade-offs, ARQ, multilayer schemes and so on. A lot of work seems to have gone in this direction, notably (from the citations) by Suhas and Sanket, David Tse, L. Zheng, Al-Dhahir and Shamai himself. I am somewhat amazed by the areas Shamai has worked on. He seems to have covered a broad spectrum of research and yet produced some stellar work.

After Shamai, it was an interesting talk by Amos Lapidoth. He presented handsomely, and I was attentive enough to follow it. It also happened to be a talk of a different kind. He talked about the well-known matched filter used in communication, and he started with a little story: the story of a man from a village, venturing out with a mission to find the meaning of life. He goes to the mountains with a resolve not to come back until he finds it. Days pass, months pass, years pass; even after ten years there is no sign of him. Finally he comes back, after eleven years or so. The whole village is curious: aha, he has come back! They ask him: wow, you must have figured out the meaning of life. Please share with us: what is it? He says, with a pause: Life is (he pauses again)… The villagers, out of patience, prod him: "Please go on… life is…". The man completes it: "Life is like a train!" Then they ask what he means by "life is like a train". To the surprise of the entire village, he says: "Maybe not!"

That was simply an amazing prelude to the talk. The talk abstract is the following:
One of the key results of Digital Communications can be paraphrased very roughly as follows: “in guessing which of two deterministic signals is being observed in white Gaussian noise, the inner products between the observed waveform and each of the signals form a sufficient statistic. Consequently, it is optimal to base one’s decision on these two inner products.” It is surprising that this basic result is never formulated as a theorem in any of the textbooks on the subject. This may be because of the difficulties in defining white Gaussian noise, in defining sufficient statistics for waveform observations, and in relating sufficiency to optimal detection. In this talk I shall describe a number of approaches to formulating the above statement as a theorem and point out some of their shortcomings. I will finally describe my proposed approach, formulate the theorem, and prove it from first principles. The proposed approach does not rely on the Ito Calculus, on Brownian Motion, or on generalized stochastic processes. It does not introduce non-physical infinite-power noise processes. Moreover, it is suitable for rigorously treating colored noise.

He gave a counterexample where we can do better than the matched filter. He takes Gaussian noise, but chooses a point at random where the noise is made zero. Since the null point is randomly chosen, he claims the noise is still Gaussian. To me, that would make the SNR blow up to infinity at that point. So, are we missing something? I can't wait to read the full paper version of this. Otherwise, it seems to be a very, very interesting way to look at the matched filter, without needing the usual heavy mathematical machinery.

Anyway, all these talks are available (only the schedule, at the moment) at [1].
[1] http://ic.epfl.ch/page65253-fr.html

Last winter, Etienne Perron, Suhas Diggavi and myself together developed a tool suite to prove inequalities in information theory. The tool is adapted from the previous work of Raymond Yeung and Ying-On Yan at Cornell [4]. We have made it a complete C-based software and removed the Matlab dependency in the back end. There is also a pre-parser (using lex and yacc) built in, to have flexibility in choosing random variable names. More importantly, a graphical front end was developed (using Gtk), which works well across platforms. Even though the beta version was ready in late 2007, for many reasons, including exhaustive testing (we always find scope for improvement), it was delayed. Last month, we finally made an official release. The original Xitip project page at IPG [1] has a short description and a pointer to the dedicated Xitip page at EPFL (http://xitip.epfl.ch). A lot of things still need to be done before we can say it is satisfactory. One of the main things pending is a user guide and some kind of exemplified documentation. There is a technical report I have prepared, but it is a bit too technical at the moment. Of course, Raymond Yeung's amazing papers introducing the theoretical idea behind this prover, and his book, are valuable resources. I have tried to provide a somewhat easier understanding of the concept using some illustrations and toy examples. I hope to put this report in the EPFL repository sometime. The first version of the report discussing the background is available here in PDF form.

Xitip screenshot, the French version

The software is open source. If you do not want to bother compiling and making an executable yourself, then please download the binary executable and just run it; it is just a matter of a double click in the latter case. We have Linux, Windows, Windows (Cygwin) and Mac versions available. Two different linear programming back ends are supported: one is the GNU open source GLPK [5], and the other is Qsopt [3] (developed at Gatech). The Qsopt version is faster than the GLPK one, but just in case you are obsessed with a perfect open source model, you can avail yourself of the GLPK version.

Hopefully during this summer we will get to complete the pending work on this project. If any of you happen to find it interesting, please don't forget to update us on what you thought about the software (comments can be good, bad and ugly!).

As an aside, I had better mention this: Xitip is software useful for proving (verifying) information theoretic inequalities [7] only. Such inequalities contain expressions involving measures such as entropy, mutual information and so on. It is a pretty handy tool if you are trying to prove some limiting bounds in information theory. In reality, there is a broad classification into Shannon-type and non-Shannon-type inequalities. Non-Shannon-type inequalities are not many, but they exist. Xitip at the moment is equipped to verify only the Shannon-type inequalities. You can expect more information on this at the Xitip home page [2].
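To give a flavour, here are a few toy Shannon-type inequalities of the kind such a prover can verify (written in plain math here; see the Xitip page [2] for the exact input syntax the parser expects):

I(X;Y|Z) \ge 0

H(X,Y) \le H(X) + H(Y)

H(X,Y,Z) + H(Y) \le H(X,Y) + H(Y,Z)

All three are nonnegative linear combinations of the elemental Shannon inequalities (nonnegativity of conditional entropies and conditional mutual informations), which is exactly the class the linear programming back end searches over.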

[1] http://ipg.epfl.ch/doku.php?id=en:research:xitip
[2] http://xitip.epfl.ch
[3] http://www2.isye.gatech.edu/~wcook/qsopt/
[4] http://user-www.ie.cuhk.edu.hk/~ITIP/
[5] http://www.gnu.org/software/glpk/
[6] http://en.wikipedia.org/wiki/Information_theory
[7] http://en.wikipedia.org/wiki/Inequalities_in_information_theory
