
This year’s Marconi Foundation prize is being awarded to our company founder Henry Samueli. With last year’s prize awarded to the other connoisseur, Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have the founders of two stellar communication companies getting the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to its champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer Synopsys) talks too. Congratulations Dr. Samueli, we are proud of you.

The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant in this field, who contributed immensely to many flavours of information theory, will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, above all, depth.

A tremendous loss! His legacy will continue.

Yesterday evening, during dinner at a restaurant in Hawaii, my colleagues (Eric, Jun and Nihar), a fellow colleague from Motorola (Neycer) and I were having some random ramblings. Somewhere along the way came the topic of the history of OFDM. It was indeed fascinating to trace that history. I did a bit of Googling later on and also dug up some old notes from a discussion with Emre Telatar (who to me is a walking encyclopedia on several things). My information may not be too accurate, but roughly this is what I gathered after piecing it all together.

The origin of the OFDM idea as such is largely attributed to Chang (1966). Saltzberg had identified the problem of ISI, and in came the notion of the guard interval. Apparently, there is also a patent filed on the idea. The cyclic prefix, the killer beauty which made OFDM ridiculously easy to equalize, was brought in by Peled and Ruiz in 1980. It was Weinstein and Ebert who came up with the possibility of using the FFT for OFDM; that traces back to the summer of 1971.
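Just to convince myself why the cyclic prefix makes equalization so ridiculously easy, here is a minimal numpy sketch I put together (the subcarrier count, CP length and channel taps are my own illustrative choices, not from any standard): the CP turns the linear channel into a circular one, so after the FFT each subcarrier needs only a one-tap division.

import numpy as np

rng = np.random.default_rng(0)
N, L_cp = 64, 8                        # subcarriers and cyclic-prefix length (illustrative values)
h = np.array([1.0, 0.5, 0.2])          # a short multipath channel, shorter than the CP

X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)   # QPSK symbols per subcarrier

x = np.fft.ifft(X) * np.sqrt(N)        # OFDM modulation via the IFFT
x_cp = np.concatenate([x[-L_cp:], x])  # prepend the cyclic prefix

y = np.convolve(x_cp, h)               # linear convolution with the channel (noise omitted)
y = y[L_cp:L_cp + N]                   # discard the CP (and the convolution tail)

Y = np.fft.fft(y) / np.sqrt(N)         # back to the frequency domain
H = np.fft.fft(h, N)                   # channel frequency response on the N bins
X_hat = Y / H                          # one-tap equalizer per subcarrier

print(np.allclose(X_hat, X))           # True: the CP made the linear channel look circular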

There is some more interesting pre-work prior to these official OFDM milestones. Even though it is not directly related, in hindsight we can still see similarities in how the ideas took shape across different eras. For instance, the concept of parallel transmission was realized, even in product form, in 1957 by the Collins Radio Company; it was known as the Kineplex system. And the very idea of splitting the transmission across multiple carriers and allocating power across them has echoes of Gallager’s work and even of waterfilling :-)

There is a Globecom paper which discusses all of this, and perhaps more, rather neatly.

 

The SODA 2012 paper by Dina Katabi, Piotr Indyk et al. promises a relatively faster way to compute the DFT of sparse signals. It is getting traction from outside the community too, after the MIT broadcast. The well-known FFT, which originated with Gauss and was largely resurrected by Cooley and Tukey, has time complexity \mathcal{O}(n\log n), which offers significant gains over the conventional DFT complexity of \mathcal{O}\left(n^2\right) when the size n is large.

If we really dissect the FFT complexity, it is already pretty good. With n output points to compute, the complexity has to be at least proportional to n, and roughly speaking the FFT spends only \log n work per point.

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing it by exploiting the inherent sparsity of the signal. When the signal is k-sparse (i.e., only k among the n coefficients are significantly different from zero), it is tempting to ask whether we can get the complexity down to \mathcal{O}(k \log n), and Katabi, Indyk et al. have indeed reached there. This is quite a remarkable achievement, considering that the gain could be as good as the compressibility limit of most of the real-world signals we deal with today. Audio and video, the leading candidates in the practical signal processing world, are both sparse in some transform basis. Recall that the recent compressed sensing results for k-sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help realize many of those things in a more meaningful way. One good thing is that it generalizes the conventional FFT: it is not just for sparse signals, but holds for any k, and in the limit k \to n the complexity is as good as that of the old mate FFT!
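To make the notion of k-sparsity concrete, here is a tiny numpy sketch (this is just the plain FFT applied to a signal with a k-sparse spectrum, not the sFFT algorithm itself; n, k and the random seed are arbitrary picks of mine): a k-sparse spectrum is completely described by its k (location, value) pairs, and the sFFT promise is to recover exactly those in roughly \mathcal{O}(k \log n) time instead of paying the full \mathcal{O}(n\log n).

import numpy as np

rng = np.random.default_rng(1)
n, k = 1 << 12, 5                         # signal length and sparsity (illustrative)

# Plant a spectrum with only k non-zero coefficients
support = rng.choice(n, size=k, replace=False)
X = np.zeros(n, dtype=complex)
X[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
x = np.fft.ifft(X)

# The plain FFT spends O(n log n) work no matter how sparse the spectrum is
X_full = np.fft.fft(x)

# The k largest output bins carry all the information about the signal
top_k = np.argsort(np.abs(X_full))[-k:]
print(sorted(top_k) == sorted(support))   # True: the energy sits on the planted support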

I want to try this out sometime for a communication problem that I have in mind. At the moment, I am hopeful!

The 2012 Shannon Award will go to Abbas El Gamal.

Recently (almost a month back), I happened to ask a simple question to an interview candidate. The candidate had apparently worked, among other things, on the design of the scrambler and descrambler blocks in a 3G/LTE terminal development. Naturally, I expected him to know the block inside out, and of course the scrambling idea itself. To my surprise, the answer came: “Scrambling adds protection to the information source”. That was a bit of a foot-in-the-mouth reply, which I seldom expected from such a candidate profile. Come to think of it, I shouldn’t overly blame the candidate, because many a time engineers overlook the primary purpose of a block or unit in the overall design, even while doing glorious work on it. I personally think it is ignorance on the part of an individual engineer not to spend time figuring out the reasoning behind every single unit in a design, especially the one he or she works on; if not at an expert level, at least gathering the intuitive idea is paramount.

Anyway, let us get some facts right about scramblers. Scramblers are there in almost all standardized data communication systems. A scrambler is a kind of randomizer acting upon the information bit stream (one can also think of it operating on a non-binary symbol sequence or on packets, but let us consider binary for the time being, for simplicity). The scrambler changes the pattern of the original bit stream, and naturally the receiver has to perform the reverse operation (descrambling) to make sense of the source information. One can argue that a receiver without the right descrambler (the descrambler is of course based on the scrambling algorithm) cannot retrieve the intended information; still, adding protection/security to the data stream is NOT the reason for the existence of a scrambler/descrambler unit in a communication system. Then why? Let us see.

A typical receiver has several algorithmic stages such as synchronization, equalization, decoding etc. (These are broad classifications; one can go into finer details, but for the discussion on scrambling this much suffices.) All these stages are usually designed with the assumption that the source stream is i.i.d. (a continuous stream of independent, identically distributed ‘0’s and ‘1’s). For example, a long run of ‘0’s or ‘1’s can degrade the timing/clock synchronizer performance or may even result in loss of synchronization.

The other important argument for the scrambler is the spectral density requirement (the spectral mask defined in the standard documents), usually imposed by the standard specification and by (country-specific) regulatory requirements. Any periodicity (short or long) can result in spectral spikes, which in turn may cause co-channel and adjacent-channel interference through the various nonlinear blocks present in the transceiver chain.

So, the scrambler’s job is to make the transmitted stream look like a uniform i.i.d. stream, so that the spectrum stays as close to white as possible and the receiver stages do not misbehave because of an arbitrary source distribution.
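For what it is worth, here is a minimal sketch in Python of an additive (synchronous) scrambler, the kind of toy picture I have in mind when explaining this; the LFSR polynomial x^7 + x^4 + 1 and the seed are illustrative choices of mine, not tied to any particular standard. Note how a worst-case all-zeros input becomes a pseudo-random looking stream, and how descrambling is the very same XOR operation run with the same seed.

def lfsr_sequence(n_bits, seed=0b1011101, taps=(7, 4)):
    # Pseudo-random bits from a 7-bit LFSR; the polynomial x^7 + x^4 + 1
    # and the seed are illustrative choices, not taken from any standard.
    state = seed & 0x7F
    out = []
    for _ in range(n_bits):
        fb = ((state >> (taps[0] - 1)) ^ (state >> (taps[1] - 1))) & 1
        out.append(fb)
        state = ((state << 1) | fb) & 0x7F
    return out

def scramble(bits, seed=0b1011101):
    # Additive scrambler: XOR the data with the LFSR sequence.
    # Descrambling is the very same operation with the same seed.
    return [b ^ p for b, p in zip(bits, lfsr_sequence(len(bits), seed))]

data = [0] * 32                # a worst case for synchronizers: a long run of zeros
tx = scramble(data)            # the transmitted stream now looks pseudo-random
rx = scramble(tx)              # the receiver runs the identical operation
print(tx)
print(rx == data)              # True: descrambling recovers the source bits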

Yesterday I attended a very nice talk in the Broadcom internal seminar series, discussing semiconductor fabrication. For me it was truly a whale-swallowing kind of talk, since I learned a lot of new things. The term “tapeout” is part of the common vocabulary of the semiconductor industry, and we all use it with such ease that no one really bothers to look up the origin of the term itself. I had done it once on Wikipedia, but then, as usual, had forgotten all about it. Anyway, here is the legend behind tapeout. Mike Magee said it all very cleanly, but let me sort of state it again.

Well, tapeout refers to that fancy thing the designers deliver to the (semiconductor fabrication) fab companies in Taiwan (well, I can say so because most of the fab companies are in Taiwan). The fancy algorithm we conceptualized in plain equations went through the LaTeX pages and then C programs, finally took shape as RTL, gates and transistors, and then went through the backend processes to eventually become a set of masks. This mask description is what is given to the semiconductor manufacturing companies. These days it is just a matter of a file transfer, using ftp or so. Then you can rightfully ask: why is there a term called tapeout? Well, there is history to it, fair enough I will say!

Also check this, as well as this, for some more details.

Today, I attended a very good talk given by Emo Welzl of ETHZ. I could not quite appreciate the drinks and snacks prior to the event, since the organizers kept too few of them, and by the time I arrived the smart guys had grabbed hold of almost all of them. I had to content myself with a glass of orange juice! Anyway, nothing comes free in this country, so getting an orange juice is itself a luxury, one would say! Nevertheless, I am glad that I attended this talk. Monika Henzinger did the speaker introduction, which she did very well. She mentioned that Emo comes from the same village as her husband (Thomas Henzinger). That is not really relevant, but I like such personal, less formal introductions; they make the audience a touch more curious and bring them closer. He indeed proved her right (Monika had promised us we were in for a great talk) with a truly nice lecture: calm, composed and thoughtful, words precisely chosen, well articulated throughout. He gave some insights into a problem that was entirely new to me. My field is not quite SAT or algorithms, but at the end of this talk I had learned something, and moreover he instigated me to learn a little more about these nice problems.

Here is the gist of what I understood. If you are interested in the subject of the talk, perhaps you should visit his homepage. What I state below is what my little brain, never once trained on this topic, managed to digest. Suppose we are given a Boolean formula (a logic formula whose result is either true or false, equivalently 0 or 1). Deciding satisfiability (the SAT problem) of such a formula in conjunctive normal form is known to be NP-complete. He discussed some nice (surprisingly simple) combinatorial bounds on the number of clauses (equivalently, constraints) for unsatisfiability. As usual in talks, I could hardly grasp the proof in total, but he quoted the Lovász local lemma as an essential ingredient, and I got to learn a little bit about this rather nice and cute lemma. Loosely, the lemma has the following setting.
Consider a sequence of events s_1,s_2,\ldots, s_k where each event occurs with probability at most p, and suppose each event is independent of all the other events except at most d of them. The lemma says that if ep(d+1) \le 1, where e is the Napier constant (named after the famous Scottish mathematician John Napier), then with nonzero probability none of the events occur. This did not strike me instantly, but pondering it a little, I have realized that it really is a cute bound. I can think of a nice little example scenario where this can be applied (one is jotted down below); let me figure out another cute one, and you can expect me to post it. Now let me get back to that optimization problem on compound sets of channels that I have been stuck on for the last four days.
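PS: to make the lemma concrete for myself, here is the standard k-SAT example as I recall it (my own recollection, not something from the talk). Take a CNF formula in which every clause has exactly k literals, pick an assignment uniformly at random, and let s_i be the event that clause i is violated, so p = 2^{-k}. Clause i is independent of every clause with which it shares no variable, so if each clause shares variables with at most d other clauses, the lemma applies. Whenever e\,2^{-k}(d+1) \le 1, i.e. d \le 2^{k}/e - 1, with nonzero probability no clause is violated, and hence the formula is satisfiable, no matter how many clauses it has.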

It was today. I’ve just come back to the office after the dinner party hosted as part of the I&C anniversary celebrations at EPFL. Andrew Viterbi was the guest of honour, and largely because of his fame there was a considerable crowd attending the function. Martin Vetterli made a nice, colourful, flashy presentation illustrating the history of I&C at EPFL as well as scientific progress in Switzerland. He mentioned names including Jim Massey and Ungerboeck, who are undoubtedly pioneers of modern communication theory and practice. He began by saying “…Ungerboeck is our friend, and now not quite… I will come to that in a minute…”, and of course he didn’t come back to fill in the circumstance in which the friendship derailed. But I reckon it was a casual remark, perhaps to indicate that Ungerboeck, now with Broadcom, is a bitter rival to Qualcomm. Since Qualcomm recently established a scientific partnership with EPFL, and Viterbi is a Qualcomm founder and associate, he perhaps just dropped that remark. It was a nice, usual, interesting presentation by Martin.

He also told a nice story about the current EPFL president Patrick Aebischer. Interestingly, Patrick Aebischer, after an MD (medical science) degree, was fond of computer science and decided to venture into an MS degree in CS. He decided to test his luck at EPFL and approached the admission committee with a formal application; CS was affiliated with the Math department in those days. EPFL politely rejected his application, and in due course that ended Patrick’s quest for an EPFL CS degree. He then moved to the US as a successful surgeon and took a career path with an entirely different trace. Years later, as one would say, due to the uncertain turn of things in the great cycle of life, he became the EPFL president, now ruling not only the CS department but the whole school.

Viterbi talked about digital communication history. He gave a perspective of the field starting from the days of Maxwell, Rao, Cramer, Wiener and Nyquist, and then discussed the impact of Shannon’s work. He said the three driving forces which made this digital mobile revolution possible are

1) Shannon’s framework (1948)

2) Satellite (Sparked by the Sputnik success in 1957)

3) Moore’s law, which is more of a socio-economic law, and which kept driving the industry so successfully.

The talk as such wasn’t too attention-grabbing, but he made a rather comprehensive presentation discussing the impact of the digital communication evolution spurred since Shannon’s days (and even earlier), knitting a dramatic success story of the digital wireless world with millions of cell phones and similar devices, which literally showcased the realization of the theoretical promise Shannon made in 1948. He himself has his name etched in part of that success story, at least in the form of the Viterbi algorithm, which is (in one of its instances) an algorithm used to detect sequences perturbed by a medium.

Quite a lot of fun activities were organized by the committee. Since many of the programs (especially the fun part) were in French, the appeal was considerably dulled for non-French speakers, but then the rationale given was that a good percentage of the alumni are French! I found it fun-filled, mainly for seeing successful people like Viterbi sharing their views in person. After all, we can learn from history. Not many people can claim to have done so well in everything they touched. In the case of Viterbi, he is an academician, researcher, successful entrepreneur and now a venture capitalist, all scaled to the possible limits. An incredible role model, whichever way we look.

Today’s IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, Math department, EPFL) talking about “Diameter of Polyhedra: Limits of Abstraction”. I don’t think I followed the topic too well, but this is a share of what I understood.

The topic is a convex-geometric question about the diameter of a polyhedron. Whether the diameter of a polyhedron is polynomially bounded seems to be a longstanding open problem. The largest diameter \Delta_{u}(d,n) of a d-dimensional polyhedron with n facets has the following known upper and lower bounds:

n-d+\lfloor d/5 \rfloor \le \Delta_{u}(d,n) \le n^{\log d +1}.

The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra. What Fritz and his colleagues have done is look into the gap between these known lower and upper bounds. Apparently the gap is wide, and they have made some progress by proving a superlinear lower bound \Omega\left(n^{3/2}\right) in the abstract setting, when d is allowed to grow with n.

The way they show this is by establishing the bound for the largest diameter of a graph in a base abstraction family. Let the abstraction family of connected graphs be denoted by \mathcal{B}_{d,n}, and let the largest diameter of a graph in \mathcal{B}_{d,n} be denoted by D(d,n). They find that D(d,n) =\Omega\left(n^{3/2}\right); since \Delta_{u}(d,n) \le D(d,n), this (as far as I understood it) is exactly what marks the limits of the abstraction: arguments that use only the abstraction cannot prove an upper bound better than \Omega\left(n^{3/2}\right).

I have not had a chance to see their paper yet. I must say the proof was not really within my grasp during the talk; it appeared to be based on some layering and combinatorics. He mentioned some applications to covering problems, in particular disjoint covering designs, which I didn’t follow that well. Sometimes I get the feeling that I am a little dumb to grasp these ideas during a talk, and I wonder whether others understand them very well on a first-shot presentation. I have put it on my agenda (among the millions of other papers to read) to work through this problem and proof one day! His presentation was very clear and legible, though.

Today, as part of the EPFL annual research day, there were three interesting talks. In the morning Prakash Narayan gave a very interesting talk titled “Common randomness, multiuser secrecy and tree packing”. Essentially it covered three distinct problems and showed a connection among them. The first problem setup is the following: a set of terminals observe separate but correlated signals. The classical Slepian-Wolf formulation of data compression is then essentially the problem in which a subset of the given terminals seeks to acquire the signals observed by all the terminals, by means of efficiently compressed inter-terminal communication. This is a problem of generating common randomness, and it does not involve any secrecy constraints. Now consider a secret key generation problem, in which the same subset of terminals seek to devise “secret” common randomness, a secret key, through public communication, assuming an eavesdropper can observe this communication. The setup is such that the key is concealed from the eavesdropper, and such a secret key can then be used for subsequent encryption. Prakash’s talk explained the connection between these two problems, and he went on to establish the connection to a problem in computer science, namely the maximal packing of Steiner trees in an associated multigraph. I don’t think I figured out the details that well, but it triggered some curiosity to read the work in a little more detail. I hope to do that sometime soon.

The afternoon session had two talks. One was by Shamai, who talked about the broadcast approach in communication systems; it went over time. I thought I focused well in the beginning to follow him, but partly because of the post-lunch effect and partly because of tiredness I lost the flow. From what I understood, he outlined a lot of communication scenarios incorporating the broadcast strategy. Some examples were the MIMO rate-diversity trade-off, ARQ, multilayer schemes etc. A lot of work seems to have gone in this direction, notably (from the citations) by Suhas, Sanket, David Tse, L. Zheng, Al-Dhahir and Shamai himself. I am somewhat amazed by the range of areas Shamai has worked on. He seems to have covered a broad spectrum of research and yet produced some stellar work.

After Shamai, it was an interesting talk by Amos Lapidoth, presented handsomely; I was attentive enough to follow this one. It also happened to be a talk of a different kind. He talked about the well-known matched filter used in communication. He sort of started with a little story: the story of a man from a village, venturing out of that place with a mission to find the meaning of life. He goes to the mountains with a resolve not to come back until he finds it. Days pass, months pass, years pass; even after 10 years there is no sign of him. Finally he comes back, after 11 years or so. The whole village is curious: aha, he has come back. They ask him: wow, so you have figured out the meaning of life, please share with us, what is it? He says, with a pause: “Life is (he pauses again)…”. The villagers, out of patience, ask him: “Please go on… life is…”. The man completes it and says, “Life is like a train!” Then they ask what he means by “life is like a train”. Then, to the surprise of the entire village, he says, “Maybe not!”

That was simply an amazing prelude to the talk. The talk abstract is the following:
One of the key results of Digital Communications can be paraphrased very roughly as follows: “in guessing which of two deterministic signals is being observed in white Gaussian noise, the inner products between the observed waveform and each of the signals form a sufficient statistic. Consequently, it is optimal to base one’s decision on these two inner products.” It is surprising that this basic result is never formulated as a theorem in any of the textbooks on the subject. This may be because of the difficulties in defining white Gaussian noise, in defining sufficient statistics for waveform observations, and in relating sufficiency to optimal detection. In this talk I shall describe a number of approaches to formulating the above statement as a theorem and point out some of their shortcomings. I will finally describe my proposed approach, formulate the theorem, and prove it from first principles. The proposed approach does not rely on the Ito Calculus, on Brownian Motion, or on generalized stochastic processes. It does not introduce non-physical infinite-power noise processes. Moreover, it is suitable for rigorously treating colored noise.

He gave a counterexample where we can do better than the matched filter: take Gaussian noise, but choose a point at random where the noise is made zero. Since the null point is randomly chosen, he claims the noise is still Gaussian. To me, that would make the SNR blow up to infinity, so are we missing something? I can’t wait to read the full paper presentation of this. Otherwise, it seems to be a very, very interesting way to look at the matched filter without needing the heavy mathematical machinery.
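Out of curiosity, I tried writing down the classical discrete-time version of the statement in the abstract as a little numpy sketch (the two waveforms, the noise level and the trial count are my own arbitrary picks): to decide between two known signals in white Gaussian noise, it suffices to compare the two inner products of the received vector with the signals, corrected by the known signal energies.

import numpy as np

rng = np.random.default_rng(2)
n, sigma = 128, 0.5
t = np.arange(n)
s0 = np.cos(2 * np.pi * 3 * t / n)     # two known deterministic waveforms
s1 = np.cos(2 * np.pi * 7 * t / n)     # (illustrative choices)

def detect(r, s0, s1):
    # ML decision between s0 and s1 in white Gaussian noise: only the two
    # inner products <r, s_i>, corrected by the (known) signal energies,
    # enter the decision -- they form the sufficient statistic.
    m0 = r @ s0 - 0.5 * (s0 @ s0)
    m1 = r @ s1 - 0.5 * (s1 @ s1)
    return 0 if m0 >= m1 else 1

trials, errors = 2000, 0
for _ in range(trials):
    bit = rng.integers(2)
    s = s0 if bit == 0 else s1
    r = s + sigma * rng.standard_normal(n)
    errors += detect(r, s0, s1) != bit
print(errors / trials)                 # tiny error rate here; it grows as sigma grows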

Anyway, all these talks are available (as a schedule, at the moment) at [1].
[1] http://ic.epfl.ch/page65253-fr.html
