
Sadly, the great man has left us, and the world becomes a lesser place. Nelson Mandela has been a remarkable symbol of peace, determination and humility, a champion leader for humanity. Perhaps no one since Mahatma Gandhi will leave a legacy like Mandela's. I vividly remember holding a placard in one of our school day processions with the message "Free Nelson Mandela". Little did I know, back then, the true greatness of this man. An absolute hero.

The most striking thing about Mandela, to me, is the way he composed himself after his release from 27 years of prison. 27 long years, think of it, 18 of them on the isolated Robben Island! He held himself together and dumped all vengeance against the very people who took away a golden slice of a precious life. And for the world, we had a great leader in him for years after! His release could easily have turned into bloodshed like never before, but he instead chose the path of peace. Talk about leadership. He is something beyond special.

I can only imagine the deafening grief in South Africa and Africa in general, and in the world at large too. What a sacrifice! I hope we can continue his legacy. BBC has a nice obituary on him. RIP.

If you have not seen it before, here is a must-watch page from PBS.

So, the number has come down from 70 million to a mortal 600! Expectedly, there is a new Polymath initiative to improve this further, and who knows, perhaps 2 indeed is that number. We are talking about the new result by James Maynard on small gaps between prime numbers.

Just to recap, the problem statement is as follows. Let p_n and p_{n+1} be the nth and (n+1)th prime numbers (e.g., p_1 = 2, p_2 = 3, p_3 = 5). On cursory counting, the gap p_{n+1} - p_n appears to grow as n grows, but not always: there are the well-known twin prime pairs with gap 2, such as (29, 31). Will this gap of 2 keep recurring forever as n goes to infinity? If not as low as 2, will some other gap recur infinitely often? In other words, how small can H be, where H = liminf_{n→∞} (p_{n+1} - p_n)?

Earlier this year, Zhang proved that this liminf is no more than 70 million. That in a way proved that the prime gap stays bounded infinitely often as we move along the number line. A group of mathematicians including Terence Tao worked further (the Polymath project on this) and improved that bound to a few thousand. The latest result from Maynard brings an independent proof that the bound is at most 600. Maynard also claims that if the Elliott–Halberstam conjecture (see this nice blog post on prime tuples by Terry Tao) is indeed true, then the bound drops to 12. Stunning!
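None of this deep mathematics can be verified by computation, of course, but a toy sieve shows how stubbornly the gap of 2 keeps reappearing even as the average gap grows. Everything below (the limit of 100,000, the sieve itself) is my own choice, purely for illustration:

```python
# Toy look at prime gaps below 100,000: the maximum gap grows,
# yet twin primes (gap 2) keep showing up right to the end.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(100_000)
gaps = [b - a for a, b in zip(ps, ps[1:])]
print("largest gap below 100000:", max(gaps))
print("last twin-prime pair found:",
      next((ps[i], ps[i + 1])
           for i in range(len(gaps) - 1, -1, -1) if gaps[i] == 2))
```

The largest gap keeps creeping upward, yet the very last pair of primes below 100,000 is itself a twin pair. The Maynard result says, roughly, that gaps of at most 600 never stop appearing.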

What is stated here is just one avatar of the prime tuple question. More general results are also being discussed within the community. Terence Tao again has articulated this nicely and maintains a Polymath page for us. As an onlooker, I am as excited as many others to see this progress.

Sitting on the 15th floor of the Hyatt Regency in Dallas, with a night view of the city skyscrapers in the background, the only thing that flashes by is nostalgia. The feeling of growing up adoring and tirelessly following a true champion in Tendulkar. Suddenly, he is taking one last walk to the 22-yard strip, in a test match at his home turf in Mumbai. It must be emotional for many, and I for sure cannot hold back my tears. It is only a retirement, but wait, no. It has been our life in some ways.

Today may well be his last innings with a cricket bat in a test match. The stamp of vintage Tendulkar was seen yesterday. Let us hope that he adds a few more of his straight drives and cover drives today. Never has a farewell touched this close. He was not just a hero, but a pride and a part of my growing up. As he walks into the sunset of a glittering career, a part of me fades from the horizon as well. The timeless constant of our life prepares to turn the page! Oh dear life, we have crossed two decades!

Amar Bose, a name that almost symbolizes high-quality sound, has passed away. Years ago, I had come across something about the man who inspired the making of a thing I literally use every day, the Bose Wave music system. A tiny box which produces crisp, quality sound has been my favourite since 2006.

Bose's life reflects the successful life of a passionate researcher who fearlessly chased his dream and produced a world-class product and organization. What amazes me is that he managed to do all this while still staying on as a faculty member, involved in a good share of regular teaching and formal student advising (for starters, Alan Oppenheim was his student). It is widely known that he was a great motivator as well as an exceptional teacher, said to have enthralled audiences anytime, anywhere. If this is any indication, we can imagine how great it would have been to sit in one of his classes.

I have read and heard many stories about him: about his experience starting up Bose Corporation; about his interaction with his illustrious professor Yuk-Wing Lee, who was instrumental in motivating the young Bose to eventually start a company (it was he, apparently, who donated his life savings of $10,000 in the 1950s to seed the making of what is now a multi-million-dollar corporation); and also the rather interesting and embarrassing event where he had to give his first-ever public technical talk, on Wiener's (then recent) work, to a celebrated audience which included a certain Norbert Wiener himself. Knowing a bit about all the Wiener stories, I pause to think how different an experience that would have been! Anyway, Bose's legacy will easily stretch beyond Bose Corporation and MIT. His life is a message of courage and the pursuit of passion, if nothing else. RIP.

The 2012 Turing Award goes to two cryptography pioneers, Shafi Goldwasser and Silvio Micali. I don't do much cryptography (for that matter, I never did anything serious on that front, besides some C programming to demonstrate RSA and later a class project involving elliptic curve cryptography and Mathematica programming at EPFL). But I was always fascinated by the topic, largely for the many toy-like yet deeply thought-provoking combinatorial and probabilistic problems it addresses. Several of these problems sit at the very intersection of CS theory and cryptography.

One such classical problem that we all learned in graduate classes is the zero knowledge proof. The idea is pretty cute. In the simplest scenario, it involves two parties, Alice and Bob, who don't trust each other, but one of them (say Alice) has to convince the other (Bob) that she has knowledge of something (say a mathematical proof) without really revealing it. The fact that it is not revealed is why it is zero knowledge (Bob in the end never gets to know what Alice knows, but is convinced that she knows it!). So it works more like an interactive protocol, wherein a lot of questions are asked by Bob, chosen randomly depending on prior answers from Alice. Doing this long enough, can Alice convince Bob, with overwhelmingly high probability, that she knows what was claimed? Such an interactive protocol constitutes what is coined a zero knowledge proof. An example would be a novel ATM where you don't have to enter the PIN (unlike the ones we have), but you can still convince the machine that you know your correct PIN. Sounds cool and yet thought-provoking, right? Well, that is why I said it is cute and interesting. This zero knowledge interactive proof idea was first conceived by the new Turing Award winners. The names didn't strike me initially with the award news, but after reading the details, it occurred to me that I had read their classic paper as part of my coursework at EPFL.

A bit more formally stated, the two entities are a prover and a verifier. In the ATM example, the machine is the verifier and you, the user, are the prover. The prover knows the proof of a mathematical statement, and the verifier tries to verify, with high probability, whether the claim is indeed correct. The machinery of a zero knowledge proof is such that if the prover is trying to trick, the verifier will find that out too, with high probability. There are many examples illustrating this beautiful idea. A classic one is the problem of convincing a blind man that two otherwise identical balls are of different colors, without ever telling him which is which. Now there are variants of the original idea, and myriad applications of practical significance have emerged or are still emerging.
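For fun, here is a tiny simulation of that blind-man-and-balls protocol. All the names and the round count below are my own choices, not from any formal treatment; the point is only that an honest prover always survives, while a cheating prover survives k rounds with probability 1/2^k:

```python
import random

# Toy zero-knowledge illustration: the verifier (blind man) holds one
# ball in each hand, secretly swaps them (or not) behind his back, and
# asks the prover "did I swap?". If the balls really differ in color,
# the honest prover always answers correctly; otherwise she must guess.
# The verifier learns THAT the balls differ, never WHICH is which.

def run_protocol(balls_differ, rounds=20, rng=random):
    """Return True iff the prover answers correctly in every round."""
    for _ in range(rounds):
        swapped = rng.random() < 0.5       # verifier's secret coin flip
        if balls_differ:
            answer = swapped               # honest prover can always tell
        else:
            answer = rng.random() < 0.5    # cheating prover must guess
        if answer != swapped:
            return False                   # caught cheating
    return True

rng = random.Random(0)
print(run_protocol(True, rng=rng))         # honest prover always passes
cheat_wins = sum(run_protocol(False, rng=rng) for _ in range(1000))
print("cheater passed", cheat_wins, "of 1000 trials")  # roughly 1000/2**20
```

After 20 rounds the verifier is convinced up to a 1-in-a-million doubt, yet has gained zero knowledge about the colors themselves.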

The ACM award page has a pretty enthralling account of these two pioneers. Now, here is an interesting bit of family trivia: Shafi Goldwasser and her husband Nir Shavit together keep three Gödel Prizes on their shelves, and now add the Turing Award aroma to their household!

Since this is year-end holiday time, I got a fair share of time to read up on several random topics. Amidst the many sad happenings of the past couple of weeks, starting from the shocking events at the Newtown, Connecticut elementary school, there has been more disturbing news all over the world, particularly this one from the other side of the globe. A rather less disturbing piece of news is the new claim of proving/establishing the so-called Ramanujan deathbed conjecture. I was doing a bit of follow-up reading to first understand the premise of the problem in the news.

OK, the subject deals with what are known as mock theta functions (note: there is no credible Wikipedia entry on them, so I cannot add one here; also remember that mock theta functions are not the same as Ramanujan's theta functions, just so we don't confuse these subtle mathematical terms!). These are not quite the Riemann zeta function, which is well known, but are more similar to the Jacobi theta functions. The mathematical prodigy Srinivasa Ramanujan, while on his deathbed, shared some quick tidbits with the famous Cambridge mathematician G. H. Hardy (and I should say a hard-core cricket enthusiast! If you have not read his memoir *A Mathematician's Apology* already, step out and grab it soon; a fabulous quick read, that), who was paying a visit to his now famous student. After their famous meeting (or maybe during the meeting itself), sometime in 1920, before his death, Ramanujan wrote a letter to Hardy in Cambridge in which he introduced as many as seventeen new functions. These functions, as per Ramanujan, shared some distinct similarities with the theta functions (which themselves had been studied earlier by mathematicians like Jacobi, Riemann and several others; and of course the connection between the Jacobi theta functions and the Riemann zeta function is by now well established!).

The Jacobi theta function itself is pretty complicated to lesser mortals like us. It looks harmless to a starter, but its properties, its various avatars and its connections to many real-world applications are what make it monstrous. And then there is its cousin, the Riemann zeta function ζ(s) = Σ_{n=1}^{∞} 1/n^s, which takes a simple and elegant-looking value for s = 2, namely ζ(2) = π²/6, but for higher s changes form beyond what we can fathom (for example, for larger odd s it is not even known whether ζ(s) is transcendental!). I remember playing with (I think it was 2000 or 2001) a proof of ζ(2) = π²/6 using some integral calculus in two variables, which turns out to be rather well known and easy. There are other ways to prove the s = 2 case as well, but it stops being that simple beyond it! Anyway, Jacobi's theta function has the form θ(z; τ) = Σ_{n=-∞}^{∞} exp(πi n² τ + 2πi n z), which in words can be roughly stated as some elliptic form of exponentials.
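As a purely numerical sanity check of the two series above (and nothing deeper), one can sum both of them by brute force; the truncation points below are arbitrary choices of mine:

```python
import math
import cmath

# Partial sums of the Riemann zeta series and of the Jacobi theta
# series in the forms written above. These only confirm the formulas
# numerically; they say nothing about the deep theory.

def zeta(s, terms=200_000):
    """Partial sum of the zeta series: sum_{n=1..terms} 1/n^s."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

def jacobi_theta(z, tau, terms=50):
    """theta(z; tau) = sum_{n=-N..N} exp(pi*i*n^2*tau + 2*pi*i*n*z),
    valid for Im(tau) > 0 so the series converges."""
    return sum(cmath.exp(cmath.pi * 1j * n * n * tau + 2 * cmath.pi * 1j * n * z)
               for n in range(-terms, terms + 1))

print(zeta(2), math.pi ** 2 / 6)   # partial sum creeps up to pi^2/6
print(jacobi_theta(0, 1j))         # theta(0; i) is real, about 1.0864
```

The theta series converges blindingly fast (terms decay like e^{-πn²}); the zeta series, by contrast, needs a couple of hundred thousand terms just to pin down a few digits of π²/6.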

Much like Fermat with his famous last theorem, Ramanujan didn't give many hints beyond listing the functions and stating that they are elegant. For example, he didn't reveal where they come from, how to formulate them, or for that matter what use there is in having them. Like many of his other famous mathematical findings, Ramanujan, a self-taught genius, simply made a quick listing of them. The rest of the curiosity has enthralled and haunted the mathematical minds of the generations since, all over the world. A brilliant spark from the great genius stirred an intellectual quest lasting over 90 years!

The latest is that two mathematicians, Ken Ono and University of Cologne professor Kathrin Bringmann, came up with a framework explaining what mock theta functions are and how to derive them. They connect modular forms, or more specifically Maass forms, to Ramanujan's functions. If you recall, modular forms also played a big part in the proof of the other famous problem of this kind, Fermat's last theorem.

It will take a while for me to understand all the details of this new claim (if at all I manage to digest it). I must say it is likely that I will fail to comprehend it in full. As always, I will be curious to read it and grab as much as I can. As an Indian, the connection to India is always an interesting pursuit!

I leave you with a nice documentary video on Ramanujan that I found on YouTube. If you have not seen it already, check it out. A pretty good documentary, unfortunately tarred by a lot of useless and unrelated comments.

Last week, I had a chance to catch up with two pioneers at Irvine. The coding champion Gottfried Ungerboeck (among so many other things, trellis coded modulation came to light through his seminal thoughts) and the multi-faceted public-key cryptography pioneer Martin Hellman (of Diffie–Hellman fame, among the many other things he has done) were gracious enough to join me for lunch. They were in town for this year's Marconi Award felicitation. For me personally, it was a whale of an opportunity to interact with these two connoisseurs. I didn't have much time to interact with Ungerboeck when he was still employed at Broadcom, but the little time I spent with him last week gave an indication of how much I would have gained had he stayed longer (or had I joined Broadcom earlier) :-)

The Wireless Gigabit Alliance (WiGig) has a new (updated) website. For a start, there is a link, How WiGig Works, which nicely explains what WiGig is all about in clear layman's terms. If you ever wondered whether we have seen the finale of the wireless rate surge, just re-think. We are still far from drafting even a proposal, but there is surely plenty of light on the wireless horizon. As an example, HDTV would require about 3 Gbps; WiGig is addressing applications such as this, which demand rates beyond 3 gigabits per second. The brief tutorial is a compelling read.
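The ~3 Gbps figure is easy to sanity-check with back-of-the-envelope arithmetic. The 1080p60, 24-bits-per-pixel parameters below are common uncompressed-HDTV assumptions of mine, not numbers from the WiGig tutorial:

```python
# Back-of-envelope check of the ~3 Gbps figure for uncompressed HDTV.

width, height = 1920, 1080   # 1080p frame
frames_per_second = 60
bits_per_pixel = 24          # 8 bits per RGB channel

bits_per_second = width * height * frames_per_second * bits_per_pixel
print(f"uncompressed 1080p60: {bits_per_second / 1e9:.2f} Gbps")
```

That comes to just under 3 Gbps, which is exactly why uncompressed HD video streaming is the poster-child application for multi-gigabit wireless links.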

The much expected Wolfram Alpha went for a soft launch last night. It had some startup glitches, as Wolfram briefed during the live demo, but fortunately nothing major prevented me from getting a first feel of it. Erick Schonfeld has a nice blog post with a detailed first-hand description of this new computing web search engine. He also did a one-to-one comparison with Google for a few specific search queries.

My first impression is along much the same lines as what I expected after reading Wolfram's pre-launch blog. This is not a Google competitor for sure, but instead an incredibly complementary sibling. Wolfram Alpha is more of a scientific and quantitative information search engine. For instance, if you want to know the Taylor series expansion of the exponential function exp(x/2), you can get it easily by entering "Taylor series of Exp[x/2]". As you would imagine, Google does not give this precise answer, but instead gives you a list of documents matching the query, for instance a set of PDF links where this is already calculated. Clearly, Wolfram gives a more accurate and clever presentation of this query result. Wolfram Alpha seems to use quite a lot of Mathematica capabilities too, like plotting. Any mathematical query will lead to a pretty good result, sometimes including plots, histograms, Taylor expansions, approximations, derivatives, continuity and so on. It is a nice feature to have for students and engineers.
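Just to illustrate what such a query actually computes: the Taylor expansion of exp(x/2) about 0 is the series sum over k of (x/2)^k / k!, which takes only a few lines to evaluate by hand. The truncation order below is my own choice:

```python
import math

# Truncated Taylor series of exp(x/2) about 0:
#   exp(x/2) = sum_{k>=0} (x/2)^k / k!
# and a comparison against the library exponential.

def taylor_exp_half(x, order=10):
    return sum((x / 2) ** k / math.factorial(k) for k in range(order + 1))

for x in (0.0, 1.0, 3.0):
    print(x, taylor_exp_half(x), math.exp(x / 2))
```

Ten terms already match math.exp to many digits for moderate x, which is the kind of precise, computed answer the engine returns instead of a list of links.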


This is the sort of query it likes most, and not something like "proof of Sanov's theorem". Google will incredibly list a set of documents which have the proof one is looking for, since it simply searches the web and displays a list of matching results, ordered by PageRank, which is *loosely* speaking the order of relevance.

Not all queries are bound to get a result with Wolfram Alpha, at least for now. That is expected, since it is not yet in full launch mode, only a soft launch. In the coming days they are likely to have it running full-fledged, with all kinds of queries supported.

So, Wolfram Alpha is definitely going to be useful in very many cases, and it surely is going to rock in scientific searches. I initially thought Google Squared, which is going to come from Google shortly, addresses the very same segment of the search space, but it is clearly different.

I tried "tallest mountain Switzerland". It gave a very nice, cute quantified table. I love this kind of result. It also states things with less ambiguity. For instance, the height is given in meters, but there is a list of unit conversions alongside, which helps people map the figures into units of their convenience.

I tried the query "Who is Claude Shannon". This is what it displayed. Of course, the result you get is very brief information about him. The same query in Google will lead you to the more detailed Wikipedia entry on Shannon, or maybe the MathWorld entry, among the list of hits. Wolfram Alpha gives information more in capsule form. If you need to know more, you should ask more. Clearly, which search engine to use is thus subject to the query type. I strongly see Google and Wolfram Alpha as complementary. Wolfram Alpha gives more or less one reply to a single question. Of course you can refine the query and then get an answer to that. In some sense, this is like people asking questions of one another in a real physical setting. Imagine you ask a friend, a knowledgeable pal that is: who is Shannon? He would perhaps start answering along those lines, as Wolfram Alpha does. On repeated questioning he would give more details. On the other hand, Googling is like broadcasting your query to a large pool of friends, each of whom sends what they know or have heard about Claude Shannon. It is you who decides which among the many answers/explanations suits your need!

We can afford some amount of spelling error while entering a query in Wolfram Alpha. Since it is natural language based, that is a decent feature to have. I deliberately typed the query "distnace from Bangalore to geneva" instead of "distance from Bangalore to Geneva". It understood the intended query and displayed the result in a nice quantified table. Even the geographical trace between the two places is shown. Incredible!

When I tried “weather in Lausanne”, this is as good as it gets. Spot on with all possible things you want to know in one screen! It had a list of mountains and their heights mentioned!

In a nutshell, Wolfram Alpha gives you the best cooked food, given the user's request as input. Google will give you a list of foods available, and you then pick the one whose taste suits you. It really then is a question of preference, time, and end-user satisfaction as to what to choose. As far as I am concerned, it is subjective. I see both of these as invaluable, and both will co-exist. Scientists, economists, finance folks, mathematicians and historians are all bound to benefit from this new computing engine. I am waiting for a full release!

Today, I attended a very good talk given by Emo Welzl of ETHZ. I could not quite appreciate the drinks and snacks prior to the event, since the organizers kept too few of them and, by the time I arrived, smart guys had grabbed hold of almost all of them. I had to content myself with a glass of orange juice! Anyway, nothing comes free in this country, so getting an orange juice is itself a luxury, one could say! Nevertheless, I am glad that I attended this talk. Monika Henzinger did the speaker introduction, which she did very well. She mentioned that Emo comes from the same village as her husband (Thomas Henzinger). That is not really relevant, but I like such personal, less formal introductions; they make the audience a touch curious and bring them closer. He indeed proved her right (Monika had promised us we were in for a great talk) with a truly nice lecture: calm, composed and thoughtful, words precisely chosen, well articulated throughout. He gave some insights into a problem which was never known to me. My field is not quite SAT or algorithms, but by the end of this talk I had learned something. Moreover, he instigated me to learn a little more about these nice problems.

Here is a gist of what I understood. If you are interested in the subject of the talk, perhaps you should visit his homepage. What I state below is what my little brain, never once trained on this topic, digested. Suppose we are given a Boolean formula (a logic function whose result is either true or false, equivalently 0 or 1). Deciding satisfiability (known as the SAT problem) of such a formula in conjunctive normal form is known to be an NP-complete problem. He discussed some nice (surprisingly simple) combinatorial bounds on the number of clauses (equivalently, constraints) for unsatisfiability. As usual in talks, I could hardly grasp the proof in total, but he quoted the Lovász local lemma as an essential ingredient, and I got to learn a little bit about this rather nice and cute lemma. Loosely, the lemma has the following setting.

Consider a sequence of events A_1, A_2, …, A_n, where each event occurs with probability at most p. Suppose each event is independent of all the other events except at most d of them. Then, if e p (d + 1) ≤ 1, with nonzero probability none of the events occurs. Here e is the Napier constant (named after the famous Scottish mathematician John Napier). This did not strike me instantly, but pondering a little about it, I have realized that this is a really cute bound. I can think of a nice little example scenario where this can be applied. Let me figure out another cute one; you can expect me to post it. Now let me get back to that optimization problem on compound sets of channels that I have been stuck on for the last four days.
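The condition in the symmetric form of the lemma is easy to play with numerically. Below is a small sketch, applied to the standard k-SAT illustration (a uniformly random assignment violates a clause of k distinct literals with probability 2^-k); the specific numbers are my own choices:

```python
import math

# Symmetric Lovasz local lemma, as stated above: if every bad event has
# probability at most p and depends on at most d other events, and
#   e * p * (d + 1) <= 1,
# then with positive probability no bad event occurs.

def lll_condition(p, d):
    """True iff the symmetric LLL condition holds."""
    return math.e * p * (d + 1) <= 1

# Illustration with k-SAT: a random assignment violates a clause of
# k distinct literals with probability 2**-k. The lemma then tolerates
# each clause sharing variables with up to roughly 2**k / e others.
k = 7
p = 2.0 ** -k
d_max = int(1 / (math.e * p)) - 1   # largest d the condition allows
print(k, d_max, lll_condition(p, d_max), lll_condition(p, d_max + 1))
```

So a 7-CNF formula in which each clause overlaps at most 46 others is always satisfiable, no matter how many clauses there are in total; that exponential headroom 2^k/e is what makes the lemma so useful in these SAT-type bounds.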

I had earlier promised an update on Xitip when a Windows setup was ready. Though delayed, I have something to say now. I have finally made a Windows installer for the Xitip software (an information theoretic inequality prover), which has been working pretty smoothly on Linux, Cygwin and Mac for a while. I was not too keen on making this Windows installer, since a few DLL files are involved. Besides, it was a bit painful to include these nasty DLL files, which unnecessarily increase the bundle size. Some of them may not be required if GTK is already installed on the machine, but anyway I made one double-click-style version to suit the lay Windows users in the information theory community.

Vaneet Aggarwal is the one who motivated me to make this, since he uses Windows. He showed some interest in using it, should a Windows version be available. If at least one user benefits from it, why not make it? In the process, I got to learn an easy way to produce a Windows setup-maker program. I used the freeware *Install Creator* to produce it.

I will put this installer up at the Xitip website, but for the time being you can access it from here. A lot of people have suggested revamping the Xitip webpage, which is pretty unclean at the moment. Maybe a short tutorial is due as well. That will take a while; the next two and a half months are out of the equation, since I am pretty busy till then.

Today's IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, Mathematics department, EPFL) talking about Diameter of Polyhedra: Limits of Abstraction. I don't think I followed the topic too well, but here is a share of what I understood.

The topic is a convex-geometry problem on the diameter of a polyhedron. Whether the diameter of a polyhedron is polynomial (in the dimension and the number of facets) seems to be a longstanding open problem. The largest diameter Δ(d, n) of a d-dimensional polyhedron with n facets has the following known upper and lower bounds:

n − d + ⌊d/5⌋ ≤ Δ(d, n) ≤ n^{log₂ d + 2}.

The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra. What Fritz and his colleagues have done is to look into the gap between these known lower and upper bounds. Apparently the gap is wide, and they have made progress by obtaining a superlinear lower bound when d is allowed to grow with n.

The way they showed this is by establishing the bound for the largest diameter of a graph in a base abstraction family. Say the abstraction family of connected graphs is denoted by B_d, and the largest diameter of a graph in B_d is denoted by D(d, n). They find a superlinear lower bound on D(d, n), and since the abstraction generalizes the 1-skeletons of actual polyhedra, they conclude that such abstractions alone cannot settle the polynomial-diameter question.
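Since everything here revolves around graph diameters, a concrete baseline helps: the 1-skeleton of the d-dimensional hypercube (a polytope with n = 2d facets) has diameter exactly d, which a plain BFS confirms. This is only my own toy illustration of the quantity being bounded, not anything from their construction:

```python
from collections import deque

# Diameter of an undirected connected graph by repeated BFS,
# demonstrated on the 1-skeleton of the d-dimensional hypercube.

def diameter(adj):
    """Largest shortest-path distance over all vertex pairs."""
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def hypercube(d):
    """Vertices are bitmasks 0..2^d - 1; edges flip exactly one bit."""
    return {v: [v ^ (1 << i) for i in range(d)] for v in range(2 ** d)}

print([diameter(hypercube(d)) for d in range(1, 6)])  # [1, 2, 3, 4, 5]
```

For the hypercube the diameter is comfortably linear in the facet count; the whole question is how much worse general polyhedra, or their combinatorial abstractions, can be.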

I have not had a chance to see their paper yet. I must say the proof was not all that within my grasp during the talk, though it appeared to be based on some layering and combinatorics. He mentioned applications to covering problems, in particular disjoint covering designs, which I didn't follow that well. Sometimes I get the feeling that I am a little dumb at grasping these ideas during a talk. I wonder whether others understand them all that well on a first-shot presentation. I have put it on my agenda (among the millions of other papers to read) to work through this problem and proof one day! His presentation was very clear and legible, though.

I've finished reading the memoirs of Walter Rudin. It was a quick read of a few hours. His autobiography is titled *The Way I Remember It*, published by the AMS in the History of Mathematics series. It wasn't particularly interesting, to say the least. From a mathematician who wrote excellent books on functional analysis and several other topics, I was expecting a much better story. Of course one can't write an imaginary story in an autobiography, but then the incidents in his life are pretty much the story of any European intellectual during the war days. The bit I liked best is the one about Karl Popper. However, I could connect many incidents from Rudin's life, primarily because of the geography. There is a chapter on his days in Switzerland, which also touches upon Lausanne. That part for once enthused me! I was wondering how Lausanne would have been 70 years ago! If you are completely unaware of life in Europe around the World War period, this will give you a perspective. Like many scientific minds of that era, he had a long route to the United States. He discusses the path and family traits of that journey in a somewhat uncomplicated language.

In his autobiography, Rudin also discusses some of his contributions to mathematics. That part is informative, but a technical read. If you know his work already, you can connect it nicely. I particularly liked the chapter on *Function Theory in the Unit Ball of C^n*.

In all, not a book I would recommend, unless you are a Walter Rudin fan and know his contributions in much more detail. However, this may be a motivating read for a young school kid aspiring to be a mathematician. Why did I say that? I don't know! Don't ask me why either!

A very interesting report on the neural and social development of children from poorer backgrounds appears in the recent edition of The Economist. The report discusses the research of Martha Farah of UPenn. The investigation came to the worrisome conclusion that children born into a poorer socio-economic background have a greater chance of becoming underachievers (read: underperformers compared to their middle-class counterparts). The study is of course based on statistical inference, and hence there is scope for exceptions (large deviation theory!). However, being a statistical method, we can well assume that the behaviour is true on average (in expectation). This is not a conclusion we would like to hear, but to me it appears to be a careful study, and its conclusion opens up the ramifications of the larger crisis faced by millions of people all over the world, especially in developing countries and Africa.

What these researchers did is study the stress level suffered by a person over the span of his or her life. They combined various measures of physiological stress (such as systolic and diastolic blood pressure) into an index they call allostatic load. They found that this index is higher among people from poorer backgrounds than among those from the middle class. They also found that the duration of a person's life spent in poverty correlates with the allostatic load.

The report appears to conclude that stress is more or less the sole reason for spoiling an individual's working memory. We could say that is a little too strong a statement. Children under too much socio-economic stress tend to do badly in their studies, and that unfortunately carries on forever. I am tempted to argue that a socio-economic push, say by providing opportunities to such children, would change an individual's performance. After all, we know many instances of children born into poorer backgrounds scaling great heights. But if you read the report carefully, they are not refuting this either. What they simply say is that, on a relative scale, the impact of stress during early childhood is much more serious than we perceived it to be. Children of the poor perform poorly in school, stay that way, and sadly remain poor (underachieving) adults. Clearly, the authors use 'poor' adults to mean 'underachieving' compared to counterparts from a middle-class background. In that case, one can always argue about the definition. True, one doesn't have to be a genius to do well in life. But the larger picture is clear: a poorer childhood may limit a person's potential.

One could easily mistake this for a non-scientific study. I too was inclined to think along those lines when I read the title. A careful reading, however, convinced me that there could be genuine truth in their argument. After all, the conclusion is not about a single individual, but about a population.

I am sure the wider picture of this report may have a scientific explanation too. Too much stress at an early stage of life may prevent the development of nerve cells. The bottom line is that we simply do not want to take the risk. It is indeed very important that our children and future generations do not undergo that ill fate. We have a social responsibility to be aware of this and to do our part to ease the trouble, as much as we can.

While the talk and boom about multimode, multiband phones in CMOS keeps growing, there is a natural question around it: how about doing all this in software? That is, add a level of programmability such that a great deal of the issues of a hardwired implementation are shifted to more flexible firmware. Without contention, pros and cons of the idea of programmability still prevail. Clearly, one definite advantage I see in a programmable design is significant cost reduction and reuse. Additionally, a migration or upgrade, which is imminent from a future gadget design point of view, can be done with relative ease on a programmable multimode chip. One could build a processor architecture to suit the modulation schemes (say, an in-built FFT engine for an OFDM-based scheme, or a correlator engine for WCDMA). Isn't anyone working seriously in these directions? I am sure there are many, at least startup ventures; Vanu and Icera are two that come to mind. How about the big boys? There was a lot of buzz about software-programmable baseband chips being developed; I am not quite sure what is the latest on that front. Isn't it the next big thing in the offing? I am sure the EDA big houses have thought ahead about building tools for a heavily software-oriented design, at least for the years ahead. Or am I jumping the gun a little too far? Still, I see some top-level bottlenecks in making these programmable multimode chips realizable at an easier pace than a textbook concept. One of them is the difficulty of getting rid of the analog front end. As a matter of fact, I now feel that analog is going to stay.

So where are we heading? Clearly, an all-CMOS multiband, multimode single chip (baseband and analog) with near perfect RF and a software architecture would be the ultimate holy grail of cellular chip design. How many bands and how many modes are incorporated becomes less important if programmability is assured. The challenges within a single chip concept are themselves many. Clearly the RF portion is expected to take up a smaller share of the overall chip area, and an all-digital front end is aimed in that direction. While direct digitization of the high frequency radio signal would eliminate much of the analog chain, there are several practical bottlenecks with this Utopian design model. We are not quite there yet to say goodbye to analog entirely. Analog signal processing is still critical and inevitable, even for the programmable multimode dream. Some numerical facts substantiate this claim:

Suppose we decide to build a programmable, all digital, zero-IF receiver for a 2 GHz system (around the UMTS band). Shannon-Nyquist sampling would then demand at least 4 Gsamples/second. Even with a processor that clocks at 4 GHz and does, say, 8 operations per cycle, our full-steam budget is a maximum of 32 billion operations per second. This theoretical figure assumes the processor memory is fully utilized. At a sampling rate of 4 Gsamples/second, we are only going to get 8 operations per sample. How are we going to have all the fancy radio algorithms shape life with this? Even the realistic functionality of a typical modern radio is inadequate with this budget. Another important issue is the power dissipation of running a processor at 4 GHz. For the portable gadgets these chips are targeted at, we still need more hand in hand optimization and integration of analog processing, software and digital processing, in addition to an optimized system architecture. My feeling is that the analog front end is going to stay for some more time, if not forever. At least in the immediate future, we need more inroads from analog processing to realize the small size, cost effective multiband, multimode chip dream.
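The back-of-envelope arithmetic above can be checked in a few lines (the clock and issue-width figures are the assumptions from the text, not a measured processor spec):

```python
# Assumed figures from the discussion above.
carrier_hz = 2e9                 # ~2 GHz system (UMTS band)
sample_rate = 2 * carrier_hz     # Nyquist: 4 Gsamples/s
clock_hz = 4e9                   # assumed processor clock
ops_per_cycle = 8                # assumed issue width

ops_per_second = clock_hz * ops_per_cycle       # 3.2e10 = 32 Gops/s
ops_per_sample = ops_per_second / sample_rate
print(ops_per_sample)                           # → 8.0
```

Eight operations per sample is nowhere near enough for filtering, synchronization, equalization and decoding, which is the crux of the argument for keeping analog (and dedicated digital) front-end processing.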

From this blog piece, I came to know that the smart MIT theoretical computer scientist Madhu Sudan is making a move from MIT to industry. He is set to take up a research position with Microsoft. In these economically troubled days, lesser mortals would take the conservative route that ensures stability. They would say a move from a tenured professorship to a more volatile industry is risky. But then, one of the smartest minds in the world can have a world revolve around him, if need be. So no surprises here. On the positive side it is a gain for industry, while it is a big loss for MIT, if Madhu decides to stay away from academia for too long.

Interestingly, on the very same blog, someone commented about other famous moves. Apparently Venkatesan Guruswami, Madhu's celebrated student, is also making a permanent move from the University of Washington to CMU. In industry we are often associated with frequent hops; academia is not too immune to attrition either. However, I see no harm in making smart moves. It is going to help the world, at least in expectation.

EPFL too is facing imminent big fish attrition. Tom Henzinger and his wife Monika Henzinger are about to leave EPFL to take up permanent positions in Austria. The awesome twosome will be missed at EPFL.

Today, an interesting (and perhaps general) question appeared on the LinkedIn Analog RF mixed signal group: "Regarding multi-mode multiband RF transmitters for handsets (CMOS), what do you think are the hot issues (besides PA)?" I gave a short overview of the challenges I could see when a multimode phone is to be designed in CMOS: the phone has to support a wide range of frequency bands as well as multiple standards/technologies/modulations/air interfaces. Here is what I wrote. I am not sure whether the discussion is accessible to the public, hence I repost it here.

Integrating the RF transmitter and receiver circuits is challenging, since we have to support multiple bands within a single mode (say, GSM/EDGE should support the GSM900 through GSM1900 bands) as well as multiple phone modes. For instance, a natural multimode, multiband phone supporting GSM/GPRS/EDGE/WCDMA/LTE will have to cover a wide frequency range from 850 MHz to over 2 GHz. If we were to incorporate GPS and WLAN, add that extra consideration. This affects not just the transceiver circuitry, but also other components such as oscillators, filters, passive components, frequency synthesizers and power amplifiers. Another thing is that, for multimode, the sensitivity requirements are much more stringent than for a single mode, multiband design.

Since CMOS offers low cost, better performance and better scaling, to me that is the way forward. The natural choice of transceiver architecture in CMOS would be direct conversion/zero-IF, since it eliminates the costly SAW filters and also reduces the number of on-chip oscillators and mixers. There would then be several key design issues to consider with the direct conversion architecture; the most notable ones are the well known ghost of "DC offset" and 1/f noise. Designers will have their task cut out to get a clean front end as well as near ideal oscillators.
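To illustrate the DC offset problem in a zero-IF chain, here is a minimal sketch (my own toy example, with an assumed filter coefficient, not a production receiver design) of a first-order DC-notch filter of the kind used to strip a static offset from the baseband samples:

```python
def dc_notch(samples, alpha=0.995):
    """First-order DC notch: y[n] = x[n] - x[n-1] + alpha * y[n-1].
    Blocks DC while passing the baseband signal (alpha sets the notch width)."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for x in samples:
        y_prev = x - x_prev + alpha * y_prev
        x_prev = x
        y.append(y_prev)
    return y

# A constant 0.5 V offset at the input decays toward zero at the output.
out = dc_notch([0.5] * 1000)
print(abs(out[-1]))   # tiny compared to the 0.5 V input offset
```

The trade-off is exactly the one the design debate is about: a wider notch (smaller alpha) kills the offset faster but also eats into signal energy near DC, which is why the offset is fought with a mix of analog and digital techniques rather than a filter alone.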

Now I see another problem with multimode, depending on what level of flexibility we prefer in this integration. Do we need the phone to operate in multiple modes simultaneously? Say, a voice call on GSM and, at the same time, multimedia streaming on LTE. In such a case, the question of sharing components is completely ruled out. If not, some components such as synthesizers and mixers (if in the same band for multiple modes) can be shared. Clearly, simultaneous mode operation will ask for increased silicon die size as well as cost. There may be challenges in circuit isolation between the different modes as well.

In all, depending on the level of sophistication (and of course all of this will have to scale economically too), the design, partitioning and architecture challenges are aplenty. The choice between a single chip (containing both analog and digital baseband) versus two chips (analog and digital partitioned) gets a little trickier with multiple modes. With multiple antennas (MIMO), add another dimension to this whole thing :-(.

https://ratnuu.wordpress.com

http://people.epfl.ch/rethnakaran.pulikkoonattu

The latest edition of The Economist reported this shocking fact, based on an Amnesty International study on capital punishment: our civilized governments have executed close to 2,400 people in the last year alone, all in the form of capital punishment. Over 70% of these executions were in China. Perhaps more revealing is the fact that Saudi Arabia and Iran top the per-capita share when it comes to carrying out this supreme order on their own people.

Now, there may be a stronger story to defend these killings, because some of those criminals may indeed have committed heinous crimes. But that simply cannot justify capital punishment, can it? To me, killing a human on any count is unjustified. By doing so, we are exhibiting more insanity. Why kill? We could instead put them in jail, force them to work extra hard and learn the hardship of life.

The other day, a couple of my Iranian friends and I were discussing the kind of criminal law practiced in our respective countries (India in my case, Iran in theirs). One of them asked me what punishment would be given, for instance, to a man who accidentally injures someone on the road. I told him that in India capital punishment is usually given only in the rarest of rare crimes, and even then there would be widespread opposition to the hanging. On the other hand, I was shocked to hear that in Iran some crimes are dealt with a 'blood for blood' response. Apparently, in a recent case in Iran where a husband poured acid into his wife's eyes, permanently blinding her, the judge ruled a replica punishment so that the culprit gets to feel the pain and agony his victim went through. To me this sounded too uncivilized a punishment, even though the culprit deserves no mercy. He surely does not deserve any decent public life after the heinous crime he carried out. The best way to punish such barbaric people is to put them through the most rigorous hardship over a prolonged period; that way, they get to experience the bitterness of life. Killing someone to compensate for the loss of innocent people is simply no justification: at least, I find no solace in accepting it as a civilized way of dealing with crime and criminals.

In a way, by killing the criminals we are giving those inhuman people an easy escape.

The latest talk/demo at TED opened up fresh life to the possibility of a sixth sense. The MIT Media Lab has now unveiled a prototype of the SixthSense setup. The whole thing is reasonably economical already, and all indications are that it is going to rock some day. An incredible idea that went all the way to realization. Kudos to Pranav Mistry, Pattie Maes and their team. One thing I am really hoping for is that this paves the way to assist disabled people. For instance, a blind, deaf or mute person finding avenues to a sixth sense aid would be really helpful.

http://www.ted.com/talks/pattie_maes_demos_the_sixth_sense.html
