You are currently browsing the category archive for the ‘Business and technology’ category.

Amar Bose, a name almost synonymous with high quality sound, has passed away. Years ago, I had read up a little about the man who inspired the making of something I literally use every day: the Bose Wave music system. That tiny box with its crisp, quality sound has been my favourite since 2006.

Bose’s life reflects the successful life of a passionate researcher who fearlessly chased his dream and produced a world-class product and organization. What amazes me is that he managed to do all this while still staying on as faculty, with a good share of regular teaching and formal student advising (for starters, Alan Oppenheim was his student). It is widely known that he was a great motivator as well as an exceptional teacher, said to have enthralled audiences anytime, anywhere. If that is any indication, we can imagine how great it would have been to sit in one of his classes.

I have read and heard many stories about him: about his experience starting up Bose Corporation, his interactions with his illustrious professor Yuk-Wing Lee (who was instrumental in motivating the young Bose to eventually start a company; it was he, apparently, who donated his life savings of $10,000 in the 1950s as seed money for what is now a multi-million dollar corporation), and also the rather interesting and embarrassing event where he had to give his first ever public technical talk, on Wiener’s (then recent) work, to a celebrated audience that included a certain Norbert Wiener himself. Knowing a bit of the Wiener stories, I pause to think how different an experience that must have been! Anyway, Bose’s legacy will easily stretch beyond Bose Corporation and MIT. His life is a message of courage and the pursuit of passion, if nothing else. RIP.

Almost all the deployed and successful communication strategies to date are half duplex (HD). That is, we don’t have simultaneous transmission and reception on the same frequency band, aka full duplex (FD). For example, 802.11 WiFi uses a time switch (TDD) between transmit and receive modes. Both transmission and reception take place over the same frequency band, and a single antenna is (typically) used for both tx and rx. At any given time it is either transmit or receive (or neither!) that happens. In the cellular world, such as LTE, the popular scheme is to share the spectrum by frequency (FDD). There, the up-link (from the cell phone to the base station) occupies a frequency range different from the down-link (from the base station), so transmit and receive can take place simultaneously. In both TDD and FDD, there is no overlap between the transmit and receive signals at a given frequency at the same time.

Let us posit this question: in a given frequency band, is it feasible at all to have simultaneous transmission and reception? One way, of course, is to find a domain where the two (transmit and receive) signals stay perfectly distinct, say by using orthogonal codes. In theory yes, but there is an important practical hurdle: the loudness (aka self interference) of one's own transmit signal! The analogy is trying to decipher a whisper from someone else while simultaneously shouting at the top of your own voice. In reality, the desired signal comes from a distant source after traveling through an adverse medium/channel, and its intensity is severely degraded by the time it arrives at the receiver. Let me put some numbers from a practical setup. In a (typical) WiFi scenario, the incoming signal (from an AP) at your receiver antenna (of, say, a tablet) may be around -70dBm, whereas the power of the tablet's concurrent transmission could be 20dBm! The full duplex task is really to recover the information from this relatively weak signal in the presence of a self interference stronger by 80 to 90dB. In other words, we need a mechanism to suppress the self interference by about 90dB! Getting 90dB of suppression is no easy task, especially when we are constrained by the chip and board area available in portable devices. Traditional board layout tricks such as isolation, beam steering etc. alone won't get us there.
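A quick back-of-the-envelope version of that budget, a minimal sketch using only the illustrative numbers quoted above (the ADC figures come from the textbook dynamic-range rule of thumb, added here purely for illustration):

```python
# Rough self-interference budget for the WiFi example above.
tx_power_dbm = 20.0        # tablet transmit power (from the text)
desired_rx_dbm = -70.0     # incoming AP signal at the tablet antenna (from the text)

gap_db = tx_power_dbm - desired_rx_dbm
print(f"Self interference sits {gap_db:.0f} dB above the desired signal")

# Even a good A/D converter cannot absorb that gap on its own: an ideal B-bit
# converter offers only about 6.02*B + 1.76 dB of dynamic range.
for bits in (12, 14, 16):
    print(f"ideal {bits}-bit ADC dynamic range: ~{6.02 * bits + 1.76:.1f} dB")

# With ~40 dB of analog and ~40 dB of digital cancellation (the split suggested
# later in the post), the residual still sits ~10 dB above the desired signal.
print(f"Residual after 40 + 40 dB of cancellation: {gap_db - 80:.0f} dB above the desired signal")
```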

OK, now what? The reason I suddenly brought this up is largely the increased momentum this topic has been gathering of late, in both academia and industry. It still has enormous challenges ahead, but realizing FD would bring enormous benefits. Historically, we mulled over capacity and throughput with the assumption that all resources are available: for a given channel bandwidth W, the capacity is C(W), the throughput is so much, and so on. The reality is that in most cases information exchange needs two-way communication, and that means double the resources. With spectrum pricey and scarce, full duplex can potentially yield up to a two-fold throughput gain, along with several other benefits such as a remedy to the hidden node problem in the current 802.11 MAC. On the 802.11 standards front, we now have a new study group on high efficiency wireless (HEW); I believe FD can play a role there too.

I am not prepared to discuss all the details involved here, so let me outline a rough problem formulation of FD. More general versions exist, but let me try a simple case; much more detailed formulations of the problem can be seen here and elsewhere. I have roughly used the notation and problem statement from this. Let y_{a} be the desired signal from a distant sender, arriving at the rx antenna. Simultaneously, a much higher power signal x is being transmitted. The signal x leaks through some path H and produces an interference u_{a} at the receive antenna, so the effective signal at the receiver antenna port is z_{a}[n]=y_{a}[n]+u_{a}[n]. For the sake of simplicity, let us assume that H is modeled as an FIR filter. The sampled signal relationship can then be stated as follows.

z_{a}[n]=y_{a}[n]+\underbrace{\sum_{m=0}^{\infty}{H[m]\, x[n-m]}}_{\triangleq u_{a}[n]}.

Now here is the thing. We cannot simply pass the buck to the digital domain and ask it to recover the useful signal from the powerful interference. Recall that the A/D converter sits at the very interface of the analog-to-digital partition. A high power interference signal will severely saturate the A/D and result in irreversible clipping noise. So, first we must do a level of analog suppression of this interference and make sure that the A/D is not saturated. Let us say we go for an analog canceller C_{a} to do this job. Post analog cancellation using the filter C_{a}[m] we will have,

\tilde{z}_{a}[n]=z_{a}[n]+\underbrace{\sum_{m=0}^{\infty}{C_{a}[m]\, x[n-m]}}_{\triangleq v_{a}[n]}.

The A/D signal transformation can be decomposed into the following form (using the Bussgang theorem, for instance): \tilde{z}_{d}[n]=\mathcal{A}\, \tilde{z}_{a}[n]+q[n], where \mathcal{A} is the (linearized) converter gain and q[n] collects the quantization/clipping noise. Expanding,

\tilde{z}_{d}[n]={\mathcal{A}}\, y_{a}[n]+{\mathcal{A}} {\displaystyle \sum_{m=0}^{\infty}{\left(H[m]+C_{a}[m]\right) x[n-m]}}+q[n].

If we then do a digital cancellation at the A/D output stage with a filter C_{d}[m], we have \hat{z}_{d}[n]=\tilde{z}_{d}[n]+\sum_{m=0}^{\infty}{C_{d}[m]\, x[n-m]}. Incorporating all of these, we will have

\hat{z}_{d}[n]={\mathcal{A}} y_{a}[n]+ \displaystyle \sum_{m=0}^{\infty}{\left[\mathcal{A} \left(H[m]+C_{a}[m]\right)+C_{d}[m]\right] x[n-m]}+q[n].

Now, if we can adapt and find C_{a}[m] and C_{d}[m] such that \mathcal{A} \left(H[m]+C_{a}[m]\right)+C_{d}[m] \rightarrow 0, then we can hope for near perfect self interference cancellation and produce \hat{z}_{d}[n]={\mathcal{A}}\, y_{a}[n]+q[n]!
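Here is a minimal numerical sketch of that hybrid idea, with the A/D modeled as ideal (\mathcal{A}=1, q[n]=0), a made-up leakage path H, a deliberately mismatched analog canceller, and an LMS-adapted digital canceller C_d. All names and parameter values are illustrative assumptions of mine, not any published design.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 20000, 4                                   # samples, leakage-filter taps (illustrative)

x = rng.standard_normal(N)                        # known transmit samples, unit power
y = 10 ** (-90 / 20) * rng.standard_normal(N)     # desired signal, ~90 dB weaker

H = np.array([0.8, 0.3, -0.1, 0.05])              # hypothetical leakage path
u = np.convolve(x, H)[:N]                         # self interference at the antenna

# Analog stage: canceller only roughly matches H (~1% tap mismatch -> ~40 dB suppression).
C_a = -H * (1 + 0.01 * rng.standard_normal(L))
z_tilde = y + u + np.convolve(x, C_a)[:N]

# Digital stage: LMS adaptation of C_d, driven by the known transmit samples.
C_d, mu = np.zeros(L), 0.01
out = np.zeros(N)
for n in range(L, N):
    x_vec = x[n - L + 1:n + 1][::-1]              # [x[n], x[n-1], ..., x[n-L+1]]
    out[n] = z_tilde[n] + C_d @ x_vec             # residual after digital cancellation
    C_d -= mu * out[n] * x_vec                    # push the residual toward zero

db = lambda p: 10 * np.log10(p)
tail = slice(-N // 4, None)                       # look at the converged tail only
residual = out[tail] - y[tail]
print(f"residual self interference: {db(np.mean(residual**2)) - db(np.mean(y[tail]**2)):.1f} dB "
      "relative to the desired signal")
```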

So, in theory there is a way to do this: a hybrid approach wherein some correction is done in the analog domain (before the A/D), followed by a more easily realizable digital cancellation stage. There are many more practical hurdles. Some of them are:

  1. Performing correction/adaptation at RF frequency is not trivial
  2. If we are to do this post mixer (after downconversion), then the LNA nonlinearity (and a potential saturation) will come into play
  3. Channel/coupling path estimation error will degrade performance
  4. Calibrating analog correction is a little more involved
  5. A typical goal may be to have about 40dB suppression from analog correction and another 40dB from digital.
  6. Digital and analog correction/calibration must converge reasonably fast, so as not to spoil the goal of simultaneity!

Some of the recently published results are indeed promising, and some prototypes are also being developed. More general versions involving multiple antennas are also being talked about; in that case, some beamforming can provide additional support. Let us hope that, with some more push and effort, we get to realize this in the real world one day.


Most of you may have been following this new prototype being developed and deployed by Google. I am talking about project Loon, an idea conceived by Google to help connect the few billion friends around the world who are still deprived of the benefits of the internet. The idea at first may sound like fiction, but this one is for real. Some pilot projects are already under way in New Zealand. Let us watch for this to spread its wings in the coming months and years!

Anyone remember the old Motorola/Iridium initiative? It launched and failed for many reasons, but the idea at the time was to have the entire world voice-connected. Project Loon is a bit more than that in intention, technology and economic viability. Besides, Loon is backed by a highly successful technology-driven company. The goal is to have pretty much every corner of the world connected to the internet, the holy grail of global networking. Whereas Iridium needed sophisticated low earth orbit satellites, project Loon aims to get the job done with a set of balloons equipped with wireless communication technology. The number of balloons may be much larger than Iridium's 66 or 70 satellites, but the balloons are far less expensive, and greener, than the failed initiative!

So what goes into the making of project Loon? Logistics-wise, it needs the deployment of a sufficient number of helium-filled balloons into the sky, the stratosphere layer of the earth's atmosphere to be precise. Why the stratosphere? Because the balloons make use of the wind flows that prevail in the stratospheric layers to steer and position themselves around a certain location locked to the ground. The balloons are not quite stationary; they move around, but on average a certain number of balloons will stay within range of a given location to provide reasonable coverage. All the balloons carry enough circuitry to perform the necessary wireless networking jobs. The balloons are connected (wirelessly, that is) to neighboring balloons at all times, and some of them talk to available ground station terminals through which they establish a connection to the internet backbone and thus to the rest of the connected world!

The balloons may have varying shapes and orientations. The shape of the balloon and the wind pattern come into the equation used to steer them and keep them in place (or move them around the earth) in the atmosphere. They may not only move around the earth, but can also potentially move up and down within the stratospheric layers. Each of these balloons is approximately 15 meters in diameter and floats at about 20 km altitude above the earth's surface. For the record, this height is more than double the altitude of the highest clouds, or for that matter the altitude at which airplanes fly! The task involves gyration, balloon steering and of course quite a lot of wireless mesh networking and coordination. At the user side, you will have a specialized antenna (or antennas, depending on whether MIMO comes in) to talk to one of the balloons above your location, and we are all set to go. When fully operational, everything else will be transparent! Pretty much all the energy for operating the balloons will come from solar power; the other natural resource used is wind. Both are green, free and almost universal!

I am very excited about the prospect of this coming off in full force in the near future. On the beneficiary side, it will help reach the far corners of our planet. More than that, it may well serve as an inexpensive way for a few billion folks to reap the benefits of the internet and stay connected. Above all, the children of a lesser world can get a share of a better one. Imagine a remote village school in Burundi or Bangladesh getting access to better educational tools through the internet! Wouldn't that be beautiful? Corporations will make money, but when the less privileged also benefit, that is something to cheer. In the end a model will sustain itself and everyone can have a share, monetary or otherwise.

Check out more details at the project Loon page. The google+ page has more updates pouring in.

In a lighter vein, what is the main downside of this everywhere connectedness? Here is a potential spoilsport scenario! You will agree with me here:-)

One of my favorite cell phone apps to date is the navigation utility Waze. The only downside I've noticed is its hunger for power (it drains the phone battery in no time), but GPS in general hogs the battery anyway. In a car with a charging unit it is not a killer drawback, though still a negative. Its user friendliness, coupled with the ability to provide almost real time side information (through user assistance and online feeds) such as traffic conditions, presence of police etc., makes it such a handy tool on the move. I had been contemplating that it would be bought by Google or a Facebook. Now what? It didn't take too long! Waze has been gobbled up by Google, for a reported billion-odd USD. I like Google Maps too. Now we have a chance to have it all in one! Hopefully, a better one!

I love Youtube. Every day, more or less on average, I end up spending (or at times wasting) some time there. Today was no exception, yet I was pleasantly surprised to hit upon some videotaped lectures of Richard Hamming. These were apparently recorded in 1995 and span a wide variety of topics. I didn't get to go through all of them, which hopefully I will do sometime. I particularly liked this one on Discrete Evolution. The depth of knowledge these folks have is immense. What is astonishing is their ability and skill in connecting their point of expertise to a vast range of physical analogies. Long live the internet!

It was interesting reading up on this piece of remake; somewhat of a historical remake, so to speak. That classic photo of Paul Allen and Bill Gates, shot as young geeks in 1981, now has a complementary remake with a new, yet 'older', avatar!

A sad end to what looked like a promising and prodigious mind, complicated by many wizardly, perhaps at times turbulent, actions and even more so by hasty reactions from various corners of our society, including law enforcement. The news of Aaron Swartz's death at the young age of 26 is disturbing. The man who at the age of 14 sparked into stardom by helping create the now popular RSS tool for information subscription is no more! More than the wizardly invention, his name unfortunately came into the wider limelight through the MIT/JSTOR document-downloading case. He later championed several causes on free information access, and with that drive the right to free information in the internet era once again caught worldwide attention. It is difficult to take sides in this case, because it is entangled with multiple levels of complication involving the right to information, ethics, social stigma, the law of the land, money, business, a wizardly mind and of course the turbulence of the human mind!

I read his uncle's statement: "He looked at the world, and had a certain logic in his brain, and the world didn't necessarily fit in with that logic, and that was sometimes difficult." I couldn't agree more with these words of Mr Wolf on Swartz. Don't forget he was an ardent contributor to Wikipedia as well. Rest in peace, Aaron!

Last week, I had a chance to catch up with two pioneers at Irvine. The coding champion Gottfried Ungerboeck (among so many other things, trellis coded modulation came to light through his seminal thoughts) and the multi-faceted public-key crypto pioneer Martin Hellman (of Diffie-Hellman fame, among the many other things he has done) were gracious enough to join me for lunch. They were in town for this year's Marconi award felicitation. For me personally, it was a whale of an opportunity to interact with these two connoisseurs. I didn't have much time to interact with Ungerboeck while he was still employed at Broadcom, but the little time I spent with him last week gave an indication of how much I would have gained had he stayed longer (or had I joined Broadcom earlier):-)

With the Crypto guru Martin Hellman

With the great Ungerboeck

This year's Marconi Foundation prize is being awarded to our company founder Henry Samueli. With last year's prize awarded to the other connoisseur Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have the two stellar communication company founders getting the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to the champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer Synopsys) talks too. Congratulations Dr. Samueli, we are proud of you.

The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant who contributed immensely to many flavours of information theory will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, above all, depth.

A tremendous loss! His legacy will continue.

While coming back from lunch, the office front desk TV had this breaking news from CNN: "Kodak exiting camera business". First thought, I said, Huh!, and I am sure many would have felt like I did. Kodak's case is really a case of staying rooted in the analog world while the technology around it went digital. A sad loss, but then in business there is no emotion!

Here is the BBC clip

I wasn't surprised at all by this. To be honest, I expected this to happen long ago, as early as 2006 or so. By 2009, I was seeing the writing on the wall. So, when my colleague sent the note yesterday afternoon, I pointed him to this blog!

Now then, Magma is part of Synopsys. With Extreme DA too in the kitty, Synopsys is clearly staying ahead in the EDA leadership. Analog is where Cadence still has the edge over them.

Another stalwart, the founding father of Unix and C, and in many ways one of the founding fathers of computing itself, Dennis Ritchie has passed away. For me, Ritchie along with Kernighan was one of the first names registered in my mind since learning computer programming. The first book I ever saw on a programming language was this duo's book on C. And boy, wasn't that the most concise book in technology: ever so compact and yet rich and clean!

Come to think of it, the impact of Ritchie on modern science and technology is enormous. He may not have been a very public figure, but his contributions are a touchstone of the modern world, especially the information network world. Much of the internet, Google, the iPhones and, well, almost everything we can think of runs on his work or its variants. Quite remarkable.

I think the most apt summary of Ritchie's contribution comes from Brian Kernighan himself. He said, "The tools that Dennis built - and their direct descendants - run pretty much everything today."

 

We all knew that this was inevitable, but all of us were hoping it would be delayed as much as possible. Sadly, the day we all feared has finally come, and it was today. Steve Jobs, the ever so mercurial leader of our industry, has finally lost his battle with cancer and passed away this evening. Thousands of pages have been written about him and his contributions, and more will follow in the days to come, from every corner of this planet. Let me not go there. To me, he has been the symbol of a child who always followed his dream and, to top it up, had the trust and ability to see it through. People may say he was not philanthropic, but that was not his title, nor did he claim to be one. What he showed us is that it is what you decide, what you want to become, and it is entirely up to you to follow it tirelessly and achieve it. That's it, no more no less. What others think and say is completely irrelevant, as long as you put the trust honestly into your mind.

Come to think of it, his life, his work and the glory associated with the making of one of the most highly valued companies in the world all have a charm and a special persona associated with them. His 2005 Stanford commencement speech made him immortal and inspirational to a wider circle of life all over the world. More than being a tech whizkid, he was a symbol of innovation. More than a manager or a programmer, where he stood above the rest was in the clarity of his product vision and the leadership to drive it through. I have heard several stories from my friends and comrades about his passion to drive things at all costs, at times at the risk of spoiling personal relationships. That single-minded drive to realize something special every day made him this special. More often than not, we could see the sense of honesty in every statement he made, whether in public forums or in personal remarks at interviews. No tantrums and no diplomacy hanging around, plain simple truth in blunt words.

The world has lost a leader, visionary and innovator. He did not invent a medicine for cancer or AIDS, but he made many a mark in the lives of thousands of people around the world. For some, he was the man who championed the realization of many amazing products of everyday use (myself a huge beneficiary, directly and indirectly!) and for others, his life itself serves as a message to follow their own dreams and enjoy a lovely and satisfying life. Thank you, Steve Jobs. You have left a stamp on many lives.

While driving back home this evening, on a dark and rainy day, I had the Stanford speech in mind. My mind seemed to say: thank you, sir. The words "Stay hungry, stay foolish" reverberated on. Immortal words! Along the same bay, he now rests in peace.

Footnote: CNN Money had this report published some time ago. The level of scaling Apple achieved under his reign since the beginning of this century is stunning.

Courtesy: From CNN money report (See the link above)

Tom Kailath is a genius when it comes to presenting concepts. His talk at USC during the annual Viterbi lecture in 2011 was no exception. He talked about radiative transfer and how some of the results there connect beautifully to algorithms used in communications. As always, the master of many had a lot of stories to tell, and it was mind blowing, to say the least.

So, things are going great for high tech. Hopefully they will stay like this for a while. Intel just reported a strong fourth quarter result: the Q4'10 to Q4'11 increase in revenue is 40%. Wow! The forecast is looking good too. Clearly, the tablet market is set to explode. The upbeat mood was reflected in the market as well. The good thing with big-brother companies like Intel is that they can take the market along. When they produce great results, it generally strengthens the health of the industry as a whole. When they take a hit, the impact is disastrous for high tech houses and the semiconductor industry in particular.

If this is true, then it has to be one of the heaviest buys in the communication industry. The Atheros buy may well be a WLAN entry for Qualcomm. Fingers crossed!

Phew! Think of this. SAP in 2005 acquired a services company named TomorrowNow for $10 million. In just about 5 years, the new owner is in line to pay $1.3 billion to Oracle. For what? For the wrongdoing of the acquired company in its teens! There have been several corporate white collar crimes in the past; one distinctly vivid case is the Avant!-Cadence battle, but the new one scales much higher. Clearly, SAP wouldn't have anticipated the literal realization of "tomorrow now" then, but now it is a blown-up penalty that SAP will have to contend with.

So, what is the case against TomorrowNow (which is very much part of SAP AG now)? Oracle filed a case against SAP for illegal copying/usage of Oracle licensed software. Oracle claims that TomorrowNow illegally copied software code needed to support customers without buying licenses (from Oracle) to access it. TomorrowNow made thousands of duplicates of copyrighted software obtained by illegally accessing electronic materials from Oracle's customer-support websites, the lawyers said. That is quite a mess TomorrowNow brought into SAP. Well, now there is no way out but to pay the $1.3 billion and work harder for the future!

Recently, this came up during a lunch discussion with my colleagues at Broadcom. I remember reading an article somewhere about the impact of cellular phone towers on bird migration.

Researchers from the Research Institute for Nature and Forest in Brussels, Belgium have investigated this subject and published an article (more reports here). According to the authors (Joris Everaert and Dirk Bauwens), house sparrows prefer not to stay near GSM base stations because the radiation in the 900 MHz range adversely affects them. The paper investigated the male house sparrow population. The statistics quoted in the paper suggest that cellular tower radiation is having some kind of impact on city birds. Perhaps a more scientific study is needed to assess the details, but this in itself is a reason to worry.

Another interesting blog comes from India, which also suggests that the Belgian research finding is correlated with what is observed elsewhere. But then, I am left wondering how there were thousands of sparrows flooding almost every street at Schaffhausen. We had been on a family holiday there last year; it was the place where I have seen the largest number of sparrows, all waiting to get fed and pampered! Maybe they have adapted to the technology, huh?

I remember reading this Spectrum magazine at a friend's house in Zurich last month. I am not going to reveal his identity any further (I fear a backlash:-)), but he has a nice habit of keeping a pretty good collection of magazines in the bathroom. The collection includes National Geographic, The Economist, Scientific American and Le Monde. I am not one of those guys who relish reading at length in those hot seats, but for once, I did scan through the hanging Spectrum magazine.

Anyway, the one I wanted to mention is the Spectrum article on "the Internet speed". The fastest internet speed is enjoyed by South Korea, not the United States; the average speed there is 11.7 Mbps. When you are desperate for the best browsing, now you know where to head! The list of countries at the top is a bit of a surprise. In Europe, for instance, the fastest pal is Romania, the beautiful eastern land which is not really known as an internet bulldog. Switzerland is 10th, which is not really surprising, because I never found the speed lacking there. The Euro Cup live HD streaming was so smooth that I never felt the need for a TV.

Ah, back to the country statistics! Don't worry too much if you feel doomed at the prospect of applying for a Korean visa. There are places in the US which are as good; in fact better! If you go by the fastest internet cities/towns, then Berkeley is the place. The average internet speed at Berkeley is 18.7 Mbps, which is better than Korea's national average:-).

All these figures are published by Akamai Technologies. An interesting thing they report is the trend in the average speed: it turns out that the average speed has come down in recent years. Korea itself slowed down. Korean downloads were 29 percent slower in 2009 than in 2008, and a further 24 percent slower in the fourth quarter of 2009 than in the third.

I can never have enough of Aart de Geus, my former CEO who still very much remains my role model. Every time I hear something from him, it is inspirational and mind blowing. No wonder Daniel Nenni is mightily impressed by Aart's presentation at the EDA CEOs' meet last month (a detailed account is here in Nenni's blog). The point Aart stresses is the need for collaboration, more so in these times when social networking, spurred by the internet, is reshaping everything. It happens everywhere these days, more so in research. Many years ago, single author publications were the norm, but things have changed of late. Now we have authors collaborating across boundaries and continents, sometimes without ever meeting in person. I think this is a good trend; everybody benefits. Aart of course was stressing that the semiconductor industry needs no less. Gone are the days when discussing problems was considered unethical. In a free world, one needs to be fearless in asking questions. After all, talking is good!

As always, Aart has that super skill to put things in an eye catching manner. Daniel phrased it more aptly in his blog, as follows: “Aart also introduced the word systemic (yes I had to look it up) and a mathematical equation correction: Semiconductor design enabled results are not a SUM but a PRODUCT. As in, if anywhere in the semiconductor design and manufacturing equation there is a zero, the results will be a bad wafer, die, chip, or electronic device, which supports GFI’s vision for a new type of collaboration between partners and customers.” Beautifully put and phrased.

If you have ever listened to Aart's talks, it is a no brainer to guess the kind of super presentation slides he makes. Here is the one from this talk (again, please read Daniel's blog for an elaborate discussion). The analogy is the task of finding a vegetarian restaurant without the service of a vegan mother-in-law. The point is that, at the moment, it is still a long and expensive route. We need smarter ways to speed it up (and cheaper ones, of course). I leave you to Daniel's blog for further reading. It is indeed a fabulous read.

Wireless transmission at rates in the order of 1000 Mbps! It was once considered to be the holy grail of wireless transmission. Well, now we have WirelessHD and WiGig, which can scale those mountains. The new WiGig standard targets multiples of 1000 Mbps: we can transmit up to 7 Gbps, albeit over short range (on the order of 10 meters or so), all without any wires, using WiGig over the 60 GHz spectrum that is available and largely unused across the world. Come to think of it, 7 Gbps is a hell of a lot of data in a tick of time. Just about 10 years ago we would have easily brushed away the need for something like this, because we could not really fathom an application that needed such a sack of data. But things have changed since then. Now we have Blu-ray players and uncompressed HD video imminently awaiting wireless transfer to high-definition displays.

A couple of months ago, the WiGig Alliance and the Wi-Fi Alliance announced a cooperative agreement to share the technical specification and compliance testing for 60 GHz and WiFi. So, from a standards point of view, things are moving pretty fast. After all, we seem to have learned from the atrocious delays in many earlier standards, most notoriously IEEE 802.11n. A product compliant to WiGig/IEEE 802.11ad is still far away, but there is serious motivation to get the standard spec to evolve.

There are two parallel drives on the 60 GHz spectrum. In terms of productization, non-standard, somewhat proprietary solutions are kind of available in the market: SiBeam's WirelessHD™ (at 60 GHz) and Amimon's proprietary WHDI solution (on the 5 GHz spectrum) are available now. On 60 GHz, only one product (as far as I know) is available, and that is compliant to WirelessHD™.

By the way, WirelessHD™ has also published version 1.1 of its consortium spec. The IEEE spec and WirelessHD™ are showing no signs of consensus, which is a bad sign. Hopefully, at some stage these two merge into one standard spec. My concern is that, in the event of dual standards, there is potential interference between products compliant to the two standards. The one WirelessHD™-compliant chipset available (not sure whether it is selling) is far too expensive. So, we need tremendous price scaling to make these things viable from a business point of view.

The WiGig product is unlikely to hit the market in the next two years, but it will come sooner rather than later. The three main applications of WiGig are (1) short range streaming of uncompressed video between HDMI devices, (2) desktop storage (much like the wireless USB once talked about so highly during the UWB days); the much talked-about USB 3.0 will become an important requirement for this to happen, and Intel will have to support this transition on all processors, which I am sure will happen at some stage, and (3) docking stations: wireless transfer between a monitor and a docking station.

Pricing is going to be the single biggest bottleneck for WiGig to get into the mass market. A sub-$10 chipset is a bare minimum requirement to have any kind of penetration into the consumer electronics market. Going by the way things have moved in the past, the pricing problem can be solved in a few years.

In my opinion, the killer need for 60 GHz to succeed will be serious power savings. The antenna will be significantly smaller (because of the higher carrier frequency) and may perhaps be a silicon-based integrated antenna. To get into portable devices, we need a solution that stresses the battery less. Can we look that far ahead now, say five years from now?

The new spec has some very interesting features. While it consumes 1.6 GHz of bandwidth, with multiple antennas it calls for some sophisticated signal processing techniques to scale that mountain. The radio design is extremely challenging. Above all, we need backward compatibility with WiFi. I hope by then we can do away with those annoying IEEE 802.11b devices out of the box!

So, the days ahead are exciting. It is natural to pose the question: how much more can wireless do? As Marconi said, "It is dangerous to put limits on wireless." So true!

Come to think of it, it may one day be possible to have a realistic relay channel setup, maybe in future short and medium range wireless LANs. I am going to make a modest attempt to pen down a possible model in a few days' time. I think it has potential.

One of the terms which often resonates inside a semiconductor company is "split lots". Even though I vaguely knew what it referred to, the haziness around it was deeper than the meat of the term. Borrowing Mark Twain, "So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover." If not fully, I should at least know what I am completely dumb about. In the end it is a fairly simple piece of terminology. Here is what I understood, most of it gathered by repeatedly bugging many of my VLSI colleagues, coupled with my clumsy fight with Mr. Google.

Firstly, the hero term: "corner lot". Corner lot is a term referring to the semiconductor fabrication process corners. Basically, the fabrication parameters are deliberately (and carefully) changed to create extreme corners of a circuit etched in a semiconductor wafer. These corner chips are expected to run slower or faster than the nominal (average behaviour) chip produced in large volume. The corner lot chips also function at lower or higher temperatures and voltages than a nominal chip. In short, they differ from the typical parts in terms of performance.

Why do this? Well, the simple answer is: to study the robustness of semiconductor chips mass manufactured through a complicated fabrication process. When millions of chips are produced at the assembly, statistics come into play (think of the law of large numbers and the central limit theorem). In order to ensure that the manufactured chips function within a certain confidence interval (within a certain variance from the typical parts), it is the practice to fabricate corner lots.

When the volume of samples is large, the manufactured VLSI chips are likely to show performance variation that admits a Gaussian statistical distribution. The mean of the distribution is the nominal performance of the chips; the corner lot chips are meant to sit at the three-sigma (or six-sigma) deviation from that nominal. The process parameters are carefully adjusted to three sigma (or six sigma, depending on the need) from the nominal doping concentration in the transistors on a silicon wafer. This way, one can deliberately mimic the corner chips which come out in volume production. In the manufacturing routine, the variation in performance may occur for many reasons, such as minor changes in temperature or humidity in the clean room. Variation can also happen with the die position relative to the center of the wafer.
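As a quick numerical illustration of what those sigma levels buy you (a minimal sketch assuming a perfectly Gaussian spread, which real fabs only approximate):

```python
from math import erf, sqrt

def fraction_within(k_sigma):
    """Fraction of a Gaussian population lying within +/- k_sigma of the mean."""
    return erf(k_sigma / sqrt(2))

for k in (1, 2, 3, 6):
    print(f"within +/-{k} sigma: {100 * fraction_within(k):.5f}% of parts")

# So if the corner lots placed at ~3 sigma still meet spec, all but roughly 0.27%
# of a high-volume production run is expected to meet it as well.
```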

In essence, corner lots are groups of wafers whose process parameters are carefully adjusted to chosen extremes. How are these corners produced? In other words, what exactly is changed to achieve these extreme wafers? The exact details are not presented here, but from this forum I infer that the main parameters are 1) the doping concentration, 2) the process variation, 3) the resistance of the actives, 4) the properties and thickness of the oxides, and 5) the effective widths and lengths, stray capacitances, etc.

What do we do with these corner lot chips? These extreme corners are characterized under various conditions such as temperature, voltage etc. Once the characterization across these corners is shown to be within accepted limits, the mass manufactured volume of semiconductor chips falls within the required confidence interval. If all the corner lot chips meet the performance targets, it is safe to assume that the huge volume of mass manufactured chips will also fall within the performance limits. That way, a humongous saving of time and effort over testing every chip is achieved.

What are the different process corner types? There is no end to the terminology here. The nomenclature of the corners is based on two letters (we limit attention to CMOS alone): the first letter denotes the NMOS corner and the second letter the PMOS corner. Three types exist for each, namely typical (T), fast (F) and slow (S). Here, slow and fast refer to the speed (mobility) of electrons and holes. In all there are \binom{3+1}{2}=6 such combinations (ignoring order), namely FF, FS, FT, SS, ST and TT. Among these, FF, SS and TT are even corners, since both PMOS and NMOS are affected equally. The corners where one device is fast and the other slow (FS and SF) are skewed. The even corners are expected to behave less adversely than the skewed corners, for valid reasons.

By the way, the most obvious impact of a "corner" is the difference in device switching speed, which is related to the mobility of electrons and holes. The mobility is in turn related to the doping concentration (among other things). A rough empirical model (shown below) captures the relationship: the mobility \mu depends on the impurity type and the doping concentration \rho, with parameters that vary depending on the impurity. The three common impurity elements are arsenic, phosphorus and boron (see this link for further details).

\mu=\mu_{0}+\frac{\mu_{1}-\mu_{0}}{1+\left(\frac{\rho}{\rho_{0}}\right)^{\alpha}}

where \mu_{0} and \mu_{1} are the minimum and maximum limits of the mobility, \rho_{0} is a reference doping concentration and \alpha is a fitting parameter.
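A minimal numerical sketch of this empirical model; the parameter values below are rough, illustrative placeholders in the ballpark of electron mobility in silicon, not fitted constants for any particular dopant:

```python
# mu(rho) = mu0 + (mu1 - mu0) / (1 + (rho / rho0)**alpha), as in the formula above.
# Placeholder parameters, roughly in the range of electron mobility in silicon (cm^2/Vs).
def mobility(rho, mu0=65.0, mu1=1400.0, rho0=1e17, alpha=0.7):
    """Carrier mobility as a function of doping concentration rho (cm^-3)."""
    return mu0 + (mu1 - mu0) / (1.0 + (rho / rho0) ** alpha)

for rho in (1e14, 1e16, 1e18, 1e20):
    print(f"doping {rho:.0e} cm^-3 -> mobility ~ {mobility(rho):7.1f} cm^2/Vs")
```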

In the figure below, the mobility of electrons for different impurities and doping concentrations is shown. The three commonly used dopants are arsenic, boron and phosphorus.


It is an irony that the dopant which is deliberately added to increase the conductivity of the semiconductor itself slows down the mobility of electrons and holes, due to collisions (of electrons or holes) with the dopant atoms.

A brief note on the term mobility too. Broadly speaking, the device switching speed is directly related to the mobility of the charge carriers (electrons and holes), so higher mobility roughly implies better and faster switching logic. When an electric field E is applied to a semiconductor, the electrostatic force drives the carriers to a constant average velocity v as they scatter off impurities and lattice vibrations. The ratio of this velocity to the applied electric field is called the mobility \mu; that is, \mu=v/E. Initially, the velocity increases with increasing electric field and finally reaches a saturation velocity at high fields. When the carriers flow at the surface of the semiconductor, additional scattering may occur, which pulls the mobility down.

The kinds of interactions which happen at atomic and sub-atomic levels within and outside a transistor are way too complex to comprehend (in a blog, for sure!). Besides, the millions (and these days billions) of these transistors must all work in tandem to make the system function as desired, and that too without a single one failing! To make a business of it, a certain economic sense should prevail as well. It goes without saying that time is extremely critical to getting the product out; the split lot is one heck of a way not to stretch that time window.

(Photo courtesy)

Great to find and read an article/report in EE Times about a company founded by many of my ex-colleagues and friends. Saankhya Labs seems to be in good shape to make a big impact in the fabless startup arena. So far, the success of Indian startups has been mainly in the services sector, with a few in the IP/networking box space. Saankhya is targeting a niche market driven by software defined, programmable radios for the digital TV market. It is beyond doubt that a universal demodulator has tremendous potential in the consumer TV market, yet it remains largely untapped. With so many different digital TV transmission standards running around the world, a single decoder which works in all locations is of great interest. Saankhya also has an analog (NTSC) decoder, which will be handy during the transition period when US service providers switch from analog to digital (ATSC). Best wishes to Saankhya.

The Wireless Gigabit Alliance (WiGig) has a new (updated) website. For starters, there is a link, How WiGig Works, which nicely explains what WiGig is all about in clear layman's terms. If you ever wondered whether we have seen the finale of the wireless rate surge, just re-think. We are still far from drafting even a proposal, but there is surely plenty of light on the wireless horizon. As an example, HDTV would require about a 3 Gbps rate; WiGig is addressing applications such as this, which demand rates beyond 3 gigabits per second. The brief tutorial is a compelling read.

The much awaited Wolfram Alpha went for a soft launch last night. It had some start-up glitches, as Wolfram mentioned during the live demo, but fortunately nothing major prevented me from getting a first feel of it. Erick Schonfeld has a nice blog post with a detailed first-hand description of this new computing search engine. He also did a one-to-one comparison with Google for a few specific search queries.

My first impression is much along the lines of what I expected after reading Wolfram's pre-launch blog. This is not a Google competitor for sure, but instead an incredible complement. Wolfram Alpha is more of a scientific and quantitative information search engine. For instance, if you want to know the Taylor series expansion of the exponential function e^{x}, you can get it easily by entering "Taylor series of Exp[x/2]". As you would imagine, Google does not give this precise answer, but instead gives you a list of documents matching the query, for instance a set of PDF links where this is already calculated. Clearly, Wolfram gives a more accurate and clever presentation of the result. Wolfram Alpha seems to use quite a lot of Mathematica capabilities too, such as plotting. Any mathematical query leads to pretty good results, sometimes including plots, histograms, Taylor expansions, approximations, derivatives, continuity and so on. It is a nice feature to have for students and engineers.
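For reference, the expansion that particular query asks for (a standard Taylor series, written out here for completeness in the same spirit as the other formulas on this page):

e^{x/2}=\sum_{k=0}^{\infty}\frac{(x/2)^{k}}{k!}=1+\frac{x}{2}+\frac{x^{2}}{8}+\frac{x^{3}}{48}+\frac{x^{4}}{384}+\cdots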


This is the sort of query it likes most, as opposed to something like "proof of Sanov's theorem". Google will instead list a set of documents which contain the proof one is looking for, since it simply searches the web and displays a list of matching documents, ordered by PageRank, which is, loosely speaking, the order of relevance.

Not all queries are bound to get a result with Wolfram Alpha, at least for now. That is expected, since it is not yet fully launched, only soft launched. In the coming days they are likely to have it running full fledged, with all kinds of queries supported.

So, Wolfram Alpha is definitely going to be useful in very many cases, and it surely is going to rock for scientific searches. I initially thought Google Squared, which is coming from Google shortly, was addressing the very same segment of the search space, but it is clearly different.

I tried "tallest mountain Switzerland". It gave a very nice, cute, quantified table. I love this kind of result. It also states things with less ambiguity: for instance, the height is given in meters, but a list of unit conversions is shown alongside, which helps people map it into the units of their convenience.

I tried the query "Who is Claude Shannon". This is what it displayed. Of course, the result is very brief information about him. The same query in Google will lead you to the more detailed Wikipedia entry on Shannon, or maybe the MathWorld entry, among the list of hits. Wolfram Alpha gives information more in capsule form: if you need to know more, you should ask more. Clearly, which search engine to use is thus subject to the query type. I strongly see Google and Wolfram Alpha as complementary. Wolfram Alpha gives more or less one reply to a single question; of course, you can refine the query and then get an answer to that. In some sense, this is like people asking questions of one another in a real physical setting. Imagine you ask a friend, a knowledgeable pal that is: who is Shannon? He would perhaps start answering along the lines Wolfram Alpha does; on repeated questioning he will give more details. On the other hand, Googling is like broadcasting your query to a large pool of friends, each of whom sends what they know or have heard about Claude Shannon. It is you who decides which among the many answers/explanations suits your need!

We can afford some amount of spelling error while entering a query in Wolfram Alpha. Since it is natural language based, that is a decent feature to have. I deliberately typed the query "distnace from Bangalore to geneva" instead of "distance from Bangalore to geneva". It understood the intended query and displayed the result in a nice quantified table. Even the geographical trace between the two places is shown. Incredible!

When I tried “weather in Lausanne”, this is as good as it gets.  Spot on with all possible things you want to know in one screen! It had a list of mountains and their heights mentioned!

In a nutshell, Wolfram Alpha gives you the best cooked food, given a user recipe as input. Google gives you a list of the foods available, and you pick the one whose taste suits you. It really then is a question of preference, time, and the satisfaction of the end user in choosing between them. As far as I am concerned, it is subjective. I see both of these as invaluable, and both will co-exist. Scientists, economists, finance folks, mathematicians and historians are all bound to benefit from this new computing engine. I am waiting for the full release!

I am eagerly waiting for this new search-and-compute engine promised by Stephen Wolfram. They call it Wolfram|Alpha (if Google always goes with beta releases, Wolfram is going even earlier). This, if it works along the promised lines, is going to rock the evolution of the Internet. From the outset, this is not just a search engine; it is a kind of intelligent searcher which can loosely understand human requirements.


For long, it was perceived that a search engine driven by natural language processing is the way forward. But it is pretty hard to build such a system, since natural language processing is no mean business. Wolfram's idea is to create an abstraction, and then algorithms, out of realizable models. Once we can map the requirements to algorithms that are computable, at least in principle we can build such a system. But those are already a whole lot of heavy statements. How easy is it to build all these methods and models into an algorithmic framework? He is using the New Kind of Science (NKS) armoury to realize that. We have to wait to see the full rainbow, but when he promises, we can confidently expect something big.

Now, once the algorithmic mapping (and implementation) is done, the question of natural interaction between humans and the system comes up. Natural language is the way, but according to him we don't have to worry about doing that as such: once the knowledge of the individual domains is made into a computational framework, that is enough. I am not an expert in natural language processing or the NKS framework, but for sure this is pretty exciting, both from an algorithmic point of view and as a practical Mont Blanc to climb. As Wolfram himself pointed out, pulling all of this together to create a true computational knowledge engine is a very difficult task. Indeed, it is still considered a difficult problem, both in academia and industry. So there is excitement aplenty in the offing. I am eagerly waiting for this to hit soon.

Consider that the big-wig search engine houses, including Google, are still struggling to build that dream natural language engine (the many pseudo ones in the market are not quite convincing). I remember that http://www.ask.com started their business along those lines, but never seemed to cross that elusive mark of acceptance, at least not to the extent of capturing a worldwide wow! If Wolfram has a new way to get this through, it will be a big breakthrough. I can't wait to see it. Wolfram promises that it is going to be very soon; he says it is coming in May 2009. My guess is that they will release it on May 14, 2009.

It was today. I've just come back to the office after the dinner party hosted as part of the I&C anniversary celebrations at EPFL. Andrew Viterbi was the guest of honour and, largely because of his fame, there was a considerable crowd attending the function. Martin Vetterli made a nice, colourful, flashy presentation illustrating the history of I&C at EPFL as well as scientific progress in Switzerland. He mentioned names including Jim Massey and Ungerboeck, who are undoubtedly pioneers of modern communication theory and practice. He began by saying "…Ungerboeck is our friend, and now not quite… I will come to that in a minute…", and of course he didn't come back to fill in the circumstance in which the friendship derailed. But I reckon it was a casual remark, perhaps to indicate that Ungerboeck, now with Broadcom, is a bitter rival to Qualcomm. Since Qualcomm recently established a scientific partnership with EPFL, and Viterbi is a Qualcomm founder and associate, he perhaps just dropped that remark. It was a nice, as usual interesting, presentation by Martin.

He also told a nice story about the current EPFL president Patrick Aebischer. Interestingly, Patrick Aebischer, after an MD (medical science) degree, was fond of computer science and decided to venture into an MS degree in CS. He decided to test his luck at EPFL and approached the admission committee with a formal application; CS was affiliated with the Math department in those days. EPFL politely rejected his application, and in due course that ended Patrick's quest for an EPFL CS degree. He then moved to the US as a successful surgeon and took a career path with an entirely different trace. Years later, as one would say, due to the uncertain turn of things in the great cycle of life, he became the EPFL president, now ruling not only the CS department but the whole school.

Viterbi talked about the history of digital communication. He gave a perspective of the field starting from the days of Maxwell, Rao, Cramér, Wiener and Nyquist, and then discussed the impact of Shannon's work. He said the three driving forces which made this digital mobile revolution possible are

1) Shannon’s framework (1948)

2) Satellite (Sparked by the Sputnik success in 1957)

3) Moore's law, which is more of a socio-economic law, and which kept driving the industry so dramatically and successfully.

The talk as such wasn't too riveting, but he made a rather comprehensive presentation discussing the impact of the digital communication evolution spurred since Shannon's days (and even earlier), knitting together the dramatic success story of the digital wireless world, with millions of cell phones and similar devices, which showcases literally the realization of the theoretical promise Shannon made in 1948. He himself has his name etched into part of that success story, at least in the form of the Viterbi algorithm, which (in one of its instances) is an algorithm used to detect sequences perturbed by a medium.

Quite a lot of fun activities were organized by the committee, and it was good fun. Since many of the programs (especially the fun part) were in French, some of the appeal was lost on non-French speakers; but then the rationale given was that a good percentage of the alumni are French! I found it fun-filled, mainly getting to see successful people like Viterbi sharing their views in person. After all, we can learn from history. Not many people can claim to have done so well in everything they touched. In the case of Viterbi, he is an academician, researcher, successful entrepreneur and now a venture capitalist, all scaled to the possible limits. An incredible role model, whichever way we look.

…and it is Oracle! Quite a surprise! That is the least I felt when the news broke that Oracle is buying Sun Microsystems. The once great and proud maker of some of the best servers and computing powerhouses is now headed into the hands of a software giant largely focused on database solutions. There is no natural connection to the obvious eye, but who knows? Oracle may be eyeing something big! I can't see a justification for spending $7.4 billion to get hold of Java and MySQL alone. These are the big software solutions from Sun, apart from Solaris, and both are open source anyway. After all, Sun is known for its champion make of servers, right? Is it that Oracle feared an imminent acquisition by some other competitor, which might have threatened their lead? For a good amount of time the speculation was on whether IBM would still buy Sun; then it was Cisco and HP doing the rounds as potential buyers. None of these materialized, but Oracle, the one choice with maximum entropy, did!

Could it be that Oracle saw something big in Solaris? Are they eyeing a solid operating system market? In any case, a decision to buy a company for $7.4 billion can't be for fun; surely there has to be a plan, at least in theory! Someone opined in an article recently about a possible consolidation of SAP and a possible buy-over by one of the bigger fish like IBM or HP. Now, that may take some shape too. Nothing can be ruled out at the moment; this is the sort of indication floating around.

It was almost unthinkable that a single company would rule the EDA world. At least, this is what I strongly perceived a few years ago. Now, put the present dishes on the table and I see that Synopsys is giving nightmares to all the other EDA shops. While working at Synopsys, we always saw Cadence as the rival company to floor. All of that was on the wish list, and not many of us thought we could do it, ever so easily. Cadence was the obvious leader of EDA for many years and Synopsys stood strong in second position. Then there were the Mentors and the Magmas, at a fair distance behind. Magma was the emerging company, with a strong future predicted by many pundits within and outside the EDA world. It seemed imminent that Magma would one day give stronger competition to both the big brothers, Synopsys and Cadence. They may still be a force to reckon with, but sadly they tried to act over-smart and it triggered a downfall. I am not sure whether their rather peculiar attempt to sue Synopsys was wholly responsible for their slide, but it definitely may have had a role.

Now it appears that the discounts offered by the big EDA players are giving more headaches to the smaller ones. It is well known that EDA tools are phenomenally expensive and that the marketing has always revolved around deals for bulk purchases of tools. What is more colourful is that buyers offer to make the deal public in exchange for bigger discounts. The concept of a primary EDA vendor was not that prevalent a few years ago. However, the trend these days is to grab that extra mileage by roping in leading semiconductor houses. It is a big win for both the buyer and the seller. Synopsys for sure is going to enjoy this. First, they are among the very few making a profit even in this difficult economy; they are perhaps the only one from EDA. Considering that the EDA market itself is only about a 4 or 5 billion dollar market, a nearly 1.5 billion dollar Synopsys doing too well is going to give the other, smaller players more headaches in the coming days.

Cadence has a plate full of its own problems, and now, with the whole semiconductor market trying to minimize R&D spending, it is a double advantage for Synopsys; that too with newer friends being added to their primary-EDA friends list. Magma is becoming more of a prospective acquisition target than a rival. A few years ago, Synopsys had worries about a growing Magma. Now I wouldn't rule out a potential buyout by Synopsys itself, or maybe by Cadence or Mentor Graphics!

Some people say that Synopsys is going to be the next Microsoft of EDA. Aart perhaps rightly said they want to be the Apple of EDA. I would prefer Aart's view here. Not just because Synopsys was my breadwinner for a while, not because I attended the same grad school as de Geus, nor because of the well known fact that yours truly is an ardent fan of Aart de Geus, but because Synopsys is well managed by a great management team with great work ethics. When the ratable (subscription) revenue/licensing model was announced there were a lot of raised eyebrows, but it was a long term vision and Synopsys is really reaping the fruits now.

Having said all this, like many of you, I too am worried by this monopoly trend in EDA. We need smaller players in every market and we need more innovation. From Synopsys's standpoint, having less competition would mean relaxed days ahead, but for the market we need better products and superior innovation. We need Cadence to revive, and at the same time new companies to emerge and take position as the next Magma. At this stage, I am worried about Magma. Is Magma to follow the Avant! route and get merged into Synopsys?

Aart put it aptly: "I understand that the entire world is under economic pressure. When that happens, some will do better than others." One thing is for sure: among all the EDA executives, the Synopsys folks must be getting better sleep these days.

While the talk and buzz about multimode multiband phones in CMOS keeps growing, there is a natural question around it: how about doing all of this in software? That is, add a level of programmability such that a great deal of the issues in a hardwired implementation are shifted to more flexible firmware. Pros and cons of the idea of programmability still prevail, without contention. Clearly, one definite advantage I see with a programmable design is the significant cost reduction and reuse. Additionally, a migration or upgrade, which is inevitable from a future gadget design point of view, can be done with relative ease on a programmable multimode chip. One could build a processor architecture suited to the modulation schemes (say, an OFDM based scheme can have a built-in FFT engine, or a WCDMA mode can have a correlator engine); a sketch of these two kernels follows right after this paragraph. Isn't anyone working seriously in these directions? I am sure there are many, at least startup ventures. Vanu and Icera indeed are two that come to my mind. How about the big boys? There was a lot of buzz about software programmable baseband chips being developed; I am not quite sure what the latest is on that front. Isn't it the next big thing in the offing? I am sure the big EDA houses have thought ahead about building tools for a heavily software oriented design, at least for the years ahead. Or is it that I am jumping the gun a little too far? However, I see some top level bottlenecks in making these programmable multimode chips realizable at an easier pace than a textbook concept. One of them is the difficulty of doing away with the analog front end. As a matter of fact, now I feel that analog is going to stay.
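To make the FFT-engine/correlator-engine remark concrete, here is a minimal Python sketch (my own illustration, with a toy FFT size and an assumed spreading factor, not any vendor's design) of the two kernels that would dominate such a programmable baseband: an FFT for OFDM-style modes and a despreading correlator for WCDMA-style modes.

# Illustrative sketch only: the two kernels a programmable baseband would
# lean on -- an FFT for OFDM-style modes and a correlator (despreader) for
# CDMA-style modes. FFT size and spreading factor are assumed toy values.
import numpy as np

def ofdm_demodulate(time_samples, n_fft=64):
    """Recover subcarrier symbols from one OFDM symbol via an FFT."""
    return np.fft.fft(time_samples[:n_fft]) / np.sqrt(n_fft)

def wcdma_despread(chips, spreading_code):
    """Correlate received chips against the spreading code to get one symbol."""
    sf = len(spreading_code)  # spreading factor, e.g. 256
    return np.dot(chips[:sf], np.conj(spreading_code)) / sf

# toy usage
rx_ofdm = np.random.randn(64) + 1j * np.random.randn(64)
code = np.sign(np.random.randn(256)) + 0j
rx_chips = 0.7 * code + 0.1 * (np.random.randn(256) + 1j * np.random.randn(256))
print(ofdm_demodulate(rx_ofdm)[:4])
print(wcdma_despread(rx_chips, code))

The point of dedicated FFT and correlator engines is simply that these two inner loops run on every received symbol, so a general-purpose instruction stream spends its whole budget there.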

So where are we heading? Clearly, an all-CMOS multiband multimode single chip (baseband and analog) with a near-perfect RF and a software architecture would be the ultimate holy grail of cellular chip design. How many bands and how many modes get incorporated becomes less important if the programmability aspect is assured. The challenges within a single-chip concept are themselves many. Clearly the RF portion is expected to take up a lesser share of the overall chip area; an all-digital front end is aimed in that direction. While direct digitization of the high frequency radio signal eliminates much of the analog chain, there are several practical bottlenecks with this Utopian design model. We are not quite there yet to say goodbye to analog entirely. Analog signal processing is still critical and inevitable, even for the programmable multimode dream. Let me give some numerical facts to substantiate my claim:

Suppose we decide to build a programmable all-digital zero-IF receiver for a 2 GHz system (around the UMTS band). Then Shannon-Nyquist sampling would demand at least 4 Gsamples/second. Even with a processor which clocks at 4 GHz and does, say, 8 operations per cycle, our full-steam purchase is going to be a maximum of 32 \times 10^{9} (i.e., 32 billion) operations per second. This theoretical figure is based on the assumption that the processor and memory are fully utilized. At a sampling rate of 4 Gsamples/second, we are only going to get \frac{32 \times 10^{9}}{4\times 10^{9}}=8 operations per sample. How are we going to have all the fancy radio algorithms shape up with this? Even to implement the realistic functionality of a typical modern radio, this is inadequate. Another important thing is the power dissipation incurred in running a processor at 4 GHz. For the portable gadgets these chips are targeted at, we still need more hand-in-hand optimization and integration of analog processing, software and digital processing, in addition to an optimized system architecture. My feeling is that the analog front end is going to stay for some more time, if not forever. At least in the immediate future, we need more inroads from analog processing to realize the small size, cost effective multiband multimode chip dream.
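For what it is worth, the back-of-the-envelope budget above can be spelled out in a few lines of Python; the clock rate and issue width are the assumed figures from the paragraph, not measurements of any particular processor.

# Back-of-the-envelope budget from the paragraph above: a zero-IF receiver
# digitizing a 2 GHz band needs at least 4 Gsamples/s (Nyquist); a 4 GHz
# processor doing 8 operations per cycle offers 32e9 ops/s, i.e. 8 ops/sample.
f_carrier_hz   = 2e9                      # UMTS-band signal (assumed)
sample_rate    = 2 * f_carrier_hz         # Nyquist rate: 4 Gsamples/s
clock_hz       = 4e9                      # assumed processor clock
ops_per_cycle  = 8                        # assumed issue width
ops_per_second = clock_hz * ops_per_cycle           # 3.2e10 ops/s
ops_per_sample = ops_per_second / sample_rate        # = 8
print(f"{ops_per_second:.1e} ops/s -> {ops_per_sample:.0f} ops per sample")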

Today, there appeared an interesting (and perhaps general) question posted on the LinkedIn Analog/RF/mixed-signal group. The question was this: "Regarding multi-mode multiband RF transmitters for handsets (CMOS), what do you think are the hot issues (besides PA)?" I have given a short overview of the challenges that I could see when a multi-mode phone is to be designed in CMOS: the phone has to support a wide range of frequency bands as well as multiple standards/technologies/modulations/air interfaces. Here is what I wrote. I am not sure whether the discussion is accessible to the public, hence I repost it here.

Integrating the RF transmitter and receiver circuits is a challenging thing, since we have to support multiple bands within a single mode (say, GSM/EDGE should support the GSM900 through GSM1900 bands) as well as multiple phone modes. For instance, a natural multi-mode multi-band phone supporting GSM/GPRS/EDGE/WCDMA/LTE will have to cover a wide frequency range from 850 MHz to over 2 GHz. If we were to incorporate GPS and WLAN, add that extra consideration. This affects not just the transceiver circuitry, but also other components such as oscillators, filters, passive components, frequency synthesizers and power amplifiers. Another thing is that, for multi-mode, the sensitivity requirements are much more stringent than for a single-mode, multi-band design.

Since CMOS offers low cost, better performance and better scaling, to me that is the way forward. The natural choice of transceiver architecture in CMOS would be direct conversion/zero-IF, since it eliminates the costly SAW filters and also reduces the number of on-chip oscillators and mixers. Now, there are several key design issues to be considered with a direct conversion architecture. The most notable ones are the well known ghost of "DC offset" and the 1/f noise. Designers will have their task cut out to get a cleaner front end as well as near-ideal oscillators. A minimal sketch of one common digital-domain remedy for the DC offset follows below.
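Here is that sketch (my own illustration, not a reference design): a one-pole DC-notch (high-pass) filter applied to the complex baseband samples, which is one common digital-domain way of suppressing the residual DC offset of a zero-IF receiver. The pole position alpha is an assumed value.

# Minimal DC-offset mitigation sketch (illustrative, assumed parameters):
# y[n] = x[n] - x[n-1] + alpha * y[n-1]  -- blocks DC, passes the signal band.
import numpy as np

def dc_notch(x, alpha=0.995):
    """One-pole DC blocker on complex baseband samples."""
    y = np.zeros_like(x, dtype=complex)
    prev_x, prev_y = 0.0 + 0.0j, 0.0 + 0.0j
    for n, xn in enumerate(x):
        y[n] = xn - prev_x + alpha * prev_y
        prev_x, prev_y = xn, y[n]
    return y

# toy check: a tone plus a large DC offset (e.g. from LO self-mixing)
n = np.arange(2000)
tone = np.exp(1j * 2 * np.pi * 0.05 * n)
rx = tone + 3.0
clean = dc_notch(rx)
print(abs(np.mean(rx)), abs(np.mean(clean[500:])))  # DC before vs after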

Now I see another problem with multi-mode, depending on what level of flexibility we prefer in this integration. Do we need the phone to operate in multiple modes simultaneously? Say, a voice call on GSM and at the same time multimedia streaming on LTE. In such a case, the question of sharing components is completely ruled out. If not, some components such as synthesizers and mixers (if in the same band for multiple modes) can be shared. Clearly, simultaneous mode operation will ask for increased silicon die size as well as cost. There may also be challenges in circuit isolation between the different modes.

In all, depending on the level of sophistication (and of course all these things will have to scale economically too), the design, partitioning and architecture challenges are aplenty. The choice between a single chip (containing both analog baseband and digital baseband) versus two chips (analog and digital partitioned) gets a little trickier with multiple modes. With multiple antennas (MIMO), add another dimension to this whole thing :-(.

https://ratnuu.wordpress.com 
http://people.epfl.ch/rethnakaran.pulikkoonattu

Phew! After the heck of debates and discussions (over years) on the evolution of the IEEE 802.11n standard for multiple antennas (MIMO), it now appears that we are all in for a single-stream (single antenna) chip. It sounds more like an 11g upgrade, or perhaps a conservative lead from there on? If Atheros believes this is the way to go, I have no doubt that Broadcom and Marvell have it in the delivery line too. Here is that interesting news story at EE Times.

Stumbled upon a news item in the New York Times: it is about a new search engine being developed by some former Google folks. There is first the excitement that comes with a startup idea when you know that they know what it is to confront their former employer in business. Anyway, the new engine is called Cuil (pronounced just like 'cool'). I am all for new ideas. Hopefully we are in for better search engines. Since these folks are also from Google, you can expect a certain Google standard guaranteed. Google undoubtedly changed the search engine business, simply by scaling the internet to a level hitherto unimagined. Yet again, a Stanford connection to a new startup. Tom Costello and his wife Anna Patterson (a former Google architect) surely know this business better than us (correction: better than me, to say the least).

If their motto is to produce a more appropriate search engine, bettering Google, then we should feel happy and proud of this adventure. Surely Google can't relax either. In all, it is a win-win for the world. A preliminary look at the search engine gave me a good feel. I am not sure whether the change in appearance (after being stuck with and used to Google search for so long) gives me this impression. Anyway, I look forward to seeing their progress.

I leave it to you to try out a comparison. I did a Cuil search for "compressed sensing" and found this, whereas a Google search for "compressed sensing" displayed this. Google displays the search results as a list (rows) whereas Cuil presents them in tabular form. Too early to say anything definite, but I am going to try the new one as well. Google is by far the faster of the two (at the moment).
