
Almost all deployed and successful communication systems to date are half duplex (HD). That is, they do not transmit and receive simultaneously on the same frequency band, which would be full duplex (FD). For example, 802.11 WiFi switches in time (TDD) between transmit and receive modes. Both transmission and reception take place over the same frequency band, and a single antenna is (typically) used for both tx and rx. At any given time it is either transmit or receive (or neither!) that happens. In the cellular world, such as LTE, the popular scheme is to share a slice of frequency (FDD). In that case the up-link (the link from a cell phone to the base station) occupies a frequency band different from the down-link (the link receiving signal from the base station), so transmit and receive can take place simultaneously. In both the TDD and FDD cases, there is no overlap between the transmit and receive signals at a given frequency at the same time.

Let us posit this question: in a given frequency band, is it feasible at all to have simultaneous transmission and reception? One way, of course, is to find a domain where the two (transmit and receive) signals stay perfectly distinct, say by using orthogonal codes. In theory yes, but there is an important practical hurdle here: the loudness (aka self interference) of one's own transmit signal! An analogy is trying to decipher a whisper coming from someone while simultaneously shouting at the top of one's own voice. In reality, the desired signal comes from a distant source after traveling through an adverse medium/channel, and more than anything else, the signal intensity has been severely degraded by the time the signal arrives at the receiver unit. Let me put some numbers on this from a practical setup. In a (typical) WiFi scenario, the incoming signal (from an AP) at your receiver antenna (of, say, a tablet) may be around -70dBm, whereas the power of the (tablet's) concurrent transmission could be 20dBm! The task in fulfilling the full duplex goal is really to recover the information from this relatively weak signal in the presence of self interference stronger by 80 to 90dB! In other words, we must devise a mechanism to suppress the self interference by about 90dB! Getting 90dB of suppression is not easy, especially when we are constrained in chip and board area so as to fit in portable devices. Traditional board layout tricks such as isolation, beam steering etc. alone wouldn't get us there.
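Just to spell out the arithmetic, here is a minimal sketch in Python, using the illustrative -70dBm and 20dBm figures from above:

```python
rx_dbm = -70              # desired signal at the receive antenna (typical WiFi figure)
tx_dbm = 20               # own concurrent transmit power
gap_db = tx_dbm - rx_dbm  # how far the self interference sits above the desired signal
print(gap_db)             # 90 -> roughly the suppression we must engineer
```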

OK, now what? the reason I suddenly brought this up is largely due to the increased momentum this one is gathering off later in both academia as well as industry. It still has enormous challenges ahead. Realizing FD on the other hand will bring in enormous benefits. Historically, we always mulled over capacity and throughput, with the strong assumption that all resources in the lot are available. Say for a given channel bandwidth W, the capacity is C(W) and throughput is so much and so on. The reality is that, in most cases, to have information exchange, we need two way communication and that means double resources. Spectrum being pricey and scarce, getting the full duplex can potentially get up to double fold in throughput and several other benefits along the way such as remedy to the hidden node problem in current 802.11 MAC access. Now 802.11 standards front, we have a new study group on high efficiency wireless (HEW). I believe HD can play a role there too.

I am not prepared to discuss all the details involved here, so let me outline a rough problem formulation of FD. More general versions exist, but let me try a simple case. A much more detailed formulation of the problem can be seen here and elsewhere; I have roughly used the notation and problem statement from this. Let y_{a} be the desired signal from a distant sender, arriving at the rx antenna. Simultaneously, a much higher power signal x is being sent. The signal x leaks through some path H and produces an interference u_{a} at the receive antenna. In other words, the effective signal at the receiver antenna port is z_{a}=y_{a}+u_{a}. For the sake of simplicity, let us assume that H is modeled as a FIR filter. The sampled signal relationship can then be stated as follows.

z_{a}[n]=y_{a}[n]+\underbrace{\sum_{m=0}^{\infty}{H[m]\, x[n-m]}}_{\triangleq u_{a}[n]}.

Now here is the thing. We cannot simply pass the buck to the digital domain and ask it to recover the useful signal from the powerful interference. Recall that the A/D converter stands at the very interface of the analog-to-digital partition. A high power interference signal will severely saturate the A/D and result in irreversible clipping noise. So, first we must do a level of analog suppression of this interference and make sure that the A/D is not saturated. Let us say we go for an analog filter C_{a} to do this job. Post analog cancellation using the filter C_{a}[m], we will have

\tilde{z}_{a}[n]=z_{a}[n]+\underbrace{\sum_{m=0}^{\infty}{C_{a}[m]\, x[n-m]}}_{\triangleq v_{a}[n]}.

The A/D signal transformation can be decomposed into the following form (using the Bussgang theorem, for instance): \tilde{z}_{d}[n]=\mathcal{A}\, \tilde{z}_{a}[n]+q[n].
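To see what this decomposition buys us, here is a small numerical sketch: for a Gaussian input, a memoryless nonlinearity (a toy hard-clipping A/D below; the clip level is an arbitrary assumption) splits into a linear gain \mathcal{A} plus a distortion q[n] uncorrelated with the input, which is exactly the Bussgang form used above.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)           # Gaussian input to the A/D
g = np.clip(x, -1.0, 1.0)                  # toy clipping A/D (assumed nonlinearity)

alpha = np.mean(x * g) / np.mean(x ** 2)   # Bussgang gain  E[x g(x)] / E[x^2]
q = g - alpha * x                          # leftover distortion term

print(alpha)                               # the effective linear gain (our \mathcal{A})
print(np.corrcoef(x, q)[0, 1])             # ~0: distortion is uncorrelated with input
```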

Substituting \tilde{z}_{a}[n] from above, \tilde{z}_{d}[n]={\mathcal{A}}\, y_{a}[n]+{\mathcal{A}} {\displaystyle \sum_{m=0}^{\infty}{\left(H[m]+C_{a}[m]\right) x[n-m]}}+q[n].

If we do a digital cancellation at the A/D output stage with a filter C_{d}[m], we can have \hat{z}_{d}[n]=\tilde{z}_{d}[n]+\sum_{m=0}^{\infty}{C_{d}[m]\, x[n-m]}. Incorporating all these, we will have

\hat{z}_{d}[n]={\mathcal{A}} y_{a}[n]+ \displaystyle \sum_{m=0}^{\infty}{\left[\mathcal{A} \left(H[m]+C_{a}[m]\right)+C_{d}[m]\right] x[n-m]}+q[n].

Now if we can adapt and find C_{a}[m] and C_{d}[m] such that \mathcal{A} \left(H[m]+C_{a}[m]\right)+C_{d}[m] \rightarrow 0, then we can hope to have near perfect self interference cancellation and produce \hat{z}_{d}[n]={\mathcal{A}}\, y_{a}[n]+q[n]!
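To make the digital half of this concrete, here is a minimal sketch of adapting C_{d} alone (the residual coupling taps, the signal levels and the unit A/D gain below are all toy assumptions; a real canceller would adapt online, e.g. with LMS, rather than via a batch least-squares fit):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 10_000, 4                          # samples, taps of residual coupling

x = rng.standard_normal(N)                # known own-transmit samples
h = np.array([0.8, -0.3, 0.1, 0.05])      # residual H + C_a after analog stage (assumed)
y = 1e-3 * rng.standard_normal(N)         # weak desired signal (assumed ~60 dB down)

z = np.convolve(x, h)[:N] + y             # what the A/D hands over (gain A = 1 here)

# Delayed copies of x; the least-squares fit gives c ~ h, so the canceller is C_d = -c
X = np.column_stack([np.pad(x, (k, 0))[:N] for k in range(L)])
c, *_ = np.linalg.lstsq(X, z, rcond=None)

z_hat = z - X @ c                         # cancelled output, ideally ~ y (+ q)
supp_db = 10 * np.log10(np.mean(z ** 2) / np.mean(z_hat ** 2))
print(f"digital suppression ~ {supp_db:.1f} dB")
```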

So, in theory there is a way to do this, via a hybrid approach wherein some correction is done in the analog domain (before the A/D), followed by a more easily realizable digital cancellation circuit. There are many more practical hurdles. Some of them are:

  1. Performing correction/adaptation at RF frequencies is not trivial
  2. If we are to do this post mixer (after downconversion), then LNA nonlinearity (and potential saturation) will come into play
  3. Channel/coupling path estimation error will degrade performance
  4. Calibrating the analog correction is a little more involved
  5. A typical goal may be to get about 40dB of suppression from the analog correction and another 40dB from the digital
  6. Digital and analog correction/calibration time should be reasonably fast, so as not to spoil the stated goal of simultaneity!

Some of the recently published results are indeed promising, and some prototypes are also being developed. More general versions involving multiple antennas are also being talked about; in that case, some beamforming can provide additional support. Let us hope that, with some more push and effort, we get to see this realized in the real world one day.


One of my favorite cell phone apps to date is the navigation utility Waze. The only downside I've noticed is its hunger for power (it drains the phone battery in no time), but GPS in general hogs the battery anyway. In a car with a charging unit it is not a killer drawback, but it is a negative nonetheless. Its nice user friendliness, coupled with the ability to provide almost real time side information (through user assistance and online feeds) such as traffic situations, presence of police etc., makes this such a handy tool on the move. I was almost contemplating that it would be bought by Google or Facebook. Now what? It didn't take too long! Waze has been gobbled up by Google, for a reported billion odd USD. I like Google Maps too. Now we have a chance to have it all in one! Hopefully, a better one!

This year's Marconi Foundation prize is being awarded to our company founder Henry Samueli. With last year's prize awarded to that other connoisseur Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have the two stellar communication company founders getting the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to the champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer Synopsys) talks too. Congratulations Dr. Samueli, we are proud of you.

The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant in this field, who contributed immensely to many flavours of information theory, will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, all the more, depth.

A tremendous loss! His legacy will continue.

If this is true, then this has to be one of the biggest buys in the communication industry. The Atheros buy may well be a WLAN entry for Qualcomm. Fingers crossed!

Recently, this came up during a lunch discussion with my colleagues at Broadcom. I remember reading an article somewhere quoting the impact on bird migration of cellular phone towers.

Researchers from the Research Institute for Nature and Forest, Brussels, Belgium have investigated this subject and published an article (more reports here). According to the authors (Joris Everaert and Dirk Bauwens), house sparrows do not prefer to stay near GSM base stations because the radiation in the 900 MHz range adversely affects them. The paper investigated the male house sparrow population. The statistics quoted in this paper clearly show that the cellular tower radiation is having some kind of an impact on city birds. Perhaps a more scientific study is needed to assess the details, but this itself is a reason to worry.

Another interesting blog comes from India, which also suggests that the Belgian research finding is correlated with what is observed elsewhere. But then, I am wondering how come there were thousands of sparrows flooding almost every street in Schaffhausen. We had been on a family holiday there last year. It was the place where I have seen the largest number of sparrows, all waiting to get fed and pampered! Maybe they have adapted to the technology, huh?

I remember reading this Spectrum magazine at a friend's house in Zurich last month. I am not going to reveal his identity any further (I fear a backlash :-)), but he has a nice habit of keeping a pretty good collection of magazines in the bathroom. The collection includes National Geographic, The Economist, Scientific American and Le Monde. I am not one of those guys who relish reading at length in those hot seats, but for once, I did scan through the hanging Spectrum magazine.

Anyway, the one I wanted to mention is the Spectrum article on “the Internet speed”. The fastest internet speed is enjoyed by South Korea, not the United States; the South Korean average speed itself is 11.7 Mbps. When you are desperate for the best browsing, now you know where to head! The list of countries at the top is a bit of a surprise. In Europe, for instance, the fastest pal is Romania, the beautiful eastern land not really known as an internet bulldog. Switzerland is 10th, which is not really surprising, because I never found the speed lacking there. The Euro cup live HD streaming was so peaceful that I never realized the need for a TV.

Ah, back to the country statistics! Don't worry too much if you feel doomed at the prospect of applying for a Korean visa. There are places in the US which are as good; in fact better! If you go by the fastest internet cities/towns, then Berkeley is the place. The average internet speed at Berkeley is 18.7 Mbps, which is better than Korea's national average :-).

All these figures are published by Akamai Technologies. An interesting thing they report is the trend in the average speed: it turns out that the average speed has come down in recent years. Korea itself slowed down. Korean downloads were 29 percent slower in 2009 than in 2008, and they were a further 24 percent slower in the fourth quarter of 2009 than in the third.

Wireless transmission at rates on the order of 1000Mbps! It was once considered something like the holy grail of wireless transmission. Well, now we have WirelessHD and WiGig, which can scale these mountains. The new WiGig standard is coping with the possibility of multiples of 1000Mbps: we can transmit up to 7Gbps, albeit over short range (on the order of 10 meters or so), all without any wires, using WiGig over the 60GHz spectrum that is available and largely unused across the world. Come to think of it, 7Gbps is a hell of a lot of data for a tick of time. Just about 10 years ago, we would have easily brushed away the need for something like this, because we never really could fathom an application which needs such a sack of data. But things have changed since then. Now we have Blu-ray players and uncompressed HD video imminent for wireless transfer to high-definition displays.
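As a rough sanity check on why uncompressed HD begs for multi-gigabit links, here is the raw bit rate of a nominal 1080p60 stream (the frame size, frame rate and color depth are the usual nominal values, assumed purely for illustration):

```python
width, height = 1920, 1080   # 1080p frame
fps = 60                     # frames per second
bits_per_pixel = 24          # 8 bits each for R, G, B, uncompressed

rate_gbps = width * height * fps * bits_per_pixel / 1e9
print(rate_gbps)             # ~2.99 Gbps, i.e. the ~3 Gbps often quoted for HDTV
```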

A couple of months ago, the WiGig Alliance and the Wi-Fi Alliance announced a cooperative agreement to share the technical specification and compliance testing between 60GHz and WiFi. So, from a standards point of view, things are moving pretty fast. After all, we seem to have learned from the atrocious delays in many earlier standards' evolution, most notoriously IEEE 802.11n. A product compliant to WiGig/IEEE 802.11ad is still far away, but there is serious motivation to get the standard spec to evolve.

There are two parallel drives on the 60GHz spectrum. In terms of productization, non-standard, somewhat proprietary solutions are kind of available in the market. SiBEAM's WirelessHD™ and Amimon's proprietary WHDI solution (on the 5GHz spectrum) are available now. On 60GHz, only one product (as far as I know) is available, and that is compliant to WirelessHD™.

By the way, WirelessHD™ has also published version 1.1 of their consortium spec. The IEEE spec and WirelessHD™ are now showing no signs of consensus, which is a bad sign. Hopefully, at some stage these two will merge into one standard spec. My concern is that, in the event of dual standards, there is potential interference between products compliant to the two standards. The one WirelessHD™-compliant chipset which is available (not sure whether it is selling) is damn too expensive. So, we need tremendous price scaling down to make these things viable from a business point of view.

The WiGig product is unlikely to hit the market in the next 2 years, but it will come sooner rather than later. The three main applications of WiGig are: (1) short range streaming of uncompressed video between HDMI devices; (2) desktop storage (much like the wireless USB once talked about highly during the UWB days), for which the much talked about USB 3.0 will become an important requirement, and Intel will have to adopt this transition on all processors, which I am sure will happen at some stage; (3) docking stations: wireless transfer between a monitor and a docking station.

Pricing is going to be the single biggest bottleneck for WiGig getting into the mass market. An under $10 chipset is a bare minimum requirement to have any kind of penetration into the consumer electronics market. Learning from the way things have moved in the past, the pricing problem can be solved in a few years.

In my opinion, the killer need for 60GHz to succeed will be serious power savings. The antenna size will be significantly smaller (because of the higher carrier frequency), and perhaps that may be a silicon based integrated antenna. To get into portable devices, we need a solution which stresses the battery less. Can we look that far ahead now, say 5 years from now?

The new spec has some very interesting features. While it consumes 1.6GHz of bandwidth, with multiple antennas, it calls for sophisticated signal processing techniques to scale the multi-gigabit mountain. The radio design is extremely challenging. Above all, we need backward compatibility with WiFi. I hope by then we can get those annoying IEEE 802.11b modes out of the box!

So, the days ahead are exciting. It is natural to pose the question: how much more can wireless do? As Marconi said, “It is dangerous to put limits on wireless”. So true!

Great to find and read an article/report in EETimes about a company founded by many of my ex-colleagues and friends. Saankhya Labs seems to be in good shape to make a big impact in the fabless startup arena. So far, the success of Indian startups has been mainly in the service sector, with a few in IP/networking boxes. Saankhya is targeting a niche market, driven by software defined programmable radios, aimed at the digital TV market. It is beyond doubt that a universal demodulator has tremendous potential in the consumer TV market, yet that potential is largely untapped. With so many different standards around the world for digital TV transmission alone, it is of heavy interest to have one decoder which works for all locations. Saankhya also has an analog decoder (for the US ATSC schemes) which will be handy during the period of transition when service providers switch from analog to digital. Best wishes to Saankhya.

The Wireless Gigabit Alliance (WiGig) has a new (updated) website. For a first look, there is a link, How WiGig Works, which nicely explains what WiGig is all about in clear layman's terms. If you ever wondered whether we have seen the finale of the wireless rate surge, just re-think. We are still far from drafting even a proposal, but there is surely plenty of light on the wireless horizon. As an example, HDTV would require about a 3Gbps rate; WiGig is addressing applications such as this, which demand rates beyond 3 gigabits per second. The brief tutorial is a compelling read.

It was today. I've just come back to the office, after the dinner party hosted as part of the I&C anniversary celebrations at EPFL. Andrew Viterbi was the guest of honour, and largely because of his fame, there was a considerable crowd attending the function. Martin Vetterli made a nice, colourful, flashy presentation illustrating the history of I&C at EPFL as well as scientific progress in Switzerland. He mentioned names including Jim Massey and Ungerboeck, who are undoubtedly pioneers of modern communication theory and practice. He began saying that “…Ungerboeck is our friend, and now not quite… I will come to that in a minute…”. And of course he didn't come back to fill in the circumstance in which the friendship derailed. But I reckon it was a casual remark, perhaps to indicate that Ungerboeck, now with Broadcom, is a bitter rival to Qualcomm. Since Qualcomm recently established a scientific partnership with EPFL, and Viterbi being a Qualcomm founder and associate, he perhaps just dropped that remark. It was a nice, as usual interesting, presentation by Martin.

He also mentioned a nice story about the current EPFL president Patrick Aebischer. Interestingly, Patrick Aebischer, after an MD (medical science) degree, was fond of computer science and decided to venture into an MS degree in CS. He decided to test his luck at EPFL and approached the admission committee with a formal application. CS was affiliated to the math department in those days. EPFL politely rejected his application, and in due course that ended Patrick's quest for an EPFL CS degree. He then moved to the US as a successful surgeon and took a career path of an entirely different trace. Years later, as one would say, due to the uncertain turn of things in the great cycle of life, he became the EPFL president, now ruling not only the CS department but the whole school.

Viterbi talked about digital communication history. He started by giving a perspective of this field from the days of Maxwell, Rao, Cramer, Wiener and Nyquist. Then he discussed the impact of Shannon's work. He said the three driving forces which made this digital mobile revolution are

1) Shannon’s framework (1948)

2) Satellites (sparked by the Sputnik success in 1957)

3) Moore's law, which is more of a socio-economic law, and which dramatically kept driving the industry so successfully.

The talk as such wasn't too attention-grabbing, but he made a rather comprehensive presentation discussing the impact of the digital communication evolution spurred since Shannon's days (and even earlier), knitting a dramatic success story of the digital wireless world, with millions of cell phones and similar devices, which showcased literally the realization of the theoretical promise Shannon made in 1948. He himself has his name etched in part of that success story, at least in the form of the Viterbi algorithm, which (in one of its instances) is an algorithm used to detect sequences perturbed by a medium.

Quite a lot of fun activities were organized by the committee. Since many programs (especially the fun part) were in French, the appeal was considerably lost on non-French speakers; but then the rationale given was that a good percentage of the alumni are French! I found it fun-filled, mainly for seeing successful people like Viterbi sharing their views in person. After all, we can learn from history. Not many people can claim to have done so well in everything they touched. In the case of Viterbi, he is an academician, researcher, successful entrepreneur and now a venture capitalist, all scaled to the possible limits. An incredible role model, whichever way we look.

Today, there appeared an interesting (and perhaps general) question posted on the LinkedIn Analog RF/mixed signal group. The question was this: “Regarding multi-mode multiband RF transmitters for handsets (CMOS), what do you think are the hot issues (besides PA)?” I have given a short overview of the challenges that I could see when a multi mode phone is to be designed in CMOS: the phone has to support a wide range of frequency bands as well as multiple standards/technologies/modulations/air interfaces. Here is what I wrote. I am not sure whether the discussion is accessible to the public, hence I repost it here.

Integrating the RF transmitter and receiver circuits is challenging, since we have to support multiple bands (within a single mode; say, GSM/EDGE should support the GSM900 to GSM1900 bands) as well as multiple phone modes. For instance, a natural multi mode, multi band phone supporting GSM/GPRS/EDGE/WCDMA/LTE will have to cover a wide frequency range from 850MHz to over 2GHz. If we were to incorporate GPS and WLAN, add that extra consideration. This affects not just the transceiver circuitry, but also other components such as oscillators, filters, passive components, frequency synthesizers and power amplifiers. Another thing is that, for multi mode, the sensitivity requirements are much more stringent than for a single mode, multi band design.

Since CMOS offers low cost, better performance and better scaling, to me that is the way forward. The natural choice of transceiver in CMOS would be direct conversion/zero IF, since it eliminates the costly SAW filters and also reduces the number of on-chip oscillators and mixers. Now, there are several key design issues to be considered with the direct conversion architecture. The most notable ones are the well known ghost “DC offset” and the 1/f noise. Designers will have their task cut out to get a cleaner front end as well as near ideal oscillators.

I see another problem with multi mode, depending on what level of flexibility we prefer in this integration. Do we need the phone to operate in multiple modes simultaneously? Say, a voice call on GSM and, at the same time, multimedia streaming on LTE. In such a case, the question of sharing components is completely ruled out. If not, some components such as synthesizers and mixers (if in the same band for multiple modes) can be shared. Clearly, simultaneous mode operation will ask for increased silicon die size as well as cost. There may be circuit isolation challenges between the different modes as well.
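To illustrate the sharing question with a toy sketch: if two modes occupy overlapping frequency ranges and are never active at the same time, one synthesizer covering that range could conceivably serve both. The band edges below are rough illustrative figures, not exact allocations:

```python
# Rough band edges in MHz (illustrative assumptions, not exact 3GPP allocations)
bands = {
    "GSM900":  (880, 960),
    "GSM1900": (1850, 1990),
    "WCDMA-I": (1920, 2170),
    "LTE-B7":  (2500, 2690),
}

def overlapping(a: str, b: str) -> bool:
    """True if the two modes' frequency ranges overlap."""
    (a0, a1), (b0, b1) = bands[a], bands[b]
    return max(a0, b0) <= min(a1, b1)

# Overlap -> one synthesizer range could serve both modes, but only
# when the two modes are never active simultaneously.
print(overlapping("GSM1900", "WCDMA-I"))  # True
print(overlapping("GSM900", "LTE-B7"))    # False -> separate ranges needed anyway
```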

In all, depending on the level of sophistication (and of course all these things will have to scale economically too), the design, partitioning and architecture challenges are aplenty. The choice between a single chip (containing both the analog baseband and digital baseband) and two chips (analog and digital partitioned) gets a little trickier with multiple modes. With multiple antennas (MIMO), add another dimension to the whole thing :-(.

