Almost all the deployed and successful communication strategies to date are half duplex (HD). That is, we don’t have simultaneous transmission and reception on the same frequency band, aka full duplex (FD). For example, 802.11 WiFi switches in time (TDD) between transmit and receive modes. Both transmission and reception take place over the same frequency band, and a single antenna is (typically) used for both tx and rx. At any given time, it is either transmit or receive (or neither!) that happens. In the cellular world, such as LTE, the popular scheme is to share slices of frequency (FDD). There, the up-link (from a cell phone to the base station) occupies a frequency band different from the down-link (from the base station to the phone), so transmit and receive can take place simultaneously. In both the TDD and FDD cases, there is no overlap between the transmit and receive signals at a given frequency at the same time.
Let us posit this question. In a given frequency band, is it feasible at all to have simultaneous transmission and reception? One way of course is to find a domain where these two (transmit and receive) signals stay perfectly distinct, say using orthogonal codes. In theory yes, but there is an important practical hurdle here: the loudness (aka self interference) of our own transmit signal! An analogy is trying to decipher a whisper coming from someone far away while simultaneously shouting at the top of one’s voice. In reality, the desired signal comes from a distant source after traveling through an adverse medium/channel. More than anything else, the signal intensity will have severely degraded by the time the signal arrives at the receiver unit. Well, let me put in some numbers from a practical setup. In a (typical) WiFi scenario, the incoming signal (from an AP) at your receiver antenna (of, say, a tablet) may be around -70 dBm, whereas the tablet’s concurrent transmission power could be as high as +20 dBm! The task in fulfilling the full duplex goal is really to recover the information from that relatively weak signal in the presence of self interference stronger by 80 to 90 dB! In other words, we need a mechanism to suppress the self interference by about 90 dB! Getting a 90 dB suppression is not easy, especially when we are constrained by the chip and board area available in portable devices! Traditional board layout tricks such as isolation, beam steering etc. alone wouldn’t get us there.
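To put that arithmetic in one place, here is a tiny sketch; the +20 dBm transmit figure is an assumed round number consistent with the 80 to 90 dB range above:

```python
# Hypothetical WiFi numbers: desired signal around -70 dBm at the receive
# antenna, while our own concurrent transmission is around +20 dBm (assumed).
rx_signal_dbm = -70.0   # desired (distant) signal power
tx_power_dbm = 20.0     # own transmit power, an assumed round figure

# Suppression needed just to bring the self interference down to the level
# of the desired signal: the difference of the two powers in dB.
required_suppression_db = tx_power_dbm - rx_signal_dbm
print(required_suppression_db)  # 90.0
```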
OK, now what? The reason I suddenly brought this up is largely the increased momentum this topic has been gathering of late, in both academia and industry. It still has enormous challenges ahead; realizing FD, on the other hand, will bring in enormous benefits. Historically, we always mulled over capacity and throughput with the strong assumption that all resources in the lot are available. Say, for a given channel bandwidth $W$, the capacity is $C = W \log_2(1 + \mathrm{SNR})$, the throughput is so much, and so on. The reality is that, in most cases, to have information exchange we need two way communication, and that means double the resources. Spectrum being pricey and scarce, getting full duplex to work can potentially yield up to a two-fold gain in throughput, plus several other benefits along the way, such as a remedy to the hidden node problem in current 802.11 MAC access. On the 802.11 standards front, we now have a new study group on high efficiency wireless (HEW). I believe FD can play a role there too.
I am not prepared to discuss all the details involved here, but let me outline a rough problem formulation of FD. More general versions exist, but let me try a simple case. Much more detailed formulations of the problem can be seen here and elsewhere; I kinda used the notations and problem statement from this. Let $x[n]$ be the desired signal from a distant sender, arriving at the rx antenna. Simultaneously, a much higher power signal $s[n]$ is being sent; $s[n]$ is of significantly higher power than $x[n]$. Now, the signal $s[n]$ leaks through some coupling path $h$ and produces an interference at the receive antenna. In other words, the effective signal at the receiver antenna port is the desired signal plus a filtered copy of our own transmission. For the sake of simplicity, let us assume that $h$ is modeled as a FIR filter. The sampled signal relationship can then be stated as $y[n] = x[n] + \sum_{k} h[k]\, s[n-k] + w[n]$, where $w[n]$ is the receiver noise.
Now here is the thing. We cannot simply pass the buck to the digital domain and ask it to recover the useful signal from the powerful interference. Recall that the A/D converter stands at the very interface of the analog to digital partition. A high power interference signal will severely saturate the A/D and result in irreversible clipping noise. So, first we must do a level of analog suppression of this interference and make sure that the A/D is not saturated. Let us say we go for an analog FIR filter $c$ to do this job. Post analog cancellation using this filter we will have $y_a[n] = x[n] + \sum_{k} \left(h[k] - c[k]\right) s[n-k] + w[n]$.
The A/D signal transformation can be decomposed into the following form (using the Bussgang theorem, for instance): $Q\left(y_a[n]\right) = \alpha\, y_a[n] + q[n]$, where $\alpha$ is a linear gain and $q[n]$ is a distortion term uncorrelated with the input. Now,
If we do a digital cancellation at the A/D output stage with a filter $d$, we can have $z[n] = \alpha\, y_a[n] + q[n] - \sum_k d[k]\, s[n-k]$. Incorporating all these, we will have $z[n] = \alpha\, x[n] + \sum_k \left(\alpha\left(h[k]-c[k]\right) - d[k]\right) s[n-k] + \alpha\, w[n] + q[n]$.
Now if we can adapt and find $c$ and $d$ such that $\alpha\left(h[k]-c[k]\right) - d[k] \approx 0$ for all $k$, then we can hope to have a near perfect self noise cancellation and produce $z[n] \approx \alpha\, x[n] + \alpha\, w[n] + q[n]$!
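As a sanity check on the two-stage idea, here is a toy baseband sketch. Everything in it is invented for illustration: a 3-tap leakage channel $h$, a deliberately imperfect "analog" estimate $c$, and a least-squares fit for the "digital" taps $d$; the A/D terms $\alpha$ and $q[n]$ are left out for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

x = 0.01 * rng.standard_normal(n)      # weak desired signal
s = 10.0 * rng.standard_normal(n)      # strong own transmission
h = np.array([0.9, 0.3, -0.1])         # invented 3-tap leakage channel

interference = np.convolve(s, h)[:n]   # (h * s)[n]
y = x + interference                   # signal at the receive antenna port

# "Analog" stage: a coarse, imperfect estimate c of h removes most of the leakage.
c = np.array([0.88, 0.29, -0.09])      # assumed analog filter taps
y_a = y - np.convolve(s, c)[:n]

# "Digital" stage: least-squares fit of the residual leakage (h - c) using the
# known transmit samples s, then subtract it.
L = 3
S = np.column_stack([np.roll(s, k) for k in range(L)])
S[:L, :] = 0                           # discard wrapped-around samples from np.roll
d, *_ = np.linalg.lstsq(S, y_a, rcond=None)
z = y_a - S @ d

def db(v):
    """Average power of a signal in dB."""
    return 10 * np.log10(np.mean(v ** 2))

print(db(interference) - db(y_a - x))  # analog suppression, roughly 30 dB here
print(db(interference) - db(z - x))    # total suppression after the digital stage
```

The split mirrors the text: the analog stage only has to knock the interference down enough to spare the A/D, and the adaptive digital stage cleans up the rest.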
So, in theory there is a way to do this: a hybrid approach wherein some correction is done in the analog domain (before the A/D), followed by a more easily realizable digital cancellation circuit. There are many more practical hurdles. Some of them are:
- Performing correction/adaptation at RF frequency is not trivial
- If we are to do this post mixer (after downconversion), then the LNA nonlinearity (and a potential saturation) will come into play
- Channel/coupling path estimation error will degrade performance
- Calibrating analog correction is a little more involved
- A typical goal may be to have about 40dB suppression from analog correction and another 40dB from digital.
- Digital and analog correction, calibration time should be reasonably fast, so as not to spoil the set goal of simultaneity!
Some of the recently published results are indeed promising, and some prototypes are also being developed. More general versions involving multiple antennas are also being talked about; in that case, some beam forming can provide additional support. Let us hope that, with some more push and effort, we get to realize this one day in the real world.
Most of you may have been following this new prototype being developed and deployed by Google. I am talking about Project Loon, an idea conceived by Google to help connect the few billion friends around the world who are still deprived of the benefits of the internet. The idea at first may sound like fiction, but this one is for real. Already, some pilot projects are under way in New Zealand. Let us watch for this to spread its wings in the coming months and years!
Anyone remember the old Motorola/Iridium initiative? It took off and failed for many a reason, but the idea at the time was to have the entire world voice connected. Project Loon is a bit more than that in intention, technology and economic viability. Besides, Loon is backed by a highly successful technology driven company. The goal in itself is to have pretty much every corner of the world stay connected to the internet, the holy grail of global networking. Whereas Iridium needed sophisticated low orbit satellites, Project Loon can get the job done through a set of balloons equipped with wireless communication technologies. The number of balloons may be much larger than Iridium’s 66 or 70 satellites, but the balloon fleet is a lot less expensive and greener than the failed initiative!
So what goes into the making of Project Loon? Logistics-wise, it needs the deployment of a large enough number of helium-filled balloons into the sky, the stratosphere layer of the earth’s atmosphere to be precise. Why the stratosphere? Because the balloons will make use of the wind flows that prevail in the stratospheric layers to steer and position themselves around a certain location locked to the ground. The balloons are not quite stationary; they instead move around, but on average a certain number of balloons will stay within range to provide reasonable coverage for any given location. All the balloons are equipped with enough circuitry to perform the necessary wireless networking jobs. The balloons are connected (wirelessly, that is) to neighboring balloons at all times, and some of them will talk to available ground station terminals, through which they establish connection to the internet backbone and thus to the rest of the connected world!
The balloons may have varying shapes and orientations. The shape of the balloon and the wind pattern come into the equation to steer them and keep them around (or move them around the earth) in the atmosphere. They may not only move around the earth, but can also potentially move up and down within the stratospheric layers. Each of these balloons is approximately 15 meters in diameter and will float at about 20 km altitude from the earth’s surface. For the record, this height is more than double the altitude of the farthest cloud we can spot, or for that matter the highest altitude where airplanes fly! The task involves gyration, balloon steering and of course quite a lot of wireless mesh networking and co-ordination. At the user side, you will have a specialized antenna (or antennas, depending on whether MIMO comes in) to talk to one of the balloons above your location, and we are all set to go. When fully operational, everything else will be transparent! Pretty much all the energy for operating the balloons will come from solar power. The other natural resource used is wind. Both are green, free and almost universal!
I am very excited about the prospect of this coming off in full force in the near future. On the beneficiary side, it will help reach the far corners of our planet. More than that, it may well serve as an inexpensive way for many billion folks to reap the benefits of the internet and stay connected. Above all, the children of a lesser world can also get a bite of a better world. Imagine a remote village school in Burundi or Bangladesh getting access to better educational tools through the internet! Wouldn’t that be beautiful? Corporations will make money, but when the less privileged also benefit, that is something to cheer. In the end a model will sustain and everyone can have a share, monetary or otherwise.
In a lighter vein, what is the main downside of this everywhere connectedness? Here is a potential spoilsport scenario! You will agree with me here:-)
During the past week, while at Hawaii for the IEEE 802.11 interim, I happened to glance at this NY Times article. The story is about a New Hampshire professor, Yitang Zhang, coming out with a recent proof establishing a finite bound on the gap between prime numbers. More details are emerging as several blog posts, articles and reviews are being written (and some are still being written). By now, it looks like the claim is more or less accepted by pundits in the field, and is being termed a beautiful mathematical breakthrough. As an outsider sitting with curiosity, I feel nice to scratch the surface of this new finding.
The subject surrounding this news is number theory, prime numbers to be precise. The question of interest is the gap between adjacent prime numbers. We know that $2$ and $3$ are prime with a gap of $1$, but this is truly a special case, unique per definition. The gap between $3$ and $5$ is $2$. Similarly, $5$ and $7$ differ by $2$. One may have thought that the gap between successive primes goes up as we move along the number line. Not quite. For example, we can see that there are a lot of pairs with a gap of 2. The easy ones are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139) and the list goes on. It was conjectured that there are infinitely many such pairs, but a proof has remained elusive. The number of such pairs below a large bound such as $10^{18}$ is known precisely, but infinity is still a lot farther than $10^{18}$! The quest to really prove that there are infinitely many twin primes still remains open: the celebrated twin prime conjecture.
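The list of pairs above is easy to reproduce with a few lines of code: sieve the primes below a small bound and pick out consecutive primes that differ by exactly 2.

```python
def primes_below(n):
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(150)
twins = [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]
print(twins)  # [(3, 5), (5, 7), (11, 13), ..., (137, 139)]
```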
Now the new discovery by Zhang is not quite a proof of the twin prime conjecture, but of a close relative of it. The twin conjecture is strictly about prime pairs separated by $2$. A related question is: how about prime pairs $p$ and $p + k$ separated by $k$, where $k$ is some finite number? When $k = 2$, we have the special case of the classical twin prime problem. Can we at least prove mathematically that there exist infinitely many prime pairs $(p, p + k)$ for some finite $k$? If so, what is the smallest $k$ for which this holds true? Zhang now has a proof that this holds for some $k$ smaller than $70$ million. Mathematically, if we denote by $p_n$ the $n$th prime, then the new claim says (stated crisply in the paper abstract) $\liminf_{n \to \infty}\left(p_{n+1} - p_n\right) < 7 \times 10^{7}$.
$70$ million is still a large gap but, as Dorian Goldfeld says, it is still finite and nothing compared to infinity! In the future, it is not unlikely that we may get to see this gap coming down, perhaps to the best case of $2$. Who knows?
The result is still interesting, even to generally curious folks like us. It kind of says that, however far we go along the number line, pairs of consecutive primes with a gap below a fixed finite bound keep showing up; the smallest gaps do not drift off to infinity. Like many other things in the asymptotic regime (for example, the eigenvalues of large random matrices exhibit very interesting properties as the size goes to infinity), things at infinity may exhibit some charm after all!
The paper is accessible here, but as expected the proof is hard (for me at least). Hopefully we will have some diluted explanations of its essence from experts in the coming days. Already, Terence Tao had a sketch a couple of weeks ago on his Google+ feed. Over the last week or so, finer details on the new breakthrough have been emerging. Terry Tao has also initiated an online wiki collaboration in an effort to improve upon this work (for experts that is, not for me!).
Congratulations Professor Zhang.
I love Youtube. Every day, more or less on average, I end up spending (or at times wasting) some time there. Today was no exception, yet I was pleasantly surprised to hit upon some videotaped lectures of Richard Hamming. These were apparently recorded in 1995 and cover a wide variety of topics. I didn’t get to go through all of them, which hopefully I will do sometime. I particularly liked this one on Discrete Evolution. The depth of knowledge these folks have is immense. What is astonishing is their ability and skill in connecting their area of expertise to a vast range of physical analogies. Long live the internet!