You are currently browsing the category archive for the ‘science’ category.

I can't tell much about the dynamics behind Google buying Boston Dynamics, but I am excited to hear this news. All I can say is that there is still life in innovation! Knowing Google, they will do a good job with this new addition. We will have to wait and see the big day.

If you haven't seen it yet, you must watch the Cheetah robot, the Usain Bolt equivalent in machine form :-)

The 2012 Turing Award goes to two cryptography pioneers, Shafi Goldwasser and Silvio Micali. I don't do much cryptography (for that matter, I never did anything serious on that front, besides some C programming to demonstrate RSA and, later, a class project at EPFL involving elliptic curve cryptography and Mathematica programming). But I was always fascinated by the topic of cryptography, largely for the many toy-like yet deeply thought-provoking combinatorial and probabilistic problems it addresses. Several of these problems sit at the very intersection of CS theory and cryptography.

One such classical problem that we all learned in graduate classes is the zero-knowledge proof. The idea is pretty cute. In the simplest scenario, it involves two parties, Alice and Bob, who don't trust each other, but one of them (say Alice) has to convince the other (Bob) that she has knowledge of something (say a mathematical proof) without actually revealing it. The fact that nothing is revealed is why it is zero knowledge: Bob in the end never gets to know what Alice knows, but is nonetheless convinced that she knows it! It works as an interactive protocol in which Bob asks a series of randomly chosen questions, each depending on Alice's prior answers. Carried on long enough, can Alice convince Bob, with overwhelmingly high probability, that she knows what was claimed? Such an interactive protocol constitutes what is coined a zero-knowledge proof. An example would be a novel ATM machine where you don't have to enter your PIN (unlike the ones we have), but can still convince the machine that you know the correct PIN. Sounds cool and yet thought-provoking, right? Well, that is why I said it is cute and interesting. This zero-knowledge interactive proof idea was first conceived by the new Turing Award winners. The names didn't strike me initially with the award news, but after reading the details it occurred to me that I had read their classic paper as part of my coursework at EPFL.

A bit more formally stated, the two entities are a prover and a verifier. In the ATM example, the machine is the verifier and you, the user, are the prover. The prover knows the proof of a mathematical statement, and the verifier checks, with high probability, whether the claim is indeed correct. The machinery of zero-knowledge proofs also guarantees that if the prover is trying to cheat, the verifier will find that out, again with high probability. There are many examples illustrating this beautiful idea. A classic one is helping a blind man identify whether two otherwise identical balls are of different colors: can you convince him, without actually telling him which is which? There are now many variants of the original idea, and myriad applications of practical significance have emerged or are still emerging.
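The blind-man-and-balls example can be sketched as a toy simulation (the protocol framing and round count here are my own, purely illustrative). The verifier secretly swaps the two balls or not, and asks the prover whether a swap happened. A prover who genuinely sees the colors always answers correctly; a cheating prover can only guess, so each round halves the chance of fooling the verifier, yet the verifier never learns the colors themselves.

```python
import random

def run_protocol(rounds, prover_sees_colors, rng=None):
    """Simulate the colored-balls zero-knowledge game.

    Each round the verifier secretly swaps the two balls (or not)
    and asks the prover whether a swap occurred.  Returns True if
    the prover answers correctly every round (i.e. convinces the
    verifier), False as soon as the prover is caught out.
    """
    rng = rng or random.Random(0)
    for _ in range(rounds):
        swapped = rng.random() < 0.5        # verifier's secret coin flip
        if prover_sees_colors:
            answer = swapped                # honest prover always knows
        else:
            answer = rng.random() < 0.5     # cheater can only guess
        if answer != swapped:
            return False                    # caught cheating
    return True

# An honest prover survives all 20 rounds; a cheating prover fools
# the verifier in a given run only with probability 2**-20.
assert run_protocol(20, prover_sees_colors=True)
cheat_wins = sum(run_protocol(20, prover_sees_colors=False,
                              rng=random.Random(i)) for i in range(1000))
print(cheat_wins)
```

After 20 rounds, a convinced verifier knows the prover can distinguish the balls, while having learned nothing about which ball is which: exactly the zero-knowledge flavour described above.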

The ACM award page has a pretty enthralling account of these two pioneers, Shafi and Micali. Now, here is an interesting piece of family trivia: Shafi Goldwasser and her husband Nir Shavit together keep three Gödel Prizes on their shelves, and now add the Turing Award aroma to their household!

It was interesting reading up on this piece of remake; a historical remake, so to speak. That classic 1981 photo of Paul Allen and Bill Gates as young geeks now has a complementary remake, with a new, yet 'older', avatar!

This year's Marconi Foundation prize is being awarded to our company founder, Henry Samueli. With last year's prize awarded to another connoisseur, Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have the two stellar communication-company founders receiving the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to its champion founder. You can see the energy when Henry Samueli talks. I felt a similar charm when Aart de Geus (founder and CEO of my earlier employer, Synopsys) talked too. Congratulations Dr. Samueli, we are proud of you.

The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant in this field, who contributed immensely to many flavours of information theory, will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, all the more, depth.

A tremendous loss! His legacy will continue.

The SODA 2012 paper by Dina Katabi, Piotr Indyk et al. promises a relatively faster way to compute the DFT of sparse signals. It is getting traction from outside the community too, after the MIT broadcast. The well-known FFT, which originated with Gauss and was largely resurrected by Cooley and Tukey, has time complexity \mathcal{O}(n\log n), a significant gain over the conventional DFT complexity of \mathcal{O}\left(n^2\right) when the size n is large.

If we really dissect the FFT complexity limits, it is already pretty good: with n points to compute, the complexity is proportional to n, with roughly \log n work per point.

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing the work by exploiting the signal's inherent sparsity. When the signal is k-sparse (i.e., only k among the n coefficients are significantly different from zero), it is natural to ask whether we can get to complexity \mathcal{O}(k \log n), and Katabi, Indyk et al. have indeed reached there. This is quite a remarkable achievement, considering that the gain can approach the compressibility limit of most real-world signals we deal with today. Audio and video, the leading candidates in the practical signal-processing world, are both sparse in some transform basis. Recall that the recent compressed sensing results for k-sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help realize many of them in a more meaningful way. One good thing is that it generalizes the conventional FFT: it is not just for sparse signals, but holds for any k, and in the limit k \to n the complexity is as good as the old mate FFT!
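To make the baseline concrete, here is a minimal radix-2 Cooley-Tukey FFT in pure Python (illustrative only, and nothing like the actual sFFT algorithm), applied to a signal that is k-sparse in frequency. The full FFT still spends \mathcal{O}(n \log n) work even though only k output bins are nonzero, which is exactly the waste the sparse FFT avoids.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Twiddle factors applied to the odd half, then butterfly combine.
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

# A k-sparse signal: a sum of k = 3 pure tones out of n = 64 frequencies.
n, tones = 64, [3, 17, 40]
x = [sum(cmath.exp(2j * cmath.pi * f * t / n) for f in tones)
     for t in range(n)]
X = fft(x)
active = [k for k, c in enumerate(X) if abs(c) > 1e-6]
print(active)   # only the k active bins survive: [3, 17, 40]
```

All 64 bins are computed just to find the 3 that matter; an \mathcal{O}(k \log n) algorithm would, loosely speaking, locate those 3 bins directly.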

I want to try this out sometime for some communication problem that  I have in mind. At the moment, I am hopeful!

Another stalwart, the founding father of Unix and C, and in many ways one of the founding fathers of computing itself, Dennis Ritchie, has passed away. For me, Ritchie along with Kernighan was one of the first names registered in my mind since learning computer programming. The first book I ever saw on a programming language was this duo's book on C. And boy, wasn't that the most concise book in technology: ever so compact, and yet rich and clean!

Come to think of it, Ritchie's impact on modern science and technology is enormous. He may not have been a very public figure, but his contributions are indeed a touchstone of the modern world, especially the information-network world. Much of the internet, Google, the iPhone and more, almost everything we can think of, runs on his work or its variants. Quite remarkable.

I think the most apt summary of Ritchie's contribution comes from Brian Kernighan himself. He said, "The tools that Dennis built - and their direct descendants - run pretty much everything today."


Happened to see a wonderful animation of the formation of a human embryo, and how a baby develops from almost nothing into its cute avatar! I first saw it through a Facebook feed, but it is also on YouTube. I don't know who the original creator of this nice animation is. Wonderful!

Other than the title tag of founding father of Western philosophy, the first thing I remember about Socrates is the saying, "I know that I know nothing".

It is quite amazing that Socrates never bothered to write books or manuscripts. It is largely through the remarks and subsequent references in the works of his illustrious students that we know anything about the great thinker and philosopher. Without Plato, we probably wouldn't have known much about him.

And by the way, Steve Jobs had this to say about Socrates: “I would trade all my technology for an afternoon with Socrates” (Newsweek, 2001). See the Wikipedia link for details.

Via Lance's blog, I came across this hilarious prize known as the Ig Nobel Prize. The term "Ig" stands for "ignoble"! The prize is apparently given to work that may appear funny, but has some serious reasoning behind it. In other words, these are peculiar awards given to achievements which "first make people laugh, and then make them think". Quite amazing, huh?

I am yet to explore a lot on this. Lance listed one very interesting example, which I find extremely noteworthy: Robert Faid of Greenville, South Carolina, farsighted and faithful seer of statistics, got the Ig Nobel Prize for calculating the exact odds (710,609,175,188,282,000 to 1) that Mikhail Gorbachev is the Antichrist. I wonder how he arrived at this magical number! Didn't Faid know how to play the stock market instead?

Wikipedia has an interesting entry on this topic. Would you believe it: the Russian-born physicist Andre Konstantinovich Geim, who just won this year's Physics Nobel for his work on graphene, had also won the Ig Nobel in 2000! Quite amazing.

One of the terms which often resonates inside a semiconductor company is "split lots". Even though I vaguely knew what it referred to, the haziness around it ran deeper than the meat of the term. Borrowing Mark Twain: "So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover." If not fully, I should at least know what I am completely ignorant about. In the end, it is fairly simple terminology. Here is what I understood, mostly the result of an information-gathering attempt: repeated bugging of many of my VLSI colleagues, coupled with my dumb fight with Mr. Google.

Firstly, the hero term: "corner lot". A corner lot is a term referring to the semiconductor fabrication process corners. Basically, the fabrication parameters are deliberately (and carefully) changed to create extreme corners of a circuit etched on a semiconductor wafer. These corner chips are expected to run slower or faster than the nominal (average-behaviour) chips produced in large volume, and to function at lower or higher temperatures and voltages than a nominal chip. In short, they differ from the typical parts in terms of performance.

Why do this? Well, the simple answer is: to study the robustness of semiconductor chips mass-manufactured through a complicated fabrication process. When millions of chips are produced at the assembly, statistics come into play (think of the law of large numbers and the central limit theorem). To ensure that the manufactured chips function within a certain confidence interval (within a certain variance of the typical parts), it is standard practice to fabricate corner lots.

When the volume of samples is large, the manufactured VLSI chips are likely to show performance variation that admits a Gaussian statistical distribution. The mean of the distribution is the nominal performance of the chips; the corner-lot chips correspond to three-sigma or six-sigma deviations from that nominal. The process parameters are carefully adjusted to three sigma (or six sigma, depending on the need) from the nominal doping concentration in the transistors on a silicon wafer. This way, one can deliberately mimic the corner chips that come out in volume production. In the manufacturing routine, the variation in performance may occur for many reasons: minor changes in temperature or humidity in the clean room, or variation in a die's position relative to the center of the wafer.
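Under that Gaussian assumption, the fraction of parts expected to land within the corner limits follows directly from the normal CDF. A quick check, using only the standard library:

```python
from math import erf, sqrt

def within_k_sigma(k):
    """Fraction of a Gaussian population lying within +/- k standard
    deviations of the mean: erf(k / sqrt(2))."""
    return erf(k / sqrt(2))

for k in (1, 2, 3, 6):
    print(f"{k} sigma: {within_k_sigma(k):.10f}")
# 3 sigma covers about 99.73% of parts; 6 sigma leaves only a
# couple of parts per billion outside the limits.
```

So if the corner lots bracket the three-sigma points and pass, roughly 99.73% of volume production is expected to fall inside the characterized envelope.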

In essence, the corner lots are groups of wafers whose process parameters are carefully adjusted to chosen extremes. How are these corners produced? In other words, what exactly is changed to carefully achieve these extreme wafers? The exact details are not presented here, but from this forum I infer that the main parameters are 1) the doping concentration, 2) the process variation, 3) the resistance of the actives, 4) the properties and thickness of the oxides, and 5) the effective width and length of the stray capacitances.

What do we do with these corner-lot chips? These extreme corners are characterized at various conditions such as temperature and voltage. Once the characterization across the corners is shown to be within accepted limits, the mass-manufactured volume of semiconductor chips falls within the required confidence interval. If all the corner-lot chips meet the performance targets, it is safe to assume that the huge volume of mass-manufactured chips will also fall within the performance limits. That way, a humongous saving of time and effort is achieved, compared to testing every single chip.

What are the different process-corner types? There is no end to the terminologies here. The nomenclature of the corners is based on two letters (we limit attention to CMOS semiconductors alone): the first letter denotes the NMOS corner and the second the PMOS corner. Three types exist for each, namely typical (T), fast (F) and slow (S), where slow and fast refer to the speed (mobility) of the electrons and holes. With three types per device there are 3 \times 3 = 9 combinations in principle, of which the commonly characterized corners are TT, FF, SS, FS and SF. Among these, FF, SS and TT are even corners, since the NMOS and PMOS devices are affected equally; FS and SF are the skewed corners. The even corners are expected to behave less adversely than the skewed ones, since a mismatch between NMOS and PMOS speed upsets circuits that depend on matched pull-up and pull-down strength.
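The nomenclature is mechanical enough to enumerate; a throwaway sketch, with the labels following the first-letter-NMOS, second-letter-PMOS convention described above:

```python
from itertools import product

SPEEDS = ("T", "F", "S")   # typical, fast, slow

# Every NMOS/PMOS pairing: first letter NMOS, second letter PMOS.
corners = ["".join(p) for p in product(SPEEDS, repeat=2)]

even = [c for c in corners if c[0] == c[1]]            # both shift together
skewed = [c for c in corners if "T" not in c and c[0] != c[1]]

print(corners)   # all 9 combinations
print(even)      # ['TT', 'FF', 'SS']
print(skewed)    # ['FS', 'SF']
```

The remaining combinations (FT, TF, ST, TS), where only one device deviates from typical, also show up in some characterization flows, but the five printed above are the ones usually quoted.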

By the way, the obvious impact of the "corner" feature is the difference in device switching speed, which is related to the mobility of electrons and holes. The mobility of electrons is related, among other things, to the doping concentration. A rough empirical model (shown below) captures the relationship: the mobility \mu depends on the impurity and doping concentration \rho, with parameters that vary depending on the type of impurity. The three common impurity elements are arsenic, phosphorus and boron (see this link for further details).

\mu=\mu_{0}+\frac{\mu_{1}-\mu_{0}}{1+\left(\frac{\rho}{\rho_{0}}\right)^{\alpha}}

where \mu_{0} and \mu_{1} are the minimum and maximum limits of the mobility, and \alpha is a fitting parameter.
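The empirical model is easy to evaluate. The fitting parameters below are placeholders I picked for illustration (loosely in the range quoted for electrons in silicon), not measured values for any particular dopant:

```python
def mobility(rho, mu0=50.0, mu1=1400.0, rho0=1e17, alpha=0.7):
    """Empirical doping-dependent mobility (cm^2/V.s).

    mu0/mu1 are the minimum/maximum mobility limits, rho0 a reference
    concentration (cm^-3) and alpha a fitting exponent -- all values
    here are illustrative, not from a real dopant fit.
    """
    return mu0 + (mu1 - mu0) / (1.0 + (rho / rho0) ** alpha)

# Mobility falls monotonically from mu1 toward mu0 as doping increases.
for rho in (1e14, 1e16, 1e18, 1e20):
    print(f"{rho:.0e} cm^-3 -> {mobility(rho):7.1f} cm^2/V.s")
```

Note the limiting behaviour: at low \rho the fraction approaches \mu_{1}-\mu_{0} and the mobility tends to \mu_{1}; at high \rho it vanishes and the mobility tends to \mu_{0}, matching the formula above.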

In the accompanying figure (not reproduced here), the mobility of electrons for different impurities and doping concentrations is shown. The three commonly used dopants are arsenic, boron and phosphorus.


It is an irony that the very dopant deliberately added to increase the conductivity of the semiconductor slows down the mobility of electrons and holes, due to their collisions with the dopant atoms.

A note on the term mobility too. Broadly speaking, the device switching speed is directly related to the mobility of the charge carriers (electrons and holes), so higher mobility roughly implies better and faster switching logic. When an electric field E is applied to a semiconductor, the electrostatic force drives the carriers to a constant average velocity v, as the carriers scatter through the impurities and lattice vibrations. The ratio of the velocity to the applied electric field is called the mobility \mu; that is, \mu = v/E. The velocity initially increases with the electric field, and finally reaches a saturation velocity at high fields. When the carriers flow at the surface of the semiconductor, additional scattering may occur, which pulls the mobility down.
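The definition \mu = v/E together with velocity saturation can be captured in a common first-order form. The numbers and the specific roll-off model here are a textbook-style approximation of my own choosing, not from any particular process:

```python
def drift_velocity(E, mu=1400.0, v_sat=1.0e7):
    """Drift velocity (cm/s) versus applied field E (V/cm).

    Linear (v = mu * E) at low field, rolling off smoothly toward the
    saturation velocity v_sat at high field.  mu in cm^2/V.s; the
    default values are rough silicon-like magnitudes, for illustration.
    """
    v_linear = mu * E
    return v_linear / (1.0 + v_linear / v_sat)

print(drift_velocity(10))     # low field: close to mu * E
print(drift_velocity(1e6))    # high field: approaches v_sat
```

At low field the ratio v/E recovers \mu almost exactly, while at high field the velocity pins near v_{\mathrm{sat}}, mirroring the saturation behaviour described above.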

The interactions happening at atomic and subatomic levels within and outside a transistor are far too complex to comprehend (in a blog, for sure!). Besides, the millions (and these days billions) of transistors must all work in tandem to make the system function as desired, and that too with not a single one failing! To make a business, certain economic sense must prevail as well. It goes without saying that time is extremely critical to getting the product into action. The split lot is one heck of a way to keep that time window from stretching.


Stumbled upon this site http://www.bordalierinstitute.com/target1.html

A cool presentation I liked there is about the evolution of the universe, tagged against the timeline since the Big Bang. It goes to show how fast things moved in the beginning, yet how slowly the universe settled into the fabulous shape (as far as is known today) that we live in. No doubt this is a continuing process of marvel.
