You are currently browsing the category archive for the ‘science’ category.
As such, I can't tell much about the dynamics behind Google buying Boston Dynamics, but I am excited to hear this news. All I can say is that there is still life in innovation! Knowing Google, they will do a good job with this new addition. We will have to wait and see the big day.
If you haven't seen it, you must watch the Cheetah Robot, the Usain Bolt equivalent in machine form :-)
This year's Marconi Foundation prize is being awarded to our company founder Henry Samueli. With last year's prize awarded to another connoisseur, Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have the two stellar communication company founders getting the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to the champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer, Synopsys) talks too. Congratulations Dr. Samueli, we are proud of you.
The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant in this field, who contributed immensely to many flavours of information theory, will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, all the more, depth.
A tremendous loss! His legacy will continue.
The SODA 2012 paper by Dina Katabi, Piotr Indyk et al. promises a relatively faster way to compute the DFT of sparse signals. It is getting traction from outside the community too, after the MIT broadcast. The well-known FFT, originated by Gauss and largely resurrected by Cooley and Tukey, has a time complexity of $O(N \log N)$, which offers significant gains over the conventional DFT complexity of $O(N^2)$ when the size $N$ is large.
If we really dissect the FFT complexity limits, it is already pretty good. With $N$ points to compute, the complexity will be proportional to $N \log N$ and, roughly, the per-node complexity is $\log N$.
Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing the complexity depending on the inherent signal sparsity. When the signal is sparse (i.e., only $k$ among the $N$ coefficients are significantly different from zero), it is fanciful to ask whether we can indeed get to a complexity of $O(k \log N)$, and Katabi, Indyk et al. have indeed reached there. This is quite a remarkable achievement, considering that the gain could be as good as the compressibility limit in most of the real-world signals we deal with today. Signals such as audio and video are the leading candidates in the practical signal processing world, and both are sparse in some transform basis. Recall that the recent compressed sensing results for sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help realize many things in a more meaningful way. One good thing is that this generalizes the conventional FFT. In that sense, it is not just for sparse signals, but something which holds for any $k$; in the limit when $k \to N$, the complexity is as good as the old mate FFT!
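To see the sparsity idea concretely, here is a small Python sketch (the toy signal and all names are mine, not from the paper): it builds a signal that is a sum of just $k = 2$ tones out of $N = 64$ bins, runs a plain radix-2 Cooley-Tukey FFT, and checks that only those $k$ bins carry energy. It is precisely this kind of spectrum that the sparse FFT exploits; the sketch itself still pays the full $O(N \log N)$ cost.

```python
import cmath

def dft(x):
    """Direct DFT: O(N^2) operations."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: O(N log N); N must be a power of two."""
    N = len(x)
    if N == 1:
        return x[:]
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

# A k-sparse spectrum: the signal is a sum of k = 2 pure tones out of N = 64 bins.
N, tones = 64, [5, 17]
x = [sum(cmath.exp(2j * cmath.pi * f * n / N) for f in tones) for n in range(N)]

X = fft(x)
significant = [k for k, c in enumerate(X) if abs(c) > 1e-6]
print(significant)  # only the k tone bins carry energy
```

The recursive FFT here matches the direct DFT to numerical precision, so the sparse spectrum it reveals is not an artifact of the algorithm.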
I want to try this out sometime for some communication problem that I have in mind. At the moment, I am hopeful!
Another stalwart, the founding father of Unix, C and in many ways one of the founding fathers of computing itself, Dennis Ritchie, passed away. For me, Ritchie along with Kernighan was one of the first few names registered in my mind since learning computer programming. The first book I ever saw on a programming language is this duo's book on C. And boy, wasn't that the most concise book in technology: ever so compact, and yet rich and clean!
Come to think of it, the impact of Ritchie on modern science and technology is enormous. He may not have been a very public figure, but his contributions are indeed the touchstone of the modern world, especially the information network world. Much of the internet, Google, the iPhone and, what more, almost everything we can think of runs on his stuff or its variants. Quite remarkable.
I think the most apt summary of Ritchie's contribution comes from Brian Kernighan himself. He said, "The tools that Dennis built – and their direct descendants – run pretty much everything today."
Happened to see a wonderful animation on the formation of the human embryo and how a baby develops from almost nothing to the cute avatar! First, I saw it through a Facebook feed, but it is also there on YouTube. I don't know who the original creator of this nice animation is. Wonderful!
Other than the title tag of being the founding father of Western Philosophy, the first thing I remember about Socrates is the saying “I know nothing about me, other than that I know nothing”.
It is quite amazing that Socrates never bothered to write books or manuscripts. It is largely due to the remarks and subsequent referencing through the works of his illustrious students that we got to know something about the great thinker and philosopher. Without Plato, we probably wouldn't have got to know much about him.
And by the way, Steve Jobs had this to say about Socrates: “I would trade all my technology for an afternoon with Socrates” (Newsweek, 2001). See the Wikipedia link for details.
Via Lance's blog, I came across this hilarious prize known as the Ig Nobel prize. The term "Ig" stands for "Ignoble"! The prize is apparently given to work which may appear to be funny, but has some serious reasoning behind it. In other words, these are peculiar awards given to achievements which "first make people laugh, and then make them think". Quite amazing, huh?
I am yet to explore a lot on this. Lance listed one very interesting one, and I find it extremely noteworthy! Robert Faid of Greenville, South Carolina, farsighted and faithful seer of statistics, got the Ig Nobel prize for calculating the exact odds (710,609,175,188,282,000 to 1) that Mikhail Gorbachev is the Antichrist. I wonder how he arrived at this magical number! Didn't Faid know how to play the game in the stock market then?
Wikipedia has an interesting entry on this topic. Would you believe, the Russian-born physicist Andre Konstantinovich Geim, who just won this year's Physics Nobel for his work on graphene, had also won the Ig Nobel in 2000! Quite amazing.
One of the terms which often resonates inside a semiconductor company is "split lots". Even though I vaguely knew what it referred to, the haziness around it was deeper than the meat of the term. Borrowing from Mark Twain: "So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover." If not fully, I should at least know what I am completely dumb about. In the end it is a fairly simple terminology. Here is what I understood. Most of what I gathered is the result of an information-gathering attempt, after repeatedly bugging many of my VLSI colleagues, coupled with my dumb fight with Mr. Google.
Firstly, the hero term: "corner lot". A corner lot is a term referring to the semiconductor fabrication process corners. Basically, the fabrication parameters are deliberately (and carefully) changed to create extreme corners of a circuit etched in a semiconductor wafer. These corner chips are expected to run slower or faster than the nominal (average behaviour of a) chip produced in large volume. These corner lot chips also function at lower or higher temperatures and voltages than a nominal chip. In short, they differ from the typical parts in terms of performance.
Why do this? Well, the simple answer is: to study the robustness of semiconductor chips mass-manufactured out of a complicated fabrication process. When millions of chips are produced at the assembly, statistics come into play (think of the law of large numbers and central limit theorems). In order to ensure that the manufactured chips function within a certain confidence interval (within a certain variance from the typical parts), it is a practice to fabricate corner lots.
When the volume of samples is large, the manufactured VLSI chips are likely to have performance variation which admits a Gaussian statistical distribution. The mean of the distribution is the nominal performance of the chips. The corner lot chips are the ones that sit at three sigma (or six sigma, depending on the need) away from this nominal. The process parameters, such as the doping concentration in the transistors on a silicon wafer, are carefully adjusted to land at these extremes. This way, one can deliberately mimic the corner chips that come out in volume production. In the manufacturing routine, the variation in performance may occur for many reasons, such as minor changes in the temperature or humidity present in the clean room. The variation can also happen with the die position relative to the center of the wafer.
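Here is a quick Monte Carlo sketch of the three-sigma idea (the delay numbers are made up purely for illustration): sample a large "volume" of chips whose performance has a Gaussian spread, and count the fraction falling within three sigma of nominal.

```python
import random

random.seed(42)

# Hypothetical nominal path delay (ns) and process-induced sigma.
NOMINAL_DELAY, SIGMA = 1.00, 0.05
N_CHIPS = 100_000

# Each manufactured chip gets a delay drawn from the Gaussian spread.
delays = [random.gauss(NOMINAL_DELAY, SIGMA) for _ in range(N_CHIPS)]

# Fraction of chips falling within the +/- 3-sigma window around nominal.
within_3s = sum(abs(d - NOMINAL_DELAY) <= 3 * SIGMA for d in delays) / N_CHIPS
print(f"{within_3s:.4f}")  # close to the Gaussian 99.73% figure
```

If the three-sigma corner parts pass characterization, roughly 99.7% of the volume is covered without testing each chip individually, which is exactly the saving the corner lot buys.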
In essence, the corner lots are groups of wafers whose process parameters are carefully adjusted according to chosen extremes. How are these corners produced? In other words, what exactly is changed to carefully achieve these extreme wafers? The exact details are not presented here. From this forum, I infer that the main parameters are 1) the doping concentration, 2) the process variation, 3) the resistance of the actives, 4) the properties and thickness of the oxides, and 5) the effective width and length of the stray capacitances, etc.
What do we do with these corner lot chips? These extreme corners are characterized at various conditions such as temperatures, voltages, etc. Once the characterization across these corners is proved to be within accepted limits, the mass-manufactured volume of semiconductor chips falls within the required confidence interval. If all the corner lot chips meet the performance, it is safe to assume that the huge volume of mass-manufactured chips will also fall within the performance limits. That way, a humongous saving of time and effort from testing each chip is achieved.
What are the different process corner types? There is no end to the terminologies here. The nomenclature of the corners is based on two letters (we limit attention to CMOS semiconductors alone): the first letter is attributed to the NMOS corner and the second letter to the PMOS. Three types exist for each, namely typical (T), fast (F) and slow (S). Here, slow and fast refer to the speed (mobility) of electrons and holes. In all, there are $3 \times 3 = 9$ combinations, among them FF, FS, FT, SF, SS, ST and TT. Of these, FF, SS and TT are even corners, since both PMOS and NMOS are affected equally. The corners FS and SF are skewed. The even corners are expected to behave less adversely compared to the skewed corners, for valid reasons.
By the way, the obvious impact of the "corner" feature is the difference in device switching speed, which is related to the mobility of electrons and holes. The mobility of electrons is related to the doping concentration (among others). A rough empirical model shows the following relationship between the mobility $\mu$ and the doping concentration $N$:

$\mu = \mu_{\min} + \dfrac{\mu_{\max} - \mu_{\min}}{1 + \left(N/N_{\mathrm{ref}}\right)^{\alpha}}$

where $\mu_{\min}$ and $\mu_{\max}$ are the minimum and maximum limits of the mobility, $N_{\mathrm{ref}}$ is a reference concentration and $\alpha$ is a fitting parameter. The parameters vary depending on the type of impurity. The three common impurity elements are arsenic, phosphorus and boron (see this link for further details).
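Plugging some representative numbers into a fit of this form makes the trend obvious. This is a small Python sketch; the phosphorus-in-silicon parameter values below are typical textbook figures quoted from memory, so treat them as assumptions rather than authoritative data.

```python
def mobility(N_dopant, mu_min, mu_max, N_ref, alpha):
    """Empirical mobility fit: mobility falls from mu_max toward mu_min
    as the doping concentration N_dopant (in cm^-3) increases."""
    return mu_min + (mu_max - mu_min) / (1.0 + (N_dopant / N_ref) ** alpha)

# Assumed fit parameters for phosphorus-doped silicon:
# mobilities in cm^2/(V*s), reference concentration in cm^-3.
MU_MIN, MU_MAX, N_REF, ALPHA = 68.5, 1414.0, 9.2e16, 0.711

low = mobility(1e15, MU_MIN, MU_MAX, N_REF, ALPHA)   # lightly doped: near mu_max
high = mobility(1e19, MU_MIN, MU_MAX, N_REF, ALPHA)  # heavily doped: near mu_min
print(round(low), round(high))
```

Four orders of magnitude more dopant drags the electron mobility down by roughly a factor of ten in this sketch, which is the irony noted below: the dopant added for conductivity also scatters the carriers.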
In the figure below, the mobility of electrons for different impurities and doping concentrations is shown. The three commonly used dopants are arsenic, boron and phosphorus.
It is an irony that the dopant which is deliberately added to increase the conductivity of the semiconductor itself slows down the mobility of electrons and holes, due to collisions (of the electrons or holes) with the dopant atoms.
A touch of note on the term mobility too. Broadly speaking, the device switching speed is directly related to the mobility of the charged carriers (electrons and holes). So, higher mobility somewhat implies better and faster switching logic. When an electric field is applied to a semiconductor, the electrostatic force will drive the carriers to a constant average velocity $v$, as the carriers scatter through the impurities and lattice vibrations. The ratio of this velocity to the applied electric field $E$ is called the mobility $\mu$. That is, $\mu = v/E$. Initially, the velocity increases with an increase in the electric field and finally reaches a saturation velocity at high electric fields. When the carriers flow at the surface of the semiconductor, additional scattering may occur, and that will pull the mobility down.
The kind of interactions which happen at atomic and subatomic levels within and outside a transistor are way too complex to comprehend (in a blog, for sure!). Besides, the millions (and these days billions) of these transistors must all work in tandem to make the system function as desired; and that too with not a single one failing! To do business, certain economic sense should prevail as well. It goes without saying that time is extremely critical to get the product into action. The split lot is one heck of a way not to stretch that time window.
A cool presentation I liked there is about the evolution of the universe, tagged against the timeline since the big bang. It goes to show how fast things moved in the beginning, yet how long it took to get into this fabulous shape (whatever is known as of today) that we live in. No doubt this is a continual process of marvel.