
This year’s Marconi Foundation prize is being awarded to our company founder, Henry Samueli. With last year’s prize awarded to another connoisseur, Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have the founders of two stellar communication companies receiving this prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to the champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer, Synopsys) talks too. Congratulations Dr. Samueli, we are proud of you.

I wasn’t surprised at all by this. To be honest, I expected this to happen long ago, as early as 2006 or so. By 2009, I could see the writing on the wall. So, when my colleague sent the note yesterday afternoon, I pointed him to this very blog!

Now then, Magma is part of Synopsys. With Extreme DA also in the kitty, Synopsys is clearly staying ahead in the EDA leadership race. Analog is where Cadence still has the edge over them.

Tom Kailath is a genius when it comes to presenting concepts. His talk at USC during the annual Viterbi lecture in 2011 was no exception. He talked about the connection between radiative transfer and how some of the results there beautifully connect to algorithms used in communications. As always, the master of many had a lot of stories to tell, and it was mind blowing, to say the least.

So, things are going great for high tech. Hopefully things will stay like this for a while. Intel just reported a strong fourth-quarter result: the Q4 2010 to Q4 2011 increase in revenue is 40%. Wow! The forecast is looking good too. Clearly, the tablet market is set to explode. The upbeat mood was reflected in the market as well. The good thing with big-brother companies like Intel is that they can take the market along. When they produce great results, it generally strengthens the health of the industry as a whole. When they take a hit, the impact is disastrous for the high-tech houses and the semiconductor industry in particular.

If this is true, then this has to be one of the biggest buys in the communication industry. The Atheros buy may well be a WLAN entry for Qualcomm. Fingers crossed!

I can never have enough of Aart de Geus, my former CEO who very much remains my role model. Every time I hear something from him, it is inspirational and mind blowing. No wonder Daniel Nenni is mightily impressed by Aart’s presentation at the EDA CEOs’ meet last month (a detailed account is here in Nenni’s blog). The point Aart stresses is the need for collaboration, more so in these times, when social networking has been spurred on by the internet. It happens everywhere these days, especially in research. Many years ago it was the norm to have single-author publications, but things have changed of late. Now we have authors collaborating across boundaries and continents, sometimes without ever meeting in person. I think this is a good trend. Everybody benefits. Aart of course was stressing that the semiconductor industry needs no less. Gone are the days when discussing problems was considered unethical. In a free world, one needs to be fearless in asking questions. After all, talking is good!

As always, Aart has that super skill of putting things in an eye-catching manner. Daniel phrased it more aptly in his blog, as follows: “Aart also introduced the word systemic (yes I had to look it up) and a mathematical equation correction: Semiconductor design enabled results are not a SUM but a PRODUCT. As in, if anywhere in the semiconductor design and manufacturing equation there is a zero, the results will be a bad wafer, die, chip, or electronic device, which supports GFI’s vision for a new type of collaboration between partners and customers.” Beautifully put and phrased.

If you have ever listened to Aart’s talks, it is a no-brainer to guess the kind of super presentation slides he makes. Here is one from this talk (again, please read Daniel’s blog for an elaborate discussion of it). The analogy is the task of finding a vegetarian restaurant without the services of a vegan mother-in-law. The point is that, at the moment, it is still a long and expensive route. We need smarter (and of course cheaper) ways to speed it up. I leave you to Daniel’s blog for further reading. It is indeed a fabulous read.

Wireless transmission at rates on the order of 1000 Mbps! It was once considered something like the holy grail of wireless transmission. Well, now we have WirelessHD and WiGig, which can scale these mountains. The new WiGig standard is targeting multiples of 1000 Mbps. We can transmit up to 7 Gbps, albeit over short range (on the order of 10 meters or so), all without any wires, using WiGig over the 60 GHz spectrum, which is available and largely unused across the world. Come to think of it, 7 Gbps is a heck of a lot of data for a tick of time. Just about 10 years ago, we would have easily brushed away the need for something like this, because we never really could fathom an application needing such a sack of data. But things have changed since then. Now we have Blu-ray players and uncompressed HD video waiting for wireless transfer to high-definition displays.
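As a back-of-the-envelope illustration (my own round numbers, not from any spec), a quick Python calculation shows why multi-gigabit rates stop looking extravagant once uncompressed video enters the picture:

bluray_gb = 25                                   # single-layer Blu-ray capacity, in gigabytes
link_gbps = 7                                    # peak WiGig rate quoted above
transfer_s = bluray_gb * 8 / link_gbps           # gigabytes to gigabits, divided by the link rate
uncompressed_1080p60_gbps = 1920 * 1080 * 60 * 24 / 1e9   # 24 bits/pixel, 60 frames/s
print(f"25 GB over a 7 Gbps link: about {transfer_s:.0f} seconds")
print(f"uncompressed 1080p60 video: about {uncompressed_1080p60_gbps:.2f} Gbps")

At full rate, an entire Blu-ray disc worth of data moves in under half a minute, and a single uncompressed HD stream alone already eats close to 3 Gbps.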

A couple of months ago, the WiGig Alliance and the Wi-Fi Alliance announced a cooperative agreement to share the technical specification and compliance testing for 60 GHz and WiFi. So, from a standards point of view, things are moving pretty fast. After all, we seem to have learned from the atrocious delays in many of the earlier standard evolutions, most notoriously IEEE 802.11n. A product compliant with WiGig/IEEE 802.11ad is still far away, but there is serious motivation to get the standard spec to evolve.

There are two parallel drives on the 60 GHz spectrum. In terms of productization, non-standard, somewhat proprietary solutions are kind of available in the market. SiBEAM’s WirelessHD™ and Amimon’s proprietary WHDI solution on the 5 GHz spectrum are available now. On 60 GHz, only one product (as far as I know) is available, and that is compliant with WirelessHD™.

By the way, WirelessHD™ has also published the 1.1 version of their consortium spec. The IEEE spec and WirelessHD™ are showing no signs of consensus, which is a bad sign. Hopefully, at some stage these two will merge into one standard spec. My concern is that, in the event of dual standards, there is potential interference between products compliant with the two standards. The one WirelessHD™-compliant chipset which is available (not sure whether it is selling) is far too expensive. So, we need tremendous price scaling to make these things viable from a business point of view.

The WiGig product is unlikely to hit the market in the next 2 years, but it will come sooner rather than later. The three main applications of WiGig are: (1) short-range streaming of uncompressed video between HDMI devices; (2) desktop storage (much like the wireless USB talked about so highly during the UWB days), for which the much talked about USB 3.0 will become an important requirement, and Intel will have to embrace this transition on all processors, which I am sure will happen at some stage; (3) docking stations: wireless transfer between a monitor and its docking station.

Pricing is going to be the single biggest bottleneck for WiGig to get into the mass market. An under-$10 chipset is a bare minimum requirement for any kind of penetration into the consumer electronics market. Judging from the way things have moved in the past, the pricing problem can be solved in a few years.

In my opinion, the killer requirement for 60 GHz to succeed will be serious power savings. The antenna size will be significantly smaller (because of the higher carrier frequency), and it may perhaps be a silicon-based integrated antenna. To get into portable devices, we need a solution which stresses the battery less. Can we look that far ahead now, say 5 years from now?

The new spec has some very interesting features. While it consumes 1.6 GHz of bandwidth, with multiple antennas it calls for some sophisticated signal processing techniques to scale the multi-gigabit mountain. The radio design is extremely challenging. Above all, we need backward compatibility with WiFi. I hope by then we can do away with those annoying IEEE 802.11b modes out of the box!

So, the days ahead are exciting. It is natural to pose this question: how much more can wireless do? As Marconi said, “It is dangerous to put a limit on wireless”. So true!

One of the terms which often resonates inside a semiconductor company is “split lots”. Even though I vaguely knew what it referred to, the haziness around it was deeper than the meat of the term. Borrowing Mark Twain, ”So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover”. If not fully, I should at least know what I am completely dumb about. In the end it is a fairly simple piece of terminology. Here is what I understood. Most of what I gathered is the result of an information-gathering attempt, after repeatedly bugging many of my VLSI colleagues, coupled with my dumb fight with Mr. Google.

Firstly, the hero term: “corner lot”. Corner lot is a term referring to the semiconductor fabrication process corners. Basically, the fabrication parameters are deliberately (and carefully) changed to create extreme corners of a circuit etched on a semiconductor wafer. These corner chips are expected to run slower or faster than the nominal (average behaviour) chips produced in large volume. These corner lot chips also function at lower or higher temperatures and voltages than a nominal chip. In short, they differ from the typical parts in terms of performance.

Why do this? Well, the simple answer is: to study the robustness of semiconductor chips mass manufactured through a complicated fabrication process. When millions of chips are produced at the assembly, statistics come into play (think of the law of large numbers and the central limit theorem). In order to ensure that the manufactured chips function within a certain confidence interval (within a certain variance from the typical parts), it is standard practice to fabricate corner lots.

When the volume of samples is large, the manufactured VLSI chips are likely to have performance variations which admit a Gaussian statistical distribution. The mean of the distribution is the nominal performance of the chips. The corner lot chips represent the three-sigma (or six-sigma) deviation from that nominal performance. The process parameters are carefully adjusted to three sigma (or six sigma, depending on the need) from the nominal doping concentration of the transistors on a silicon wafer. This way, one can deliberately mimic the corner chips which come out in volume production. In the manufacturing routine, the variation in performance may occur for many reasons, for example minor changes in temperature or humidity in the clean room. The variation can also depend on the position of the die relative to the center of the wafer.
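To make the three-sigma picture concrete, here is a minimal Python sketch (the delay numbers are hypothetical, purely for illustration) that models chip speed as a Gaussian variable and counts how many parts land within three sigma of nominal:

import random, statistics

random.seed(0)
nominal_delay_ps = 100.0    # assumed nominal gate delay, picoseconds
sigma_ps = 5.0              # assumed one-sigma process spread

lot = [random.gauss(nominal_delay_ps, sigma_ps) for _ in range(100_000)]
within = sum(abs(d - nominal_delay_ps) <= 3 * sigma_ps for d in lot) / len(lot)

print(f"mean = {statistics.mean(lot):.2f} ps, stdev = {statistics.stdev(lot):.2f} ps")
print(f"fraction within +/- 3 sigma = {within:.4f}")   # about 0.9973 for a Gaussian

The corner lot chips are, in effect, parts built on purpose to sit at the edges of this distribution, rather than waiting for volume production to throw them up by chance.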

In essence, the corner lots are groups of wafers whose process parameters are carefully adjusted to chosen extremes. How are these corners produced? In other words, what exactly is changed to carefully achieve these extreme wafers? The exact details are not presented here. From this forum, I infer that the main parameters are: 1) the doping concentration; 2) the process variation; 3) the resistance of the actives; 4) the properties and thickness of the oxides; 5) the effective width and length of the stray capacitances, etc.

What do we do with these corner lot chips? These extreme corners are characterized under various conditions such as temperature, voltage etc. Once the characterization across these corners is proved to be within accepted limits, the mass-manufactured volume of semiconductor chips falls within the required confidence interval. If all the corner lot chips meet the performance targets, it is safe to assume that the huge volume of mass-manufactured chips will also fall within the performance limits. That way, a humongous saving of the time and effort of testing each chip is achieved.

What are the different process corner types? There is no end to the terminologies here. The nomenclature of the corners is based on two letters (we limit attention to CMOS alone): the first letter is attributed to the NMOS corner and the second letter to the PMOS. Three types exist for each, namely typical (T), fast (F) and slow (S). Here, slow and fast refer to the speed (mobility) of electrons and holes. With three choices for each of the two devices there are 3 \times 3 = 9 combinations in principle; the ones usually characterized are TT, FF, SS, FS and SF (corners with one device typical, such as FT or ST, are possible but less commonly used). Among these, FF, SS and TT are even corners, since both PMOS and NMOS are affected equally. The corners FS and SF are skewed. The even corners are expected to behave less adversely than the skewed corners, for valid reasons.
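A tiny enumeration (my own illustration, not from any textbook) of the two-letter naming, with the first letter for NMOS and the second for PMOS:

types = ["S", "T", "F"]                                       # slow, typical, fast
corners = [n + p for n in types for p in types]               # all 9 combinations
even = [c for c in corners if c[0] == c[1]]                   # SS, TT, FF
skewed = [c for c in corners if {c[0], c[1]} == {"S", "F"}]   # SF, FS
print(corners)
print("even:", even, "skewed:", skewed)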

By the way, the obvious impact of the “corner” feature is the difference in device switching speed, which is related to the mobility of electrons and holes. The mobility of electrons is related to the doping concentration (among other things). A rough empirical model gives the following relationship, in which the mobility \mu depends on the impurity and doping concentration \rho. The parameters vary depending on the type of impurity; the three common impurity elements are arsenic, phosphorus and boron (see this link for further details):

\mu=\mu_{0}+\frac{\mu_{1}-\mu_{0}}{1+\left(\frac{\rho}{\rho_{0}}\right)^{\alpha}}

where \mu_{0} and \mu_{1} are the minimum and maximum limits of the mobility, and \alpha is a fitting parameter.
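For a feel of the numbers, here is a quick evaluation of the model in Python. The parameter values are placeholders of roughly the right order of magnitude for electrons in silicon, not the fitted constants for any particular dopant (those differ between arsenic, boron and phosphorus):

def mobility(rho, mu0=65.0, mu1=1400.0, rho0=1e17, alpha=0.7):
    """Empirical mobility (cm^2/V.s) versus doping concentration rho (cm^-3)."""
    return mu0 + (mu1 - mu0) / (1.0 + (rho / rho0) ** alpha)

for rho in (1e15, 1e16, 1e17, 1e18, 1e19):
    print(f"rho = {rho:.0e} cm^-3  ->  mu ~ {mobility(rho):7.1f} cm^2/V.s")

As the doping concentration climbs past \rho_{0}, the mobility falls from its \mu_{1} plateau toward \mu_{0}, which is exactly the irony noted below.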

In the figure below, the mobility of electrons for different impurities and doping concentrations is shown. The three commonly used dopants are arsenic, boron and phosphorus.


It is an irony that the dopant which is deliberately added to increase the conductivity of the semiconductor itself slows down the mobility of electrons and holes, due to collisions (of electrons or holes) with the dopant atoms.

A quick note on the term mobility too. Broadly speaking, the device switching speed is directly related to the mobility of the charge carriers (electrons and holes). So, higher mobility roughly implies better and faster switching logic. When an electric field E is applied to a semiconductor, the electrostatic force drives the carriers to a constant average velocity v, as the carriers scatter off impurities and lattice vibrations. The ratio of this velocity to the applied electric field is called the mobility \mu. That is, \mu = v/E. Initially, the velocity increases with increasing electric field and finally reaches a saturation velocity at high fields. When the carriers flow at the surface of the semiconductor, additional scattering may occur, which pulls the mobility down.
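A simple saturating formula captures this behaviour numerically; the numbers below are only illustrative of the orders of magnitude for electrons in silicon, not a calibrated device model:

mu = 1400.0       # cm^2/(V.s), assumed low-field electron mobility
v_sat = 1.0e7     # cm/s, typical order of magnitude of the saturation velocity
for E in (1e2, 1e3, 1e4, 1e5):                 # applied field, V/cm
    v = mu * E / (1.0 + mu * E / v_sat)        # v ~ mu*E at low field, tends to v_sat at high field
    print(f"E = {E:.0e} V/cm  ->  v ~ {v:.2e} cm/s")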

The kinds of interactions which happen at atomic and subatomic levels within and outside a transistor are way too complex to comprehend (in a blog, for sure!). Besides, the millions (and these days billions) of transistors must all work in tandem to make the system function as desired; and that too with not a single one failing! To make a business, certain economic sense should prevail as well. It goes without saying that time is extremely critical to getting the product into action. The split lot is one heck of a way not to stretch that time window.


Great to find and read an article/report in EETimes about a company founded by many of my ex-colleagues and friends. Saankhya Labs seems to be in good shape to make that big impact in the fabless startup arena. So far, the success of Indian startups has mainly been in the services sector, with a few in the IP/networking box space. Saankhya is targeting a niche market, driven by software-defined programmable radios, aimed at the digital TV market. It is beyond doubt that a universal demodulator has tremendous potential in the consumer TV market, yet it remains largely untapped. With so many different standards in use around the world for digital TV transmission alone, it is of great interest to have one decoder which works in all locations. Saankhya also has an analog decoder (for the US ATSC schemes) which will be handy during the transition period when service providers switch from analog to digital. Best wishes to Saankhya.

The Wireless Gigabit Alliance (WiGig) has a new (updated) website. For a start, there is a link, How WiGig Works, which nicely explains what WiGig is all about in clear layman’s terms. If you ever wondered whether we have seen the finale of the wireless rate surge, just re-think. We are still far from drafting even a proposal, but there is surely plenty of light on the wireless horizon. As an example, HDTV would require about 3 Gbps. WiGig is addressing applications such as this, which demand rates beyond 3 gigabits per second. The brief tutorial is a compelling read.

…and it is Oracle! Quite a surprise! That is the least I felt when the news broke that Oracle is buying Sun Microsystems. The once great and proud maker of some of the best servers and computing powerhouses is now passing into the hands of a software giant largely focused on database solutions. There is no natural connection to the obvious eye. But who knows? Oracle may be eyeing something big! I can’t see a justification for spending 7.4 billion dollars to get hold of Java and MySQL alone. These are the big software solutions from Sun, apart from Solaris. Anyway, both of these are open source software too. After all, Sun is known for its champion make of servers, right? Is it that Oracle feared an imminent acquisition by some other competitor, which might have threatened their lead? For a good amount of time the speculation was on whether IBM would still buy Sun. Then it was Cisco, and then HP, doing the rounds as potential buyers. None of these materialized, but Oracle, the one choice with maximum entropy!

Could it be that Oracle saw something big in Solaris? Are they eyeing a solid operating system market? In any case, a decision to buy a company for 7.4 billion dollars can’t be for fun. Surely there has got to be a plan, at least in theory! Someone opined in an article recently about a possible consolidation of SAP and a possible buyout by one of the bigger fishes like IBM or HP. Now, that could take some shape too. Nothing can be ruled out at the moment. This is the sort of indication floating around.

It was almost unthinkable that a single company would rule the EDA world. At least this is what I strongly believed a few years ago. Now, put the present dishes on the table and I see that Synopsys is giving nightmares to all the other EDA shops. While working at Synopsys, we always saw Cadence as the rival company to floor. All of that was on the wish list, and not many of us thought we could do it, ever so easily. Cadence was the obvious leader of EDA for many years, and Synopsys stood strongly in second position. Then there were the Mentors and the Magmas, at a fair distance down. Magma was the emerging company, with a strong future predicted by many pundits within and outside the EDA world. It seemed imminent that Magma would one day give stronger competition to both of the big brothers, Synopsys and Cadence. They may still be a force to reckon with, but sadly they tried to act over-smart and it all triggered a downfall. I am not sure whether their rather peculiar attempt to sue Synopsys was wholly responsible for their slide, but it definitely may have had a role.

Now it appears that the discounts offered by the big EDA players are giving more aches to the smaller players. It is well known that EDA tools are phenomenally expensive and that the marketing has always revolved around deals for bulk purchases of tools. What is more colourful is that buyers offer to make the deal public in exchange for bigger discounts. The concept of a primary EDA vendor was not that prevalent a few years ago. However, the trend these days is to grab that extra mileage by roping in the leading semiconductor houses. It is a big win for both the buyer and the seller. Synopsys for sure is going to enjoy this. First, they are among the very few making a profit even in this difficult economy, and perhaps the only one from EDA. Considering that the EDA market itself is only about a 4 or 5 billion dollar market, the impact of a nearly 1.5 billion dollar Synopsys doing too well is going to give the other little fellows more headaches in the coming days.

Cadence literally has a plateful of problems of its own, and now, with the whole semiconductor market trying to minimize R&D spending, it is a double advantage for Synopsys; that too with newer friends being added to their primary-EDA friends list. Magma is becoming more of a prospective buying target than a rival. A few years ago, Synopsys had worries about a growing Magma. Now I wouldn’t rule out a potential buyout by Synopsys itself, or maybe Cadence or Mentor Graphics!

Some people say that Synopsys is going to be the next Microsoft of EDA. Aart perhaps rightly said they want to be the Apple of EDA. I prefer Aart’s view here. Not just because Synopsys was my breadwinner for a while, not because I attended the same grad school as de Geus, nor because of the well-known fact that yours truly is an ardent fan of Aart de Geus, but because Synopsys is well managed by a great management team with great work ethics. When the ratable (subscription) revenue/licensing model was announced there were a lot of raised eyebrows, but it was a long-term vision and Synopsys is really reaping the fruits now.

Having said all this, like many of you, I too am worried by this monopoly trend in EDA. We need smaller players in every market and we need more innovation. From Synopsys’s standpoint, having less competition would mean relaxed days ahead, but for the market we need better products and superior innovation. We need Cadence to revive, and at the same time we need new companies to emerge and take position as the next Magma. At this stage, I am worried about Magma. Is Magma to follow the Avant! route and get merged into Synopsys?

Aart put it aptly: “I understand that the entire world is under economic pressure,” he said. “When that happens, some will do better than others.” One thing is for sure: among all the EDA executives, the Synopsys folks must be getting better sleep these days.

While the talk and boom about multimode multiband phones in CMOS is turning greener, there is a natural question around it. How about doing all of this in software? That is, add a level of programmability such that a great deal of the issues of a hardwired implementation are shifted to more flexible firmware. Without contention, pros and cons of the idea of programmability still prevail. Clearly, one definite advantage I see with a programmable design is the significant cost reduction and reuse. Additionally, a migration or upgrade, which is inevitable from a future gadget design point of view, can be done with relative ease on a programmable multimode chip. One could build a suitable processor architecture to suit the modulation schemes (say, an OFDM-based scheme can have a built-in FFT engine, or a WCDMA scheme a correlator engine). Isn’t anyone working seriously in these directions? I am sure there are many, at least startup ventures; Vanu and Icera are indeed two that come to mind. How about the big boys? There was a lot of flurry about software-programmable baseband chips being developed; I am not quite sure what the latest is on that front. Isn’t it the next big thing in the offing? I am sure the big EDA houses have thought ahead about building tools for a heavily software-oriented design, at least for the years ahead. Or am I jumping the gun a little too far? However, I see some top-level bottlenecks in making these programmable multimode chips realizable at an easier pace than a textbook concept. One of them is the difficulty of getting away from the analog front end. As a matter of fact, I now feel that analog is going to stay.
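As a toy illustration of why an OFDM modem maps so naturally onto an FFT engine (this is generic textbook OFDM, not tied to any particular standard), modulation is just an inverse FFT and demodulation a forward FFT:

import numpy as np

n_sub = 64                                  # assumed number of subcarriers
bits = np.random.randint(0, 2, n_sub)
symbols = 2 * bits - 1                      # BPSK on each subcarrier
tx_time = np.fft.ifft(symbols)              # OFDM modulation = IFFT
rx_symbols = np.fft.fft(tx_time)            # demodulation = FFT (ideal, noiseless channel)
recovered = (rx_symbols.real > 0).astype(int)
assert np.array_equal(recovered, bits)      # exact recovery in this idealized setting

A programmable baseband would keep such an FFT (or a correlator, for WCDMA) as a hardware accelerator and leave the rest to firmware.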

So where are we heading? Clearly, an all-CMOS multiband multimode single chip (baseband and analog) with a near-perfect RF and a software architecture would be the ultimate holy grail of cellular chip design. How many bands and how many modes are incorporated becomes less important if the programmability aspect is assured. The challenges within a single-chip concept are themselves many. Clearly, the RF portion is expected to take up a smaller share of the overall chip size; an all-digital front end is aimed in that direction. While direct digitization of the high-frequency radio signal eliminates a significant amount of analog processing, there are several practical bottlenecks with this Utopian design model. We are not quite there yet to say goodbye to analog entirely. Analog signal processing is still critical and inevitable, even for the programmable multimode dream. I will give you some numerical facts to substantiate my claim:

Suppose we decide to build a programmable, all-digital zero-IF receiver for a 2 GHz system (around the UMTS band). Then Shannon-Nyquist sampling would demand at least 4 Gsamples/second. Even with a processor which clocks at 4 GHz and does, say, 8 operations per cycle, our full-steam budget is going to be a maximum of 32 \times 10^{9} operations per second. This theoretical figure is based on the assumption that the processor memory is fully utilized. At a sampling rate of 4 Gsamples/second, we are only going to get \frac{32 \times 10^{9}}{4\times 10^{9}}=8 operations per sample. How are we going to have all the fancy radio algorithms shape up with this? Even to implement the realistic functionality of a typical modern radio, this is inadequate. Another important issue is the power dissipation of running a processor at 4 GHz. For the portable gadgets these chips are targeted at, we still need more and more hand-in-hand optimization and integration of analog processing, software and digital processing, in addition to an optimized system architecture. My feeling is that the analog front end is going to stay for some more time, if not forever. At least in the immediate future, we need more inroads from analog processing to realize the small-size, cost-effective multiband multimode chip dream.
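The arithmetic, spelled out with the same assumptions as above:

carrier_hz = 2e9                # roughly the UMTS band
sample_rate = 2 * carrier_hz    # Nyquist rate for direct digitization: 4 Gsamples/s
clock_hz = 4e9                  # assumed processor clock
ops_per_cycle = 8               # assumed operations per cycle
total_ops = clock_hz * ops_per_cycle        # 32e9 operations per second
print(total_ops / sample_rate)              # 8 operations per sample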

Today there appeared an interesting (and perhaps general) question on the LinkedIn Analog/RF/mixed-signal group. The question was this: “Regarding multi-mode multiband RF transmitters for handsets (CMOS), what do you think are the hot issues (besides PA)?” I have given a short overview of the challenges I can see when a multi-mode phone is to be designed in CMOS: the phone has to support a wide range of frequency bands as well as multiple standards/technologies/modulations/air interfaces. Here is what I wrote. I am not sure whether the discussion is accessible to the public, hence I repost it here.

Integrating the RF transmitter and receiver circuits is challenging, since we have to support multiple bands within a single mode (say, GSM/EDGE should support the GSM900 to GSM1900 bands) as well as multiple phone modes. For instance, a natural multi-mode multi-band phone supporting GSM/GPRS/EDGE/WCDMA/LTE will have to cover a wide frequency range from 850 MHz to over 2 GHz. If we were to consider incorporating GPS and WLAN, add that extra consideration. This affects not just the transceiver circuitry, but also other components such as oscillators, filters, passive components, frequency synthesizers and power amplifiers. Another thing is that, for multi-mode, the sensitivity requirements are much more stringent than for a single-mode, multi-band design.
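A rough frequency-planning sketch of what “a wide frequency range” means in practice; the band edges below are illustrative round numbers picked for the example, not a definitive band plan for any region or operator:

bands_mhz = {
    "GSM/EDGE": [(824, 915), (1710, 1910)],   # e.g. GSM850/900 plus DCS/PCS uplinks
    "WCDMA":    [(1920, 1980)],               # e.g. Band I uplink
    "LTE":      [(824, 849), (2500, 2570)],   # e.g. a low band plus Band 7 uplink
}
lows = [lo for ranges in bands_mhz.values() for (lo, _) in ranges]
highs = [hi for ranges in bands_mhz.values() for (_, hi) in ranges]
print(f"front end must cover roughly {min(lows)} MHz to {max(highs)} MHz")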

Since CMOS offers low cost, better performance and better scaling, to me that is the way forward. The natural choice of transceiver in CMOS would be direct conversion/zero-IF, since it eliminates the costly SAW filters and also reduces the number of on-chip oscillators and mixers. Now, there are several key design issues to be considered with the direct-conversion architecture. The most notable ones are the well-known ghost of “DC offset” and the 1/f noise. Designers will have their task cut out to get a clean front end as well as near-ideal oscillators.
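A simplistic picture of the DC-offset issue (mean removal stands in here for the servo loops or high-pass filtering a real zero-IF receiver would use):

import numpy as np

rng = np.random.default_rng(0)
baseband = rng.standard_normal(10_000)     # stand-in for the wanted baseband signal
dc_offset = 0.3                            # assumed offset from LO leakage / self-mixing
received = baseband + dc_offset

corrected = received - received.mean()     # crude DC removal; also notches true DC content
print(f"offset before: {received.mean():+.3f}, after: {corrected.mean():+.3f}")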

Now I see another problem with multi-mode, depending on what level of flexibility we prefer in this integration. Do we need the phone to operate in multiple modes simultaneously? Say a voice call on GSM and, at the same time, multimedia streaming on LTE. In such a case, the question of sharing components is completely ruled out. If not, some components such as synthesizers and mixers (if in the same band for multiple modes) can be shared. Clearly, simultaneous-mode operation will ask for increased silicon die size as well as cost. There may also be challenges in circuit isolation between the different modes.

In all, depending on the level of sophistication (and of course all these things will have to scale economically too), the design, partitioning and architecture challenges are aplenty. Now the choice between a single chip (containing both the analog baseband and the digital baseband) versus two chips (analog and digital partitioned) gets a little trickier with multiple modes. With multiple antennas (MIMO), add yet another dimension to this whole thing :-(

https://ratnuu.wordpress.com 
http://people.epfl.ch/rethnakaran.pulikkoonattu

Phew! After all the debates and discussions (over years) on the standard evolution of IEEE 802.11n for multiple antennas (MIMO), it now appears that we are all in for a single-stream (single-antenna) chip. It sounds more like an 11g upgrade, or perhaps a conservative lead from there on? If Atheros believes this is the way to go, I am convinced Broadcom and Marvell have it in the delivery line too. Here is that interesting news story at EETimes.
