
So, the number has come down from 70 million to a mortal 600! Expectedly, there is a new Polymath initiative to improve this further and, who knows, perhaps 2 indeed is that number. Well, we are talking about the new result by James Maynard on bounded gaps between prime numbers.

Just to recap, the problem statement is as follows. Let $p_n$ and $p_{n+1}$ be the $n$th and $(n+1)$th prime numbers (e.g., $p_1 = 2$, $p_2 = 3$, $p_3 = 5$). On cursory counting, the gap $p_{n+1} - p_n$ appears to grow larger as $n$ increases, but not always, because there are these well known twin prime pairs such as $(11, 13)$. Will this gap of $2$ keep recurring forever as $n \to \infty$? If not as low as $2$, will there be something low enough? i.e., how small can $H$ be, where

$\liminf_{n \to \infty} (p_{n+1} - p_n) \le H$?
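Just to see this concretely, here is a quick Python sketch (my own toy, nothing from the papers) listing the gaps between consecutive primes: on average they drift upward, yet the gap $2$ keeps reappearing, which is exactly what the twin prime question is about.

```python
# Gaps between consecutive primes: they trend upward on average
# but keep dipping back to 2.
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_upto(200)
gaps = [b - a for a, b in zip(ps, ps[1:])]
print(gaps[:15])  # the gaps fluctuate; 2 keeps reappearing
```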

Earlier this year, Zhang proved that $H$ is no more than $70$ million. That in a way proved that the prime gap stays bounded as we move along the number line. A group of mathematicians including Terence Tao worked further (the Polymath project on this) and improved that bound to a few thousand. The latest result from Maynard brings an independent proof that $H \le 600$. Maynard also claims that if the Elliott–Halberstam conjecture (see this nice blog post on prime tuples by Terry Tao) is indeed true, then $H \le 12$. Stunning!

What is stated here is just one avatar of the prime tuples question. More general results are also being discussed within the community. Terence Tao again has this nicely articulated and maintains a Polymath page for us. As an onlooker, I am as excited as many others to see this progress.

For consecutive ($n$th and $(n+1)$th) primes $p_n$ and $p_{n+1}$, the asymptotic gap has received fresh attention of late. The famous twin prime conjecture says $\liminf_{n \to \infty}(p_{n+1} - p_n) = 2$, but that is still a conjecture and not a proof. Recently, Zhang proved that this liminf is a finite number, and definitely not bigger than $70$ million. It is hoped that one day the mathematical community will find a number lower than this, and perhaps even reach the holy grail mark of $2$.

Now what? Within a span of a month or so, the established gap of 70 million has improved to a number in the thousands and is still on a path of descent. Still some distance to the ultimate mark of 2, but boy, does collaboration work! Ever since the now famous breakthrough from Zhang came to broad light (I had scribbled my little thoughts on that earlier!), Terence Tao and his team have steadily managed to improve the bound. Tao has already knitted a nice and detailed summary of the progress. As of last week the proven gap stood in the low thousands, and the updated gap could be smaller still (the latest value is still under verification as per the Polymath8 project page). Let us hope that they can go all the way and prove the twin prime conjecture one day!

It is interesting to read, from Tao's blog (which any day is a treasure trove of many topics, thankfully written with a wider audience in mind), the several connections they made, including the one to the Elliott–Halberstam conjecture, in improving this fascinating distance between prime successors.

The 2012 Turing award goes to two cryptography pioneers, Shafi Goldwasser and Silvio Micali. I don't do much cryptography (for that matter, I never did anything serious on that front, besides some C programming to demonstrate RSA and, later on, a class project involving elliptic curve cryptography and Mathematica programming at EPFL). But I was always fascinated by the topic of cryptography, largely for the many interesting toy-like, yet deeply thought-provoking combinatorial as well as probabilistic problems it addresses. Several of these problems sit at the very intersection of CS theory and cryptography.

One such classical problem that we all learned in graduate classes is the zero knowledge proof. The idea is pretty cute. In the simplest scenario, it involves two parties, Alice and Bob, who don't trust each other, but one of them (say Alice) has to convince the other (Bob) that she has knowledge of something (say a mathematical proof) without actually revealing it. The fact that nothing is revealed is why it is zero knowledge (Bob in the end never gets to know what Alice knows, but is convinced that she knows it!). So it works as an interactive protocol in which Bob asks a series of randomly chosen questions, each depending on Alice's prior answers. Continuing this way for long, can Alice convince Bob, with overwhelmingly high probability, that she knows what she claimed? Such an interactive protocol constitutes what is coined a zero knowledge proof. An example would be a novel ATM machine where you don't have to enter the PIN (unlike the ones we have), but you can still convince the machine that you know your correct PIN. Sounds cool and yet thought provoking, right? Well, that is why I said it is cute and interesting. This zero knowledge interactive proof idea was first conceived by the new Turing award winners. The names didn't strike me initially with the Turing award news, but after reading the details, it occurred to me that I had read their classic paper as part of my coursework at EPFL.

A bit more formally stated, the two entities are a Prover and a Verifier. In the ATM example, the machine is the verifier and you, the user, are the prover. The prover knows the proof of a mathematical statement and the verifier tries to check, with high probability, whether the claim is indeed correct. The machinery of a zero knowledge proof is such that if the prover is trying to trick, the verifier will be able to find that out too, with high probability. There are many examples illustrating this beautiful idea. A classic one is the problem of convincing a blind man that two otherwise identical balls are of different colors, without telling him which is which. Now there are variants of the original idea, and myriad applications of practical significance have emerged or are still emerging.
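The colored-balls example can even be simulated. Below is my own little Python sketch (a toy, not the Goldwasser–Micali protocol itself): each round the blind verifier secretly swaps the balls or not; a prover who really sees two colors always answers correctly, while a bluffing prover must guess and gets caught with probability 1/2 per round.

```python
import random

# Toy zero-knowledge illustration: convince a blind verifier that two balls
# differ in color, without naming the colors.
def run_protocol(balls_differ, rounds, rng):
    for _ in range(rounds):
        swapped = rng.random() < 0.5      # verifier's secret coin flip
        if balls_differ:
            answer = swapped              # honest prover sees the colors
        else:
            answer = rng.random() < 0.5   # bluffer can only guess
        if answer != swapped:
            return False                  # caught cheating
    return True                           # verifier is convinced

rng = random.Random(42)
print(run_protocol(True, 20, rng))        # honest prover always survives
bluff_wins = sum(run_protocol(False, 20, rng) for _ in range(1000))
print(bluff_wins)  # a bluffer survives 20 rounds with probability 2^-20
```

Note that the verifier learns nothing about the colors themselves, only that the prover can tell them apart; that is the "zero knowledge" part.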

The ACM award page has a pretty enthralling account of these two pioneers. Now, here is an interesting bit of family trivia: Shafi Goldwasser and her husband Nir Shavit together keep three Gödel Prizes on their shelves, and now the aroma of a Turing award is added to the household!

During the past week, while at Hawaii for the IEEE 802.11 interim, I happened to glance at this NY Times article. The story is about a New Hampshire professor, Yitang Zhang, coming out with a recent proof establishing a finite bound on the gap between prime numbers. More details are emerging as several blog pieces, articles and reviews are being written (and some are still being written). By now, the claim looks more or less accepted by pundits in the field, and is termed a beautiful mathematical breakthrough. As an outsider sitting with curiosity, it feels nice to scratch the surface of this new finding.

The subject surrounding this news is number theory, prime numbers to be precise. The question of interest is the gap between adjacent prime numbers. We know that $2$ and $3$ are prime with a gap of $1$, but this is truly a special case and unique per definition. The gap between $3$ and $5$ is $2$. Similarly, $5$ and $7$ differ by $2$. One may have thought that the gap between successive primes goes up as we flee along the number line. Not quite. For example, we can see that there are a lot of pairs with a gap of 2. The easy ones are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), and the list goes on. It is conjectured that there are infinitely many such pairs, but the proof of that is not quite easy as yet! The number of such pairs below any given threshold is known precisely, but infinity is still a long way off! An interesting quest was to really prove that there are infinitely many twin primes, but this still remains an open conjecture.
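The twin pairs listed above are easy to enumerate by machine. Here is a small Python sketch of mine (just an illustration, not anything from Zhang's paper) that sieves the primes and collects the pairs with gap 2:

```python
# Enumerate twin prime pairs (p, p + 2) with both members below n.
def twin_primes_below(n):
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [(p, p + 2) for p in range(n - 2) if sieve[p] and sieve[p + 2]]

pairs = twin_primes_below(150)
print(pairs)       # (3, 5), (5, 7), ..., (137, 139)
print(len(pairs))  # and the pairs keep coming as the bound grows
```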

Now, the new discovery by Zhang is not quite a proof of the twin prime conjecture, but of a close relative of it. The twin conjecture is strictly about prime pairs separated by $2$. A related question is: how about prime pairs separated by a gap $H$, where $H$ is some finite number? When $H = 2$, we have the special case of the classical twin prime question. Can we at least prove mathematically that there exist infinitely many prime pairs with gap at most $H$, for some finite $H$? If so, what is the smallest $H$ for which this holds? Zhang now has a proof for $H$ as small as $70$ million. Mathematically, if $p_n$ denotes the $n$th prime, the new claim (stated crisply in the paper abstract) says

$\liminf_{n \to \infty} (p_{n+1} - p_n) < 7 \times 10^{7}$.

$70$ million is still a large gap but, as Dorian Goldfeld says, it is still finite and nothing compared to infinity! In future, it is not unlikely that we may get to see this gap coming down, perhaps all the way to the best case of $2$. Who knows?

The result is still interesting, even to generally curious folks like us. It kind of says that, infinitely often, the gap between prime numbers is bounded by a finite number. If we really plot the consecutive prime gaps, we will keep seeing dips below this bound, however far we go! Like many other things at the asymptotic limit (for example, the eigenvalues of large random matrices exhibit very interesting properties when the size goes to infinity), things at infinity may exhibit some charm after all!

The paper is accessible here, but as expected the proof is hard (for me at least). Hopefully we will have some diluted explanations of its essence from experts in the coming days. Already, Terence Tao had a sketch a couple of weeks ago on his Google+ feed. Over the last week or so, finer details on the new breakthrough have been emerging. Terry Tao has also initiated an online wiki collaboration in an effort to improve upon this work (for experts, that is, not for me!).

Congratulations Professor Zhang.

Since this is year-end holiday time, I got a fair share of time to read up on several random topics. Amidst the many sad happenings of the past couple of weeks, starting from the shocking events at the Newtown, Connecticut elementary school, there was more disturbing news from all over the world, particularly this one from the other side of the globe. A rather non-disturbing piece of news is the new claim of proving/establishing the so-called Ramanujan deathbed conjecture. I did a bit of follow-up reading to first understand the premise of the problem in the news.

OK, the subject deals with what are known as mock theta functions (note: there is no credible Wikipedia entry on them, and hence I cannot link one here; also remember that mock theta functions are not the same as Ramanujan's theta functions; just so we don't confuse these subtle mathematical terms!). These are not quite the Riemann zeta functions, which are well known, but are more similar to the Jacobi theta functions. The mathematical prodigy Srinivasa Ramanujan, while on his deathbed, had shared some quick tidbits with the famous Cambridge mathematician G. H. Hardy (and I should say a hardcore cricket enthusiast! If you have not read his memoir *A Mathematician's Apology* already, step out and grab it soon. A fabulous quick read, that!), who was paying a visit to his now famous student. After their famous meeting (or maybe during the meeting itself), sometime in 1920, before his death, Ramanujan wrote a letter to Hardy in Cambridge in which he introduced as many as seventeen new functions. These functions, as per Ramanujan, shared some distinct similarities with the theta functions (the theta functions themselves had been studied before by mathematicians like Jacobi, Riemann and several others; now, of course, the connection between Jacobi theta functions and the Riemann zeta function is already established!).

The Jacobi theta function itself is pretty complicated to lesser mortals like us. It looks harmless for a starter, but its properties, its various avatars and its connections to many real world applications are what make it monstrous. And then there is its celebrated cousin, the Riemann zeta function $\zeta(s)$, which appears in a simple and elegant closed form for even $s$ (e.g., $\zeta(2) = \pi^2/6$), but for odd $s$ changes form beyond what we can fathom (for example, it is not even known whether $\zeta(5)$ is irrational!). I remember playing with (I think it was 2000 or 2001) a proof of $\zeta(2) = \pi^2/6$ using some integral calculus in two variables, which turns out to be rather well known and easy. There are other ways to prove it as well, but it stops being that simple beyond the even cases! Anyway, Jacobi's theta function has the form $\vartheta(z; \tau) = \sum_{n=-\infty}^{\infty} e^{\pi i n^2 \tau + 2\pi i n z}$, which in words can be roughly stated as some elliptic form of exponentials.
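The $\zeta(2) = \pi^2/6$ identity (the Basel problem) is easy to sanity-check numerically. A quick Python sketch of my own, just to watch the partial sums creep toward the limit:

```python
import math

# Partial sums of 1/n^2 converging to zeta(2) = pi^2 / 6.
def zeta_partial(s, terms):
    return sum(1.0 / n ** s for n in range(1, terms + 1))

approx = zeta_partial(2, 100000)
print(approx, math.pi ** 2 / 6)  # the tail beyond N terms is about 1/N
```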

Much like Fermat did with his famous last theorem, Ramanujan didn't give much in the way of hints beyond listing the functions and stating that they are elegant. For example, he didn't reveal where they come from, how to formulate them, or for that matter what use there is in having them. Like many of his other famous mathematical findings, Ramanujan, a self-taught genius, made a quick listing of these. The rest of the curiosity has enthralled and haunted mathematical minds of the coming generations, all over the world. A brilliant spark from the great genius stirred an intellectual quest that lasted over 90 years!

The latest is that two mathematicians, Ken Ono of Wisconsin and University of Cologne professor Kathrin Bringmann, came up with a framework explaining what mock theta functions are and how to derive them. They connect modular forms, or more specifically Maass forms, to Ramanujan's functions. If you recall, modular forms also played a big part in the proof of the other famous problem of this kind, Fermat's last theorem.

It will take a while for me to understand (if at all I manage to digest) all the details of this new claim. I must say, it is likely that I will fail to comprehend it in full. As always, I will be curious to read it and grab as much as I can. As an Indian, the connection to India is always an interesting pursuit!

I leave you with a nice documentary video on Ramanujan that I found on YouTube. If you have not seen this already, check it out. A pretty good documentary, unfortunately marred by a lot of useless and unrelated comments.

The SODA 2012 paper by Dina Katabi, Piotr Indyk et al. promises a relatively faster way to compute the DFT of sparse signals. This is getting traction from outside the community too, after the MIT broadcast. The well known FFT, originated by Gauss and largely resurrected by Cooley and Tukey, has time complexity $O(N \log N)$, which offers significant gains over the conventional DFT complexity of $O(N^2)$ when the size $N$ is large.

If we really dissect the FFT complexity limits, it is already pretty good. With $N$ points to compute, the complexity is proportional to $N \log N$; roughly, the per-point complexity is $\log N$.

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing it by exploiting the inherent signal sparsity. When the signal is sparse (i.e., only $k$ among the $N$ DFT coefficients are significantly different from zero), it is fanciful to ask whether we can get the complexity to scale with $k$ rather than $N$, and Katabi, Indyk et al. have indeed reached there. Quite a remarkable achievement this is, considering that the gain could be as good as the compressibility limit in most of the real world signals we deal with today. Signals such as audio and video are the leading candidates in the practical signal processing world, and both are sparse in some transform basis. Recall that the recent compressed sensing results for sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help realize many of those things in a more meaningful way. One good thing is that this generalizes the conventional FFT. In that way, it is not just for sparse signals, but something that holds for any $k$; in the limit $k \to N$, the complexity is as good as the old mate FFT!
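To see what "sparse in the Fourier domain" means, here is a tiny Python sketch of mine. This is emphatically not the sFFT algorithm, just an illustration that a signal built from $k$ tones concentrates essentially all its DFT energy in $k$ bins, which is the structure the new algorithm exploits:

```python
import cmath

# Plain O(N^2) DFT, for illustration only.
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N, tones = 64, [3, 10, 25]   # a k = 3 sparse spectrum
x = [sum(cmath.exp(2j * cmath.pi * f * n / N) for f in tones) for n in range(N)]
X = dft(x)

# The three largest-magnitude bins recover the three active frequencies.
big = sorted(range(N), key=lambda k: -abs(X[k]))[:3]
print(sorted(big))
```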

I want to try this out sometime for some communication problem that I have in mind. At the moment, I am hopeful!

Another stalwart, the founding father of Unix and C, and in many ways one of the founding fathers of computing itself, Dennis Ritchie passed away. For me, Ritchie along with Kernighan was one of the first few names registered in my mind since learning computer programming. The first book I ever saw on a programming language was this duo's book on C. And boy, wasn't that the most concise book in technology, ever so compact and yet rich and clean!

Come to think of it, the impact of Ritchie on modern science and technology is enormous. He may not have been a very public figure, but his contributions are indeed the touchstone of the modern world, especially the networked information world. Much of the internet, Google, the iPhone and what not, almost everything we can think of, runs on his work or its variants. Quite remarkable.

I think the most apt summary of Ritchie's contribution comes from Brian Kernighan himself. He said, "The tools that Dennis built - and their direct descendants - run pretty much everything today."

One thing I like about Google (besides it being an absolute gem of a search genie) is their customized logos, usually marking the day in history. Today, on Pierre de Fermat's birthday, they put up a nice and cute logo picturing the famous Fermat's last theorem. I think it was in 1993 that I first heard of this famous problem, when the England-born Oxford/Princeton mathematician Andrew Wiles unveiled the solution. In 1996 or so, there was a documentary telecast on the BBC about Andrew Wiles and his journey to solving one of the greatest problems (if not the greatest, then one of the most talked about) in the history of mathematics. A high class documentary is archived on YouTube. I am not sure whether it was the BBC who made it originally; some YouTube links show UTV.

Here is another piece of writing on this subject (by **Chloe Albanesius**).

On Christmas day, out of the blue, I bumped into an old archive of Robert Fano's interview (an oral history). A beautiful one.

Henri Padé wrote his PhD thesis in 1892 at the École Normale Supérieure in Paris. The dissertation was on what we know today as the Padé approximant. Come to think of it today, it is really remarkable that over 100 years ago, mathematicians had thought about function approximation using rational polynomials. Now it may appear all too simple, but without the aid of serious computing machines, one would have to rely on mathematical rigour for every small argument. In comparison, these days we can quickly check validity using a computer program before venturing into a formal proof.

Anyway, the sudden recollection of this French mathematician happened as I was looking for a good curve fit for a problem at work. I needed to minimize the maximum error rather than the average or squared error. I thought of a Legendre polynomial fit, which gives the minimum root mean squared error, and of Chebyshev, which minimizes the worst case error. The order of the polynomial and the number of sample points seemed to have a dependency. The Padé approximant is a cool technique that reduces the polynomial order by using a rational polynomial. I am not yet convinced about the statistical properties of this method, though. The data points at my disposal may carry some measurement error. I need to investigate a bit more before deciding on its suitability for the given problem! Happy to know the scheme, though. In retrospect, I remember hearing of it in my statistical signal processing books, but never paid serious attention then! Silly.
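To show what the technique buys, here is a quick Python sketch using the well-known $[2/2]$ Padé approximant of $e^x$, namely $(1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)$: built from the same few Taylor coefficients, it beats the degree-2 Taylor polynomial comfortably.

```python
import math

# [2/2] Pade approximant of exp(x): a rational function matching the
# Taylor series of exp up to order 4.
def pade22_exp(x):
    return (1 + x / 2 + x * x / 12) / (1 - x / 2 + x * x / 12)

# Degree-2 Taylor polynomial of exp(x), for comparison.
def taylor2_exp(x):
    return 1 + x + x * x / 2

x = 1.0
print(abs(pade22_exp(x) - math.exp(x)))   # about 4e-3
print(abs(taylor2_exp(x) - math.exp(x)))  # about 2.2e-1
```

The rational form packs more approximation power into the same information about the function, which is exactly the appeal Padé saw.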

Lipschitz continuity is a stronger property than continuity. When I first heard of it (through Ruediger's treatment of the expectation analysis of codes), it was all haze. The second time around, I made some sense of it during the Applied Stochastic Processes course. Anyway, a final appreciation of the concept kicked in only when I started working on a problem. Recently, I came across it again, and hence thought of penning a few easy lines on it.

Here is a beautiful explanation of this concept.
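A quick numerical way to get the feel for it (a Python sketch of my own): a function $f$ is Lipschitz if $|f(x) - f(y)| \le L\,|x - y|$ for some fixed $L$. The difference quotients of $\sin$ stay below $L = 1$, while those of $\sqrt{x}$ blow up near $0$: continuous, but not Lipschitz on $[0, 1]$.

```python
import math
import random

# Largest observed difference quotient |f(x)-f(y)| / |x-y| over sample pairs.
def max_quotient(f, pairs):
    return max(abs(f(x) - f(y)) / abs(x - y) for x, y in pairs)

rng = random.Random(0)
pairs = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(10000)]
pairs = [(x, y) for x, y in pairs if x != y]

print(max_quotient(math.sin, pairs))   # stays at or below 1 (L = 1 works)
print(max_quotient(math.sqrt, [(0.0, e) for e in (1e-2, 1e-4, 1e-6)]))  # grows without bound
```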

I was playing a few tricks with a few centrality measures on a few graphs. So, I thought of penning down a quick piece of notes on the notion of centrality (a quick introduction can be found here, along with the Wikipedia page on centrality).
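For the simplest of the measures, degree centrality, here is a minimal Python sketch on a toy graph of my own making (the vertex names are just illustrative): the "hub" vertex touches everyone, so it tops the ranking.

```python
from collections import defaultdict

# Degree centrality: a vertex's degree normalized by the maximum possible
# degree (n - 1). The hub, connected to all others, scores 1.0.
edges = [("hub", "a"), ("hub", "b"), ("hub", "c"), ("a", "b")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

n = len(adj)
degree_centrality = {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}
print(max(degree_centrality, key=degree_centrality.get))  # 'hub'
```

The other classic measures (closeness, betweenness, eigenvector) follow the same spirit: score a vertex by how it sits relative to the rest of the graph.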

Just heard from my EPFL folks that Mathematica 8 has just been released. I am yet to get a chance to see it working, but I will look forward to it someday. I am quite happy with Mathematica 7 already, but Wolfram always brings radical new stuff, which is special. In fact, I am mighty pleased with version 7, but who knows what all is new in 8? I have been a huge fan of Mathematica since the early 2000s, ever since Nandu (Nandakishore Santhi) introduced this new tool. I use it for pretty much every computational mathematics job. These days, I even use it for plotting, much more than Matlab. Looks like a lot of new features were added in 8. One of the claims from Wolfram is that they added a lot of new aids for working with (probability) distributions, which I think is going to help me a lot. One stellar new thing I notice (from the announcement) is free-form linguistic input. Boy, that is one hell of a killer feature. Forget the syntax then. If you want to plot sin(x) with the grid on, type just that sentence! That's it! The rest Mathematica will do. Wow! How much is it for an upgrade? Or should I go for a trial? I can't wait!

Does anyone believe that the difference between the Lebesgue and Riemann integrals can have physical significance, and that whether, say, an airplane would or would not fly could depend on this difference? If such were claimed, I should not care to fly in that plane.

I recall hearing this interesting quote. If my memory is correct, I first heard it from Martin Vetterli, who either mentioned it during a talk/class or had it in a footnote of his forthcoming book. It sounded funny at the time, but I didn't really know to whom the quote was originally attributed. To my surprise, its origin traces to the storied Richard Hamming. Well, why Hamming? Emre Telatar did tell me a couple of funny stories about Hamming, besides the famous scream at the computers which eventually led to the discovery of error correcting codes! By the way, Emre is a walking encyclopedia when it comes to stories. He is a treasure in many ways!

OK, coming back to where we started. Springer has published this nice conversation piece online for free. The title is "Mathematics and Design: Yes, But Will it Fly?". It is not really a book, but a very interesting conversation discussing the above mentioned quote by Richard Hamming. The preface of the discussion itself couldn't be more apt, which reads:

“Martin Davis and Matt Insall discuss a quote by Richard W. Hamming about the physical effect of Lebesgue and Riemann integrals and whether it made a difference whether one or the other was used, for example, in the design of an airplane. The gist of Hamming’s quote was that the fine points of mathematical analysis are not relevant to engineering considerations.”

A very fascinating read indeed. An even more fascinating, formal defense of Lebesgue on the question of why aeroplanes fly is here. Well, as to the aeroplane question, here is what Andrew Lewis has to say: "In the event that the reader is consulting this paper in a panic just prior to boarding an airplane, let us answer the question posed in the title of the paper. The answer is, 'The question is meaningless as the distinctions between the Riemann and Lebesgue integrals do not, and should not be thought to, contribute to such worldly matters as aircraft design.' However, the salient point is that this is not a valid criticism of the Lebesgue integral."

A nice intuitive way to explain what a martingale is can be found here (Venkat's Computational Probability course notes). Perhaps it was known already to many, but to a layman, this is the best way to introduce martingales, first up!

A sequence $X_0, X_1, X_2, \ldots$ is called a martingale if $\mathbb{E}\left[X_{n+1} \mid X_0, \ldots, X_n\right] = X_n$ for all $n$. An intuitive way of thinking about martingales (and indeed the origin of the term) is to imagine $X_n$ as your total profit after the $n$th round of a gambling game. Then the martingale condition states that knowing your winnings in the first $n$ games should not bias your expected winnings in the $(n+1)$th round. See the Wikipedia page for examples of martingales. The course notes are fabulous. I am following them on and off!
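The gambling interpretation can even be checked empirically. Here is a small Python simulation of my own (not from the course notes): take a fair $\pm 1$ betting game, condition on the profit after 10 rounds, and watch the average profit one round later match the current profit, just as the martingale condition demands.

```python
import random
from collections import defaultdict

# Fair +-1 gambling walk: X_n is the total profit after n rounds.
# Empirically, E[X_11 | X_10 = x] should be x for every x.
rng = random.Random(1)
buckets = defaultdict(list)
for _ in range(50000):
    x10 = sum(1 if rng.random() < 0.5 else -1 for _ in range(10))
    step = 1 if rng.random() < 0.5 else -1
    buckets[x10].append(x10 + step)

for x in sorted(buckets):
    if len(buckets[x]) > 2000:  # only well-populated profit levels
        print(x, round(sum(buckets[x]) / len(buckets[x]), 2))
```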

Just heard that the 2010 Hay Festival was held last week in Thiruvananthapuram. Although I couldn't go there, it felt nice to have had such a great global art and literary event in my own home state in India. The official website hosted some amazing scenes from various parts of Kerala, which an ardent Kerala fan like me would wish to see at all times. The Hay Festival of arts and literature has become quite prominent in the public media recently, and what better place to have it than the beautiful and literary-rich Kerala!

Thanks to YouTube, I could gather glimpses of the event. Part of Vikram Seth's Storypooja is captured here. The one session I would have liked to attend is Marcus du Sautoy on "The Number Mysteries: A Mathematical Odyssey Through Everyday Life". I hope to see a video tape of this program online sometime soon! After all, Marcus claimed to have a reason for everything, including why to bend it like Beckham! Another of my favourites is ONV Kuruppu. His candid and lively talk is an anytime favorite of mine. He was supposed to have a conversation with the poet Sachidanandan. And how much I missed Bob Geldof's conversation, let alone his concert!

Via Lance's blog/Twitter, I came to know that Wolfram Alpha can now take TeX input. This is very useful. In fact, I believe this is a great addition to the search tool. For someone familiar with TeX syntax, it becomes pretty easy to manipulate functions, plots and datasets.

This came up again for the nth time, as I heard it in discussions and debates at various places, including the internet. But as always, there is learning for the taking (at least for me). Here is a nice and cute summary of the Bayesian philosophy, drafted by Radford Neal. Ah, the latest revisit to the Bayesian versus non-Bayesian debate came up when I was chatting with a friend in finance. I cannot divulge the details of the conversation here, but I thought of writing a bit about Bayesian inference, which to me has always been the formal and correct statistical method (I must admit that I am somewhat of a novice to mount a strong counter argument, which is tame, but frankly I need to learn more!).
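To make the Bayesian mechanics concrete, here is the textbook beta-binomial example in a few lines of Python (my own sketch; the helper name `posterior_mean` is just illustrative): a Beta$(1, 1)$ prior on a coin's bias, updated by observed heads and tails, yields a Beta posterior in closed form, and the posterior mean drifts from the prior toward the empirical frequency as data accumulates.

```python
# Conjugate Bayesian update for a coin bias: Beta(a, b) prior plus
# (heads, tails) observations gives a Beta(a + heads, b + tails) posterior.
def posterior_mean(heads, tails, a=1.0, b=1.0):
    # Mean of the Beta posterior: shifts from the prior mean a / (a + b)
    # toward the observed frequency as the data grows.
    return (a + heads) / (a + b + heads + tails)

print(posterior_mean(0, 0))   # prior only: 0.5
print(posterior_mean(7, 3))   # data pulls the estimate toward 0.7
```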

The Human Resource Development ministry of India, aka the HRD ministry, made a mockery of itself with a foot-in-mouth call on the nationality of the world chess champion Viswanathan Anand. The Indian citizen, who lives in Madrid for logistic reasons, was denied an honorary doctorate due to an insane mess created by the HRD ministry. The reason: he is not Indian! Anand's wife Aruna, understandably upset by the wrong call from the HRD, duly sent a fax of the Indian passport that Anand holds, but the ministry was still not convinced. What a joke! This is not the way to salute a world champion and our own citizen. All this happened on the eve of the ongoing International Congress of Mathematicians held in Hyderabad. Only a blog post earlier, I wrote about the thrill of India getting a chance to hold the ICM. Now the hopeless HRD ministry has found a way to make a mess of it. What a shame! They also managed to rebuke the famous algebraic geometry guru David Mumford. Neither Anand nor Professor Mumford needs these extra certificates to proclaim what they are; they are already stellar figures in their respective fields of expertise. The manner in which they were toyed with is what annoys people like me, and it is simply unacceptable. These are not the signs of a country claiming to become the next superpower. Nor is this the way we as children were taught to respect elders and guests. Athithi devo bhava, but not to be!

As an Indian, I take a lot of pride in learning that the International Congress of Mathematicians 2010 is being held in India. It is currently ongoing at Hyderabad, between the 19th and 27th of August. Even though I am many a mile away from there now, and in spite of not really participating directly or indirectly, it brings immense joy to see India getting a chance to showcase a piece of international math progress in modern times too. The Fields medals, the Nevanlinna prize, as well as the Gauss and Chern medals are announced during the congress. It is great to hear of Dan Spielman getting the Nevanlinna honor. As a coding theory enthusiast, it is doubly exciting that, for the second time, someone from the same field (after Madhu Sudan received it in 2002) gets to hold the coveted prize.

The award winners are:

Fields medals: Elon Lindenstrauss, Ngo Bao Chau, Stanislav Smirnov and Cedric Villani. An Israeli, a Vietnamese, a Russian who lives in Switzerland and a Frenchman. Here is a nice interview transcript where Robert Siegel talks to Julie Rehmeyer, discussing the award winners. A pretty nice, brief introduction to the fabulous medalists.

Congratulations to all the winners. The Fields Medal announcement is here. The Nevanlinna, Gauss and Chern medal announcements are here.

The P versus NP question has popped up again into the mainstream. The HP scientist Vinay Deolalikar has recently prepared a manuscript which claims that, after all, P ≠ NP. Greg's blog seems to be the first to broadcast this to the general public. I have not had much time to digest the proof claims or the counter arguments in detail. Here is a fascinating series of arguments discussing the claims. I am going to do some digesting on this in the days to come! We will wait for the experts to have their say. I am going to relish this discussion.

On a lighter note, here is a very interesting blog piece, drawing a similarity between the giant component behaviour and a pile of staple pins. It is a sort of layman's view of the Erdős–Rényi phenomenon.

In one version of the Erdős–Rényi process, one starts with a set of $n$ isolated vertices and then keeps adding random edges one at a time; more specifically, at each stage one chooses two vertices at random from among all pairs that are not already connected, then draws an edge between them. It turns out there is a dramatic change in the nature of the graph when the number of edges reaches $n/2$. Below this threshold, the graph consists of many small, isolated components; above $n/2$, the fragments coalesce into one giant component that includes almost all the vertices. "The Birth of the Giant Component" was later described in greater detail in an even bigger paper (it filled an entire issue of *Random Structures and Algorithms*, 1993, 4:233–358) by Svante Janson, Donald E. Knuth, Tomasz Luczak and Boris Pittel.
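The phase transition is easy to watch in simulation. Here is a Python sketch of mine using union-find (a simplification of the process above: I draw endpoints with replacement, so a few edges are wasted, which does not change the picture): well below $n/2$ edges the largest component is tiny; well above it, a giant component swallows most of the graph.

```python
import random

# Erdos-Renyi process via union-find: add m random edges among n vertices
# and return the size of the largest connected component.
def largest_component(n, m, rng):
    parent = list(range(n))
    size = [1] * n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for _ in range(m):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
    return max(size[find(v)] for v in range(n))

rng = random.Random(3)
n = 5000
small = largest_component(n, n // 4, rng)  # well below n/2: tiny fragments
big = largest_component(n, 2 * n, rng)     # well above n/2: a giant component
print(small, big)
```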

Of course, Joel Spencer has written a beautiful article for the AMS commemorating the golden anniversary of the giant component.

Here is an interesting recap of Newcomb's and Benford's findings on the distribution of the first digit in data records.

A century ago, Simon Newcomb observed an unexpected pattern in the first digits of logarithm tables: The digit 1 is significantly more likely to occur than 2, 2 than 3, and so on. More than a half-century later, Frank Benford rediscovered the first-digit phenomenon and found that it applied to many tables of numerical data, including the stock market, census statistics and accounting figures. New mathematical insights establish the empirical law developed by Newcomb and Benford as part of modern probability theory, and recent applications include testing of mathematical models, design of computers and detection of fraud in accounting.

Here is another interesting finding on temperature data and final digits. A proof and formal review of the original work, presented by Hall, is available.

Here is an intuitive description of Benford's law.
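The law is also easy to test for oneself. A Python sketch of mine, using the first digits of the powers of 2 (a classic sequence that follows Benford's law exactly in the limit): the fraction of powers starting with digit $d$ approaches $\log_{10}(1 + 1/d)$, so 1 leads comfortably.

```python
import math

# First digits of 2^k via the fractional part of k*log10(2): the first
# digit of 2^k is floor(10^frac(k*log10(2))).
counts = [0] * 10
N = 10000
log2 = math.log10(2)
for k in range(1, N + 1):
    frac = (k * log2) % 1.0
    counts[int(10 ** frac)] += 1

# Compare empirical frequencies with Benford's prediction log10(1 + 1/d).
for d in range(1, 10):
    print(d, counts[d] / N, round(math.log10(1 + 1 / d), 4))
```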

Alex Bellos listed what he calls the ten best mathematicians (of all time). His list, published in The Guardian, is based on his assessment that the individual contributions of the listed ten mathematicians changed the world, or rather revolutionized mathematics in some sense.

Alex has listed 10 great minds whose contributions to mathematics are simply awesome. However, the list to me is far from complete, and it is grossly incorrect to call the selected 10 the best. Perhaps we should have named it “10 amazing mathematicians”. I am sure Alex didn’t do an exhaustive survey of the mathematicians who have lived on this planet. If we are to limit the number to 10, then there are many other mathematicians whose contributions will any day stand taller than some of these. Newton, Shannon, Kolmogorov, Fourier, Galois, Abel, Euclid and even Ramanujan are some of the names that come to my mind at once. Many French mathematicians are given a miss as well. Some may argue that these are not pure mathematicians, but they all produced more than one mathematical theory which for sure changed the world, sometimes beyond the mathematical world. Newton, for his calculus, can never be omitted from the top ten. Shannon’s mathematical theory is the underpinning of literally everything in communication, including the internet. Fourier’s once ignored (and painfully debated) work on heat, which eventually spurred harmonic analysis, and the young Galois’s spark all merit inclusion.

Among the list, Euler, Copernicus, and Gauss will pick themselves any day. I reckon Cantor will also find a wider consensus to be within the top ten. Perelman and Tao are clearly two of the modern greats, but to place them among the top ten of over 3000 years of formal mathematical history feels like a stretch. Erdős is one of the most popular mathematicians of the 20th century, an immense contributor to many different areas of mathematics. Moreover, he was a maverick among modern mathematicians because of his travel-crazy, migratory style of life. He is one of my favourite mathematicians as well. Whether he ranks in the all-time top ten is not clear to me.

Nevertheless, Alex has produced an article which will spur some curiosity in the general reader about the history of mathematics. Mathematicians from 500 BC through Copernicus to the present day (Perelman and Terence Tao) are in the list.

The Italian Cardano is apparently one of the original mathematicians who paved the way for probability theory. In my ignorance, I had not heard much about him; I am curious about that part of the history now. I had heard of the Hypatia legacy in mathematics, and had since forgotten about that Greek tragedy. Reading the new list reminded me of the atrocities a state steeped in dogma or religious views can inflict upon any new scientific claim. This is clearly not the only instance of such a thing happening. Galileo also had difficulty convincing the orthodox society of his time. And there are many more. Why then, even now, does religious fanaticism in many states mock Charles Darwin’s theory of evolution? Religious monks and administrations all over the world still latch on to the belief that it is some hand of god who created the universe and then every single creature and plant. Ludicrous, to say the least! But then, religious beliefs are often personal, and one can be blind sometimes.

An interesting and twisted version of the twenty questions game came my way recently. I saw it in the 2009 ISIT paper on error correction under feedback with list decoding, by Ofer Shayevitz. The problem goes like this (I am going by Ofer’s nice narration with the famous Alice and Bob duo).

The game is being played by Alice and Bob. Alice selects one of possible objects, and Bob can ask binary questions (where the answer can be either yes or no only), in a quest to identify the object selected. If Alice is always truthful, then questions are obviously both necessary and sufficient. Now, suppose Alice is allowed to lie up to times. How many questions does Bob need to get the correct object? Alternatively, given questions, what is the maximal number of objects that Bob can separate? This version of the “twenty questions game” with lies is known as Ulam’s game.
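For a feel of the numbers, Berlekamp’s sphere-packing (“volume”) bound gives the standard upper limit on how many objects Bob can separate with q questions and up to e lies. A sketch (the bound is an upper limit, not always exactly achievable; the function name is mine):

```python
from math import comb

def sphere_packing_bound(q, e):
    """Berlekamp's sphere-packing bound for Ulam's game: with q yes/no
    questions and at most e lies, Bob can separate at most
    2^q / sum_{i<=e} C(q, i) objects."""
    return 2 ** q // sum(comb(q, i) for i in range(e + 1))

print(sphere_packing_bound(20, 0))  # 1048576 -- no lies: the usual 2^q
print(sphere_packing_bound(20, 1))  # 49932   -- one lie costs a factor of q+1
```

With no lies the bound reduces to the familiar 2^q objects; each allowed lie shrinks the separable set by roughly another factor of q.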

It is official now. The Clay millennium prize is to be awarded to Grigory Perelman for proving the great Poincaré conjecture. It is a different matter whether he decides to accept it. The man who dared to ditch the Fields medal may have an opinion of his own on matters pertaining to awards and their credentials. Lesser mortals like us get amused by his decision, but then he is special and different.

Some interesting tips on binary random matrices again. Kolchin’s book also discusses some aspects of this (see Page 126 for example). It is interesting to see the asymptotic behaviour.

The probability that a random $m \times n$ binary matrix, with each element chosen uniformly ($0$ and $1$ picked equally likely), is of full rank $n$ (for $m \ge n$) is $\prod_{i=m-n+1}^{m}\left(1-2^{-i}\right)$.

The probability that a random $n \times n$ binary matrix, with each element chosen uniformly ($0$ and $1$ picked equally likely), is of full rank is $\prod_{i=1}^{n}\left(1-2^{-i}\right)$, which tends to about $0.2888$ as $n \to \infty$.
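A tiny numeric check of the square case is instructive (assuming the standard product formula $\prod_{i\ge 1}(1-2^{-i})$; the function name is mine):

```python
def full_rank_prob(n):
    """P(an n x n uniform binary matrix is full rank over GF(2)):
    the product prod_{i=1..n} (1 - 2^(-i)), which tends to ~0.28879."""
    p = 1.0
    for i in range(1, n + 1):
        p *= 1.0 - 2.0 ** (-i)
    return p

print(full_rank_prob(10))   # ~0.2891
print(full_rank_prob(100))  # ~0.28879 -- essentially the limiting value
```

The product converges very quickly: already at $n=10$ we are within a fraction of a percent of the limit, so even modestly sized uniform binary matrices are full rank with probability only about 29%.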

Glanced upon a paper from the 2009 ISIT, titled “LT Codes Decoding: Design and Analysis” by Feng Lu et al. The authors discuss an LT decoding strategy: the peeling decoder (conventional simple LT decoding) followed by an extension based on the well-known Wiedemann algorithm for solving a sparse homogeneous linear system. Here is a brief summary as I understood it.

On packet erasure channels (the internet being one close realization of a model of this kind: over the TCP/IP suite, packets either get delivered correctly or are lost in the wild), Luby Transform (LT) codes form an easy way to transfer information. It is well known that, asymptotically (when the number of information symbols (or packets) $k$ and the number of packets received $r$ are large), near full recovery is made possible by the simple peeling decoder, whose complexity grows as $O(k \ln k)$. The claim is that, on average we can, with high probability, recover all $k$ information packets by simple unwrapping of xor-ed packets. A more precise statement will read as: if each coded packet is constructed independently from the $k$ information packets by an xor operation (the number of packets taken for the xor follows a distribution; the plain vanilla version of LT codes uses the Soliton distribution), then the content of the original information source can be recovered from any $k + O\left(\sqrt{k}\,\ln^{2}(k/\delta)\right)$ packets with probability $1-\delta$ by an average of $O(k \ln(k/\delta))$ symbol operations. However, when the number of input symbols is not large, the overhead term is not small. That is not too surprising, since we need sufficiently large $k$ and $r$ for the performance to converge in expectation. One can think of solving the system of linear equations using Gaussian elimination and get the best figures, but the question mark stays on the complexity of this method over the much cheaper peeling decoder.

Conventional LT decoding based on the peeling rule terminates once the set of degree-1 packets (the ripple, as it is called in this parlance) is empty. One can hope to continue from there by using Gaussian elimination and decode a few more packets (if the rank of the graph matrix permits it). If the original matrix is full rank, one can technically decode all of them. Gaussian elimination being heavy, one hopes to find an easier, preferably recursive method. Indeed, there comes Wiedemann, and that is what the authors use. The authors propose what they call full rank decoding, which essentially is LT decoding until there are no more degree-1 packets; from there, they borrow a packet and then use the Wiedemann algorithm, which is somewhat light on computational complexity.
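The peeling rule itself is simple enough to sketch in a few lines (a toy model, not the paper’s implementation: plain integers stand in for packets, so `^` stands in for packet-wise XOR, and all names are mine):

```python
def peel_decode(k, packets):
    """Minimal LT peeling-decoder sketch.  `packets` is a list of
    (set_of_source_indices, xor_of_those_source_symbols).  Returns the
    recovered {index: value}; decoding stalls (and returns a partial
    result) when the ripple -- the set of degree-1 packets -- empties."""
    recovered = {}
    packets = [(set(s), v) for s, v in packets]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for s, v in packets:
            if len(s) == 1:                    # a ripple element
                (i,) = s
                if i not in recovered:
                    recovered[i] = v
                    progress = True
        remaining = []
        for s, v in packets:                   # peel off recovered symbols
            for i in list(s):
                if i in recovered:
                    s.discard(i)
                    v ^= recovered[i]
            if s:
                remaining.append((s, v))
        packets = remaining
    return recovered

# toy example with 3 source symbols 5, 9, 12
enc = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12)]
print(peel_decode(3, enc))  # {0: 5, 1: 9, 2: 12}
```

Each pass recovers whatever sits in the ripple and xors it out of the remaining packets, which in turn can drop new packets into the ripple; when the ripple goes empty before everything is recovered, this is exactly the stalling point where the authors switch over to Wiedemann.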

The larger question of whether this scheme does any better than the conventional LT peeling decoder is the other one they answer. To achieve full recovery we need a full rank matrix. Now, the matrix is a sparse matrix over GF(2), whose number of rows is the number of received packets. The probability of the matrix having full rank directly helps us infer the decoding merits. This is an interesting piece, since I had looked into the computation of the rank of a binary matrix and also the probability of a random matrix’s rank being equal to an arbitrary value. All this was done during the Modern Coding Theory doctoral course by Ruediger Urbanke at EPFL. For a binary random matrix, we can also set up a recursive formula and thereby arrive at a good inference on the probability of decoding.

Today’s IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, Mathematics department, EPFL) talking about Diameter of Polyhedra: Limits of Abstraction. I don’t think I followed the topic too well, but here is a share of what I understood.

The topic is a convex-geometry problem on the diameter of a polyhedron. Whether the diameter of a polyhedron is polynomial (in the dimension and the number of facets) or not seems to be a longstanding open problem. The largest diameter of a $d$-dimensional polyhedron with $n$ facets has known upper and lower bounds.


The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra. What Fritz and his colleagues have done is look into the gap between these known lower and upper bounds. Apparently, the gap is wide, and they have made some progress towards a super-linear lower bound when $n$ is allowed to grow with $d$.

The way they showed this bound is by establishing the bound for the largest diameter of a graph in a base abstraction family. Denoting the abstraction family of connected graphs, and the largest diameter of a graph in that family, they find a super-linear lower bound on the latter, and then, using the known comparison with the polyhedral diameter, they conclude the bound.

I have not had a chance to see their paper yet. I must say, the proof was not all that within my grasp during the talk. However, it appeared to be based on some layering and combinatorics. He mentioned some applications to covering problems, in particular disjoint covering designs, which I didn’t follow that well. Sometimes I get the feeling that I am a little dumb to grasp these ideas during a talk. I wonder whether others understand them very well in a first-shot presentation. I have put it on my agenda (among the millions of other papers to read) to see through this problem and proof one day! His presentation was very clear and lucid, though.

Today, during the evening chat, Emmanuel Abbe threw an interesting question: whether the sum of square roots of consecutive binomial coefficients converges to some closed form! That is, does $\sum_{k=0}^{n}\sqrt{\binom{n}{k}}$ have a closed form? We tried a few known combinatorics tweaks, but no meaningful solution arrived. We were also wondering whether this has some asymptotic limit, but that too did not yield anything. A quick check on Mathematica wasn’t helpful either. Now the question is: does this sum yield some closed form expression?

While playing with this sum in Mathematica, I found that for the sum of squares of binomial coefficients there is a nice simple closed form: $\sum_{k=0}^{n}\binom{n}{k}^{2}=\binom{2n}{n}$.

I was toying with a proof. It turns out that the proof is extremely simple and is a one-line tweak of the Vandermonde identity $\sum_{k}\binom{m}{k}\binom{n}{p-k}=\binom{m+n}{p}$. Simply substitute $m=n=p$ and use the symmetry $\binom{n}{n-k}=\binom{n}{k}$, and we have the result on the table. The natural question then would be: is there a generalization of $\sum_{k}\binom{n}{k}^{s}$ for any $s$? Of course for $s=1$ it is trivially $2^{n}$.
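The squared identity is easy to check mechanically, and the square-root version really does resist any such pattern when evaluated numerically:

```python
from math import comb

# sum_k C(n,k)^2 == C(2n, n): the m = n = p case of Vandermonde
for n in range(1, 13):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)

# the square-root analogue: no known closed form, so just evaluate it
n = 10
print(sum(comb(n, k) ** 0.5 for k in range(n + 1)))
```

The first loop verifies the identity exactly for small n; the last line merely prints a floating-point value with no obvious closed-form shape.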

Apparently, it turns out that there is no closed form expression for general $s$. There are, however, some interesting divisibility properties of these sums. An interesting account of that is given by Neil Calkin (Factors of sums of powers of binomial coefficients).

At the moment, I get a feeling that the sum of fractional powers of binomial coefficients is not trivial. Maybe there is no closed form. Maybe not!

I’ve finished reading the memoirs of Walter Rudin. It was a quick read of a few hours. His autobiography is titled *The Way I Remember It*, published by the AMS in the History of Mathematics series. It wasn’t particularly interesting, to say the least. From a mathematician who wrote excellent books on functional analysis and several other subjects, I was expecting a much better story. Of course one can’t write an imaginary story in an autobiography, but then the incidents in his life are pretty much the story of any European intellectual during the war days. The part I liked best is the one involving Karl Popper. Still, I could connect many incidents from Rudin’s life, primarily because of the geography. There is a chapter on his days in Switzerland, which also touched upon Lausanne. That part for once enthused me! I was wondering what Lausanne would have been like 70 years ago! If you are completely unaware of life in Europe around the World War period, then this will give you a perspective. Like many scientific minds of that era, he had a long route to the United States. He discusses the path and family traits of that journey in a somewhat uncomplicated language.

In his autobiography, Rudin discusses some of his contributions to mathematics as well. That part made for a somewhat informative, if technical, read. If you know his work already, you will connect with it nicely. I particularly liked the chapter on *Function Theory in the Unit Ball of* **C**^n.

All in all, not a book I would recommend, unless you are a Walter Rudin fan and know his contributions in much more detail. However, this may be a motivating read for a young school kid aspiring to be a mathematician. Why did I say that? I don’t know! Don’t ask me why either!

An interesting bound (a problem from Rudi’s Modern Coding Theory book) regarding the chromatic number of a random graph. I first attempted it during the course. Here is the problem statement:

C.5 (Concentration of the Chromatic Number – Spencer and Shamir). Consider a graph $G$ on $n$ vertices. The chromatic number of the graph, denoted by $\chi(G)$, is the smallest number of colors needed to color all vertices so that no two vertices which are joined by an edge have the same color. Consider the standard ensemble $G(n,p)$ of random graphs on $n$ vertices with parameter $p$: to sample from this ensemble, pick $n$ vertices and connect each of the $\binom{n}{2}$ pairs of vertices, independently from all other connections, with probability $p$. Show that for this ensemble

$\Pr\left\{\left|\chi(G)-\mathbb{E}[\chi(G)]\right| > \lambda\sqrt{n-1}\right\} \le 2e^{-\lambda^{2}/2}.$

My solution is as follows (PDF typeset as a single file is available here; look for problem 3):

Let $G$ be a random graph with $n$ vertices, and let $p$ be the probability that a given edge is in $G$. On this probability space, take a filtration on the set of all such random graphs. We can define a martingale as follows:

where

and $\chi(G)$ is the chromatic number of $G$. The chromatic number changes by at most one when the information about a new edge comes in. Clearly, this satisfies the bounded-difference conditions for Azuma’s inequality.

This is a martingale by construction. Clearly,

Now we can use Azuma’s inequality to get


Since , the result

follows.
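Out of curiosity, the concentration can also be seen empirically. A brute-force sketch (exact chromatic number by exhaustive colouring, feasible only for tiny $n$; all names are mine):

```python
import random
from itertools import product

def chromatic_number(n, edges):
    """Exact chromatic number via exhaustive colouring (tiny n only)."""
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u, v in edges):
                return k
    return n

def sample_gnp(n, p, rng):
    """One sample from the G(n, p) ensemble."""
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

rng = random.Random(1)
samples = [chromatic_number(7, sample_gnp(7, 0.5, rng)) for _ in range(30)]
print(min(samples), max(samples))  # the sampled values cluster tightly
```

Even at $n=7$, the sampled chromatic numbers land on just a couple of adjacent values, which is the qualitative content of the Shamir–Spencer concentration bound.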

Here is an interesting riddle on random matrices.

(Rank of Random Binary Matrix). Let the quantity of interest denote the number of binary matrices of a given dimension and rank, which obeys an obvious symmetry. This is a repost of the solution that I arrived at (certainly not the first to do so!) and submitted as part of a homework (9) problem for the doctoral course Modern Coding Theory (by Rudiger Urbanke) at EPFL. The submitted solution in PDF is available here.

The rank of a matrix is essentially the number of nonzero rows when the matrix is expressed in echelon form. So we just need to count the ways these matrices can be created with the given number of nonzero rows. Since the elements of the matrix are binary (from $\{0,1\}$), we can simply do a counting.

It is trivial to compute the extreme cases. For rank zero, only the all-zero matrix is possible, and only one such matrix exists. For a rank exceeding the dimension, no matrix exists, so the count is zero.

Now we consider the full rank case. How many ways? All rows of the matrix must be nonzero and independent. Without loss of generality, for counting, we can assume that the rows are ordered. The counting then goes row by row: one row can be anything other than the all-zero vector; the next row can be anything other than a multiple of that row; the row after that, anything except a linear combination of the previous two; and so on. In all, following this procedure, we can have a total of

ways. In the general case, we can construct a rank-$r$ matrix with an extra row in either of the following ways:

- Take a rank-$(r-1)$ matrix and add an independent row.
- Take a rank-$r$ matrix and add a dependent row.

For every matrix,

and hence,

ways (essentially avoiding all possible linear combinations of existing rows). Using the second method (item 2 above), we can have and

different ways a rank matrix can be formed, where the first term is for when the all-zero row is picked as the new row. We can also pick any one of the existing rows as the dependent (new) row. In general, we can choose combinations of existing rows in as many different ways to make a dependent (new) row.

So using (1) and (2) we get,

Putting everything together,
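As a sanity check on the counting above, here is a row-by-row implementation of the recursion, verified against brute force over all small matrices (notation and names are mine, not the homework’s):

```python
from itertools import product

def count_rank(m, n, r):
    """Number of m x n binary matrices of rank r over GF(2), built row by
    row: a new row is either in the current row space (2^rank choices,
    rank unchanged) or outside it (2^n - 2^rank choices, rank grows)."""
    table = {0: 1}                      # rank -> count, with 0 rows so far
    for _ in range(m):
        new = {}
        for rank, cnt in table.items():
            new[rank] = new.get(rank, 0) + cnt * 2 ** rank
            new[rank + 1] = new.get(rank + 1, 0) + cnt * (2 ** n - 2 ** rank)
        table = new
    return table.get(r, 0)

def rank_gf2(rows, n):
    """Rank over GF(2); each row is an n-bit integer."""
    rows, rank = list(rows), 0
    for col in range(n):
        piv = next((i for i in range(rank, len(rows))
                    if rows[i] >> (n - 1 - col) & 1), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> (n - 1 - col) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

# brute-force check over all 3 x 3 binary matrices
m = n = 3
brute = [0] * (n + 1)
for mat in product(range(2 ** n), repeat=m):
    brute[rank_gf2(mat, n)] += 1
assert brute == [count_rank(m, n, r) for r in range(n + 1)]
print(brute)  # [1, 49, 294, 168]
```

The two branches of the recursion are exactly the two construction methods in the bullet list: a dependent new row (rank unchanged) or an independent one (rank grows by one).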

I am caught in a little trouble trying to figure out an easier way to set up and solve a summation of a function of several variables. I just realized that Mathematica doesn’t let me add a constraint to the summation operation. My problem is that the summation should be performed over the ordered partition set of a number.

An example will illustrate the problem definition a lot better. Suppose I want to compute the sum of a function of several variables, over all index tuples such that the sum of the index variables always equals a constant.

Clearly, the indices here run over all the ordered partitions of the number. Since the number is not that big, we can simply write these partitions by hand and somehow get the job done. Let us denote the ordered partitions and unordered partitions of the number, respectively.

The ordered partition set will have the permutations of each unordered partition across the positions! So two tuples that are permutations of each other are different ordered partitions, whereas they are considered as one unordered partition. The summation needs to be carried over the ordered partition list. Since both lists grow pretty big with even a modest target, the job of manual summation is not that appealing. I really hoped Mathematica would aid me here. Unfortunately, I don’t see a way out, other than the painful individual partition sum. Anyway, for the curious reader, the sums I attempt are of the following types:

typical values for is and is around . If any enthused reader finds a way/trick, I will be happy to sponsor a coffee 🙂 (Sorry at this recession time, nothing more 😦 sadly…).
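For what it’s worth, outside Mathematica the brute-force enumeration over ordered partitions (compositions) is only a few lines. A sketch, with a multinomial-theorem sanity check (all names are mine):

```python
from math import factorial

def compositions(t, m):
    """Yield all ordered m-tuples of nonnegative integers summing to t
    (the 'ordered partitions' being summed over)."""
    if m == 1:
        yield (t,)
        return
    for first in range(t + 1):
        for rest in compositions(t - first, m - 1):
            yield (first,) + rest

def constrained_sum(f, t, m):
    """Sum f over all index tuples with k1 + ... + km = t."""
    return sum(f(ks) for ks in compositions(t, m))

def multinomial(ks):
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

# multinomial theorem: summing t!/(k1!...km!) over compositions gives m^t
print(constrained_sum(multinomial, 5, 3))  # 243
```

The generator enumerates every tuple satisfying the constraint, so any function of the indices can be summed directly; for modest targets this is entirely practical.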

In fact, the first one is easy (interestingly, I just found a way while typesetting the blog). I can do a differentiation and then scale to get the requisite sum. It can be written as follows:

Second one is the trouble maker 😦

Oh man! The power of a coffee. As I am typing this in moody Lausanne Swiss weather, here comes the trick. I just made a coffee and that seems to have worked. I guess I can apply a similar trick to the second one too. Basically, we can split them into two expressions and then write each as a differentiated version of a multinomial sum. Here it is:

Amazingly, we can simplify both to get a simple looking expression. I am glad! Here is what I finally got:

While searching for the book Information Theory: Coding Theorems for Discrete Memoryless Systems by Imre Csiszar and Janos Korner, I came across several sites which echo the fact that this book is one of the rarest specimens on earth. In the process, however, I found a blog forum which lists a whole lot of out-of-print books. This book, as expected, is in demand already. We can even vote to request that out-of-print books be brought to a possible reprint. I am not sure how effective this is, but there is no harm in trying! We can suggest any books we may want to have reprinted. To me, this is a whole new and welcome idea. For learning, we should have access to the good books. There are already quite a few requests (see this blog for instance) for the Csiszar and Korner book. Man, the world sometimes thinks alike!

Last winter, Etienne Perron, Suhas Diggavi and I together developed a tool suite to prove inequalities in information theory. The tool is adapted from the previous work of Raymond Yeung and Ying-On Yan at Cornell. We have made it a complete C-based software and removed the Matlab dependency in the back end. There is also a pre-parser built in (using lex and yacc) to have flexibility in choosing random variable names. More importantly, a graphical front end was developed (using Gtk), which works well across platforms. Even though the beta version was ready in late 2007, for many reasons, including exhaustive testing (we always find scope for improvement), the release was delayed. Last month, we finally made an official release. The original Xitip project page at IPG has a short description and a pointer to the exclusive Xitip page at EPFL (http://xitip.epfl.ch). A lot of things still need to be done before we can say it is satisfactory. One of the main things pending is the user guide and some kind of exemplified documentation. There is a technical report I have prepared, but it is a bit too technical at the moment. Of course, Raymond Yeung’s amazing papers introducing the theoretical idea behind this prover, and his book, are valuable resources. I have tried to provide an easier understanding of the concept using some illustrations and toy examples. I hope to put this report in the EPFL repository sometime. The first version of the report discussing the background is available here in PDF form.

The software is open source. If you can’t be bothered to compile and make an executable yourself, then please download the binary executable and just run it; it is just a matter of a double click in that case. We have Linux, Windows, Windows (Cygwin) and Mac versions available. There are two different linear programming solvers used: one is the GNU open source GLPK, and the other is Qsopt (developed at Gatech). The Qsopt version is faster than the GLPK one. Just in case you are obsessed with a perfect open source model, you can avail yourself of the GLPK [5] version.

Hopefully during this summer we will get to complete the pending work on this project. If any of you happen to find it interesting, please don’t forget to update us on what you thought about the software (comments can be good, bad and ugly!).

As an aside, I’d better mention this: Xitip is a software tool useful for proving (verifying) information-theoretic inequalities [7] only. Such inequalities contain expressions involving measures such as entropy, mutual information, etc. It is a pretty handy tool if you are trying to prove some limiting bounds in information theory. In reality, there is a broad classification into Shannon-type and non-Shannon-type inequalities. Non-Shannon-type inequalities are not many, but they exist. Xitip at the moment is equipped to handle only the Shannon-type inequalities. You can expect more information on this at the Xitip home page [2].

[1]http://ipg.epfl.ch/doku.php?id=en:research:xitip

[2]http://xitip.epfl.ch

[3]http://www2.isye.gatech.edu/~wcook/qsopt/

[4]http://user-www.ie.cuhk.edu.hk/~ITIP/

[5]http://www.gnu.org/software/glpk/

[6]http://en.wikipedia.org/wiki/Information_theory

[7]http://en.wikipedia.org/wiki/Inequalities_in_information_theory

The day before yesterday, while on travel, I picked up two books from the Bangalore airport book shop. One was the Helen Keller autobiography “The Story of My Life”. This was perhaps the first full-volume autobiography I had read, as a child. I still remember the used copy of this book being given to me by a family friend and teacher when I was in early school. The vivid recollection of Helen Keller learning the alphabet and, notably, words like ‘rain’ still echoes somewhere in the back of my mind. I decided to buy this book with the plan to gift it to someone I come across, preferably a young kid in school or so. Eventually, I gifted it to a friend of mine who, when I mentioned it, was all excited to grab it. In a way I felt happy, because I could instigate interest in someone about Helen Keller. Keller’s life has been a message in itself to humanity. Her striving and constant effort to learn to react to this world are touching, to say the least.

The other book which I picked up along the way is on the story of “Nicolas Bourbaki”. It is titled “The Artist and the Mathematician: The Story of Nicolas Bourbaki, the Genius Mathematician Who Never Existed”. It was authored by Amir Aczel, the author of the book on ‘Fermat’s Last Theorem’. I had never heard about this book before, either in reviews or from friends. Well, I must confess that I didn’t really search for books or reviews on a topic of this kind, to really hear about such a book. Anyway, the book is a recent print, and the news may just be coming out.

This book turned out to be quite an interesting one. On the flight and while waiting, I could quickly finish reading it. Being a book about math and mathematicians, I didn’t feel sleepy over it, and that helped me complete the reading at a stretch. The book is about Nicolas Bourbaki. Out of my innocence, I had never heard about this character before, either. As the title says, no specimen of this kind in human form ever lived upon earth. Yet there was an identity to this Nicolas Bourbaki. The real truth about the name is revealed a little later in the book. Nicolas Bourbaki was not just one person, but an identity for a collection of mathematicians, mostly French. Together they published a lot of work in mathematics under the identity of the one name “Nicolas Bourbaki”. How does that sound? Well, it sounds a little weird and a little interesting to me. Can we say Nicolas Bourbaki was an institution in itself? (There is a story about Nicolas Bourbaki applying for membership of the AMS, the American Mathematical Society, and getting a reply asking for the institutional fee. Of course, the AMS president at the time knew the truth about the name Nicolas Bourbaki!)

So, Nicolas Bourbaki represented a group of mathematicians, post 1935 (between the first and second world wars), who formulated a systematic structure for the study of mathematics itself. They worked predominantly on modern mathematics (modern algebraic concepts), including the concept of ‘sets’. As you would expect (because of the timeline and the French connection), the star mathematician André Weil was one of the founding members of this group. Some other well-known French (connected) mathematicians also figured in the celebrated group. They included names like Henri Cartan, Claude Chevalley, Jean Coulomb, Jean Delsarte, Jean Dieudonné, Charles Ehresmann, René de Possel and Szolem Mandelbrojt. Later entrants to the Bourbaki group included the likes of Laurent Schwartz, Jean-Pierre Serre, Alexander Grothendieck, Samuel Eilenberg, Serge Lang and Roger Godement. The group would meet regularly to discuss and debate modern mathematical topics and eventually publish under the one name Nicolas Bourbaki!

Even though Bourbaki does not exist (at least actively) today, it is believed that this has brought some ‘structure’ not only into mathematics, but into European culture in general. Besides, the stellar contributions of this group to the fields of algebra and number theory are acknowledged very well.

Now, another reason why this book took my interest is the presence of the mysterious Alexander Grothendieck. The book talks in rather more detail about this mystery man. He was regarded as a genius and had contributed immensely to the world of mathematics. His Fields Medal in 1966, and his absence from the Moscow event conferring his own medal, perhaps speaks volumes about this man. I was taken by surprise to hear about the current life of this one-time world star of mathematics. Post 1991, he is believed to be living in a remote jungle or the outskirts of southern France, far from any worldly contacts. I quickly recollected the 2006 Fields Medal winner Grigori Perelman, when he declined the medal and began to live away from the mathematics crowd. Like Perelman, who (as is now accepted by everyone) proved the Poincaré Conjecture, Grothendieck as well produced some monumental contributions. He was considered a genius, and it was recognized too. But in close resemblance to what Perelman did in 2006, he as well decided to quit mathematics altogether and live a mysterious and aloof life.

What makes genius folks like Perelman and Grothendieck withdraw themselves from the world? It is surely not fear, simply because they have contributed so much to us and can only be proud of it. What else then? There are arguments that Grothendieck was disappointed that he couldn’t get the results he wanted from his political activities (he had strong political views against the establishment on certain things). Others say that childhood miseries and an abandoned early life triggered him into an isolated life, out of frustration and despair. In Perelman’s case, it is surely a hurt feeling, that of being ignored by the mathematical community, which questioned his credentials on the Poincaré conjecture proof. But can the human mind, otherwise so brilliant, be all of a sudden taken aback by a feeling of hurt? I feel sorry for these two fine mathematicians, who could have produced much more (they have already done so much, but…) for the world. The life and fate of these two tell the story of the society we are all in, where jealousy, politics and lack of respect for others can completely eradicate a gem which we should have preserved at all cost.

Tears and anxiety run through my mind over the current whereabouts of Grothendieck. I hope he lives a happy life around Montpellier! The human mind, man, is a highly complex black box. What all it takes, and what all it breaks?
