You are currently browsing the category archive for the ‘Mathematics’ category.

So, the number has come down from 70 million to a mortal 600! As expected, there is a new Polymath initiative to improve this further and, who knows, perhaps 2 is indeed that number. Well, we are talking about the new result by James Maynard on small gaps between prime numbers.

Just to recap, the problem statement is as follows. Let p_n and p_{n+1} be the nth and (n+1)th prime numbers (e.g., p_1=2, p_2=3, p_3=5, \ldots). On cursory counting, the gap p_{n+1}-p_{n} appears to grow as n grows, but not always, because there are these well known twin prime pairs such as \left(3756801695685 \times 2^{666669} \pm 1 \right). Do gaps of 2 keep recurring forever as n \to \infty? If not as low as 2, is there some bound low enough? That is, how small can G be, where

G=\liminf_{n \to \infty} \left(p_{n+1}-p_{n}\right).
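Just for fun, here is a quick numerical peek in Python (a rough sketch, assuming nothing beyond a plain sieve) at how stubbornly the gap of 2 keeps showing up:

# Sieve of Eratosthenes; then look at gaps between consecutive primes up to N.
def primes_upto(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_upto(1_000_000)
gaps = [q - p for p, q in zip(ps, ps[1:])]
print(min(gaps[1:]))                # 2: the twin prime gap, past the pair (2, 3)
print(sum(g == 2 for g in gaps))    # 8169 twin prime pairs below 10^6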

Earlier this year, Zhang proved that G is no more than 70 million. That in a way proved that the prime gap stays bounded infinitely often as we move along the number line. A group of mathematicians including Terence Tao worked further (the Polymath8 project) and improved that bound to a few thousand. The latest result from Maynard brings an independent proof that G \le 600. Maynard also shows that if the Elliott–Halberstam conjecture (see this nice blog post on prime tuples by Terry Tao) is indeed true, then G \le 12. Stunning!

What is stated here is just one avatar of the prime tuples conjecture. More general results are also being discussed within the community. Terence Tao again has this nicely articulated and maintains a Polymath page for us. As an onlooker, I am as excited as many others to see this progress.

For consecutive (nth and (n+1)th) primes p_n and p_{n+1}, the asymptotic gap \mathcal{G}= \liminf_{n \to \infty}\left({{p_{n+1}}-{p_{n}}}\right) has got a fresh renewal of late. The famous twin prime conjecture says \mathcal{G}=2, but that is still a conjecture and not a proven fact. Recently, Zhang proved that \mathcal{G} is finite, and definitely not bigger than 70,000,000. It was hoped that one day the mathematical community would find a number lower than this and perhaps even reach the holy grail mark of \mathcal{G}=2.

Now what? Within a span of a month or so, the established gap of 70 million has improved to a thousand-odd number and is still on a path of descent. Still some distance to the ultimate mark of 2, but boy, doesn't collaboration work! Ever since the now famous breakthrough from Zhang came to light (I had scribbled my little thoughts on that earlier!), Terence Tao and his team have steadily managed to improve the bound. Tao has already written a nice and detailed summary of the progress. As of last week, the proven gap was 12,006, but now the updated gap could be as small as 5,414 (still under verification as per the Polymath8 project page). Let us hope that they can go all the way and prove the twin prime conjecture, one day!

It is interesting to read, from Tao's blog (which any day is a treasure trove of many topics, thankfully written with a wider audience in mind), the several connections they made, including the one to the Elliott–Halberstam conjecture, for improving this fascinating distance between prime successors.

The 2012 Turing award goes to two cryptography pioneers, Shafi Goldwasser and Silvio Micali. I don't do much cryptography (for that matter, I never did anything serious on that front, besides some C programming to demonstrate RSA and later a class project involving elliptic curve cryptography and Mathematica programming at EPFL). But I was always fascinated by the topic, largely for the many toy-like, yet deeply thought-provoking combinatorial and probabilistic problems it addresses. Several of these problems sit at the very intersection of CS theory and cryptography.

One such classical problem that we all learned in graduate classes is the zero knowledge proof. The idea is pretty cute. In the simplest scenario, it involves two parties, Alice and Bob, who don't trust each other, but one of them (say Alice) has to convince the other (Bob) that she has knowledge of something (say a mathematical proof) without really revealing it. The fact that it is not revealed explains why it is zero knowledge (Bob in the end never gets to know what Alice knows, but just gets convinced that she knows!). So it works more like an interactive protocol wherein a lot of questions are asked by Bob, chosen randomly, depending on prior answers from Alice. Doing this long enough, can Alice convince Bob, with overwhelmingly high probability, that she knows what was claimed? Such an interactive protocol constitutes what is coined a zero knowledge proof. An example would be a novel ATM machine where you don't have to enter the PIN (unlike the ones we have), but you can still convince the machine that you know your correct PIN. Sounds cool and yet thought provoking, right? Well, that is why I said it is cute and interesting. This zero knowledge interactive proof idea was first conceived by the new Turing award winners. The names didn't strike me initially with the Turing award news, but after reading the details, it occurred to me that I had read their classic paper as part of my coursework at EPFL.

A bit more formally stated, the two entities are a Prover and a Verifier. In the ATM example, the ATM machine is the verifier and you, the user, are the prover. The prover knows the proof of a mathematical statement and the verifier tries to verify whether the claim is indeed correct, with high probability. The machinery of a zero knowledge proof is that, if the prover is trying to trick, the verifier will be able to find that out as well, with high probability. There are many examples illustrating this beautiful idea. A classic one is the problem of convincing a blind man that two otherwise identical balls are of different colors, without really telling him which is which. Now there are variants of the original idea, and myriad applications of practical significance have emerged or are still emerging.
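To make the flavor concrete, here is a toy Python sketch in the style of the (later) Schnorr identification protocol; this is my own illustrative stand-in, not the Goldwasser-Micali construction itself. Alice proves she knows the discrete log x of y = g^x mod p, without revealing x:

import random

p, q = 467, 233              # q prime, q divides p - 1
g = 4                        # 2^((p-1)/q) mod p, an element of order q
x = random.randrange(q)      # Alice's secret (the "knowledge")
y = pow(g, x, p)             # public key, known to Bob

def one_round():
    r = random.randrange(q)                 # Alice commits to a random r
    t = pow(g, r, p)
    c = random.randrange(q)                 # Bob's random challenge
    s = (r + c * x) % q                     # Alice's response uses the secret
    return pow(g, s, p) == t * pow(y, c, p) % p   # Bob's verification

print(all(one_round() for _ in range(20)))  # honest Alice always convinces Bob

A cheating Alice (one who does not know x) can pass a round only with negligible probability, while Bob learns nothing about x beyond the fact that Alice knows it.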

The ACM award page has a pretty enthralling account of these two pioneers (Shafi and Micali). Now, here is an interesting family trivia. Shafi Goldwasser and her husband Nir Shavit together now keep three Gödel Prizes on their shelves, besides now adding the Turing award aroma to their household!

During the past week, while at Hawaii for the IEEE 802.11 interim, I happened to glance at this NY Times article. The story is about a New Hampshire professor, Yitang Zhang, coming out with a recent proof establishing a finite bound on gaps between prime numbers. More details are emerging as several blog pieces, articles and reviews are being written (and some are still being written). Now it looks like the claim is more or less accepted by pundits in the field, and termed a beautiful mathematical breakthrough. As an outsider sitting with curiosity, I feel nice to scratch the surface of this new finding.

The subject surrounding this news is number theory, prime numbers to be precise. The question of interest is the gap between adjacent prime numbers. We know that 2 and 3 are primes with a gap of 1, but this is truly a special case and unique by definition. The gap between 3 and 5 is 2. Similarly, 5 and 7 differ by 2. One may have thought that the gap between successive primes goes up as we move along the number line. Not quite. For example, we can see a lot of pairs with a gap of 2. The easy ones are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139) and the list goes on. It is known that there are precisely 808,675,888,577,436 twin prime pairs below 10^{18}, but infinity is still a lot farther than 10^{18}! The quest is to prove that there are infinitely many such twin primes, but that still remains an open conjecture.

Now, the new discovery by Zhang is not quite a proof of the twin prime conjecture, but of a close relative. The twin prime conjecture is strictly about prime pairs separated by 2. A related question is: how about prime pairs p and q separated by some finite k? When k=2, we have the special case of the classical twin prime question. Can we at least prove mathematically that there exist infinitely many prime pairs (p, q=p+k) for some finite k? If so, what is the smallest k for which this holds? Zhang now has a proof for some k no larger than 70 million. Mathematically, if p_n denotes the nth prime, the new claim says (stated crisply in the paper abstract),

\liminf_{n \to \infty} \left(p_{n+1}-p_{n}\right) < 70 \times 10^{6}.

70 million is still a large gap, but as Dorian Goldfeld says, it is still finite and nothing compared to infinity! In the future, we may well get to see this gap coming down, perhaps all the way to the best case of k=2. Who knows?

The result is still interesting, even to generally curious folks like us. It says that gaps as small as 70 million between neighbouring primes keep recurring forever, no matter how far we travel along the number line. Like many other things in the asymptotic regime (for example, the eigenvalues of large random matrices exhibit very interesting properties when the size goes to infinity), things at infinity may exhibit some charm, after all!

The paper is accessible here, but as expected the proof is hard (for me at least). Hopefully we will have some diluted explanation of its essence from experts in the coming days. Already, Terence Tao had a sketch a couple of weeks ago on his Google+ feed. Over the last week or so, finer details on the new breakthrough are still emerging. Terry Tao has also initiated an online wiki collaboration in an effort to improve upon this work (for experts that is, not for me!).

Congratulations Professor Zhang.

Since this is year-end holiday time, I got a fair share of time to read up on several random topics. There were many sad happenings in the past couple of weeks, starting from the shocking events at the Newtown, Connecticut elementary school, and then a few more disturbing news stories from all over the world, particularly this one from the other side of the globe. A rather non-disturbing piece of news is the new claim on proving/establishing the so-called Ramanujan deathbed conjecture. I was doing a bit of follow-up reading to first understand the premise of the problem in the news.

OK, the subject deals with what are known as mock theta functions (Note: there is no credible Wikipedia entry on them, hence I cannot link one here. Also remember that mock theta functions are not the same as Ramanujan's theta functions; just to not confuse these subtle mathematical terms!). These are not quite the Riemann zeta function, which is well known, but are more similar to the Jacobi theta functions. The mathematical prodigy Srinivasa Ramanujan, while on his deathbed, shared some quick tidbits with the famous Cambridge mathematician G. H. Hardy, his mentor from his now famous Cambridge days (and, I should say, a hard core cricket enthusiast! If you have not read his memoir A Mathematician's Apology already, step out and grab it soon. A fabulous quick read that!). Somewhere in 1920, before his death, Ramanujan wrote a letter to Hardy in Cambridge in which he introduced as many as seventeen new functions. These functions, as per Ramanujan, shared some distinct similarities with the theta functions (theta functions themselves had been studied before by mathematicians like Jacobi, Riemann and several others; now of course the connection between Jacobi theta functions and the Riemann zeta function is well established!).

The Jacobi theta function itself is pretty complicated to lesser mortals like us. It looks harmless for a starter, but its properties, its various avatars and its connections to many real world applications are what make it monstrous. And then there is its cousin, the Riemann zeta function \zeta(s)=1+\frac{1}{2^{s}}+\frac{1}{3^{s}}+\ldots, which has simple and elegant closed forms at the even integers (Euler's \zeta(2)=\frac{\pi^2}{6} being the classic), but at the odd integers s \ge 3 goes beyond what we can fathom (for instance, it is not even known whether those values are transcendental; Apéry's proof that \zeta(3) is at least irrational was itself a sensation). I remember playing with (I think it was 2000 or 2001) a proof of \zeta(2)=\frac{\pi^2}{6} using some integral calculus in two variables, which turns out to be rather well known and easy. There are other ways to prove it as well, but things stop being that simple beyond s=2! Anyway, Jacobi's theta function has the form \theta(x,t)=\displaystyle \sum_{n=-\infty}^{\infty}{\exp\left(i\pi t n^{2} + i2\pi n x\right)}, which in words can be roughly described as an elliptic sum of exponentials.
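For the curious, here is the gist of that two-variable integral trick, as far as I recall it (a sketch only; justifying the term-by-term integration takes a bit more care). Expanding a geometric series inside a double integral,

\displaystyle \int_{0}^{1}\!\int_{0}^{1}\frac{dx\,dy}{1-xy}=\sum_{n=0}^{\infty}\int_{0}^{1}\!\int_{0}^{1}(xy)^{n}\,dx\,dy=\sum_{n=0}^{\infty}\frac{1}{(n+1)^{2}}=\zeta(2),

and the rotation substitution u=\frac{x+y}{2}, v=\frac{y-x}{2} reduces the left hand side to elementary arctangent integrals that add up to \frac{\pi^2}{6}.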

Much like Fermat with his famous last theorem, Ramanujan too didn't give many hints beyond listing the functions and stating that they are elegant. For example, he didn't reveal where they come from, how to formulate them, or for that matter what use they have. Like many of his other famous mathematical findings, Ramanujan, a self-taught genius, made a quick listing of these. The rest of the curiosity has enthralled and haunted the mathematical minds of coming generations, all over the world. A brilliant spark from the great genius stirred an intellectual quest that lasted over 90 years!

The latest is that two mathematicians, Ken Ono of Wisconsin and University of Cologne professor Kathrin Bringmann, came up with a framework explaining what mock theta functions are and how to derive them. They connect modular forms, or more specifically Maass forms, to Ramanujan's functions. If you recall, modular forms also played a big part in the proof of the other famous problem of this kind, Fermat's last theorem.

It will take a while for me to understand (if at all I manage to digest) all the details of this new claim. I must say, it is likely that I will fail to comprehend it in full. As always, I will be curious to read it up and grab as much as I can. As an Indian, the connection to India is always an interesting pursuit!

I leave you with a nice documentary video on Ramanujan that I found on YouTube. If you have not seen it already, check it out. A pretty good documentary, unfortunately tarred by a lot of useless and unrelated comments.

The SODA 2012 paper by Dina Katabi, Piotr Indyk et al. promises a relatively faster way to compute the DFT of sparse signals. This is getting traction from outside the community too, after the MIT broadcast. The well known FFT, originating from Gauss and largely resurrected by Cooley and Tukey, has time complexity \mathcal{O}(n\log n), which offers significant gains over the conventional DFT complexity of \mathcal{O}\left(n^2\right) when the size n is large.

If we really dissect the FFT complexity limits, it is already pretty good. With n output points to compute, the total complexity is proportional to n, and roughly, the per-point complexity is \log n.

Now, what the new scheme promises is not a change in the order of complexity, but a clever way of reducing it depending on the inherent signal sparsity. When the signal is k-sparse (i.e., only k among the n coefficients are significantly different from zero), it is fanciful to ask whether we can get to complexity \mathcal{O}(k \log n), and Katabi, Indyk et al. have indeed reached there. Quite a remarkable achievement this is, considering that the gain could be as good as the compressibility limit in most of the real world signals we deal with today. Signals such as audio and video are the leading candidates in the practical signal processing world, and they both are sparse in some transform basis. Recall that the recent compressed sensing results for k-sparse signals showed the potential benefits of sparse signal processing, and this new scheme will help to realize many of those in a more meaningful way. One good thing is that this generalizes the conventional FFT. In that sense, it is not just for sparse signals, but holds for any k, and in the limit k \to n the complexity is as good as the old mate FFT!
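I could not resist a quick toy check of the premise in numpy. This is of course not the sFFT algorithm itself (which needs clever hashing and bucketization tricks); it merely shows that a k-sparse spectrum is completely captured by its k biggest FFT bins:

import numpy as np

n, k = 4096, 8
rng = np.random.default_rng(0)
bins = rng.choice(n, size=k, replace=False)       # the k active frequencies
amps = rng.uniform(1, 2, size=k)
t = np.arange(n)
x = sum(a * np.exp(2j * np.pi * f * t / n) for a, f in zip(amps, bins))

X = np.fft.fft(x) / n                             # the full O(n log n) FFT
top = np.argsort(np.abs(X))[-k:]                  # the k dominant coefficients
print(sorted(top) == sorted(bins))                # True: all energy sits in k bins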

I want to try this out sometime for some communication problem that  I have in mind. At the moment, I am hopeful!

Another stalwart, the founding father of Unix and C, and in many ways one of the founding fathers of computing itself, Dennis Ritchie passed away. For me, Ritchie along with Kernighan was one of the first few names registered in my mind since learning computer programming. The first book I ever saw on a programming language is this duo's book on C. And boy, wasn't that the most concise book in technology, ever so compact and yet rich and clean!

Come to think of it, the impact of Ritchie on modern science and technology is enormous. He may not have been a very public figure, but his contributions are indeed the touchstone of the modern world, especially the information network world. Much of the internet, Google, the iPhones and what not, almost everything we can think of, runs on his stuff or its variants. Quite remarkable.

I think the most apt summary of Ritchie's contribution comes from Brian Kernighan himself. He said, “The tools that Dennis built - and their direct descendants - run pretty much everything today.”


One thing I like about Google (besides it being an absolute gem of a search genie) is their customized logos, usually marking the day in history. Today, on Pierre de Fermat's birthday, they set up a nice and cute logo, picturing the famous Fermat's last theorem. I think it was in 1993 that I first heard of this famous problem, when the England-born Oxford/Princeton mathematician Andrew Wiles unveiled the solution. In 1996 or so, there was a documentary telecast on BBC about Andrew Wiles and his journey to solving one of the greatest (if not the greatest, one of the most talked about) problems in the history of mathematics. A high class documentary, it is archived on YouTube. I am not sure whether it was BBC who made it originally. I see that some YouTube links show UTV.

Here is another piece of writing/blog on this subject (by Chloe Albanesius).

On Christmas day, out of the blue, I bumped across an old archive of Robert Fano's interview (oral history). A beautiful one.

Henri Padé wrote his PhD thesis in 1892 at the École Normale Supérieure in Paris. The dissertation was on what we know today as the Padé approximant. Come to think of it today, it is really remarkable that over 100 years ago mathematicians had thought about function approximations using rational polynomials. Now it may appear all too simple, but without the aid of a serious computing machine, one would have to rely on mathematical rigour for every small argument. In comparison, these days we can quickly check the validity using some computer program before venturing into a formal proof.

Anyway, the sudden recollection of this French mathematician happened as I was looking for a good curve fit for a problem I need as part of my work. I needed to minimize the maximum error rather than the average or squared error. I thought about a Legendre polynomial fit, which gives the minimum root mean squared error, and a Chebyshev fit, which minimizes the worst case error. The order of the polynomial and the number of sample points seemed to have a dependency. The Padé approximant is a cool technique which reduces the polynomial order by using a rational polynomial. I am not too convinced about the statistical properties of this method. The data points I have at my disposal may have some measurement error attributed to them. I need to investigate a bit more before deciding on its potential optimality for the given problem! Happy to know this scheme though. In retrospect, I remember hearing of it in my statistical signal processing books, but never paid any serious attention then! Silly.
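As a side note, scipy ships a small helper for this. A minimal sketch (assuming scipy is installed), building a [2/2] rational approximant of e^x from its first five Taylor coefficients:

import numpy as np
from scipy.interpolate import pade

taylor = [1, 1, 1/2, 1/6, 1/24]     # e^x = sum of x^k / k!
p, q = pade(taylor, 2)              # denominator of order 2: a [2/2] approximant

print(p(1.0) / q(1.0), np.exp(1.0)) # 19/7 ~ 2.7143 vs 2.7183, from just 5 terms

Not bad for five coefficients; the same five terms of the plain Taylor polynomial give a noticeably worse value at x = 1.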

Lipschitz continuity is a stronger property than continuity. When I first heard this (through Ruediger's treatment of the expectation analysis of codes), it was all haze. The second time, I made some sense of it during the Applied Stochastic Processes course. Anyway, a final appreciation of this concept kicked in only when I started working on a problem. Recently, I came across it again and hence thought of penning a few easy lines on it.

Here is a beautiful explanation of this concept.
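Just to pin the concept down in one line: a function f is Lipschitz continuous with constant L if

\lvert f(x)-f(y)\rvert \le L\,\lvert x-y\rvert \quad \forall x,y.

For instance, f(x)=\sin(x) is Lipschitz with L=1 (its derivative is bounded by 1), whereas f(x)=\sqrt{x} on [0,1] is continuous but not Lipschitz, since the slope blows up near 0. That is precisely the sense in which Lipschitz continuity is stronger than continuity.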

I was playing a few tricks with a few centrality measures on a few graphs. So, I thought of penning down a quick piece of notes on the notion of centrality (a quick introduction can be found here, along with the Wikipedia page on centrality).

Just heard from my EPFL folks that Mathematica 8 has just been released. I am yet to get a chance to see it working, but I look forward to it someday. I am quite happy with Mathematica 7 already, but Wolfram always brings radical new stuff, which is special. In fact I am mighty pleased with version 7, but who knows what all new stuff is there in 8? I have been a huge fan of Mathematica since the early 2000s, ever since Nandu (Nandakishore Santhi) introduced this new tool. I use it pretty much for every computational mathematics job. These days, I even use it for plotting, much more than Matlab. Looks like there are a lot of new features added in 8. One of the claims from Wolfram is that they added a lot of new aids for working with (probability) distributions, which I think is going to help me a lot. One stellar new thing I notice (from the announcement) is linguistic argument support. Boy, that is one hell of a killer feature. Forget the syntax then. If you want to plot sin(x) with the grid on, type just that sentence! That's it! The rest, Mathematica will do. Wow! Wow! How much is it for an upgrade? Or should I go for a trial? I can't wait!

Does anyone believe that the difference between the Lebesgue and Riemann integrals can have physical significance, and that whether, say, an airplane would or would not fly could depend on this difference? If such were claimed, I should not care to fly in that plane.

I recall hearing this interesting quote. If my memory is correct, I first heard it from Martin Vetterli, who either mentioned it during a talk/class or had it in a footnote of his forthcoming book. It sounded funny at the time, but I didn't really know to whom this quote is originally attributed. To my surprise, its origin traces to the story man Hamming. Well, why Hamming? Emre Telatar did tell me a couple of funny stories about Hamming, besides the one famous scream at the computers which eventually led to the discovery of error correcting codes! By the way, Emre is a walking encyclopedia when it comes to stories. He is a treasure in many ways!

Ok, coming back to where we started. Springer has published this nice conversation piece online for free. The title is “Mathematics and Design: Yes, But Will it Fly?”. It is not really a book, but a very interesting conversation discussing the above mentioned quote by Richard Hamming. The preface of the discussion itself couldn’t be more apt, which reads:

“Martin Davis and Matt Insall discuss a quote by Richard W. Hamming about the physical effect of Lebesgue and Riemann integrals and whether it made a difference whether one or the other was used, for example, in the design of an airplane. The gist of Hamming’s quote was that the fine points of mathematical analysis are not relevant to engineering considerations.”

A very fascinating read indeed. An even more fascinating, formal defense of Lebesgue, on why aeroplanes fly, is here. Well, on the aeroplane question, here is what Andrew Lewis has to say: “In the event that the reader is consulting this paper in a panic just prior to boarding an airplane, let us answer the question posed in the title of the paper. The answer is, ‘The question is meaningless as the distinctions between the Riemann and Lebesgue integrals do not, and should not be thought to, contribute to such worldly matters as aircraft design.’ However, the salient point is that this is not a valid criticism of the Lebesgue integral.”

A nice intuitive way to explain what a martingale is can be found here (Venkat's Computational Probability course notes). Perhaps it was known already to many; to a layman, this is the best way to introduce a martingale, first up!

A sequence X_{1}^{n} =X_1,X_2,X_3,\ldots,X_n is called a martingale if \mathbb{E}\left[X_{k+1}|X_{1},\ldots,X_{k} \right]=X_{k} for all 1\le k <n. An intuitive way of thinking about martingales (and indeed the origin of martingales) is to imagine X_k as your total profit after the kth round of a gambling game. Then the martingale condition states that, given your winnings in the first k rounds, your expected total profit after the (k+1)th round is exactly your current profit X_k; the game biases you neither up nor down. See the Wikipedia page for examples of martingales. The course notes are fabulous. I am following them on and off!
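A two-minute Python check of that gambling intuition (a toy sketch with fair plus/minus one bets):

import random

# Conditioned on the current profit X_k = v, average the profit after one
# more fair bet: it comes back to v, which is the martingale property.
random.seed(1)
v, trials = 3, 200_000
est = sum(v + random.choice((-1, 1)) for _ in range(trials)) / trials
print(round(est, 3))    # ~ 3.0, i.e. E[X_{k+1} | X_k = 3] = 3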

Just heard that the 2010 Hay Festival was held last week in Thiruvananthapuram. Although I couldn't have gone there, it felt nice to have had such a great global art and literary event in my own home state in India. The official website hosted some amazing scenes from various parts of Kerala, which an ardent Kerala fan like me wished to see at all times. The Hay Festival of arts and literature has become quite prominent in the public media recently, and what better place to have it than the beautiful and literary-rich Kerala!

Thanks to YouTube, I could gather glimpses from the event. Part of Vikram Seth's Storypooja is captured here. The one session I would have liked to attend is Marcus du Sautoy on “The Number Mysteries: A Mathematical Odyssey Through Everyday Life”. I hope to see a video tape of this program online sometime soon! After all, Marcus claimed to have a reason for everything, including why to bend it like Beckham! Another of my favourites is ONV Kuruppu. His candid and lively talk is an anytime favorite of mine. He was supposed to have had a conversation with poet Sachidanandan. And how much I missed Bob Geldof's conversation, let alone his concert!

Via Lance's blog/Twitter, I came to know that Wolfram Alpha can now take TeX input. This is very useful. In fact, I believe this is a great addition to the search tool. For someone who is familiar with TeX syntax, it becomes pretty easy to manipulate functions, plots and datasets.

This came up again, for the nth time, in discussions and debates at various places including the internet. But as always there is learning (at least for me) for the taking. Here is a nice and cute summary of the Bayesian philosophy, drafted by Radford Neal. Ah, the latest revisit to the Bayesian versus non-Bayesian debate came up when I was chatting with a finance friend. I cannot divulge the details of the conversation here, but I thought of writing a bit about Bayesian inference, which to me has always been the formal and correct statistical method (I must admit that I am somewhat of a novice to have a strong counter argument, which is tame, but frankly I need to learn more!).

The Human Resource Development ministry of India, aka the HRD ministry, made a mockery of itself by a foot-in-mouth call on the nationality of the world chess champion Viswanathan Anand. The Indian citizen, who lives in Madrid for logistic reasons, was denied an honorary doctorate owing to the insane mess created by the HRD ministry. The reason: he is not Indian! Anand's wife Aruna, understandably upset by the wrong call from the HRD, duly sent a fax of the Indian passport that Anand holds, but the HRD ministry was still not convinced. What a joke! This is not the way to salute a world champion and our own citizen. All this happened on the eve of the ongoing International Congress of Mathematicians held in Hyderabad. Only a blog post earlier, I wrote about the thrill of India getting a chance to hold the ICM. Now, the hopeless HRD ministry found a way to make a mess of it. What a shame! They also managed to rebuke the famous algebraic geometry guru David Mumford. Neither Anand nor Professor Mumford needs these extra certificates to proclaim what they are. They are already stellar figures in their respective fields of expertise. The manner in which they were toyed with is what annoys people like me, and it is simply unacceptable. These are not the signs of a country claiming to become the next superpower. Neither is this the way we as children were taught to respect elders and guests. Athithi devo bhava, but not to be!

As an Indian, I take a lot of pride in learning that the International Congress of Mathematicians 2010 is being held in India. It is currently ongoing at Hyderabad, between the 19th and 27th of August. Even though I am many a mile away from there now, and in spite of myself not really directly or indirectly participating, it brings immense joy to see India getting a chance to showcase a piece of international math progress in modern times too. The Fields medals, the Nevanlinna prize, as well as the Gauss and Chern medals are announced during the congress. It is great to hear of Dan Spielman getting the Nevanlinna honor. As a coding theory enthusiast, it is doubly exciting that, for the second time, someone from the same field (after Madhu Sudan received one in 2002) gets to hold the coveted prize.

The award winners are:

Fields medals: Elon Lindenstrauss, Ngo Bao Chau, Stanislav Smirnov and Cedric Villani. An Israeli, a Vietnamese, a Russian who lives in Switzerland and a Frenchman. Here is a nice interview transcript where Robert Siegel talks to Julie Rehmeyer, discussing the award winners. A pretty nice brief bio introduction to the fabulous medalists.

Congratulations to all winners. The Fields medal announcement is here. The Nevanlinna, Gauss and Chern medal announcements are here.

The P versus NP question has popped up again into the mainstream. The HP scientist Vinay Deolalikar has recently prepared a manuscript which claims that, after all, P ≠ NP. Greg's blog seems to be the one which first broadcast this to the general public. I have not had much time to digest the proof claims or counter arguments in detail. Here is a fascinating series of arguments discussing the proof claims. I am going to do some digesting on this in the days to come! We will wait for the experts to have their say. I am going to relish this discussion.

On a lighter note, here is a very interesting blog piece, drawing some sort of similarity between the giant component behaviour and a pile of staple pins. It is sort of a layman's view of the Erdős–Rényi explanation.

In one version of the Erdős–Rényi process, one starts with a set of n isolated vertices and then keeps adding random edges one at a time; more specifically, at each stage one chooses two vertices at random from among all pairs that are not already connected, then draws an edge between them. It turns out there's a dramatic change in the nature of the graph when the number of edges reaches n/2. Below this threshold, the graph consists of many small, isolated components; above n/2, the fragments coalesce into one giant component that includes almost all the vertices. “The Birth of the Giant Component” was later described in greater detail in an even bigger paper, which filled an entire issue of Random Structures and Algorithms (1993, 4:233–358), by Svante Janson, Donald E. Knuth, Tomasz Luczak and Boris Pittel.
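The threshold is easy to watch empirically. A small sketch with networkx (assuming it is installed), sampling the ensemble at edge counts below, at and above n/2:

import networkx as nx

n = 100_000
for m in (n // 4, n // 2, 3 * n // 4):
    G = nx.gnm_random_graph(n, m, seed=42)          # n vertices, m random edges
    giant = max(nx.connected_components(G), key=len)
    print(m, len(giant) / n)    # the fraction jumps once m crosses n/2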

Of course, Joel Spencer has written a beautiful AMS article commemorating the golden anniversary of the giant component.

Here is an interesting recap of the Newcomb and Benford findings on the distribution of the first digit in data records.

A century ago, Simon Newcomb observed an unexpected pattern in the first digits of logarithm tables: The digit 1 is significantly more likely to occur than 2, 2 than 3, and so on. More than a half-century later, Frank Benford rediscovered the first-digit phenomenon and found that it applied to many tables of numerical data, including the stock market, census statistics and accounting figures. New mathematical insights establish the empirical law developed by Newcomb and Benford as part of modern probability theory, and recent applications include testing of mathematical models, design of computers and detection of fraud in accounting.

Here is another interesting finding, on temperature data and the final digits. A proof and formal review of the original work, presented by Hall, is also available.

Here is an intuitive description of Benford's law.
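A quick illustration in Python: powers of 2 are a classic sequence known to obey Benford's law, and the observed first-digit frequencies indeed sit right on \log_{10}(1+1/d):

from collections import Counter
from math import log10

# first digits of 2^1, 2^2, ..., 2^10000
first = Counter(str(2 ** k)[0] for k in range(1, 10001))
for d in "123456789":
    print(d, round(first[d] / 10000, 3), round(log10(1 + 1 / int(d)), 3))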

Alex Bellos listed what he calls the ten best mathematicians (of all time). His list, published in The Guardian, is based on his assessment that the individual contributions of the listed ten changed the world, rather than merely revolutionizing mathematics in some sense.

Alex has listed 10 great minds whose contributions to mathematics are simply awesome. However, the list to me is far from complete, and it is grossly incorrect to call the selected 10 the best. Perhaps we should have named it “10 amazing mathematicians”. I am sure Alex didn't do an exhaustive survey of the mathematicians who have lived on this planet. If we are to limit the number to 10, then there are many other mathematicians whose contributions will any day stand taller than some of these. Newton, Shannon, Kolmogorov, Fourier, Galois, Abel, Euclid and even Ramanujan are some of the names which come to my mind at once. Many French mathematicians are given a miss as well. Some may argue that these are not pure mathematicians, but they all produced more than one mathematical theory which for sure changed the world, sometimes beyond the mathematical world. Newton, for his calculus, can never be omitted from the top ten. Shannon's mathematical theory is kind of the underpinning of literally everything in communication, including the internet. Fourier's once ignored (and painfully debated) work on heat, which eventually spurred harmonic analysis, and the young Galois's spark all take merit.

Among the list, Euler, Copernicus and Gauss will pick themselves any day. I reckon Cantor will also find a wider consensus to be within the top ten. Perelman and Tao are clearly two of the modern greats, but to say that they are among the top ten of the over 3000-year-old formal mathematical history comes with a touch of boredom. Erdős is one of the most popular mathematicians of the 20th century, and an immense contributor to many different areas of mathematics. Moreover, he has been a maverick among the modern mathematicians because of his travel-crazy, migratory style of life. He is one of my favourite mathematicians as well. Whether he ranks in the all-time top ten is not clear to me.

Nevertheless, Alex has spawned an article which will spur some curiosity in the general reader about the history of mathematics. Mathematicians from 500 BC through Copernicus to this date (Perelman and Terence Tao) are in the list.

The famous Italian Cardano is apparently one of the original mathematicians who paved the way for probability theory. In my ignorance, I had not heard much about him. I am curious about that part of the history now. I had heard of the Hypatia legacy in mathematics, and had since forgotten about that Greek tragedy. Reading the new list reminded me of the atrocities a state indulged in certain dogma or religious views can do to any new scientific claim. This is clearly not the only instance of such a thing happening. Galileo also had difficulty convincing the orthodox society of his time. And there are many more. Why, even now, religious fanaticism in many states mocks Charles Darwin's theory of evolution. The religious monks, and the administrations thereto, all over the world still latch on to the belief that it is some hand of god who created the universe and then every single creature and plant. Ludicrous, to say the least! But then, religious beliefs are often personal, and one can be blind sometimes.

An interesting and twisted version of the twenty questions game came across my desk recently. I saw it in the 2009 ISIT paper on error correction under feedback with list decoding, by Ofer Shayevitz. The problem goes like this (I am going by the nice narration of Ofer, with the famous Alice and Bob duo).

The game is played by Alice and Bob. Alice selects one of M possible objects, and Bob can ask n binary questions (where the answer can be either yes or no only), in a quest to identify the object selected. If Alice is always truthful, then n=\lceil \log_2 M \rceil questions are obviously both necessary and sufficient. Now, suppose Alice is allowed to lie up to t times. How many questions n(M, t) does Bob need to get the correct object? Alternatively, given n questions, what is the maximal number of objects M(n, t) that Bob can separate? This version of the “twenty questions game” with lies is known as Ulam's game.
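As a first estimate, there is the classical sphere-packing (Berlekamp-style) bound: with t lies allowed, each object must account for all answer sequences within Hamming distance t, so 2^n \ge M\sum_{i=0}^{t}\binom{n}{i}. A small Python sketch of the resulting lower bound on n(M,t) (my own quick take, just to get a feel for the numbers):

from math import comb

def n_lower_bound(M, t):
    # smallest n with 2^n >= M * volume of a Hamming ball of radius t
    n = 0
    while 2 ** n < M * sum(comb(n, i) for i in range(t + 1)):
        n += 1
    return n

print(n_lower_bound(2 ** 20, 0))   # 20: the classic twenty questions
print(n_lower_bound(2 ** 20, 1))   # 25: a single lie already costs five more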

It is official now. The Clay Millennium Prize is to be awarded to Grigory Perelman for proving the great Poincaré conjecture. It is a different matter whether he decides to accept it. The man who dared to ditch the Fields medal may have an opinion of his own on matters pertaining to awards and their credentials. Lesser mortals like us get amused by his decision, but then he is special and different.

Some interesting tidbits on binary random matrices again. Kolchin's book also discusses some aspects of this (see page 126, for example). It is interesting to see the asymptotic behaviour.

The probability that a random {k \times (k+m)} binary matrix, with each element chosen uniformly ({0} and {1} picked equally likely), is of full rank {k} for {m \ge 0} is, in the limit of large {k},

\displaystyle\rho =\prod_{i=m+1}^{\infty}{\left(1-\frac{1}{2^{i}}\right)}, \quad m=0,1,2,\ldots\ \ \ \ \ (1)

The probability that a random k \times (k+m) binary matrix, with each element chosen uniformly ({0} and 1 picked equally likely), is of full rank k+m for m < 0 is

\displaystyle\rho =\prod_{i=1}^{k+m}{\left(1-\frac{1}{2^{k-i+1}}\right)}, \quad m=-1,-2,\ldots\ \ \ \ \ (2)
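A lazy Monte Carlo cross-check of (1) in Python (with the infinite product truncated; for m \ge 0 it converges very fast, and for finite k the truncation error is negligible):

import numpy as np

def gf2_rank(A):
    # rank of a 0/1 matrix by Gaussian elimination over F_2
    A = A.copy() % 2
    rank = 0
    rows, cols = A.shape
    for c in range(cols):
        pivots = np.nonzero(A[rank:, c])[0]      # find a pivot in column c
        if pivots.size == 0:
            continue
        p = rank + pivots[0]
        A[[rank, p]] = A[[p, rank]]              # swap pivot row into place
        others = np.nonzero(A[:, c])[0]
        A[others[others != rank]] ^= A[rank]     # clear column c elsewhere
        rank += 1
    return rank

k, m, trials = 20, 2, 20000
rng = np.random.default_rng(7)
hits = sum(gf2_rank(rng.integers(0, 2, (k, k + m))) == k for _ in range(trials))
rho = np.prod([1.0 - 2.0 ** -i for i in range(m + 1, 60)])
print(hits / trials, rho)    # the two numbers agree to a couple of decimals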

Glanced upon a paper from the 2009 ISIT, titled “LT Codes Decoding: Design and Analysis” by Feng Lu et al. The authors discuss an LT decoding strategy which is the peeling decoder (conventional simple LT decoding) followed by an extension based on the well known Wiedemann algorithm for solving a sparse homogeneous linear system. Here is a brief summary as I understood it.

On packet erasure channels (the internet being one close realization of a model of this kind where, in the TCP/IP suite, packets either get delivered correctly or are lost in the wild), Luby Transform (LT) codes form an easy way to transfer information. It is well known that, asymptotically (when the number of information symbols (or packets) k and the number of received packets r are large), near full recovery is made possible by the simple peeling decoder, whose complexity grows as \mathcal{O}(k). The claim is that, for k > 10000 or so, with r= k(1+\delta), on average we can, with high probability, recover all of the k information packets by simple unwrapping of xor-ed packets. A more precise statement reads: if each coded packet is constructed independently from the k information packets by an xor operation (the number of packets taken for the xor follows a distribution; the plain vanilla version of LT codes uses the Soliton distribution), then the content of the original information source can be recovered from any k+\mathcal{O} \left(\sqrt{k} \ln^{2} \left(\frac{k}{\epsilon}\right)\right) packets with probability 1-\epsilon by an average of \mathcal{O} \left(k \ln \frac{k}{\epsilon}\right) symbol operations. However, when the number of input symbols is not large (say k < 1000), the overhead term \delta is not small. This is not too surprising, since we need sufficiently large k and r for the performance to converge in expectation. One could instead solve the system of linear equations using Gaussian elimination and get the best figures, but the question mark stays on its \mathcal{O}(k^{3}) complexity against the cheaper \mathcal{O}(k) of the peeling decoder.

Conventional LT decoding based on the peeling rule terminates once the set of degree-1 packets (the ripple, as it is called in LT parlance) is empty. One can hope to continue from there using Gaussian elimination and decode a few more (if the rank of the graph matrix permits it). If the original matrix is full rank, one can technically decode all of them. Gaussian elimination being heavy, one hopes to find an easier, preferably recursive method. Indeed, there comes Wiedemann, and that is what the authors use. The authors propose what they call full rank decoding, which essentially is LT decoding until there are no more degree-1 packets; from there, they borrow a packet and then use the Wiedemann algorithm, which is somewhat light on computational complexity.
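For the fun of it, here is a bare-bones peeling decoder sketch in Python (the data layout and names are my own illustration, not the paper's): each received packet is the xor of a known subset of source packets, and we keep releasing degree-1 packets until the ripple dries up:

def peel(received):
    """received: list of (set of source ids, xor of their values)."""
    decoded = {}
    progress = True
    while progress:
        progress = False
        for ids, val in received:
            pending = ids - decoded.keys()
            if len(pending) == 1:                 # a degree-1 (ripple) packet
                for j in ids & decoded.keys():    # strip the known neighbours
                    val ^= decoded[j]
                decoded[pending.pop()] = val
                progress = True
    return decoded

rx = [({0}, 5), ({0, 1}, 12), ({1, 2}, 5), ({2, 3}, 11)]
print(peel(rx))    # {0: 5, 1: 9, 2: 12, 3: 7}: full recovery, no ripple stall

When the loop exits with packets still undecoded, that is exactly the point where the paper's Wiedemann extension would take over.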

The larger question, whether this scheme does any better than the conventional LT peeling decoder, is the other one they answer. To achieve full recovery we need a full rank matrix. Now, the matrix is a sparse matrix defined over \mathbb{F}_{2}^{k \times r}, where r is the number of received packets. The probability of the matrix having full rank directly helps us to infer the decoding merits. This is an interesting piece for me, since I had looked into the computation of the rank of a binary matrix, and also the probability of a random matrix rank being equal to an arbitrary value. All this was done during the Modern Coding Theory doctoral course by Ruediger Urbanke at EPFL. For a binary random matrix, we can also set up a recursive formula, and thereby a good inference on the probability of decoding can be arrived at.

Martin Gardner's 95th birthday today. I don't think any other soul can claim to own the legacy of inspiring enthusiasm, among children and adults alike, for the subject of mathematics in the most interesting and playful mode; recreational mathematics transcended to newer heights thanks to Gardner's amazing production of puzzles and games. A philosopher by education and a Navy man by profession, Gardner's transition post World War II is something worth a story telling! I was (and still remain) a huge fan of Gardner ever since romping on to his old articles in the Scientific American volumes, which I greedily grabbed from the NIT Calicut library. There was a time (in the pre-internet and digital era) when I used to maintain a notebook of Martin Gardner puzzles, where I had handwritten the riddles and games. It is incredible to know that he is still active and steaming. Thank you, Martin Gardner, for spurring enthusiasm in many a generation. His “Colossal Book of Mathematics” is one of the worthy possessions in my library!

Many many happy returns of the day, Martin. The NY Times has a nice page published on his birthday!

Today's IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, Math department, EPFL) talking about “Diameter of Polyhedra: Limits of Abstraction”. I don't think I followed the topic too well, but here is a share of what I understood.

The topic is a convex geometry problem on the diameter of a polyhedron. The question of whether the diameter of a polyhedron is polynomial (in n and d) or not seems to be a longstanding open problem. The largest diameter {\Delta_{u}(d,n)} of a {d}-dimensional polyhedron with {n} facets has known upper and lower bounds.

{n-d+\lfloor d/5 \rfloor \le \Delta_{u}(d,n) \le n^{\log d +1}}.

The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra. What Fritz and his colleagues have done is to look into the gap between these known lower and upper bounds. Apparently, the gap is wide, and they have made some progress: a superlinear lower bound {\Omega\left(n^{3/2}\right)} for the abstraction, if {d} is allowed to grow with {n}.

The way they showed this bound is by establishing it for the largest diameter of a graph in a base abstraction family. Let the abstraction family of connected graphs be denoted by {\mathcal{B}_{d,n}}, and the largest diameter of a graph in {\mathcal{B}_{d,n}} by {D(d,n)}. They find that {D(d,n) =\Omega\left(n^{3/2}\right)}. Since {\Delta_{u}(d,n) \le D(d,n)}, this does not lower bound the polyhedral diameter itself; rather, it shows that arguments working purely at this level of abstraction cannot establish a linear diameter bound for polyhedra, hence the "limits of abstraction" in the title.

I have not had a chance to see their paper yet. I must say, the proof was not all that within my grasp during the talk. However, it appeared to be based on some layering and combinatorics. He mentioned some applications to covering problems, in particular disjoint covering designs, which I didn't follow that well. Sometimes I get the feeling that I am a little dumb to grasp these ideas during a talk. I wonder whether others understand them very well in a first-shot presentation. I have put it in my agenda (among the millions of other papers to read) to see through this problem and proof one day! His presentation was very clear and legible, though.

Today, during the evening chat, Emmanuel Abbe threw an interesting question: whether the sum of square roots of consecutive binomial coefficients has some closed form! That is, {S(n)=\displaystyle \sum_{k=0}^{n}{\sqrt{\binom{n}{k}}}}. We tried a few known combinatorics tweaks, but no meaningful solution arrived. We were also wondering whether this has some asymptotic limit, but that too did not yield anything. A quick check in Mathematica wasn't helpful either. Now the question is: does this sum yield a closed form expression?

While playing with this sum in Mathematica, I found that for the sum of squares of binomial coefficients, there is a nice simple closed form.

{S_{2}(n)=\displaystyle \sum_{k=0}^{n}{{\binom{n}{k}}^{2}}=\binom{2n}{n}}

I was toying with a proof. It turns out that the proof is extremely simple: a one line tweak of the Vandermonde identity {\binom{p+q}{m}=\displaystyle \sum_{i=0}^{m}{\binom{p}{i}\binom{q}{m-i}}}. Simply substitute {p=q=m=n}, use the symmetry {\binom{n}{n-i}=\binom{n}{i}}, and we have the result on the table. The natural question then would be: is there a generalization for { S_{r}(n)=\displaystyle \sum_{k=0}^{n}{{\binom{n}{k}}^{r}}} for any {r\in \mathbb{N}_{\ge 1}}? Of course, for {r=1,2} it is now trivial.
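A quick numeric sanity check in Python (and a peek at the stubborn fractional case that started all this):

from math import comb

# verify S_2(n) = C(2n, n) for small n
for n in range(1, 9):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)

# the square-root sum S(n): no obvious pattern in sight
print([round(sum(comb(n, k) ** 0.5 for k in range(n + 1)), 4) for n in range(1, 6)])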

Apparently, it turns out that there is no closed form expression for general {r}. There are, however, some interesting divisibility properties of these sums. An interesting account of that is given by Neil Calkin (Factors of sums of powers of binomial coefficients).

At the moment, I get the feeling that the sum of fractional powers of binomial coefficients is not trivial. Maybe there is no closed form. Maybe not!

I've finished reading the memoirs of Walter Rudin. It was a quick read of a few hours. His autobiography is titled The Way I Remember It, published by the AMS in the History of Mathematics series. It wasn't particularly interesting, to say the least. From a mathematician who wrote excellent books on functional analysis and several other subjects, I was expecting a much better story. Of course one can't write an imaginary story in an autobiography, but then the incidents in his life are pretty much the story of any European intellectual during the war days. The bit I liked best is the one about Karl Popper. However, I could connect many incidents from Rudin's life, primarily because of the geography. There is a chapter on his days in Switzerland, which also touches upon Lausanne. That part for once enthused me! I was wondering how Lausanne would have been 70 years ago! If you are completely unaware of life in Europe around the WW period, then this will give you a perspective. Like many scientific minds of that era, he had a long route to the United States. He discusses the path and family traits of that journey in a somewhat uncomplicated language.

In his autobiography, Rudin discusses some of his contributions to mathematics as well. That part appeared a little informative, if technical, to read. If you know his work already, you will connect with it nicely. I particularly liked the chapter on Function Theory in the Unit Ball of C^n.

In all, it is not a book I would recommend, unless you are a Walter Rudin fan and know his contributions in much more detail. However, this may be a motivating read for a young school kid aspiring to be a mathematician. Why did I say that? I don't know! Don't ask me why either!

A few years ago, during undergrad days, my friend Ramani and I, during one of our lazy 75 paise mini-canteen tea outings, were discussing a small riddle. It was motivated by a real world experience at our computer center in NIT Calicut (REC Calicut). In REC those days, we students almost exclusively used rubber slippers (yes, the Paragon brand which used to cost 20 rupees or so), usually called by the name 'chappals'. With those, we were not only comfortable walking and running around, but equally at ease playing cricket and badminton with the very same foot support, and many other things too, including jogging. Those thin hard rubber slippers used to last a year or more without giving much trouble, other than perhaps an occasional tearing of the rubber strap. In all, we were at peace with them.

But there was an issue, not exclusively for this brand, but for chappals in general (shoes were a luxury of sorts on campus; at least they weren't very common). Not for everyone though! Only if you fancied visiting the computer center! Well, the computer center wasn't all that fanciful then, since we were provided with only graphics-less Unix terminals (no colour monitors!). You might wonder, huh, what age am I talking about? Besides, internet and email were only taking shape then. Chats and browsing were not quite there yet; unless you felt a touch inferior to the computer wizkids around, it was not a compelling centre de visite. As 'would be' electronics and communication engineers, we had that occasional inferiority complex! The computer center was air conditioned and strictly slipper-free. We were expected to keep our valuable slippers outside (no cloak room luxury! well, that was not a necessity either) before entering that cooler room filled with monochromatic terminals. Since most of the chappals dropped outside were alike (in size and sometimes color too), there was a good chance that at the time of return we ended up with a different pair of slippers (some folks made themselves happy with a visit to the computer center just for a pair change, often to an improved lot!). Sometimes, we ended up having differently colored ones, say left foot white and right foot blue. That wasn't a problem socially either, as long as you stayed within the campus. It was socially accepted within the walls!

Anyway, coming back to the riddle we were busy conjecturing about. We wanted to automate a cloak room. The idea then would be to just deposit the chappals there at random. The cloak room works automatically. Upon asking (at the time of return, say), it will select a pair at random and give it to you. Sorry, you can't have a choice. Just accept it and hope for the best. We asked the questions:

1) What is the probability that everyone gets their own chappals?

2) What is the probability that none of them get their submitted pairs?

Assume n people (and hence n pairs). We can assume that a pair is a single entity (say both left and right slippers are tied and submitted as one). This simplifies the problem to an n people, n slippers scenario. A simplistic model assumed that all n people submit their slippers at the same time. We wanted to build that great randomized cloak room machine! And we wanted it to work for any n, which means the algorithm had to be implementable and to work well in expectation!

We thought and pondered about it for a while then. In the end, we found that the first one is easy, but the second one a little harder to generalize beyond n=10 or so. As busy undergrads, we left the problem after an hour of discussion, probably when we had finished sipping the tea. Aside, we were busy with many other extracurricular activities, including a 3 hour daily cricket match at the lush green international hostel ground. The Megadeth team, as we proudly called ourselves, the electronics and communication batch, hardly missed those cricket matches. We were electronics engineers and took pride in not bothering to ask any fellow discrete math or combinatorics folks! That perhaps helped in some sense. Ramani found management more interesting than those technical details of counting. I am sure he took the right career. Anyway... too much digressing already!

Now, it turns out that the very same problem is akin to a well known problem in combinatorics, called the hatcheck lady problem. It is fairly easy to solve using the inclusion-exclusion principle. The proof outline is shown below. As I type, memory fetches that discussion, sitting leg-folded on the cement bench at the REC mini-canteen, perhaps with an occasional cool breeze around too.

The inclusion exclusion principle is the following:

\lvert \bigcup_{i=1}^{n} A_{i} \rvert=\displaystyle\sum_{i=1}^{n}{\lvert A_{i}\rvert}-\displaystyle\sum_{1\le i_{1}<i_{2}\le n}{\lvert A_{i_1}\cap A_{i_2} \rvert}+\displaystyle\sum_{1\le i_{1}<i_{2}<i_{3}\le n}{\lvert A_{i_1}\cap A_{i_2}\cap A_{i_3} \rvert}-\ldots+(-1)^{n-1}{\lvert A_{1}\cap A_{2}\cap A_{3}\cap\ldots\cap A_{n} \rvert}

The hatcheck lady problem can be stated with a similar story as the random cloak room machine (from Harris, Mossinghoff and Hirst's book on Combinatorics and Graph Theory):

A lazy professor gives a quiz to a class of n students, then collects the papers, shuffles them, and redistributes them randomly to the class for grading. The professor would prefer that no student receives his or her own paper to grade. What is the probability that this occurs? This indeed is an equivalent statement of the well known hatcheck lady problem (I guess the exact name comes from a hatcheck lady who collects hats and absentmindedly returns them).

For the hatcheck lady problem, the probability is P(n)=\frac{D(n)}{n!}, where D(n) counts the permutations with no fixed point (the derangements):

D(n)=n!-\lvert A_{1}\cup A_{2}\ldots\cup A_{n}\rvert=n!-\frac{n!}{1!}+\frac{n!}{2!}-\ldots+(-1)^{n}\frac{n!}{n!}

= n!-\displaystyle\sum_{k=1}^{n}{(-1)^{k-1}\binom{n}{k}(n-k)!}=n!-\displaystyle\sum_{k=1}^{n}{(-1)^{k-1}\frac{n!}{k!}}

P(n)= 1-\displaystyle\sum_{k=1}^{n}{(-1)^{k-1}\frac{1}{k!}}

When n gets larger and larger it converges asymptotically to a constant!

\displaystyle\lim_{n\to\infty} P(n)=\displaystyle\lim_{n\to\infty}{\displaystyle \sum_{k=0}^{n}{\frac{(-1)^{k}}{k!}}}=\frac{1}{e}
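And because we never got to build it back then, here is the randomized cloak room in a few lines of Python; the no-match frequency settles right at 1/e:

import random
from math import exp

def nobody_matches(n):
    perm = list(range(n))
    random.shuffle(perm)                      # the machine returns pairs at random
    return all(perm[i] != i for i in range(n))

n, trials = 20, 200_000
hits = sum(nobody_matches(n) for _ in range(trials))
print(hits / trials, exp(-1))                 # both ~ 0.3679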

The popular documentary on this fascinating mathematical prodigy of the 20th century is now available on YouTube. Personally, while watching the video, the Cam river and the row boat brought a touch of nostalgia! I have heard mountains of stories about Paul Erdős. This documentary is a must watch, not only for mathematicians and mathematically curious guys (or guys like me who are just curious about mathematics, mathematicians and mathematical minds, or for that matter about anything in this world!), but for everyone interested in knowing such an extraordinary mind of our times. What a fascinating experience it would have been to listen to one of his lectures live. Now this YouTube video brings that gap down to a finite length/time reality.

I have never seen Erdős, and now that he is no more, that is not going to change. In a way, I am lucky this semester to attend courses of another prolific mathematician of this era, Janos Pach. Interestingly, Janos Pach is one of the living mathematicians with Erdős number 1. His lectures on graph theory, as well as the one on geometric graph theory, are truly fascinating.

Anyway, if you have not seen the documentary yet, here is the link. It is a must watch. I bet you won't regret the time. On many occasions, the Cam river and its slow movement etch something in the backdrop of those days. Didn't I like that place!

Here is an interesting bound (a problem from Rudi's Modern Coding Theory book) regarding the chromatic number of a random graph. I first attempted it during the course. Here is the problem statement:

C.5 (Concentration of the Chromatic Number - Spencer and Shamir). Consider a graph G on n vertices. The chromatic number of a graph G, denoted by \chi(G), is the smallest number of colors needed to color all vertices so that no two vertices which are joined by an edge have the same color. Consider the standard ensemble of random graphs on n vertices with parameter p: to sample from this ensemble, pick n vertices and connect each of the \binom{n}{2} pairs of vertices independently of all other connections with probability p. Show that for this ensemble

\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}\left[\chi(G)\right] \rvert >\lambda \sqrt{n-1}\right) \le 2e^{\frac{-\lambda^2}{2}}.

My solution is as follows (a PDF typeset as a single file is available here; look for problem 3):

Let G be a random graph with n vertices, and let p be the probability that a given edge is in G, so that the probability space is G(n,p). Revealing the edges one at a time generates a filtration on the set of all such random graphs, and we can define a martingale as follows:

X_{0}=\mathbb{E}[\chi(G)]

X_{i}=\mathbb{E}[\chi(G)\lvert 1_{1},1_{2},\ldots,1_{i}],\quad\forall 1\le i\le \binom{n}{2}

where

1_{i}=\begin{cases}1,&\text{if edge} \quad e_{i}\in G\\ 0, &\text{if edge} \quad e_{i}\notin G\end{cases}

and \chi(G) is the chromatic number of G. The chromatic number changes by at most one when the information about one new edge comes in, so \{X_i\}_{i\ge 0} satisfies the conditions for Azuma's inequality: it is a martingale with bounded differences \lvert X_i-X_{i-1}\rvert \le 1. Let Z_i=X_i-\mathbb{E}[X_i]. Clearly

\mathbb{E}[Z_i]=\mathbb{E}[X_i]-\mathbb{E}[X_i]=0

Z_m=X_m-\mathbb{E}[X_m]=\mathbb{E}[\chi(G)\vert 1_1,1_2,\ldots,1_m]-\mathbb{E}[\chi(G)],\quad m=\binom{n}{2}

=\chi(G)-\mathbb{E}[\chi(G)]

Now we can use Azuma's inequality on \{Z_i\}. Applied directly to this edge-exposure martingale, however, Azuma only gives a deviation of order \sqrt{\binom{n}{2}}, which is weaker than what is asked. To get the stated bound, expose the graph one vertex at a time instead: let X_i=\mathbb{E}[\chi(G)\vert G_i], where G_i is the subgraph induced by the first i vertices, for 0\le i\le n. Revealing the first vertex exposes no edges, so X_1=X_0 and the martingale has only n-1 nontrivial steps. Moreover, all the edges revealed in one step touch a single vertex, and a single vertex changes the chromatic number by at most one, so \lvert X_i-X_{i-1}\rvert \le 1 still holds. Azuma's inequality over these n-1 steps then yields the result

\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}\left[\chi(G)\right] \rvert >\lambda \sqrt{n-1}\right) \le 2e^{\frac{-\lambda^2}{2}},

as required.
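To see the concentration numerically, here is a small Monte Carlo sketch in Python (entirely my own illustration; the helper names are made up). It computes the exact chromatic number by backtracking, which is feasible only for small n, and shows how tightly \chi(G) clusters around its mean:

```python
import random
import statistics

def is_k_colorable(adj, n, k):
    # simple backtracking: try to assign one of k colors to vertices 0..n-1
    color = [-1] * n
    def place(v):
        if v == n:
            return True
        for c in range(k):
            if all(color[u] != c for u in adj[v]):
                color[v] = c
                if place(v + 1):
                    return True
                color[v] = -1
        return False
    return place(0)

def chromatic_number(adj, n):
    for k in range(1, n + 1):
        if is_k_colorable(adj, n, k):
            return k

def sample_gnp(n, p):
    # standard G(n, p): each of the C(n,2) pairs is an edge independently
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

random.seed(1)
n, p, trials = 12, 0.5, 200
chis = [chromatic_number(sample_gnp(n, p), n) for _ in range(trials)]
print(statistics.mean(chis), statistics.pstdev(chis))  # spread well below sqrt(n-1)
```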

Here is an interesting riddle on random matrices.  

(Rank of Random Binary Matrix). Let R(l,m,k) denote the number of binary matrices of dimension l \times m and rank k, so that by symmetry R(l,m,k)=R(m,l,k). This is a repost of the solution that I arrived at (certainly not the first!) and submitted as part of a homework (9) problem from the doctoral course Modern coding theory (by Rüdiger Urbanke) at EPFL. The submitted solution in PDF is available here.

The rank of a matrix G is essentially the number of nonzero rows when the matrix is expressed in echelon form. So, we just need to count the ways such matrices can be created with k nonzero rows. Since the elements of the matrix are binary (from \mathbb{F}_{2}), we can simply do a counting.

It is trivial to compute R(l,m,k) for k=0 and for k>\min(l,m). For k=0, only the all-zero matrix is possible, and only one such matrix exists; hence R(l,m,0)=1. For k>\min(l,m), no such matrix exists, which means R(l,m,k)=0.

Now we consider l=k>0. How many ways? We have k=l nonzero rows of the l\times m matrix, which means all rows must be linearly independent. Without loss of generality, for counting, we can assume the rows are picked in order. The first row can be chosen in 2^{m}-1 ways, since anything other than the all-zero vector (of size m) is allowed. The second row can be anything except a linear combination of the first row, i.e., 2^m-\left(\binom{1}{0}+\binom{1}{1}\right)=2^m-2^1 ways. The third row can be anything except a linear combination of the first two rows, which gives 2^m-\left({\binom{2}{0}+\binom{2}{1}+\binom{2}{2}}\right)=2^m-2^2 choices. The fourth row then has 2^m-\left(\binom{3}{0}+\binom{3}{1}+\binom{3}{2}+\binom{3}{3}\right)=2^m-2^3 choices, and so on. Following the same procedure, we can have a total of

= \left(2^m-1\right) \left(2^m-2^1\right)\left(2^m-2^2\right)\ldots \left(2^m-2^{l-1}\right)

=\left(2^m-1\right) 2^{1} \left(2^{m-1}-1\right) 2^{2} \left(2^{m-2}-1\right) \ldots 2^{l-1}\left(2^{m-l+1}-1\right)

=2^{0} 2^{1} 2^{2} \ldots 2^{l-1}\left(2^m-1\right)\left(2^{m-1}-1\right)\left(2^{m-2}-1\right)\ldots\left(2^{m-l+1}-1\right)

=\prod_{i=0}^{l-1}{{2^i}\left(2^{m-i}-1\right)}

=\prod_{i=0}^{l-1}{\left(2^{m}-2^{i}\right)}

=\prod_{i=0}^{l-1}{2^m \left(1-2^{i-m}\right)}

=2^{ml} \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)}

 ways.  For l>k>0, we can construct a rank k matrix of size l \times m in any of the following ways:

  1.  Take a rank k-1 matrix of size (l-1) \times m and add an independent row.
  2.  Take a rank k matrix of size (l-1) \times m and add a dependent row.

For every (l-1) \times m matrix of rank k-1, an independent new row can be appended in

2^{m}-\left(1+\binom{k-1}{1}+\binom{k-1}{2}+\ldots +\binom{k-1}{k-1}\right)=\left(2^m-2^{k-1}\right)

ways, and hence,

R(l-1,m,k-1) \left(2^m-2^{k-1}\right)= R_{1}(l,m,k)

ways. (Essentially, avoid all possible linear combinations of the existing k-1 rows.) Using the second method (item 2 above), a dependent new row can be chosen in 1+\binom{k}{1}+\binom{k}{2}+\ldots +\binom{k}{k} = 2^k ways, and

R_{2}(l,m,k)= 2^k R(l-1,m,k) 

different ways a rank k matrix can be formed, where the first term (=1) corresponds to picking the all-zero row as the new row. In \binom{k}{1} ways we can pick any one of the existing rows as the dependent (new) row. In general, for 0\le j\le k, we can combine j of the k existing rows in \binom{k}{j} different ways to make a dependent (new) row.

So using (1) and (2) we get,

R(l,m,k)=2^k R(l-1,m,k)+\left(2^m-2^{k-1}\right)R(l-1,m,k-1)

Putting everything together,

R(l,m,k) = \begin{cases} 1, & k=0, \\2^{ml} \displaystyle \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)} , & l=k>0 \\ 2^k R(l-1,m,k) + \left(2^m-2^{k-1}\right) R(l-1,m,k-1) &l>k>0 \\ 0 & k>l>0 \end{cases}
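A short Python sketch (my own check, with made-up helper names) that implements this recursion and verifies it, for tiny sizes, against brute-force enumeration of all binary l \times m matrices:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def R(l, m, k):
    # the recursion derived above
    if k == 0:
        return 1
    if k > min(l, m):
        return 0
    if k == l:
        out = 1
        for i in range(l):
            out *= 2 ** m - 2 ** i
        return out
    return 2 ** k * R(l - 1, m, k) + (2 ** m - 2 ** (k - 1)) * R(l - 1, m, k - 1)

def rank_gf2(rows):
    # Gaussian elimination over GF(2); each row is an m-bit integer
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)  # cancel b's leading bit if r contains it
        if r:
            basis.append(r)
            basis.sort(reverse=True)  # keep leading bits in decreasing order
    return len(basis)

for l, m in [(2, 2), (2, 3), (3, 3)]:
    for k in range(l + 1):
        brute = sum(1 for rows in product(range(2 ** m), repeat=l)
                    if rank_gf2(rows) == k)
        assert R(l, m, k) == brute
print("recursion matches brute force on small cases")
```

Note that the l=k>0 product formula self-corrects when k>m (a factor becomes zero), so the cases mesh consistently.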

I am caught in a little trouble figuring out an easier way to set up and solve a summation of a function of several variables. I just realized that Mathematica doesn't allow me to add a constraint to the summation operation. My problem is that the summation should be performed over the set of ordered partitions of a number.

An example will illustrate the problem definition a lot better. Suppose I want to compute the sum of a function of 3 variables, over all index triples whose sum equals a constant n:

\displaystyle \sum_{\underset{i+j+k=n}{i,j,k}}f(i,j,k)

Clearly, the indices here run over all the ordered partitions of the number n (take n=3 as the example). Since 3 is not that big a number, we can simply write these partitions by hand and somehow get the job done. Let us denote the ordered and unordered partitions of the number n by P(n) and Q(n) respectively.

Q(3)=\left((3), (2,1), (1,1,1)\right). The ordered partitions are the permutations of each of these over the three positions! So (0,0,3) and (0,3,0) are distinct elements of P(3), whereas they are counted as the single partition (3) in Q(3). The summation needs to be carried out over the ordered partition list P(n). Since both Q(n) and P(n) grow pretty big with even modest n, the job of manual summation is not that appealing. I really hoped Mathematica would aid me here. Unfortunately, I don't see a way out other than the painful individual partition sum (a brute-force sketch is given after the two sums below). Anyway, for the curious reader, the sums I attempt are of the following types:

\displaystyle \sum_{n_1+n_2+\ldots+n_k=n}{(n_1+1)a_1\frac{n!}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }

\displaystyle \sum_{n_1+n_2+\ldots+n_k=n}{(n_1+1-n_2)a_1\frac{n!}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }

Typical values are n around 20 and k around 10. If any enthused reader finds a way/trick, I will be happy to sponsor a coffee 🙂 (sorry, in these recession times, nothing more 😦 sadly…).
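Meanwhile, brute force works for modest sizes. Here is a Python sketch (my own workaround, with hypothetical helper names; nothing Mathematica-specific) that enumerates the ordered partitions, i.e., the compositions of n into k non-negative parts, via the stars-and-bars bijection, and sums the term directly. Beware that the count \binom{n+k-1}{k-1} grows quickly; n=20, k=10 already gives about 10 million terms, so this is only a last resort:

```python
from itertools import combinations
from math import factorial

def compositions(n, k):
    # stars and bars: choose k-1 bar positions among n+k-1 slots;
    # the gap sizes are the parts (n1, ..., nk), each >= 0, summing to n
    for bars in combinations(range(n + k - 1), k - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(n + k - 2 - prev)
        yield tuple(parts)

def partition_sum(weight, a, n):
    # sum of weight(n1,...,nk) * a1 * multinomial * a1^n1...ak^nk / S^(n+1)
    k, S = len(a), sum(a)
    total = 0.0
    for parts in compositions(n, k):
        mult = factorial(n)
        prod = 1.0
        for ni, ai in zip(parts, a):
            mult //= factorial(ni)  # stays exact: sequential division by ni!
            prod *= ai ** ni
        total += weight(parts) * a[0] * mult * prod / S ** (n + 1)
    return total

a, n = [0.5, 1.0, 2.0, 0.7], 8
print(partition_sum(lambda p: p[0] + 1, a, n))         # first sum
print(partition_sum(lambda p: p[0] + 1 - p[1], a, n))  # second sum
```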

In fact, the first one is easy (interestingly, I found the way just while typesetting this blog post). I can differentiate with respect to a_1 and then scale to get the requisite sum. It can be written as follows:

\displaystyle \sum_{n_1+n_2+\ldots+n_k=n}{(n_1+1)a_1\frac{n!}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }

=\frac{a_1}{{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}}} \frac{\partial}{\partial a_1}\left[a_1 \left(a_1+a_2+\ldots+a_k\right)^{n}\right]

=\frac{a_1}{{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}}} \left[n a_1 \left(a_1+a_2+\ldots+a_k\right)^{n-1}+\left(a_1+a_2+\ldots+a_k\right)^{n}\right]

The second one is the troublemaker 😦

Oh man! The power of a coffee. As I am typing this in moody Lausanne Swiss weather, here comes the trick. I just made a coffee, and that seems to have worked. I can apply a similar trick to the second one too. Basically, we can split it into two expressions and then write each as a differentiated version of a multinomial sum. Here it is:

\displaystyle \sum_{n_1+n_2+\ldots+n_k=n}{(n_1+1-n_2)a_1\frac{n!}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }

=\displaystyle \sum_{n_1+n_2+\ldots+n_k=n}{(n_1+1)a_1\frac{n!}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }

-\displaystyle \sum_{n_1+n_2+\ldots+n_k=n}{(n_2)a_1\frac{n!}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }

=\frac{a_1}{{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}}} \frac{\partial}{\partial a_1}\left[a_1 \left(a_1+a_2+\ldots+a_k\right)^{n}\right]

-\frac{a_1 a_2}{{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}}} \frac{\partial}{\partial a_2}\left[\left(a_1+a_2+\ldots+a_k\right)^{n}\right]

=\frac{a_1}{{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}}} \left[n a_1 \left(a_1+a_2+\ldots+a_k\right)^{n-1}+\left(a_1+a_2+\ldots+a_k\right)^{n}\right]

-\frac{a_1 a_2}{{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}}} \left[n \left(a_1+a_2+\ldots+a_k\right)^{n-1}\right]

Amazingly, both simplify to simple-looking expressions. I am glad! Here is what I finally got:

\displaystyle \sum_{\sum_{j}{ n_j}=n}{\frac{n!(n_1+1)a_1}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }=\frac{a_1\left(na_1+\sum_{i}{a_i}\right)}{{\left(\sum_{i=1}^{k}{a_i}\right)^{2}}}

\displaystyle \sum_{\sum_{j}{ n_j}=n}{\frac{n!(n_1+1-n_2)a_1}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }=\frac{a_1\left(n(a_1-a_2) +\sum_{i}{a_i}\right)}{{\left(\sum_{i=1}^{k}{a_i}\right)^{2}}}

\displaystyle \sum_{\sum_{j}{ n_j}=n}{\frac{n!(n_l)a_1}{n_1!n_2!\ldots n_k!}\frac{a_{1}^{n_1} a_{2}^{n_2} \ldots a_{k}^{n_k}}{\left(\sum_{i=1}^{k}{a_i}\right)^{n+1}} }=\frac{a_1 a_l n}{{\left(\sum_{i=1}^{k}{a_i}\right)^{2}}}
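And a quick numerical confirmation of all three closed forms, reusing the brute-force partition_sum sketch from above (again, the helpers are my own, hypothetical):

```python
S = sum(a)  # a and n as in the sketch above
checks = [
    (partition_sum(lambda p: p[0] + 1, a, n),
     a[0] * (n * a[0] + S) / S ** 2),
    (partition_sum(lambda p: p[0] + 1 - p[1], a, n),
     a[0] * (n * (a[0] - a[1]) + S) / S ** 2),
    (partition_sum(lambda p: p[2], a, n),  # weight n_l with l = 3
     a[0] * a[2] * n / S ** 2),
]
for lhs, rhs in checks:
    print(abs(lhs - rhs))  # each difference is at floating-point noise level
```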


While searching for the book Information Theory: Coding Theorems for Discrete Memoryless Systems by Imre Csiszár and János Körner, I came across several sites which echo the fact that this book is one of the rarest specimens on earth. However, in the process, I found a blog forum which lists a whole lot of out-of-print books. This book, as expected, is in demand already. We can even vote to request that out-of-print books be brought back for a possible reprint. I am not sure how effective this is, but there is no harm in trying! We can suggest any books we may want to have reprinted. To me, this is a whole new and welcome idea. For learning, we should have access to the good books. There are already quite a few requests (see this blog, for instance) for the Csiszár and Körner book. Man, the world sometimes thinks alike!


Last winter, Etienne Perron, Suhas Diggavi and myself together developed a tool suite to prove inequalities in information theory. The tool is adapted from ITIP [4], the previous work of Raymond Yeung and Ying-On Yan at Cornell. We have made it complete C-based software and removed the Matlab dependency in the back end. There is also a built-in pre-parser (using lex and yacc) to allow flexibility in choosing random variable names. More importantly, a graphical front end was developed (using Gtk), which works well across platforms. Even though the beta version was ready in late 2007, for many reasons, including exhaustive testing (we always find scope for improvement), the release was delayed. Last month, we finally made an official release. The original Xitip project page at IPG [1] has a short description and a pointer to the dedicated Xitip page at EPFL (http://xitip.epfl.ch) [2]. A lot of things still need to be done before we could call it satisfactory. One of the main things pending is the user guide and some kind of exemplified documentation. There is a technical report I have prepared, but it is a bit too technical at the moment. Of course, Raymond Yeung's amazing papers introducing the theoretical idea behind this prover, and his book, are valuable resources. I have tried to provide a somewhat easier understanding of the concept using some illustrations and toy examples. I hope to put this report in the EPFL repository sometime. The first version of the report discussing the background is available here in PDF form.

Xitip screenshot, the French version

The software is open source. If you can't be bothered to compile and make an executable yourself, then please download the binary executable and just run it; it is just a matter of a double click in that case. We have Linux, Windows, Windows (Cygwin) and Mac versions available. Two different linear programming back ends are used: one is the GNU open-source GLPK [5] and the other is QSopt [3] (developed at Georgia Tech). The QSopt version is faster than the GLPK one. Just in case you are obsessed with a perfect open-source model, you could avail yourself of the GLPK [5] version.

Hopefully during this summer we will get to complete the pending work on this project. If any of you happen to find it interesting, please don't forget to update us on what you thought about the software (comments can be good, bad and ugly!).

An aside, which I had better mention: Xitip is software useful for proving (verifying) information-theoretic inequalities [7] only. Such inequalities contain expressions involving measures such as entropy, mutual information, etc. It is a pretty handy tool if you are trying to prove some limiting bounds in information theory. In reality, there is a broad classification into Shannon-type and non-Shannon-type inequalities. Non-Shannon-type inequalities are not many, but they exist. Xitip at the moment is equipped to handle only the Shannon-type inequalities. You can expect more information on this at the Xitip home page [2].

[1] http://ipg.epfl.ch/doku.php?id=en:research:xitip
[2] http://xitip.epfl.ch
[3] http://www2.isye.gatech.edu/~wcook/qsopt/
[4] http://user-www.ie.cuhk.edu.hk/~ITIP/
[5] http://www.gnu.org/software/glpk/
[6] http://en.wikipedia.org/wiki/Information_theory
[7] http://en.wikipedia.org/wiki/Inequalities_in_information_theory


The title might mislead you, so let me clarify upfront: I am not on a mission of self-appraisal. I am here to talk about the autobiography of Norbert Wiener, titled 'I Am a Mathematician', which is the book I am currently reading. Since I have heard a lot of stories about Wiener, and know some (a minuscule percentage!) of his work, the presentation of the book didn't disappoint. Rather, it is a very, very interesting sketch of his life, told in his own style.

I mentioned the stories heard about him. There are many of them. I am not reporting them with any authority, because I have hardly checked the authenticity of such tales. Nevertheless, I get ready to laugh every time I begin to hear anything about him. The mathematical work of this one-time child prodigy is very well known and treasured. His wit and absent-mindedness are quite unique. Some of the anecdotes I have heard about him are:

1. During one of his trips down the hallway at MIT, Wiener was interrupted by several of his students who talked to him for several minutes about what they were working on. After the conversation had ended, Wiener asked one of them, "Could you please tell me, in which direction was I traveling when you stopped me?" One of them replied, somewhat confusedly, "You were coming from over there [gesturing], going this way [gesturing]." Wiener replied, "Ah, then it is likely that I have already had lunch. Thank you," and continued down the hallway to his office. (A somewhat similar story is attributed to Einstein as well. As far as I have heard, this was when Claude Shannon was giving a lecture at Princeton. It was well attended. Einstein made a back-door visit while Shannon was in full stream. Shannon obviously noted Einstein coming in, chatting with someone in the last row, and then leaving soon after. The curious Shannon, after the lecture, went over to the folks to whom Einstein had seemed to be talking. To Shannon's surprise, Einstein had apparently been asking them "where the tea was served".)

2. After several years teaching at MIT, the Wieners moved to a larger house. Knowing her husband was likely to forget where he now lived, Mrs. Wiener wrote down the address of the new house on a piece of paper and made him put it in his shirt pocket. At lunchtime, an inspiring idea came to the professor, who proceeded to pull out the paper and scribble down calculations, only to subsequently find a flaw and throw the paper away in disgust. At the end of the day, it occurred to Wiener that he had thrown away his address. He now had no idea where his home was. Putting his mind to work, he concocted a plan: go to his old home and wait to be rescued. Surely Margaret would realize he was lost and come to pick him up. When he arrived at the house, there was a little girl standing out front. "Excuse me, little girl," he asked, "would you happen to know where the people who used to live here moved to?" "It's okay, Daddy," the girl replied, "Mommy sent me to get you." (Decades later, Norbert Wiener's daughter was tracked down by a mathematics newsletter. She said the story was essentially correct, except that Wiener had not forgotten who she was.)

Image: Norbert Wiener with Amar Bose (of Bose audio fame) and Lee (early MIT pioneers). Source: http://www.siliconeer.com/past_issues/2005/January2005-Files/jan05_bose_archive.jpg

The day before yesterday, while on travel, I picked up two books from the Bangalore airport book shop. One was the Helen Keller autobiography "The Story of My Life". This was perhaps the first full-volume autobiography I ever read, as a child. I still remember the used copy of this book being given to me by a family friend and teacher when I was in early school. The vivid recollection of Helen Keller learning the alphabet and, notably, words like 'rain' still echoes somewhere in the back of my mind. I decided to buy this book with the plan to gift it to someone I come across, preferably a young kid in school or so. Eventually, I gifted it to a friend of mine who, when I mentioned it, was all excited to grab it. In a way I felt happy, because I could instigate interest in someone about Helen Keller. Keller's life has been a message in itself to humanity. Her strife and constant effort to learn to react to this world are touching, to say the least.

The other book I picked up along the way is on the story of "Nicolas Bourbaki". It is titled "The Artist and the Mathematician: The Story of Nicolas Bourbaki, the Genius Mathematician Who Never Existed", authored by Amir Aczel, the author of the book on Fermat's Last Theorem. I had never heard about this book before, either in reviews or from friends. Well, I must confess that I didn't really search for books or reviews on a topic of this kind, so as to really hear about such a book. Anyway, the book is a recent print, and the news may just be coming out.

This book turned out to be quite an interesting one. On the flight, and while waiting, I could quickly finish reading it. Being a book about math and mathematicians, I didn't feel sleepy over it; that helped me complete the reading at a stretch. The book is about Nicolas Bourbaki. Out of my innocence, I had never heard about this character before either. As the title says, no specimen of this kind in human form ever lived upon the earth. Yet, there was an identity to this Nicolas Bourbaki. The real truth about the name is revealed a little later in the book. Nicolas Bourbaki was not just one person, but an identity for a collective of mathematicians, mostly French. Together they published a lot of work in mathematics under the identity of the one name "Nicolas Bourbaki". How does that sound? Well, it sounds a little weird and a little interesting to me. Can we say Nicolas Bourbaki was an institution in itself? (Well, there is a story about Nicolas Bourbaki applying for membership of the AMS, the American Mathematical Society, and getting a reply asking for the institutional fee. Of course, the AMS president at the time knew the truth about the name Nicolas Bourbaki!)

So, Nicolas Bourbaki represented a group of mathematicians, formed around 1935 (in the period between the two world wars), who formulated a systematic structure for the study of mathematics itself. They worked predominantly on modern mathematics (modern algebraic concepts), including the concept of 'sets'. As you would expect (because of the timeline and the French connection), the star mathematician André Weil was one of the founding members of the group. Some of the other well-known French (connected) mathematicians also figured in the celebrated group. They included names like Henri Cartan, Claude Chevalley, Jean Coulomb, Jean Delsarte, Jean Dieudonné, Charles Ehresmann, René de Possel and Szolem Mandelbrojt. Later entrants to the Bourbaki group included the likes of Laurent Schwartz, Jean-Pierre Serre, Alexander Grothendieck, Samuel Eilenberg, Serge Lang and Roger Godement. The group would meet regularly to discuss and debate modern mathematical topics and eventually publish under the one name Nicolas Bourbaki!

Even though Bourbaki does not exist (at least actively) today, it is believed to have brought some 'structure' not only into mathematics, but into European culture in general. Besides, the stellar contributions of this group to the fields of algebra and number theory are acknowledged very well.

Now, another reason why this book took my interest is the presence of the mysterious Alexander Grothendieck. The book talks in rather more detail about this mystery man. He was regarded as a genius and contributed immensely to the world of mathematics. His Fields Medal in 1966, and his absence from the Moscow ceremony conferring his own medal, perhaps speak volumes about this man. I was taken by surprise to hear about the current life of this one-time world star of mathematics. Post 1991, he is believed to be living in a remote place on the outskirts of southern France, far from any worldly contact. I quickly recollected the 2006 Fields Medal winner Grigori Perelman, who declined the medal and began to live away from the mathematics crowd. Like Perelman, who (as is now accepted by everyone) proved the Poincaré conjecture, Grothendieck too produced some monumental contributions. He was considered a genius, and it was recognized too. But in close resemblance to what Perelman did in 2006, he likewise decided to quit mathematics altogether and live a mysterious and aloof life.

What makes genius folks like Perelman and Grothendieck abandon the world? It is surely not fear, simply because they have contributed so much to us and can only be proud of it. What else, then? There are arguments that Grothendieck was disappointed that he couldn't get the results he wanted from his political activities (he had strong political views against the establishment on certain things). Others say that childhood miseries and an abandoned early life triggered him into an isolated existence, out of frustration and despair. In Perelman's case, it is surely a hurt feeling, that of being ignored by the mathematical community, which questioned his credentials on the Poincaré conjecture proof. But can the human mind, which is otherwise so brilliant, be all of a sudden taken aback by a feeling of hurt? I feel sorry for these two fine mathematicians, who could have produced much more (they have already done so much, but…) for the world. The life and fate of these two tell the story of the society we all live in, where jealousy, politics and lack of respect for others can completely eradicate a gem which we should have preserved at all cost.

Tears and anxiety run through my mind over the current whereabouts of Grothendieck. I hope he lives a happy life around Montpellier! The human mind, man, is a highly complex black box. What all it takes, and what all it breaks!
