
During the past week, while in Hawaii for the IEEE 802.11 interim meeting, I happened to glance at this NY Times article. The story is about Yitang Zhang, a University of New Hampshire professor, who recently came out with a proof establishing a finite bound on gaps between prime numbers. More details are emerging as several blog posts, articles, and reviews get written (and some are still being written). By now, the claim looks more or less accepted by pundits in the field, and is being termed a beautiful mathematical breakthrough. As a curious outsider, it feels nice to scratch the surface of this new finding.

The subject surrounding this news is number theory, prime numbers to be precise. The question of interest is the gap between adjacent prime numbers. We know that 2 and 3 are primes with a gap of 1, but this is truly a special case and unique by definition (2 is the only even prime). The gap between 3 and 5 is 2. Similarly, 5 and 7 differ by 2. One may have thought that the gap between successive primes goes up as we flee along the number line. Not quite. For example, we can see that there are a lot of pairs with a gap of 2. The easy ones are (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73), (101, 103), (107, 109), (137, 139), and the list goes on. It was conjectured that there are infinitely many such pairs, but the proof of that is not quite here yet! It is known that there are precisely 808,675,888,577,436 twin prime pairs below $10^{18}$, but infinity is still a lot farther than $10^{18}$! The quest to really prove that there are infinitely many twin primes, the famous twin prime conjecture, still remains open.
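The list above is easy to reproduce. Here is a small sketch of mine (not from the article): sieve for primes, then collect adjacent pairs with gap 2.

```python
# Sieve of Eratosthenes, then collect adjacent primes with gap 2.
def primes_below(n):
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def twin_pairs(n):
    ps = primes_below(n)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p == 2]

print(twin_pairs(150))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43),
#  (59, 61), (71, 73), (101, 103), (107, 109), (137, 139)]
```

The bound 150 is arbitrary; push it up and the pairs keep coming, which is exactly what the conjecture asserts never stops.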

Now, the new discovery by Zhang is not quite a proof of the twin prime conjecture, but of a close relative. The twin prime conjecture is strictly about prime pairs separated by 2. A related question is: how about prime pairs $p$ and $q$ separated by $k$, where $k$ is some finite number? When $k = 2$, we have the special case of the classical twin prime conjecture. Can we at least prove mathematically that there exist infinitely many prime pairs $(p, p+k)$ for some finite $k$? If so, what is the smallest $k$ for which this holds true? Zhang now has a proof that this holds for some $k$ smaller than 70 million. Mathematically, if we denote by $p_n$ the $n$th prime, then the new claim says (stated crisply in the paper abstract):

$\liminf_{n \to \infty} \left( p_{n+1} - p_n \right) < 7 \times 10^{7}.$

70 million is still a large gap, but as Dorian Goldfeld says, it is still finite and nothing compared to infinity! In the future, it is not unlikely that we will see this gap coming down, perhaps all the way to the best possible case of $k = 2$. Who knows?

The result is still interesting, even to generally curious folks like us. It says that gaps of some bounded size between primes occur infinitely often: however far we travel along the number line, we keep finding pairs of primes within 70 million of each other (even though individual gaps can still grow without bound). Like many other things at asymptotics (for example, the eigenvalues of large random matrices exhibit very interesting properties when the size goes to infinity), things at infinity may exhibit some charm after all!

The paper is accessible here, but as expected the proof is hard (for me at least). Hopefully we will get some diluted explanations of its essence from experts in the coming days. Already, Terence Tao had a sketch a couple of weeks ago on his Google+ feed. Over the last week or so, finer details on the new breakthrough have kept emerging. Terry Tao has also initiated an online wiki collaboration (a Polymath project) in an effort to improve upon this work (for experts, that is, not for me!).

Congratulations Professor Zhang.

I was playing with a few centrality measures on a few graphs, so I thought of penning down a quick note on the notion of centrality (a quick introduction can be found here, along with the Wikipedia page on centrality).
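As a toy illustration (my own example, not from the notes linked above), take two triangles joined through a bridge vertex. Already here, two standard measures disagree about who is most central: degree centrality favors the well-connected clique vertices, while closeness centrality favors the bridge.

```python
from collections import deque

# Two 3-cliques {0,1,2} and {4,5,6} joined through a bridge node 3.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree_centrality(adj):
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    # BFS from each node; closeness = (n - 1) / sum of distances.
    out = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        out[s] = (len(adj) - 1) / sum(dist.values())
    return out

deg = degree_centrality(adj)
clo = closeness_centrality(adj)
# Clique nodes 2 and 4 top the degree ranking, but the bridge node 3 is
# the most central by closeness: it is nearest to everyone on average.
print(max(deg, key=deg.get), max(clo, key=clo.get))
```

Swapping in other measures (betweenness, eigenvector centrality) reshuffles the ranking again, which is exactly why playing with several of them on the same graph is instructive.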

I don't use Twitter a lot, in spite of having an account, which I visit once in a while, tweeting at an even lower rate. To me, tweets are too short and too soon; 143 characters is too low a number to post an opinion without multiple breaks. That's just my take, and clearly it differs from that of a large segment of the rest of the world's population.

Anyway, I am still wondering what made the Twitter founders decide on restricting the length to 143 characters. Yesterday, during a dinner party, I heard some folks commenting on this. I didn't get the rationale fully, but it was apparently chosen based on some SMS research, using curtailed English words.

I did a few Google searches, but nothing quite came up with an explanation. Interestingly, Wikipedia has a page for the number 143. Would you believe it? As it turns out, 143 indeed has some very interesting properties.

**143** is the sum of three consecutive primes (43 + 47 + 53), as well as the sum of seven consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31). But this number is never the sum of an integer and its base-10 digit sum, making it a self number.

Every positive integer is the sum of at most 143 seventh powers (see Waring’s problem).

143 is the difference in the first exception to the pattern shown below:

- 3^{2} + 4^{2} = 5^{2}
- 3^{3} + 4^{3} + 5^{3} = 6^{3}
- 3^{4} + 4^{4} + 5^{4} + 6^{4} = 7^{4} − 143
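All of these claims are easy to verify mechanically; here is a quick sanity check of mine (not from Wikipedia's page):

```python
# Quick arithmetic checks of the 143 facts quoted above.
assert 143 == 43 + 47 + 53                      # three consecutive primes
assert 143 == 11 + 13 + 17 + 19 + 23 + 29 + 31  # seven consecutive primes
assert 3**2 + 4**2 == 5**2
assert 3**3 + 4**3 + 5**3 == 6**3
assert 7**4 - (3**4 + 4**4 + 5**4 + 6**4) == 143  # the first exception

# 143 is a self number: n + digit_sum(n) never equals 143.
def digit_sum(n):
    return sum(int(d) for d in str(n))

assert all(n + digit_sum(n) != 143 for n in range(1, 143))
print("all checks pass")
```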

On a lighter note, here is a very interesting blog piece that finds something like giant-component behaviour in a pile of staple pins. It is sort of a layman's view of the Erdős–Rényi phenomenon.

In one version of the Erdős–Rényi process, one starts with a set of $n$ isolated vertices and then keeps adding random edges one at a time; more specifically, at each stage one chooses two vertices at random from among all pairs that are not already connected, then draws an edge between them. It turns out there is a dramatic change in the nature of the graph when the number of edges reaches $n/2$. Below this threshold, the graph consists of many small, isolated components; above $n/2$, the fragments coalesce into one giant component that includes almost all the vertices. "The Birth of the Giant Component" was later described in greater detail in an even bigger paper (it filled an entire issue of *Random Structures and Algorithms*, 1993, 4:233–358) by Svante Janson, Donald E. Knuth, Tomasz Łuczak and Boris Pittel.
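The threshold is easy to see in simulation. Here is a minimal sketch of mine (the union-find bookkeeping is my addition; for simplicity it draws endpoints with replacement rather than only among unconnected pairs, which does not change the qualitative picture):

```python
import random

def largest_component(n, num_edges, seed=0):
    """Add num_edges random edges among n vertices; return the size of the
    largest connected component, tracked with union-find."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(num_edges):
        # Endpoints sampled with replacement; repeated or self edges are
        # rare enough at this density not to matter for the demo.
        ru, rv = find(rng.randrange(n)), find(rng.randrange(n))
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
    return max(size[find(x)] for x in range(n))

n = 20000
print(largest_component(n, n // 4))  # below n/2 edges: only small fragments
print(largest_component(n, n))      # above n/2 edges: one giant component
```

With $n/4$ edges the largest fragment stays tiny (logarithmic in $n$); with $n$ edges a single component swallows most of the graph.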

Of course, Joel Spencer has written a beautiful article in the AMS Notices commemorating the golden anniversary of the giant component.

I came across an interesting and twisted version of the twenty questions game recently. I saw it in a 2009 ISIT paper on error correction under feedback with list decoding, by Ofer Shayevitz. The problem goes like this (I am going by Ofer's nice narration, with the famous Alice and Bob duo).

The game is played by Alice and Bob. Alice selects one of $M$ possible objects, and Bob can ask binary questions (where the answer can be either yes or no only) in a quest to identify the selected object. If Alice is always truthful, then $\lceil \log_2 M \rceil$ questions are obviously both necessary and sufficient. Now, suppose Alice is allowed to lie up to $e$ times. How many questions does Bob need to get the correct object? Alternatively, given $n$ questions, what is the maximal number of objects among which Bob can distinguish? This version of the "twenty questions game" with lies is known as Ulam's game.
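A standard sphere-packing (volume) argument gives quick intuition here; this back-of-envelope bound is my own addition, not Shayevitz's analysis. Each object must "own" the ball of answer sequences within Hamming distance $e$ of its truthful answer string, so with $n$ questions and at most $e$ lies, at most $2^n / \sum_{i=0}^{e} \binom{n}{i}$ objects can be distinguished:

```python
from math import comb

def ulam_upper_bound(n, e):
    """Max number of objects Bob can possibly separate with n yes/no
    questions when Alice may lie at most e times (volume bound)."""
    return 2 ** n // sum(comb(n, i) for i in range(e + 1))

print(ulam_upper_bound(20, 0))  # 1048576: no lies, the usual 2^n objects
print(ulam_upper_bound(20, 1))  # 49932: one allowed lie shrinks it a lot
```

With $e = 0$ the bound collapses to the familiar $2^n$; each allowed lie divides the reachable set by roughly another factor of $n$.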

I glanced upon a paper from 2009 ISIT, titled "LT Codes Decoding: Design and Analysis" by Feng Lu et al. The authors discuss an LT decoding strategy: the peeling decoder (conventional simple LT decoding) followed by an extension based on the well-known Wiedemann algorithm for solving a sparse homogeneous linear system. Here is a brief summary as I understood it.

On packet erasure channels (the internet being one close realization of a model of this kind, where, in the TCP/IP suite, packets are either delivered correctly or lost in the wild), Luby Transform (LT) codes form an easy way to transfer information. It is well known that asymptotically (when the number of information packets $k$ and the number of received packets $r$ are large), near full recovery is made possible by the simple peeling decoder, whose complexity grows as $O(k \ln k)$. The claim is that for $r = k(1 + \delta)$ or so, on average we can, with high probability, recover all $k$ information packets by simple unwrapping of XOR-ed packets. A more precise statement reads: if each coded packet is constructed independently from the $k$ information packets by an XOR operation (the number of packets taken for the XOR follows a degree distribution; the plain vanilla version of LT codes uses the Soliton distribution), then the original information can be recovered from any $k + O\left(\sqrt{k}\,\ln^2(k/\delta)\right)$ packets with probability $1 - \delta$, using $O(k \ln(k/\delta))$ symbol operations on average. However, when the number of input symbols $k$ is not large, the overhead term is not small. This is not too surprising, since we need sufficiently large $k$ and $r$ for the performance to converge in expectation. One can think of solving the system of linear equations using Gaussian elimination and get the best recovery figures, but the question mark stays on the cubic complexity of that method compared with the cheaper $O(k \ln k)$ peeling decoder.

Conventional LT decoding based on the peeling rule terminates once there are no more degree-one packets (the set of degree-one packets is called the ripple). One can hope to continue from there using Gaussian elimination and decode a few more symbols (if the rank of the graph's matrix permits it). If the underlying matrix has full rank, one can technically decode all of them. Gaussian elimination being heavy, one hopes for an easier, preferably recursive, method. Indeed, there comes Wiedemann, and that is what the authors use. They propose what they call full rank decoding, which is essentially LT peeling until there are no more degree-one packets; from there, they switch to the Wiedemann algorithm, which is somewhat light on computational complexity.
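To make the peeling step concrete, here is a bare-bones decoder sketch (an illustrative toy of mine, not the authors' implementation): each received packet is the XOR of a subset of source symbols, and we repeatedly resolve degree-one (ripple) packets, substituting decoded symbols back into the rest.

```python
def peel(packets):
    """packets: list of (indices, value) pairs, where value is the XOR of
    the source symbols at those indices. Returns decoded {index: symbol}."""
    packets = [[set(s), v] for s, v in packets]
    decoded = {}
    progress = True
    while progress:
        progress = False
        for p in packets:
            s, v = p
            for i in list(s):        # substitute already-decoded symbols
                if i in decoded:
                    s.discard(i)
                    v ^= decoded[i]
            p[1] = v
            if len(s) == 1:          # a ripple packet: decode it
                decoded[s.pop()] = v
                progress = True
    return decoded

# Toy run over three source symbols (the values 5, 9, 12 are made up).
rx = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12)]
print(peel(rx))  # {0: 5, 1: 9, 2: 12}
```

When the ripple dries up with packets of degree two or more left over, this decoder simply stops short; that is exactly the point where the paper's Wiedemann-based continuation takes over.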

The other question they answer is whether this scheme does any better than the conventional LT peeling decoder. To achieve full recovery we need a full rank matrix: the decoding matrix is a sparse $r \times k$ matrix over $\mathbb{F}_2$, where $r$ is the number of received packets. The probability of the matrix having full rank directly helps us infer the decoding merits. This is an interesting piece for me, since I had looked into the computation of the rank of a binary matrix, and the probability of a random matrix's rank equalling an arbitrary value, during the Modern Coding Theory doctoral course by Ruediger Urbanke at EPFL. For a binary random matrix, we can also set up a recursive formula and thereby arrive at a good inference on the probability of decoding.
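That full-rank probability is easy to probe numerically. The sketch below (mine, in the spirit of that homework) computes ranks over $\mathbb{F}_2$ with bit-packed rows and compares the empirical full-rank frequency of random square binary matrices with the exact product $\prod_{i=1}^{n}\left(1 - 2^{-i}\right)$:

```python
import random

def gf2_rank(rows):
    """Rank over F_2; each row is a bit-packed integer."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)  # eliminate any shared leading bit
        if row:
            basis.append(row)
    return len(basis)

n, trials = 10, 2000
rng = random.Random(1)
empirical = sum(
    gf2_rank([rng.getrandbits(n) for _ in range(n)]) == n
    for _ in range(trials)
) / trials

exact = 1.0
for i in range(1, n + 1):
    exact *= 1 - 2.0 ** -i

print(empirical, exact)  # both close to 0.289
```

Even for modest $n$ the product is already near its limit of about 0.2888, i.e. a uniformly random square binary matrix is invertible with probability a bit under 29%.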

Today, during the evening chat, Emmanuel Abbe threw out an interesting question: whether the sum of square roots of the binomial coefficients converges to some closed form. That is, does $\sum_{k=0}^{n} \sqrt{\binom{n}{k}}$ have a nice expression? We tried a few known combinatorial tweaks, but no meaningful solution arrived. We also wondered whether this has some asymptotic limit, but that too did not yield anything. A quick check in Mathematica wasn't helpful either. So the question stands: does this sum have a closed form expression?

While playing with this sum in Mathematica, I found that for the sum of squares of binomial coefficients, there is a nice simple closed form: $\sum_{k=0}^{n} \binom{n}{k}^{2} = \binom{2n}{n}$.

I was toying with a proof. It turns out that the proof is extremely simple: a one-line tweak of the Vandermonde identity $\sum_{k=0}^{r} \binom{m}{k} \binom{n}{r-k} = \binom{m+n}{r}$. Simply substitute $m = n = r$ and use the symmetry $\binom{n}{n-k} = \binom{n}{k}$, and we have the result on the table. The natural question then is: is there a closed form generalization of $\sum_{k=0}^{n} \binom{n}{k}^{p}$ for arbitrary $p$? Of course, for $p = 1$ it is trivially $2^n$.
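A quick numerical confirmation of the identity, and of how stubborn the fractional-power cousin is (this script is just a sanity check of mine):

```python
from math import comb

# Sum of squares of binomial coefficients equals the central binomial
# coefficient C(2n, n), for every n we care to test.
for n in range(12):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)

# p = 1 is trivially 2^n; p = 1/2 we can only evaluate numerically.
n = 10
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
s = sum(comb(n, k) ** 0.5 for k in range(n + 1))
print(s)  # roughly 88.5 for n = 10; no recognizable closed form
```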

Apparently, it turns out that there is no closed form expression for general $p$. There are, however, some interesting divisibility properties of these sums. An interesting account of that is given by Neil Calkin (Factors of sums of powers of binomial coefficients).

At the moment, I get the feeling that sums of fractional powers of binomial coefficients are not trivial. Maybe there is no closed form. Maybe not!

Here is an interesting bound (a problem from Rudi's Modern Coding Theory book) regarding the chromatic number of a random graph. I first attempted it during the course. Here is the problem statement:

C.5 (Concentration of the Chromatic Number – Spencer and Shamir). Consider a graph $G$ on $n$ vertices. The chromatic number of a graph $G$, denoted by $\chi(G)$, is the smallest number of colors needed to color all vertices so that no two vertices which are joined by an edge have the same color. Consider the standard ensemble $\mathcal{G}(n, p)$ of random graphs on $n$ vertices with parameter $p$: to sample from this ensemble, pick $n$ vertices and connect each of the $\binom{n}{2}$ pairs of vertices independently from all other connections with probability $p$. Show that for this ensemble

$\Pr\left\{ \left| \chi(G) - \mathbb{E}[\chi(G)] \right| > \lambda \sqrt{n-1} \right\} \le 2 e^{-\lambda^2 / 2}.$

My solution is as follows (a PDF typeset as a single file is available here; look for problem 3):

Let $G$ be a random graph with $n$ vertices, where $p$ is the probability that a given edge is in $G$. The probability space is $\mathcal{G}(n, p)$. Let $\mathcal{F}_0 \subseteq \mathcal{F}_1 \subseteq \ldots \subseteq \mathcal{F}_{n-1}$ be a filtration on the set of all such random graphs, where $\mathcal{F}_i$ is generated by exposing the first $i+1$ vertices and the edges among them (vertex exposure). We can define a Doob martingale as follows:

$X_i = \mathbb{E}\left[ \chi(G) \mid \mathcal{F}_i \right],$

where $\chi(G)$ is the chromatic number of $G$. The chromatic number changes by at most one when the information about a new vertex (and its incident edges) comes in. Clearly, $X_i$ then satisfies the bounded difference condition for Azuma's inequality.

$X_0, X_1, \ldots, X_{n-1}$ is a martingale, with $X_0 = \mathbb{E}[\chi(G)]$ and $X_{n-1} = \chi(G)$. Clearly $|X_i - X_{i-1}| \le 1$ for each of the $n-1$ steps.

Now we can use Azuma's inequality on $X_i$ to get

$\Pr\left\{ \left| X_{n-1} - X_0 \right| > \lambda \sqrt{n-1} \right\} \le 2 e^{-\lambda^2 / 2}.$

Since $X_{n-1} = \chi(G)$ and $X_0 = \mathbb{E}[\chi(G)]$, the result

$\Pr\left\{ \left| \chi(G) - \mathbb{E}[\chi(G)] \right| > \lambda \sqrt{n-1} \right\} \le 2 e^{-\lambda^2 / 2}$

follows.
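The concentration is visible even at very small $n$. Below is a little experiment of mine (exact chromatic number by backtracking; the parameters are arbitrary), sampling repeatedly from $\mathcal{G}(10, 1/2)$:

```python
import random

def chromatic_number(n, edges):
    """Exact chromatic number via backtracking (fine for tiny graphs)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def colorable(k, color, v):
        if v == n:
            return True
        used = {color[u] for u in adj[v] if color[u] >= 0}
        for c in range(k):
            if c not in used:
                color[v] = c
                if colorable(k, color, v + 1):
                    return True
        color[v] = -1
        return False

    k = 1
    while not colorable(k, [-1] * n, 0):
        k += 1
    return k

rng = random.Random(0)
n, p = 10, 0.5
vals = [
    chromatic_number(n, [(u, v) for u in range(n) for v in range(u + 1, n)
                         if rng.random() < p])
    for _ in range(50)
]
print(min(vals), max(vals))  # the 50 samples huddle in a narrow band
```

Even with only ten vertices, the sampled chromatic numbers land within a couple of units of each other, as the Azuma bound predicts.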

Here is an interesting riddle on random matrices.

(Rank of Random Binary Matrix). Let $N(m, n, r)$ denote the number of binary matrices of dimension $m \times n$ and rank $r$, so that by symmetry $N(m, n, r) = N(n, m, r)$. This is a repost of the solution that I arrived at (certainly not the first!) and submitted as part of a homework (9) problem from the doctoral course Modern Coding Theory (by Rudiger Urbanke) at EPFL. The submitted solution in PDF is available here.

The rank of a matrix is essentially the number of nonzero rows when the matrix is expressed in echelon form. So we just need to count the ways these matrices can be created with $r$ independent nonzero rows. Since the elements of the matrix are binary (from $\mathbb{F}_2$), we can simply do a counting.

It is trivial to compute $N(m, n, r)$ for $r = 0$ and for $r > \min(m, n)$. For $r = 0$, only the all-zero matrix is possible, and only one such matrix exists. Hence $N(m, n, 0) = 1$. For $r > \min(m, n)$, since the rank cannot exceed either dimension, no such matrix exists, which means $N(m, n, r) = 0$.

Now we consider the full row rank case $r = m \le n$. How many ways? All $m$ rows must be linearly independent, so in particular nonzero. Without loss of generality, for counting, we can fill in the rows one at a time. The first row can be chosen in $2^n - 1$ ways, since anything other than the all-zero vector (of size $n$) is allowed. The second row can be anything except the $2$ vectors in the span of row 1: $2^n - 2$ ways. The third row can be anything except a linear combination of rows 1 and 2, which leaves $2^n - 2^2$ choices; the fourth row then has $2^n - 2^3$, and so on. Following this procedure, we have a total of

$N(m, n, m) = \left( 2^n - 1 \right) \left( 2^n - 2 \right) \cdots \left( 2^n - 2^{m-1} \right) = \prod_{i=0}^{m-1} \left( 2^n - 2^i \right)$

ways. More generally, we can construct a rank $r$ matrix of size $(m+1) \times n$ in either of the following ways:

- Take a rank $r-1$ matrix of size $m \times n$ and add an independent row.
- Take a rank $r$ matrix of size $m \times n$ and add a dependent row.

For the first method: given any rank $r-1$ matrix of size $m \times n$, the new row must lie outside the row space, which contains $2^{r-1}$ vectors, and hence there are

$N(m, n, r-1) \left( 2^n - 2^{r-1} \right) \quad (1)$

ways (essentially avoiding all possible linear combinations of the existing rows). Using the second method (item 2 above), the new row must be one of the $2^r$ vectors in the row space, and we can have

$N(m, n, r) \left( 1 + \binom{r}{1} + \binom{r}{2} + \cdots + \binom{r}{r} \right) = N(m, n, r) \, 2^r \quad (2)$

different ways a rank $r$ matrix can be formed. Here the first term ($1$) is when the all-zero row is picked as the new row; in $\binom{r}{1} = r$ ways we can pick any one of the $r$ independent existing rows as the (dependent) new row; and in general we can pick a combination of $j$ existing rows out of $r$ in $\binom{r}{j}$ different ways to make a dependent new row.

So, using (1) and (2) and putting everything together, we get the recursion

$N(m+1, n, r) = N(m, n, r-1) \left( 2^n - 2^{r-1} \right) + N(m, n, r) \, 2^r,$

with the base cases $N(m, n, 0) = 1$ and $N(m, n, r) = 0$ for $r > \min(m, n)$.
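The recursion is easy to check mechanically against brute force for tiny matrices; the script below is my sanity check, not part of the original homework solution (the $\mathbb{F}_2$ rank routine is standard bit-packed elimination):

```python
from itertools import product

def gf2_rank(rows):
    """Rank over F_2; each row is a bit-packed integer."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)
        if row:
            basis.append(row)
    return len(basis)

def N_rec(m, n, r):
    """Number of m x n binary matrices of rank r, via the recursion above."""
    if r < 0 or r > min(m, n):
        return 0
    if m == 0:
        return 1 if r == 0 else 0
    return (N_rec(m - 1, n, r - 1) * (2 ** n - 2 ** (r - 1))
            + N_rec(m - 1, n, r) * 2 ** r)

def N_brute(m, n, r):
    """Count m x n binary matrices of rank r by full enumeration."""
    return sum(gf2_rank(list(rows)) == r
               for rows in product(range(2 ** n), repeat=m))

for m in range(1, 4):
    for n in range(1, 4):
        for r in range(4):
            assert N_rec(m, n, r) == N_brute(m, n, r)

print(N_rec(3, 3, 3))  # 168: the number of invertible 3 x 3 binary matrices
```

As a cross-check, the rank counts for a given shape must sum to the total number of matrices, e.g. $\sum_r N(3, 3, r) = 2^9 = 512$.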
