
Through Anand Sarwate's blog and this piece from Sergio Verdu, I came to know that the well-known information and coding theorist Jim Massey has passed away. I don't have any direct personal recollection of Massey, other than seeing him once at an information theory workshop and again last year at the Marconi award ceremony in Irvine. The one thing I always remember (besides the Berlekamp-Massey algorithm and the transform decoding paper) is his lecture notes at ETH. I benefited enormously from his notes on cryptography when I was trying to learn the topic at EPFL. So lucid, crisp and intuitive they were. How I always wished to sit in one of his live lectures! RIP!

I am sure detailed writing on his life and work will appear at some point. I recall Rudi Urbanke once mentioning the early impact of Massey's work on threshold decoding of convolutional codes (done as a graduate student?), which spurred interest in industry. Codex Corporation (of which, I learned recently, he was a co-founder) once wanted to implement it in their line of modems. I am not sure I have all the details intact here, but prior to the Viterbi algorithm, his threshold decoding scheme must have been a big hit in communication! To have industry interested in a graduate student's work must be special, any day, anywhere!

In his blog, Sergio Verdu has pointed to the IEEE oral history interview archive, which I happened to read around this same time last year.

The Information Theory Society website has further details, including the funeral information.

If you have not seen it yet, a fascinating talk (Cryptography: Science or Magic?) he gave at MIT several years ago is archived here. And who did the speaker introduction? Another true connoisseur, Peter Elias! It was the first time I had seen a video of Elias.

This year's Marconi Foundation prize is being awarded to our company founder Henry Samueli. With last year's prize awarded to the other connoisseur, Irwin Jacobs (jointly with another stalwart, Jack Wolf), we now have two stellar communication-company founders receiving the prestigious award in consecutive years! I feel proud to be part of the company he founded. Broadcom simply has a lot of vibrancy, and part of this must surely be due to its champion founder. You can see the energy when Henry Samueli talks. I could feel a similar charm when Aart de Geus (founder and CEO of my earlier employer, Synopsys) talks too. Congratulations, Dr. Samueli, we are proud of you.

The first mail this morning (from Nihar Jindal) brought the very sad news that Tom Cover has passed away. A giant in this field, who contributed immensely to many flavours of information theory, will be missed. Everything he touched had class written all over it: gracefulness, simplicity, elegance and, above all, depth.

A tremendous loss! His legacy will continue.

I am very thrilled to learn that Ruediger Urbanke has won the (Koji) Kobayashi award: he and Tom Richardson are named the 2010 recipients of the famous Kobayashi award, "for developing the theory and practice of transmitting data reliably at rates approaching channel capacity." They truly deserve this. Looking at the list of earlier Kobayashi award winners, it really is a pantheon of greats: Gottfried Ungerboeck, Don Coppersmith, Rivest, Shamir, Adleman, Jack Wolf, Berlekamp and so on are among the famous awardees of the past.

When I pointed this out to Rudi, he was, as usual, ever modest about it. I am sure I will get to have a coffee treat from him in Lausanne! Place de la Palud or Ouchy?

An interesting and twisted version of the twenty questions game came my way recently. I saw it in the 2009 ISIT paper on error correction under feedback with list decoding, by Ofer Shayevitz. The problem goes like this (I am going by Ofer's nice narration, with the famous Alice and Bob duo).

The game is played by Alice and Bob. Alice selects one of M possible objects, and Bob can ask n binary questions (to which the answer can only be yes or no), in a quest to identify the selected object. If Alice is always truthful, then n=\lceil \log M \rceil questions are obviously both necessary and sufficient. Now, suppose Alice is allowed to lie up to t times. How many questions n(M, t) does Bob need to get the correct object? Alternatively, given n questions, what is the maximal number of objects M(n, t) that Bob can separate? This version of the "twenty questions game" with lies is known as Ulam's game.
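As a quick illustration (my own aside, not from Ofer's paper), the classical sphere-packing (volume) bound for Ulam's game says that any strategy that always succeeds must satisfy M \sum_{i=0}^{t}\binom{n}{i} \le 2^{n}. Here is a minimal sketch evaluating the resulting upper bound on M(n,t):

```python
from math import comb

def ulam_sphere_packing_bound(n, t):
    """Volume (sphere-packing) upper bound on M(n, t): the number of objects Bob
    can always identify with n yes/no questions when Alice may lie at most t times."""
    return (2 ** n) // sum(comb(n, i) for i in range(t + 1))

print(ulam_sphere_packing_bound(20, 0))  # 1048576 = 2^20, i.e. the lie-free case
print(ulam_sphere_packing_bound(20, 1))  # 49932, roughly 2^20 / 21
```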

Glanced upon a paper from the 2009 ISIT titled "LT Codes Decoding: Design and Analysis" by Feng Lu et al. The authors discuss an LT decoding strategy: the peeling decoder (conventional simple LT decoding), followed by an extension based on the well-known Wiedemann algorithm for solving a sparse homogeneous linear system. Here is a brief summary, as I understood it.

On packet erasure channels (the Internet being one close realization of this kind of model, where, with the TCP/IP suite, packets are either delivered correctly or lost in the wild), Luby Transform (LT) codes form an easy way to transfer information. It is well known that, asymptotically (when the number of information symbols (or packets) k and the number of received packets r are large), near-full recovery is made possible by the simple peeling decoder, whose complexity grows as \mathcal{O}(k). The claim is that, for k > 10000 or so, with r= k(1+\delta), on average we can, with high probability, recover all of the k information packets by simple unwrapping of XOR-ed packets. A more precise statement reads: if each coded packet is constructed independently from the k information packets by an XOR operation (the number of packets taken into the XOR follows a distribution; the plain vanilla version of LT codes uses the Soliton distribution), then the content of the original information source can be recovered from any k+\mathcal{O} \left(\sqrt{k} \ln^{2} \left(\frac{k}{\epsilon}\right)\right) packets with probability 1-\epsilon, using an average of \mathcal{O} \left(k \ln \frac{k}{\epsilon}\right) symbol operations. However, when the number of input symbols is not large (say k < 1000), the overhead term \delta is not small. This is not too surprising, since we need sufficiently large k and r for the performance to converge to its expectation. One can think of solving the system of linear equations using Gaussian elimination and get the best figures, but the question mark stays on the \mathcal{O}(k^{3}) complexity of this method compared to the cheaper \mathcal{O}(k) peeling decoder.
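To make the peeling step concrete, here is a minimal sketch of a peeling decoder (my own illustration with made-up names, not code from the paper): each received packet is modelled as a set of source indices together with the XOR of the corresponding source packets, and we repeatedly release degree-1 packets and subtract them from the rest.

```python
def peel(coded):
    """Minimal LT peeling-decoder sketch.

    `coded` is a list of [neighbors, value] pairs, where `neighbors` is a set of
    source-packet indices and `value` is the XOR of those source packets (modelled
    here as Python ints). Returns the recovered packets as {index: value}.
    """
    recovered = {}
    progress = True
    while progress:
        progress = False
        # release every degree-1 packet currently in the ripple
        for neighbors, value in coded:
            if len(neighbors) == 1:
                (idx,) = neighbors
                if idx not in recovered:
                    recovered[idx] = value
                    progress = True
        # XOR the recovered packets out of the remaining higher-degree packets
        for packet in coded:
            for idx in list(packet[0]):
                if idx in recovered and len(packet[0]) > 1:
                    packet[0].discard(idx)
                    packet[1] ^= recovered[idx]
    return recovered

# toy example: 3 source packets with (made-up) values 5, 9, 12
coded = [[{0}, 5], [{0, 1}, 5 ^ 9], [{1, 2}, 9 ^ 12]]
print(peel(coded))   # {0: 5, 1: 9, 2: 12}
```

Decoding stalls (with some packets unrecovered) exactly when no degree-1 packet is left, which is the situation the full-rank extension below addresses.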

Conventional LT decoding based on the peeling rule terminates once there are no degree-1 packets left (the so-called ripple is empty). One can hope to continue from there by using Gaussian elimination and decode a few more packets (if the rank of the graph matrix permits it); if the original graph is full rank, one can technically decode all of them. Gaussian elimination being heavy, one hopes for an easier, preferably recursive, method. Indeed, there comes Wiedemann, and that is what the authors use. They propose what they call full-rank decoding, which is essentially LT peeling until there are no more degree-1 packets; from there, they borrow a packet and then use the Wiedemann algorithm, which is comparatively light in computational complexity.

The larger question they answer is whether this scheme does any better than the conventional LT peeling decoder. To achieve full recovery we need a full-rank matrix. The matrix here is a sparse matrix over \mathbb{F}_{2}^{k \times r}, where r is the number of received packets. The probability of this matrix having full rank directly tells us about the decoding prospects. This is an interesting piece, since I had looked into the computation of the rank of a binary matrix, and also the probability of a random matrix having rank equal to an arbitrary value; all this was done during the Modern Coding Theory doctoral course by Ruediger Urbanke at EPFL. For a binary random matrix, we can also set up a recursive formula, and thereby arrive at a good handle on the probability of decoding.
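As a rough illustration (a dense uniformly random matrix is an idealization; the actual LT graph matrix is sparse and not uniform), the probability that a random k \times r binary matrix with r \ge k has full rank k is \prod_{i=0}^{k-1}\left(1-2^{i-r}\right). A small sketch evaluating it as a function of the overhead r-k:

```python
def prob_full_rank(k, r):
    """Probability that a uniformly random k x r binary matrix (r >= k) has rank k."""
    p = 1.0
    for i in range(k):
        p *= 1.0 - 2.0 ** (i - r)
    return p

for overhead in range(6):
    print(overhead, prob_full_rank(1000, 1000 + overhead))
# at zero overhead the success probability is already ~0.29, and each extra
# received packet roughly halves the failure probability
```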

The memorial service for Ralf Koetter held at UCSD is video archived. Quite a few of the stalwarts in the information theory field found it difficult to control their emotions when they spoke about him; such was the level of closeness many people had with him. I never got to interact with Ralf directly, but was aware of his stellar contributions to many areas in and around coding. The most notable thing, other than his well-known research contributions, is his amazing teaching skill. The two guest lectures he gave during David Forney's MIT class in 2005 were simply stunning. He talked about Reed-Solomon codes, and that is by far the most lucid presentation of such a difficult topic that I have ever seen. His sudden and untimely demise leaves an irreplaceable void on this planet. He was that good. So woefully cut short by cancer.

Alex Vardy penned a fitting tribute to his friend and colleague.

Today's IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, Mathematics department, EPFL) talking about "Diameter of Polyhedra: Limits of Abstraction". I don't think I followed the topic too well, but here is a share of what I understood.

The topic is a convex-geometry question on the diameter of polyhedra. Whether the diameter of a polyhedron is polynomial (in the dimension and the number of facets) is a longstanding open problem. The largest diameter {\Delta_{u}(d,n)} of a {d}-dimensional polyhedron with {n} facets has known upper and lower bounds:

{n-d+\lfloor d/5 \rfloor \le \Delta_{u}(d,n) \le n^{\log d +1}}.

The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra. What Fritz and his colleagues have done is to look into the gap between these known lower and upper bounds. Apparently, the gap is wide, and they have made some progress: a super-linear lower bound of {\Omega\left(n^{3/2}\right)} in the abstract setting, when {d} is allowed to grow with {n}.

The way they showed this bound is by establishing it for the largest diameter of a graph in a base abstraction family. Let the abstraction family of connected graphs be denoted by {\mathcal{B}_{d,n}}, and let the largest diameter of a graph in {\mathcal{B}_{d,n}} be denoted by {D(d,n)}. They show that {D(d,n) =\Omega\left(n^{3/2}\right)}. Since only {\Delta_{u}(d,n) \le D(d,n)} is known, this super-linear growth is a statement about the abstraction: the abstract setting alone cannot certify a diameter bound better than {n^{3/2}}, hence the "limits of abstraction" in the title.

I have not had a chance to see their paper yet. I must say, the proof was not really within my grasp during the talk; it appeared to be based on some layering argument and combinatorics. He mentioned some applications to covering problems, in particular disjoint covering designs, which I didn't follow that well. Sometimes I get the feeling that I am a little dumb to grasp these ideas during a talk. I wonder whether others understand them very well from a first-shot presentation. I have put it on my agenda (among the millions of other papers to read) to see through this problem and proof one day! His presentation was very clear and legible, though.

From this blog piece, I came to know that the smart MIT theoretical computer scientist Madhu Sudan is making a move from MIT to industry. He is set to take up a research position with Microsoft. In these economically troubled days, lesser mortals would take the conservative route that ensures stability: they would say a move from a tenured professorship to more volatile industry is risky. But then, one of the smartest minds in the world can have the world revolve around him, if need be. So no surprises here. On the positive side it is a gain for industry, while it is a big loss for MIT, if Madhu decides to stay away from academia for too long.

Interestingly, on the very same blog, someone commented about other famous moves. Apparently, Venkatesan Guruswami, Madhu's celebrated student, is also making a permanent move from UWash to CMU. In industry, we are often associated with frequent hops; academia is not too immune to attrition either. However, I see no harm in making smart moves. It is going to help the world, at least in expectation.

At EPFL too, there are imminent big-fish attritions. Tom Henzinger and his wife Monika Henzinger are about to leave EPFL to take up permanent positions in Austria. The awesome twosome will be missed at EPFL.

An interesting bound (a problem from Rudi's Modern Coding Theory book) regarding the chromatic number of a random graph. I first attempted it during the course. Here is the problem statement:

C.5 (Concentration of the Chromatic Number - Spencer and Shamir). Consider a graph G on n vertices. The chromatic number of a graph G, denoted by \chi(G), is the smallest number of colors needed to color all vertices so that no two vertices which are joined by an edge have the same color. Consider the standard ensemble of random graphs on n vertices with parameter p: to sample from this ensemble, pick n vertices and connect each of the \binom{n}{2} pairs of vertices, independently of all other connections, with probability p. Show that for this ensemble

\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}\left[\chi(G)\right] \rvert >\lambda \sqrt{n-1}\right) \le 2e^{\frac{-\lambda^2}{2}}.

My solution is as follows (the PDF, typeset as a single file, is available here; look for problem 3):

Let G be a random graph on n vertices, where each edge is present independently with probability p; the probability space is G(n,p). Expose the vertices of G one at a time, together with the edges going to previously exposed vertices. This vertex-exposure process defines a filtration on the set of all such random graphs, and along it we can define a (Doob) martingale as follows:

X_{0}=\mathbb{E}[\chi(G)]

X_{i}=\mathbb{E}[\chi(G)\mid G_{i}],\quad \forall\, 1\le i\le n

where

G_{i} denotes the subgraph of G induced by the first i exposed vertices (together with the edges among them), so that G_{n}=G,

and \chi(G) is the chromatic number of G. The chromatic number changes by at most one when the information about a new vertex (and its edges to the already exposed vertices) comes in: in the worst case the new vertex needs one extra colour. Hence \{X_i\}_{i\ge 0} is a martingale with \lvert X_i-X_{i-1}\rvert \le 1, and it satisfies the conditions of Azuma's inequality. Note also that X_{1}=X_{0}=\mathbb{E}[\chi(G)], since a single vertex carries no edge information, so there are effectively only n-1 informative steps. Let Z_i=X_i-\mathbb{E}[X_i]. Clearly
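For reference, the form of Azuma's inequality being used (for a martingale with bounded differences \lvert X_i-X_{i-1}\rvert \le c_i, i=1,\ldots,m) is

\mathbb{P}\left(\lvert X_{m}-X_{0}\rvert \ge t\right) \le 2\exp\left(\frac{-t^{2}}{2\sum_{i=1}^{m}c_{i}^{2}}\right),

which, with m=n-1 steps, c_i=1 and t=\lambda\sqrt{n-1}, gives exactly the exponent 2e^{-\lambda^{2}/2} we are after.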

\mathbb{E}[Z_i]=\mathbb{E}[X_i]-\mathbb{E}[X_i]=0

Z_n=X_n-\mathbb{E}[X_n]=\mathbb{E}[\chi(G)\mid G_{n}]-\mathbb{E}[\chi(G)]

=\chi(G)-\mathbb{E}[\chi(G)]

Now we can apply Azuma's inequality to \{Z_i\}, over the n-1 informative steps (each with difference at most 1), to get

\mathbb{P}\left(\lvert Z_n-Z_1\rvert \ge \lambda \sqrt{n-1}\right)=\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}[\chi(G)] \rvert \ge \lambda \sqrt{n-1}\right)\le 2e^{\frac{-\lambda^2}{2}}.

Since \mathbb{P}\left(\lvert \chi(G)-\mathbb{E}[\chi(G)] \rvert > \lambda \sqrt{n-1}\right)\le \mathbb{P}\left(\lvert \chi(G)-\mathbb{E}[\chi(G)] \rvert \ge \lambda \sqrt{n-1}\right), the result

\mathbb{P}\left(\lvert \chi(G)-\mathbb{E}\left[\chi(G)\right] \rvert >\lambda \sqrt{n-1}\right) \le 2e^{\frac{-\lambda^2}{2}} 

follows.

Here is an interesting riddle on random matrices.  

(Rank of a Random Binary Matrix). Let R(l,m,k) denote the number of binary matrices of dimension l \times m and rank k, so that by symmetry R(l,m,k)=R(m,l,k). This is a repost of the solution that I arrived at (certainly not the first!) and submitted as part of a homework (9) problem in the doctoral course Modern Coding Theory (by Rudiger Urbanke) at EPFL. The submitted solution in PDF is available here.

The rank of a matrix G is essentially the number of nonzero rows when G is expressed in echelon form. So we just need to count the ways such matrices can be created with k independent rows. Since the elements of the matrix are binary (from \mathbb{F}_{q=2}), we can do this by direct counting.

It is trivial to compute R(l,m,k) for k=0 and for k>l. For k=0, only the all-zero matrix is possible, and only one such matrix exists; hence R(l,m,0)=1. For k>l>0, since the rank can never exceed \min(l,m)\le l, no such matrix exists, which means R(l,m,k)=0.

Now consider l=k>0. How many ways? We have l=k nonzero rows in the l\times m matrix, and all of them must be linearly independent. Without loss of generality, for counting, we can fill in the rows one at a time. The first row can be chosen in 2^{m}-1 ways, since anything other than the all-zero vector (of length m) is allowed. The second row can be anything except a linear combination of the first row; there are 2^{m}-2 ways here. The third row can be anything except a linear combination of the first two rows, which is nothing but 2^m-\left({\binom{2}{0}+\binom{2}{1}+\binom{2}{2}}\right)=2^m-2^2 choices. The fourth row then has 2^m-\left(\binom{3}{0}+\binom{3}{1}+\binom{3}{2}+\binom{3}{3}\right)=2^m-2^3 choices, and so on. Following the same procedure, we get a total of

= \left(2^m-1\right) \left(2^m-2^1\right)\left(2^m-2^2\right)\ldots \left(2^m-2^{l-1}\right)

=\left(2^m-1\right) 2^{1} \left(2^{m-1}-1\right) 2^{2} \left(2^{m-2}-1\right) \ldots 2^{l-1}\left(2^{m-l+1}-1\right)

=2^{0} 2^{1} 2^{2} \ldots 2^{l-1}\left(2^m-1\right)\left(2^{m-1}-1\right)\left(2^{m-2}-1\right)\ldots\left(2^{m-l+1}-1\right)

=\prod_{i=0}^{l-1}{{2^i}\left(2^{m-i}-1\right)}

=\prod_{i=0}^{l-1}{\left(2^{m}-2^{i}\right)}

=\prod_{i=0}^{l-1}{2^m \left(1-2^{i-m}\right)}

=2^{ml} \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)}

 ways.  For l>k>0, we can construct a rank k matrix of size l \times m in any of the following ways:

  1.  Take a rank k-1 matrix of size (l-1) \times m and add an independent row.
  2.  Take a rank k matrix of size (l-1) \times m and add a dependent row.

For every rank k-1 matrix of size (l-1) \times m, the number of ways to append an independent row is

2^{m}-\left(1+\binom{k-1}{1}+\binom{k-1}{2}+\ldots +\binom{k-1}{k-1}\right)=\left(2^m-2^{k-1}\right)

and hence,

R(l-1,m,k-1) \left(2^m-2^{k-1}\right)= R_{1}(l,m,k)

ways (essentially, avoid all possible linear combinations of the existing k-1 rows). Using the second method (item 2 above), the new dependent row can be chosen in 1+\binom{k}{1}+\binom{k}{2}+\ldots +\binom{k}{k} = 2^k ways, and

R_{2}(l,m,k)= 2^k R(l-1,m,k) 

different ways a rank k matrix can be formed. Here, the first term (=1) corresponds to picking the all-zero row as the new row. In \binom{k}{1} ways we can pick any one of the existing rows as the dependent (new) row. In general, for 0\le j\le k, we can pick a combination of j of the k existing rows in \binom{k}{j} different ways to form a dependent (new) row.

So using (1) and (2) we get,

R(l,m,k)=2^k R(l-1,m,k)+\left(2^m-2^{k-1}\right)R(l-1,m,k-1)

Putting everything together,

R(l,m,k) = \begin{cases} 1, & k=0, \\2^{ml} \displaystyle \prod_{i=0}^{l-1}{ \left(1-2^{i-m}\right)} , & l=k>0 \\ 2^k R(l-1,m,k) + \left(2^m-2^{k-1}\right) R(l-1,m,k-1) &l>k>0 \\ 0 & k>l>0 \end{cases}
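As a quick sanity check (my own, not part of the homework), here is a small sketch that evaluates the recursion above and compares it against a brute-force count over all binary matrices for a few small sizes:

```python
from functools import lru_cache
from itertools import product

def count_full_rank(l, m):
    """(2^m - 1)(2^m - 2)...(2^m - 2^{l-1}): number of l x m matrices of rank l."""
    out = 1
    for i in range(l):
        out *= 2 ** m - 2 ** i
    return out

@lru_cache(maxsize=None)
def R(l, m, k):
    """Number of l x m binary matrices of rank k, via the recursion derived above."""
    if k == 0:
        return 1
    if k > l or k > m:
        return 0
    if k == l:
        return count_full_rank(l, m)
    return 2 ** k * R(l - 1, m, k) + (2 ** m - 2 ** (k - 1)) * R(l - 1, m, k - 1)

def rank_gf2(rows, m):
    """Rank over F_2 of a matrix whose rows are given as m-bit integers."""
    pivots = {}                              # pivot bit position -> basis row
    for r in rows:
        for bit in reversed(range(m)):
            if not (r >> bit) & 1:
                continue
            if bit in pivots:
                r ^= pivots[bit]
            else:
                pivots[bit] = r
                break
    return len(pivots)

# brute-force check on small sizes
for l, m, k in [(2, 2, 1), (3, 3, 2), (3, 4, 3)]:
    brute = sum(rank_gf2(rows, m) == k for rows in product(range(2 ** m), repeat=l))
    assert R(l, m, k) == brute, (l, m, k)
print(R(2, 2, 1))   # 9: of the 16 binary 2x2 matrices, one has rank 0 and six have rank 2
```

Dividing R(l,m,k) by 2^{lm} then gives the probability that a uniformly random l \times m binary matrix has rank exactly k, which connects back to the LT decoding discussion above.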

While trying to understand Luby transform (LT) codes, I stumbled upon the well-known coupon collector's problem. It took a while for me to figure out the connection, but as it turned out, there is a strong connection between the two. In LT parlance, if we were to use only degree-one packets (that is, packets sent and collected as they are), what is the expected number of packets to be collected (when collected randomly, one at a time) so that all the required packets are collected at least once? For illustration, let us say we have n information packets at the transmitter. The receiver collects these one by one at random. How many packets, on average, do we need to collect until we have collected all of the n different information packets? Remember, we are collecting the packets randomly (on the other hand, if we were to collect them deterministically, we would need to collect just n packets to get all n, when done without replacement).

Assume that there are n distinct coupon types, and that we have a large pool of these coupons at our disposal. Every time you go to the shop you collect a coupon picked uniformly at random from the pool; the picked coupon has equal probability of being any of the n types. Naturally, some of the collected coupons (over multiple visits to the shop) may be of the same type. The question asked is this: suppose the coupon collector aims to have coupons of all types. How many coupons (that is, how many visits) does he have to collect until he possesses all n distinct types?

In expectation, the coupon collector should make n \log(n) + O(n) visits to the shop in order to have at least one copy of each of the n distinct coupon types. The coupon collector problem can sound a little confusing to a fresh reader. For simplicity's sake we can assume that there are n differently coloured coupons at the shop. The question then is: on average (i.e., in expectation), how many times does one need to visit the shop (each visit fetching one coupon) so that every colour is fetched at least once?

There are n different types of coupons. The coupon collector collects a coupon upon each visit. The collected coupon is among the n types, picked uniformly at random (from a possibly large pool of coupons). Since the coupon is drawn uniformly at random, there is a nonzero probability that some of the coupons collected over multiple visits are of the same type. Suppose that at some stage the coupon collector has r different types of coupons. The probability that his next visit fetches a new coupon type (not among the r types he already has in the kitty) is p_r=\frac{n-r}{n}. So, the expected number of coupons to be collected to fetch a new coupon type is \frac{n}{n-r}. Let us denote this number by E\left[N_r\right].

The expected value is E\left[N_i\right]=\frac{1}{p_i}=\frac{n}{n-i}. Summing over i, we can compute the expected value of N. In other words, E[N], the expected number of coupons to be collected (i.e., the number of visits to the shop!) so that he would have all n different types of coupons, is:

E[N]=\displaystyle \sum_{i=0}^{n-1} {\frac{n}{n-i}}=n\sum_{i=1}^{n}{\frac{1}{i}}=nH(n)=n\log(n)+O(n)
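A quick simulation sketch (my own, just to see the n H(n) behaviour numerically; the constants here are arbitrary):

```python
import random

def visits_to_collect_all(n):
    """One run of the coupon collector: uniform draws until all n types are seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

n, trials = 100, 2000
avg = sum(visits_to_collect_all(n) for _ in range(trials)) / trials
harmonic = sum(1.0 / i for i in range(1, n + 1))
print(avg, n * harmonic)   # both should come out around 519 for n = 100
```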

So, what is the trouble? This number n\log(n) is prohibitively high to be acceptable (a decoding time of n\log(n) is significantly higher than the wishful linear time n!). So, simply using degree 1 is not a good idea. This is why Luby went ahead and identified smarter degree distributions, such as the Soliton distribution (and the variants proposed later on, such as the robust Soliton, and then the Raptor codes by Amin Shokrollahi).
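For completeness, the (ideal) Soliton distribution mentioned above puts \rho(1)=1/k and \rho(d)=\frac{1}{d(d-1)} for d=2,\ldots,k. Here is a small sketch of sampling packet degrees from it (the robust Soliton adds a correction term that I omit here):

```python
import random

def ideal_soliton(k):
    """Ideal Soliton pmf on degrees 1..k: rho(1)=1/k, rho(d)=1/(d(d-1)) for d>=2."""
    pmf = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    return pmf            # pmf[d] is the probability of choosing degree d

k = 1000
pmf = ideal_soliton(k)
print(sum(pmf))                                            # sums to 1 (up to rounding)
degrees = random.choices(range(k + 1), weights=pmf, k=10)  # ten sample packet degrees
print(degrees)
```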

Today, as part of the EPFL annual research day, there were three interesting talks. In the morning Prakash Narayan gave a very interesting talk titled "Common randomness, multiuser secrecy and tree packing". Essentially it covered three distinct problems and he showed a connection among them. The first problem setup is the following: a set of terminals observe separate but correlated signals. The classical Slepian-Wolf formulation of data compression is then essentially the problem where a subset of the given terminals seeks to acquire the signals observed by all the terminals, and this is done by means of efficiently compressed inter-terminal communication. This is a problem of generating common randomness. It of course does not involve any secrecy constraints. Now consider a secret key generation problem: there, the same subset of terminals seeks to devise "secret" common randomness, a secret key, through public communication. Assume that an eavesdropper can observe this communication, so the setup is such that the key must be concealed from the eavesdropper. Such a secret key can be used for subsequent encryption. Prakash's talk then explained the connection between the two problems. He went on to establish the connection to a problem in computer science, namely the maximal packing of Steiner trees in an associated multigraph. I don't think I figured out the details that well, but it triggered some curiosity to read the work in a little more detail. I hope to do that sometime soon.

The afternoon session had two talks. One was by Shamai, who talked about the broadcast approach in communication systems. It went over time. I thought I focused well in the beginning to follow him, but partly because of the post-lunch effect and partly because of tiredness, I lost the flow. From what I understood, he outlined a lot of communication scenarios incorporating the broadcast strategy. Some examples were the MIMO rate-diversity tradeoff, ARQ, multilayer schemes, and so on. A lot of work seems to have gone in this direction, especially by Suhas and Sanket et al. (going by the citations), David Tse, L. Zheng, Al-Dhahir, and Shamai himself. I am somewhat amazed by the areas Shamai has worked on. He seems to have covered a broad spectrum of research and yet produced some stellar work.

After Shamai, it was an interesting talk by Amos Lapidoth. He presented handsomely, and I was attentive enough to follow it. Also, it happened to be a talk of a different kind. He talked about the well-known matched filter used in communication. He started with a little story: the story of a man from a village, venturing out of that place with a mission to find the meaning of life. He goes to the mountains with a resolve not to come back until he finds the meaning of life. Days pass, months pass, years pass; even after 10 years there is no sign of him. Finally he comes back after 11 years or so. The whole village is curious: aha, he has come back. They ask him: wow, you must have figured out the meaning of life, please share it with us, what is it? He says, with a pause: life is (he pauses again)... The villagers, out of patience, ask him: "please go on, life is...". The man completes it and says, "life is like a train!" Then they ask what he means by "life is like a train". Then, to the surprise of the entire village, he says, "maybe not!"

That was simply an amazing prelude to the talk. The talk abstract is the following:
One of the key results of Digital Communications can be paraphrased very roughly as follows: “in guessing which of two deterministic signals is being observed in white Gaussian noise, the inner products between the observed waveform and each of the signals form a sufficient statistic. Consequently, it is optimal to base one’s decision on these two inner products.” It is surprising that this basic result is never formulated as a theorem in any of the textbooks on the subject. This may be because of the difficulties in defining white Gaussian noise, in defining sufficient statistics for waveform observations, and in relating sufficiency to optimal detection. In this talk I shall describe a number of approaches to formulating the above statement as a theorem and point out some of their shortcomings. I will finally describe my proposed approach, formulate the theorem, and prove it from first principles. The proposed approach does not rely on the Ito Calculus, on Brownian Motion, or on generalized stochastic processes. It does not introduce non-physical infinite-power noise processes. Moreover, it is suitable for rigorously treating colored noise.

He gave a counter-example where one can do better than the matched filter. He takes Gaussian noise, but picks a point at random where the noise is forced to zero. Since the null point is randomly chosen, he claims the process is still Gaussian. To me, that would make the SNR blow up to infinity at that point; so, are we missing something? I can't wait to read the full paper version of this. Otherwise, it seems to be a very, very interesting way to look at the matched filter, without needing the heavy mathematical machinery.

Anyway, all these talks are available (the schedule, at the moment) at [1].
[1]http://ic.epfl.ch/page65253-fr.html
