
Today's IPG seminar had Fritz Eisenbrand (the Discrete Optimization chair, Mathematics department, EPFL) talking about "Diameter of Polyhedra: Limits of Abstraction". I don't think I followed the topic too well, but this is a share of what I understood.

The topic is a convex-geometry problem on the diameter of polyhedra. Whether the diameter of a polyhedron is polynomial (in the number of facets) or not is a longstanding open problem. The largest diameter of a d-dimensional polyhedron with n facets has known upper and lower bounds.


The lower bound is due to Klee and Walkup, and the upper bound to Kalai and Kleitman. These bounds also hold for combinatorial abstractions of the 1-skeleton of non-degenerate polyhedra (the polyhedra considered here are non-degenerate). What Fritz and his colleagues have done is look into the gap between these known lower and upper bounds. Apparently the gap is wide, and they have made some progress toward a super-linear lower bound when the dimension d is allowed to grow with the number of facets n.
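For reference, the classical bounds on the largest diameter of a d-dimensional polyhedron with n facets, as I recall them from the literature (not reconstructed from the talk slides), are:

```latex
% \Delta(d,n): largest diameter of a d-dimensional polyhedron with n facets
\Delta(d,n) \;\ge\; n - d + \left\lfloor \tfrac{d}{5} \right\rfloor
  \quad \text{(Klee--Walkup)}
\qquad
\Delta(d,n) \;\le\; n^{\log_2 d + 2}
  \quad \text{(Kalai--Kleitman)}
```

Note the gap: the lower bound is linear in n, while the upper bound is only quasi-polynomial, which is exactly the gap the talk addressed.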

The way they showed this bound is by establishing it for the largest diameter of a graph in a base abstraction family of connected graphs. They prove a super-linear lower bound on this largest diameter, and then, using the fact that the abstraction family covers the 1-skeletons of non-degenerate polyhedra, they conclude the bound.

I have not had a chance to see their paper yet. I must say, the proof was not all that within my grasp during the talk. However, it appeared to be based on some layering and combinatorics. He mentioned some applications to covering problems, in particular disjoint covering designs, which I didn't follow that well. Sometimes I get the feeling that I am a little dumb to grasp these ideas during a talk. I wonder whether others understand it very well on a first-shot presentation. I have put it in my agenda (among the millions of other papers to read) to see through this problem and proof one day! His presentation was very clear and lucid, though.

While trying to understand the Luby transform (LT) code, I stumbled upon the well-known coupon collector's problem. It took a while for me to figure out the connection, but as it turned out, there is a strong connection between the two. In LT parlance: if we were to use only degree-one packets (that is, packets sent and collected as they are), what is the expected number of packets to be collected (when collected randomly, one at a time) such that all the required packets are collected at least once? For illustration, let us say we have n information packets at the transmitter. The receiver collects these one by one at random. How many packets, on average, do we need to collect until we have all of the n different information packets? Remember we are collecting the packets randomly (on the other hand, if we were to collect them deterministically without replacement, we would need to collect exactly n packets to get all n).

Assume that there are n distinct coupon types. We have a large pool of these coupons at our disposal. Every time you go to the shop you collect a coupon picked uniformly at random from the pool. The picked coupon has equal probability 1/n of being any of the n types. Naturally, some of the collected coupons (over multiple visits to the shop) may be of the same type. The question asked is this: suppose the coupon collector aims to have coupons of all types. How many coupons (i.e., visits) does he have to collect until he possesses all n distinct types?

In expectation, the coupon collector should make on the order of n log n visits to the shop in order to have at least one copy of all n distinct types of coupons. This coupon collector problem can sound a little confusing to a fresh reader. For simplicity's sake, we can assume that there are n differently coloured coupons at the shop. The question then is: on average (i.e., in expectation), how many times does one need to visit the shop (each visit fetching one coupon) so that all n coloured coupons are fetched at least once?

There are n different types of coupons. The coupon collector collects a coupon upon each visit. The collected coupon is among the n types, picked uniformly at random (from a possibly large pool of coupons). Since the coupon is drawn uniformly at random, there is a non-zero probability that some of the coupons collected over multiple visits may be of the same type. Suppose that at some stage the coupon collector has i different types of coupons. The probability that his next visit fetches a new coupon type (not among the i types he already has in the kitty) is (n-i)/n. So the expected number of coupons to be collected to fetch a new coupon type is n/(n-i). Let us denote this number by E_i.

The expected value E_i = n/(n-i). From this we can compute the expected value of the total number of collections T = E_0 + E_1 + ... + E_{n-1}. In other words E[T], the expected number of coupons to be collected (i.e., number of visits to the shop!) so that he has all the n different types of coupons, is:

E[T] = sum_{i=0}^{n-1} n/(n-i) = n (1 + 1/2 + ... + 1/n) = n H_n = n ln n + O(n).
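The derivation above is easy to sanity-check numerically. Here is a minimal sketch (function names are mine, not from any LT-code library) that computes the exact expectation n·H_n via the sum of the E_i, and compares it against a Monte Carlo simulation of the collection process:

```python
import random

def expected_coupons(n):
    # E[T] = sum_{i=0}^{n-1} n/(n-i) = n * H_n (exact expectation)
    return sum(n / (n - i) for i in range(n))

def simulate(n, trials=2000, seed=1):
    # Average number of uniform draws until all n types are seen at least once
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen = set()
        draws = 0
        while len(seen) < n:
            seen.add(rng.randrange(n))  # one visit to the shop
            draws += 1
        total += draws
    return total / trials

n = 20
print(round(expected_coupons(n), 2))  # exact: 20 * H_20 ≈ 71.95
print(round(simulate(n), 2))          # simulated average, close to the exact value
```

For n = 20 the exact expectation is about 72 draws, well above the 20 draws that deterministic (without-replacement) collection would need.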

So, what is the trouble? This n ln n is prohibitively high a number to be acceptable (collecting O(n log n) packets is significantly more than the wishful linear O(n)!). So, simply using degree-one packets is not a good idea. This is why Luby went ahead and identified smarter degree distributions like the Soliton distribution (and the variants proposed later on, such as the robust Soliton, and then the recent Raptor codes by Amin Shokrollahi).
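To get a feel for how bad the degree-one-only overhead is, one can use the approximation H_n ≈ ln n + γ (Euler–Mascheroni constant) and tabulate the expected number of draws against the ideal n (a rough back-of-the-envelope sketch; the function name is mine):

```python
import math

EULER_GAMMA = 0.5772156649

def expected_draws(n):
    # n * H_n, using H_n ≈ ln n + gamma for large n
    return n * (math.log(n) + EULER_GAMMA)

for n in (10**3, 10**4, 10**6):
    overhead = expected_draws(n) / n  # multiplicative overhead vs. ideal n draws
    print(n, round(expected_draws(n)), "overhead ≈ %.1fx" % overhead)
```

At n = 10^6 information packets the receiver would need roughly 14x as many collections as the ideal n, which is exactly why a nontrivial degree distribution is needed.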
