Not really blogging

A. Makelov

Arbitrarily biasing a coin in 2 expected tosses

Here’s a neat probability trick that I learned from Konstantin Matveev and which, I think, everybody mildly interested in math should know about:

Problem. Given a fair coin, how do you (efficiently) generate an event $E$ with probability 1/5?

Solution. We can, of course, toss the coin three times, giving us a total of 8 equally likely outcomes, discard our least favorite 3 of them, and declare $E$ to be one of the remaining 5. Each round of three tosses succeeds with probability $5/8$, so this algorithm requires $3\times 8/5=24/5$ tosses in expectation. But what if instead of $1/5$ we want $1/1000000$? You can easily see that the expected number of tosses to emulate a probability of $1/n$ grows logarithmically with $n$. Even worse, what if we want $1/\pi$? Well, here's a trick that gets rid of both of these problems: let

$\frac{1}{5} = \displaystyle\sum_{i=1}^\infty \frac{a_i}{2^i}$

for $a_i\in \{0,1\}$ be the binary expansion of $1/5$. Then, start tossing the coin until it first lands heads, at some random time $I$. If $a_I=1$, declare that $E$ has occurred; otherwise, $E$ has not occurred. Then clearly

$\mathbb{P}[E]=\displaystyle\sum_{i=1}^\infty \mathbb{P}[E\ \big| \ I=i]\mathbb{P}[I=i]=\displaystyle\sum_{i=1}^\infty \frac{a_i}{2^i}=\frac{1}{5}$

Furthermore, notice that $\mathbb{E}[I]=2$ (since $I$ is geometric with parameter $1/2$), regardless of the probability we want to emulate! Well, that seems pretty efficient. When you think about it some more, it really appears to be mind-boggling – you can emulate extremely small, or irrational, probabilities with just two expected tosses. Moreover, you don't need to have the binary expansion of the probability in advance – you can compute the next digit on the fly, as the experiment unfolds.
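Here is a minimal Python sketch of the trick (function names are mine, not the post's): each loop iteration computes the next binary digit of $p$ lazily by doubling, flips a fair coin, and stops at the first heads.

```python
import random

def biased_event(p, rng=random.random):
    """Simulate an event of probability p (0 < p < 1) with fair coin
    flips: flip until the first heads, at toss I, and report the I-th
    binary digit of p. The expected number of flips is 2 for any p."""
    while True:
        # next binary digit of p, computed on the fly by doubling
        p *= 2
        digit = p >= 1
        if digit:
            p -= 1
        if rng() < 0.5:  # fair coin lands heads on this toss
            return digit
```

A quick sanity check: over many trials, `biased_event(0.2)` should come up true about a fifth of the time, even though $1/5$ has an infinite binary expansion $0.00110011\ldots_2$.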

Combining this with a standard unbiasing technique, say von Neumann unbiasing, this gives you a very simple procedure that, given a biased coin that lands heads with probability $p$ for some $0 &lt; p &lt; 1$, allows you to simulate a biased coin that lands heads with probability $q$ for any other $0 &lt; q &lt; 1$. Any binary source of randomness is convertible to any other such source.
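Von Neumann's unbiasing step is short enough to sketch too (again, names are mine): flip the biased coin twice; heads-tails and tails-heads each occur with probability $p(1-p)$, so they can stand in for a fair 1 and 0, while the two equal outcomes are discarded.

```python
import random

def von_neumann_fair(biased_flip):
    """Extract one fair bit from a coin of unknown bias 0 < p < 1.
    Flip twice: HT -> 1, TH -> 0 (equally likely); discard HH and TT.
    Expected number of biased flips per fair bit: 1/(p(1-p))."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a
```

Feeding the fair bits this produces into the biasing trick above completes the conversion from a $p$-coin to a $q$-coin.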

But we haven’t said anything about the efficiency of unbiasing. There, we can’t do as well as in biasing: there is a fundamental obstacle, the information-theoretic limit. Roughly speaking, a toss of a biased coin carries strictly less information than a toss of a fair coin – this is why biasing is easier than unbiasing. Fortunately, there is a procedure that extracts an unbiased stream of bits and on average achieves the best performance theoretically possible: see this paper by Mitzenmacher to learn more.
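To put numbers on that limit: a $p$-biased toss carries $H(p)=-p\log_2 p-(1-p)\log_2(1-p)$ bits of entropy, so no extractor can average fewer than $1/H(p)$ biased tosses per fair bit, while plain von Neumann unbiasing uses about $1/(p(1-p))$. A small sketch comparing the two:

```python
from math import log2

def binary_entropy(p):
    """H(p): fair bits of information carried by one p-biased toss."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Von Neumann extraction costs ~1/(p(1-p)) biased tosses per fair bit;
# the information-theoretic optimum is 1/H(p) tosses per bit.
for p in (0.5, 0.3, 0.1):
    print(f"p={p}: von Neumann {1 / (p * (1 - p)):.2f} tosses/bit, "
          f"optimum {1 / binary_entropy(p):.2f}")
```

The gap widens as the coin gets more biased, which is what the cleverer extractors in Mitzenmacher’s paper close.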

I guess the moral of all this is the following: if you’re stuck on a deserted island with $\pi -1$ other people, you need to decide who the first to be eaten is, and all you have in your random arsenal is a suspicious-looking coin handed to you by one of your shipmates, do not despair – you can still make sure you have a fair chance of surviving the day.
