What is Solovay's definition?
Consider once more a computable sequence of coverings Ak. This time, however, we do not follow Martin-Löf and assume that
μ(Ak) ≤ 2^−k
for all k. Instead we only know that the sum of these measures converges, that is, that
∑k μ(Ak) < ∞.
According to Martin-Löf, a real r is non-random if it is contained in all of the Ak. Now, following Solovay, r is non-random if it is merely contained in infinitely many of the Ak.
So if a real is Martin-Löf non-random, then it will also be Solovay non-random. Solovay's criterion for non-randomness is much less demanding than Martin-Löf's.
Why is this a good definition? Because in probability theory one often uses the argument that if the infinite sequence of events Ak has the property that
∑k P(Ak)
converges, then with probability one only finitely many of the events Ak will occur. That's called the Borel-Cantelli lemma. [See W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, Wiley, 1950.] We can therefore paraphrase Solovay's definition by saying that a real is Solovay random if it always obeys the Borel-Cantelli lemma: it lies in only finitely many of the Ak whenever the sum of the measures converges.
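For reference, the first Borel-Cantelli lemma is a two-line estimate. A sketch in standard notation (P is the probability measure, and "infinitely many of the Ak occur" is the set lim sup Ak):

```latex
P\Big(\bigcap_{N}\bigcup_{k\ge N} A_k\Big)
\;\le\; P\Big(\bigcup_{k\ge N} A_k\Big)
\;\le\; \sum_{k\ge N} P(A_k)
\;\xrightarrow[\;N\to\infty\;]{}\; 0,
```

because the tail of a convergent series tends to 0. So the set of points lying in infinitely many of the Ak has probability zero.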
Now I'll present Solovay's proof that Solovay randomness is actually equivalent to Martin-Löf randomness.
We've already seen that if a real is Martin-Löf non-random, then it is also Solovay non-random. So we only need to prove the converse, namely that if a real is Solovay non-random, then it's also Martin-Löf non-random. Why is this so?
Well, says Solovay, consider a real r which is covered by infinitely many of the Ak in a computable sequence of coverings Ak with
∑k μ(Ak) < ∞.
We'll construct a new computable sequence of coverings Bk from the Ak, in such a way that r is in all of the Bk and
μ(Bk) ≤ 2^−k.
How does Solovay achieve this?
Well, he defines Bk to consist precisely of all those reals, of all the points, which are in at least 2^(k+c) of the coverings Ak, where the constant c is chosen so that ∑k μ(Ak) ≤ 2^c. That's all there is to it!
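Why do these Bk satisfy Martin-Löf's measure condition? A Markov-type counting argument, taking the constant c so that ∑i μ(Ai) ≤ 2^c (a sketch, not spelled out in the text above):

```latex
2^{k+c}\,\mu(B_k)\;\le\;\sum_{i}\mu(A_i)\;\le\;2^{c}
\qquad\Longrightarrow\qquad
\mu(B_k)\;\le\;2^{-k},
```

since each point of Bk is counted with multiplicity at least 2^(k+c) in the sum on the left-hand side. And since r lies in infinitely many of the Ai, it certainly lies in at least 2^(k+c) of them, so r is in every Bk.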
That proves Solovay's theorem, but how do we program this? Well, for starters, to simplify matters I just produce the Bk consisting of all those points which are in at least k of the coverings Ak.
So at stage n I generate A0 through An for time n. Then I extend all the prefixes for the intervals in these coverings to the same size (I increase the granularity) so that I can determine the multiplicity of each subinterval. And once I have collected those subintervals with multiplicity ≥ k, I try to reassemble them, by collapsing any pair of adjacent subintervals α0 and α1 into the single interval whose prefix is α.
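solovay.l itself is written in Chaitin's LISP dialect; here is a rough Python sketch of the same three steps just described. The names pad, b_cover, and collapse are mine, not from solovay.l, and coverings are represented as sets of bit-string prefixes:

```python
from collections import Counter
from itertools import product

def pad(prefixes, n):
    """Increase the granularity: all length-n bit strings that
    extend some prefix in the covering."""
    out = set()
    for p in prefixes:
        for tail in product('01', repeat=n - len(p)):
            out.add(p + ''.join(tail))
    return out

def b_cover(coverings, k, n):
    """Length-n subintervals lying in at least k of the coverings."""
    count = Counter()
    for cov in coverings:
        count.update(pad(cov, n))   # each covering contributes multiplicity 1
    return {s for s, m in count.items() if m >= k}

def collapse(intervals):
    """Reassemble: merge any adjacent pair a0, a1 into the single
    interval whose prefix is a, repeating until no pair remains."""
    s = set(intervals)
    merged = True
    while merged:
        merged = False
        for a in {x[:-1] for x in s if x}:
            if a + '0' in s and a + '1' in s:
                s -= {a + '0', a + '1'}
                s.add(a)
                merged = True
    return s
```

For instance, with coverings {1}, {11}, {111} (strings beginning with one, two, three 1 bits), the length-3 strings of multiplicity ≥ 2 are 110 and 111, which collapse back into the single interval 11.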
And what do I take as my coverings Ak? I let Ak consist of all those bit strings of the form
1^k followed by anything,
that is, all strings beginning with k 1 bits. Thus 1^j 0 is in precisely j of the Ak, namely A1 through Aj. And Bk, the set of strings that are in at least k of the Ak, is
the single interval 1^k,
again the set of all strings beginning with k 1 bits.
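A quick sanity check of this example, in plain Python (independent of solovay.l):

```python
# A_k = bit strings beginning with k ones (k = 1, 2, 3, ...).
def in_Ak(s, k):
    return len(s) >= k and s[:k] == '1' * k

def multiplicity(s, kmax=100):
    """How many of A_1, ..., A_kmax contain the string s."""
    return sum(in_Ak(s, k) for k in range(1, kmax + 1))

# 1^j 0 lies in precisely j of the A_k:
for j in range(1, 8):
    assert multiplicity('1' * j + '0') == j
```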
And solovay.l calculates precisely these Bk.
Now we're finally ready to exhibit examples of statistical assertions that we can make about the bits of the halting probability Ω! Solovay randomness is just the tool that we need, and we can use it because at this point we've seen that Ω is Chaitin random, hence Martin-Löf random, and thus Solovay random. So please do the exercises!
http://www.cs.umaine.edu/~chaitin/ait/solovay.l
http://www.cs.umaine.edu/~chaitin/ait/solovay.r
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ait/solovay.l
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ait/solovay.r
The number of n-bit strings with exactly k 1 bits is the binomial coefficient
(n choose k) = n! / (k!(n−k)!).
Show that this infinite sum converges:
∑n 2^−n ∑{k : |k/n − 1/2| > ε} (n choose k).
Do this by considering the ratio of successive binomial coefficients. Work with estimates of base-two logarithms of probabilities using Stirling's approximation:
n! ≈ √(2πn) (n/e)^n.
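The exercise asks for a Stirling-based estimate; as a numerical cross-check (my own sketch, using exact binomial tails rather than Stirling, with ε = 0.1), the terms of the sum can be computed directly:

```python
from math import comb

def tail(n, eps):
    """2^-n times the sum of C(n,k) over k with |k/n - 1/2| > eps.

    This is the probability that the fraction of 1 bits among n
    fair coin flips deviates from 1/2 by more than eps."""
    return sum(comb(n, k) for k in range(n + 1)
               if abs(k / n - 0.5) > eps) / 2 ** n

# The terms decay roughly like exp(-2 * eps**2 * n) (Hoeffding's
# inequality), so the series sum over n of tail(n, eps) converges,
# and Borel-Cantelli applies.
print(tail(10, 0.1), tail(100, 0.1), tail(400, 0.1))
```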