# Proof that Solovay randomness is equivalent to Martin-Löf randomness

In his brilliant book-length manuscript on AIT dated May 1975, which unfortunately was never published, Solovay reformulated Martin-Löf's statistical definition of a random real. The resulting definition is, in my opinion, a substantial improvement: it's much easier to apply. And in the next chapter we'll be able to use it to show that Ω is strongly Chaitin random.

What is Solovay's definition?

Consider once more a computable sequence of coverings A_k. This time, however, we do not follow Martin-Löf and assume that

μ{A_k} ≤ 1/2^k

for all k. Instead we only know that the sum of these measures converges, that is, that

∑_k μ{A_k} < ∞.

According to Martin-Löf, a real r is non-random if it is contained in all of the A_k. Now, following Solovay, r is non-random if it is merely contained in infinitely many of the A_k.

So if a real is Martin-Löf non-random, then it will also be Solovay non-random. Solovay's criterion for non-randomness is much less demanding than Martin-Löf's.

Why is this a good definition? Because in probability theory one often uses the argument that if the infinite sequence of events A_k has the property that

∑_k Prob{A_k}

converges, then with probability one only finitely many of the events A_k will occur. That's called the Borel-Cantelli lemma. [See W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, Wiley, 1950.] We can therefore paraphrase Solovay's definition by saying that a real is Solovay random if it always obeys the Borel-Cantelli lemma.
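To see the lemma in action, here's a tiny illustrative sketch in Python (my own toy example; the events A_k = [0, 1/k^2) and the function name are mine, not from solovay.l). The measures sum to π²/6 < ∞, so any fixed point r should land in only finitely many of the A_k, and indeed the count stabilizes as K grows:

```python
def occurrences(r, K):
    """Count how many of the events A_k = [0, 1/k^2), k = 1..K, contain r."""
    return sum(r < 1.0 / k**2 for k in range(1, K + 1))

# The count no longer changes once K passes 1/sqrt(r): for instance,
# occurrences(0.01, 1000) and occurrences(0.01, 2000) agree.
```

Only finitely many events ever occur for a fixed r, which is exactly the Borel-Cantelli conclusion in miniature.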

Now I'll present Solovay's proof that Solovay randomness is actually equivalent to Martin-Löf randomness.

We've already seen that if a real is Martin-Löf non-random, then it is also Solovay non-random. So we only need to prove the converse, namely that if a real is Solovay non-random, then it's also Martin-Löf non-random. Why is this so?

Well, says Solovay, consider a real r which is covered by infinitely many of the A_k in a computable sequence of coverings A_k with

∑_k μ{A_k} ≤ 2^c.

We'll construct a new computable sequence of coverings B_k from the A_k, in such a way that r is in all of the B_k and

μ{B_k} ≤ 1/2^k.

How does Solovay achieve this?

Well, he defines B_k to consist precisely of all those reals, of all the points, which are in at least 2^(k+c) of the coverings A_k! That's all there is to it! Why does this work? Each point of B_k is covered at least 2^(k+c) times, so the sum of the measures is at least 2^(k+c) μ{B_k}; hence μ{B_k} ≤ 2^c / 2^(k+c) = 1/2^k, which is exactly Martin-Löf's condition. And r, being in infinitely many of the A_k, is in every B_k.

That proves Solovay's theorem, but how do we program this? Well, for starters, to simplify matters I just produce the B_k consisting of all those points which are in at least k of the coverings A_k.

So at stage n I generate A_0 through A_n for time n. Then I extend all the prefixes for the intervals in these coverings to the same size (I increase the granularity), so that I can determine the multiplicity of each subinterval. And once I have collected those subintervals with multiplicity ≥ k, I try to reassemble them by collapsing any pair of adjacent subintervals α0 and α1 into the single interval whose prefix is α.
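That stage can be sketched in Python (a reconstruction of my own with illustrative names, not the actual LISP of solovay.l; finite sets of bit-string prefixes stand in for the coverings):

```python
from itertools import product

def extend(prefixes, length):
    """All length-bit strings covered by the given prefixes."""
    out = set()
    for p in prefixes:
        for tail in product("01", repeat=length - len(p)):
            out.add(p + "".join(tail))
    return out

def multiplicity_cover(coverings, k):
    """Strings covered by at least k of the prefix-set coverings,
    reassembled by collapsing sibling intervals a0, a1 into a."""
    length = max(len(p) for c in coverings for p in c)
    counts = {}
    for c in coverings:
        for s in extend(c, length):
            counts[s] = counts.get(s, 0) + 1
    selected = {s for s, m in counts.items() if m >= k}
    changed = True
    while changed:  # repeatedly merge adjacent sibling intervals
        changed = False
        for s in list(selected):
            if s not in selected or not s:
                continue
            if s.endswith("0") and s[:-1] + "1" in selected:
                selected -= {s, s[:-1] + "1"}
                selected.add(s[:-1])
                changed = True
    return selected
```

Collapsing the sibling prefixes α0 and α1 back into α keeps the description of each B_k small without changing the set of reals it covers.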

And what do I take as my coverings A_k? I let A_k consist of all those bit strings of the form

A_k = { 1^j 0 : j > k }.

Thus 1^j 0 is in precisely j of the A_k. And B_k, the set of strings that are in at least k of the A_k, is

B_k = { 1^j 0 : j ≥ k }.

And solovay.l calculates

B_2 = { 1^2 0, 1^3 0, 1^4 0, ... }.
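This small example is easy to check mechanically. Here's a Python sketch (a finite truncation, with function names and cutoffs of my own choosing; solovay.l performs the real, unbounded enumeration):

```python
def A(k, max_j=8):
    """Finite truncation of the covering A_k = { 1^j 0 : j > k }."""
    return {"1" * j + "0" for j in range(k + 1, max_j + 1)}

def B(k, n=8, max_j=8):
    """Strings in at least k of the truncated coverings A_0 .. A_{n-1}."""
    universe = set().union(*(A(i, max_j) for i in range(n)))
    return {s for s in universe
            if sum(s in A(i, max_j) for i in range(n)) >= k}
```

Up to the truncation, B(2) comes out as { 110, 1110, 11110, ... }, matching the B_2 above.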

Now we're finally ready to exhibit examples of statistical assertions that we can make about the bits of the halting probability Ω! Solovay randomness is just the tool that we need, and we can use it because at this point we've seen that Ω is Chaitin random, hence Martin-Löf random, and thus Solovay random. So please do the exercises!

## Software for this chapter

The source code and runs of the software for this chapter can be found on the web at these URL's:
```
http://www.cs.umaine.edu/~chaitin/ait/solovay.l
http://www.cs.umaine.edu/~chaitin/ait/solovay.r

http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ait/solovay.l
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/ait/solovay.r
```

## Exercises

1. Find another sequence of coverings A_k to run solovay.l on. The A_k that I've picked aren't very interesting!

2. Equal limiting relative frequencies of 0's and 1's. Use Ω's Solovay randomness to prove that Ω is Borel simply normal in base two, i.e., that the relative frequencies of 0's and 1's both tend to the limit one-half. Hint: the number of n-bit strings with exactly k 0's and n − k 1's is the binomial coefficient

``n choose k'' = n! / (k! (n − k)!).

Show that this infinite sum converges:

∑_n 2^−n ∑_{ |k/n − 1/2| > ε } (n choose k) < ∞.

Do this by considering the ratio of successive binomial coefficients. Work with estimates of base-two logarithms of probabilities using Stirling's approximation:

log_2 n! ~ n log_2 n.
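For a numerical sanity check of this hint (my own sketch, separate from the intended Stirling-based proof), one can compute the inner tail terms directly in Python; they fall off geometrically in n, which is why the double sum converges:

```python
from math import comb

def tail_term(n, eps):
    """2^-n times the number of n-bit strings whose fraction of 0's
    differs from one-half by more than eps."""
    return sum(comb(n, k) for k in range(n + 1)
               if abs(k / n - 0.5) > eps) / 2.0 ** n
```

For ε = 0.2, for example, the term at n = 200 is already below 10^−6.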

3. Invariance of limiting frequency under subsequence selection. Show that any algorithm for picking a subset of the bits of Ω that selects infinitely many bits of Ω selects 0's and 1's with limiting relative frequency exactly one-half. A selection algorithm is one that generates an infinite sequence of position numbers of the selected bits.

4. Nonexistence of prediction algorithms. Show that any algorithm for predicting the next bit of Ω based on all the previous bits will be correct in the limit exactly half the time.

5. Nonexistence of selective prediction algorithms. Show that any algorithm for predicting the next bit of Ω based on all the previous bits, but which now has the option of saying ``no prediction'' as well as predicting that the next bit will be a 0 or a 1, will be correct in the limit exactly half the time, as long as it predicts infinitely many bits of Ω.

6. Borel absolute normality. [The subtlety of applying theoretical models to the real world is emphasized by F. Chaitin-Chatelin's Newcomb-Borel paradox, which points out that in spite of Borel normality, the first digit of a physically measured quantity is much more likely to be a 1 than a 9! See F. Chaitin-Chatelin and V. Frayssé, Lectures on Finite Precision Computations, SIAM, 1996, p. 101.] Show that the preceding exercises work if the real number Ω is written in any base, not just base-two. In fact, show that in any base all possible digits and blocks of digits of the same length have equal limiting relative frequency. E.g., in base ten each digit of Ω has limiting relative frequency exactly one-tenth, and each of the hundred possible blocks of two consecutive digits has limiting relative frequency exactly one in a hundred, etc.