The decimal expansion of an irrational real number is not periodic. Here are three well-known irrational reals that everyone encounters in high school and college mathematics: √2, π, and e.
Each of these numbers would seem to contain an infinite amount of information, because they have an infinite decimal expansion that never repeats. For example,

√2 = 1.41421356..., π = 3.14159265..., e = 2.71828182...
However, π actually only contains a finite amount of information, because there is a small computer program for computing π. Instead of sending someone the digits of π, we can just explain to them how to compute as many digits as they want.
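For instance, here is a minimal sketch in Python of such a program; the choice of Machin's formula, π/4 = 4·arctan(1/5) − arctan(1/239), is mine, and any algorithm for π would do just as well:

```python
def pi_digits(n):
    """Compute the first n decimal digits of pi, using Machin's formula
    pi/4 = 4*arctan(1/5) - arctan(1/239) in exact integer arithmetic,
    scaled by 10**(n + 10) so that ten guard digits absorb the
    truncation error."""
    scale = 10 ** (n + 10)

    def arctan_inv(x):
        # arctan(1/x) * scale, by the alternating Taylor series
        # 1/x - 1/(3 x^3) + 1/(5 x^5) - ...
        total, power, k = 0, scale // x, 0
        while power:
            term = power // (2 * k + 1)
            total += term if k % 2 == 0 else -term
            power //= x * x
            k += 1
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi_scaled // 10 ** 10)[:n]   # drop the guard digits

print(pi_digits(50))
# 31415926535897932384626433832795028841971693993751
```

A few dozen lines like these stand in for infinitely many digits: that is the sense in which π contains only a finite amount of information.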
Are there any real numbers that contain an infinite amount of information? Well, clearly, if the successive decimal digits are chosen at random, the resulting stream of digits has no structure, each digit is a complete surprise, and there cannot be an algorithm for computing the number digit by digit.
However, this random sequence of digits is not useful information, not at all. It's an infinite amount of completely useless information.
What about a real number that contains an infinite amount of useful information? That is exactly what my halting probability Ω, defined below, provides. Ω doesn't tell us anything about the stock market or history, but it does tell us a great deal about mathematics. Why? Because knowing this number Ω would automatically enable us to resolve famous mathematical problems like Fermat's so-called last theorem, which asserts that there are no positive integer solutions of

x^N + y^N = z^N

with the power N greater than two.
How can Ω enable us to decide if Fermat was right and this equation has no solutions? There is a simple computer program for systematically searching for a solution of Fermat's equation. This program will fail to halt precisely when Fermat's conjecture that there are no solutions is correct.
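Here is what such a search program looks like, as a minimal Python sketch; the enumeration order is my choice, and any order that eventually reaches every quadruple works:

```python
from itertools import count

def search_fermat_counterexample():
    """Systematically search for positive integers x, y, z and N > 2
    with x^N + y^N == z^N.  Enumerating quadruples by their total
    x + y + z + N guarantees every candidate is eventually checked,
    so this loop halts if and only if Fermat was wrong.  (By Wiles's
    proof, it in fact runs forever.)"""
    for total in count(6):                  # smallest case: 1 + 1 + 1 + 3
        for n in range(3, total - 2):       # the exponent N
            for x in range(1, total - n - 1):
                for y in range(1, total - n - x):
                    z = total - n - x - y   # forced by the chosen total
                    if x ** n + y ** n == z ** n:
                        return (x, y, z, n)
```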
However, in the case of Fermat's conjecture there is no need to wait for the number Ω; Andrew Wiles now has a proof that there are no solutions. But Ω would enable us to answer an infinite number of such conjectures: more precisely, all conjectures that can be refuted by a single counterexample that we can search for using a computer.
Why? Because if we are given N programs, we can determine which ones halt and which ones don't if we merely know how many of these N programs halt, and to know that is only about log2 N bits of information. (Run all N programs in parallel until precisely the correct number have stopped; the remaining programs will never stop.)
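Here is a minimal sketch of this counting trick in Python, assuming (my modeling choice) that each program is a zero-argument generator: one next() call is one computation step, and StopIteration signals that the program has halted.

```python
def which_programs_halt(programs, number_that_halt):
    """Given N programs and the single number telling how many of them
    halt, decide halting for each one.  Run everything in parallel, one
    step per round, until exactly the right number have stopped; the
    remaining programs can then never stop."""
    running = {i: p() for i, p in enumerate(programs)}
    halted = set()
    while len(halted) < number_that_halt:
        for i in list(running):
            try:
                next(running[i])           # advance program i by one step
            except StopIteration:          # program i just halted
                halted.add(i)
                del running[i]
    return halted                          # indices not returned never halt

# Example: two programs that halt (after 3 and 10 steps) and one that loops.
def halts_after(k):
    def program():
        for _ in range(k):
            yield
    return program

def loops_forever():
    while True:
        yield

print(which_programs_halt([halts_after(3), loops_forever, halts_after(10)], 2))
# -> {0, 2}
```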
Furthermore, log2 N is much smaller than N for all sufficiently large values of N.
So what is the best we can do? Is there an oracle for the halting problem that isn't redundant, that doesn't waste any bits?
Yes, there is: the halting probability Ω. The general idea is that the computer that we are using flips a fair coin to generate each bit of the program: a heads yields a 1, a tails yields a 0, successive coin tosses are independent, and the computer starts running the program right away as it generates these bits. Ω is the probability that this process will eventually halt.
More precisely, each K-bit program p that halts contributes precisely 1/2^K to the halting probability Ω:

Ω = Σ_{p halts} 2^−|p|

where |p| denotes the size in bits of the program p.
Furthermore, to avoid having this sum diverge to infinity, the set of meaningful programs must be a prefix-free set; in other words, no extension of a valid program is a valid program. Then what information theorists call the Kraft inequality applies to the set of all programs: the sum of 2^−|p| over all valid programs is at most one, and since some programs never halt, Ω is necessarily less than one.
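These definitions are easy to check on a toy machine. In the sketch below, the four "programs" and their halting behavior are invented purely for illustration:

```python
from fractions import Fraction

# A toy prefix-free "machine": each valid program is a bit string, and
# we simply record whether it halts.  (Entirely hypothetical programs,
# chosen only to illustrate the definitions.)
TOY_MACHINE = {
    "0":    True,    # halts
    "10":   False,   # runs forever
    "110":  True,    # halts
    "1110": True,    # halts
}

def is_prefix_free(codes):
    """No valid program may be an extension of another valid program.
    In sorted order, any prefix violation shows up between neighbors."""
    codes = sorted(codes)
    return not any(b.startswith(a) for a, b in zip(codes, codes[1:]))

assert is_prefix_free(TOY_MACHINE)

# Kraft inequality: the total probability of all valid programs is <= 1.
kraft = sum(Fraction(1, 2 ** len(p)) for p in TOY_MACHINE)
# The toy Omega counts only the programs that halt, so it is < 1.
omega = sum(Fraction(1, 2 ** len(p))
            for p, halts in TOY_MACHINE.items() if halts)

print(kraft)   # 15/16  (<= 1, as Kraft guarantees)
print(omega)   # 11/16  (1/2 + 1/8 + 1/16)
```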
Ω is a very valuable oracle, because knowing the first N bits of Ω would enable us to resolve the halting problem for all programs up to N bits in size. No oracle for the halting problem can do better than this. Ω is so valuable precisely because it is the most compact way to represent this information. It's the best possible oracle for the halting problem. You get the biggest bang for your buck with each bit!
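Here is a minimal sketch of how those N bits get used, again modeling programs as generators keyed by the prefix-free bit strings of a toy machine; the function name and setup are illustrative assumptions, not part of the theory itself:

```python
from fractions import Fraction

def halting_oracle(omega_bits, programs):
    """Use the first N bits of a toy machine's Omega to resolve the
    halting problem for every program of at most N bits.  `programs`
    maps prefix-free bit strings to zero-argument generators, as in
    the earlier sketches.  (If omega_bits are wrong, this loops.)"""
    n = len(omega_bits)
    target = Fraction(int(omega_bits, 2), 2 ** n)  # Omega truncated to N bits
    lower = Fraction(0)                            # lower bound on Omega so far
    running = {p: g() for p, g in programs.items()}
    halted = set()
    # Dovetail until the lower bound reaches the truncation.  At that
    # point a still-running program of <= N bits would, by halting,
    # push Omega up to at least target + 2^-N, contradicting the fact
    # that Omega < target + 2^-N.  So every program of at most N bits
    # that is ever going to halt has already halted.
    while lower < target:
        for p in list(running):
            try:
                next(running[p])                   # one more step of p
            except StopIteration:                  # p has halted
                halted.add(p)
                lower += Fraction(1, 2 ** len(p))
                del running[p]
    return {p: (p in halted) for p in programs if len(p) <= n}
```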
And because this information is so valuable, Ω is maximally unknowable, maximally uncomputable: An N-bit computer program can compute at most N bits of Ω, and a mathematical theory with N bits of axioms can enable us to determine at most N bits of Ω. In other words, the bits of Ω are incompressible, irreducible information, both logically irreducible and computationally irreducible.
Paradoxically, however, even though Ω is packed full of useful information, its successive bits appear to be totally unstructured and random, totally chaotic, because otherwise Ω would not be the most compact oracle for the halting problem. If one could predict future bits from past bits, then Ω would not be the best possible compression of all the answers to individual cases of Turing's halting problem.
But what if we bravely try to compute Ω anyway? We can compute it in the limit from below: run more and more programs for longer and longer, and add 1/2^|p| to a lower bound each time a program p halts. These lower bounds creep up to Ω, so each bit of Ω eventually settles to its true value; the catch is that we can never know when a given bit has settled.
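In the same toy setting as the earlier sketches, this brave attempt looks like the following; for a real universal machine the climb toward Ω never visibly ends:

```python
from fractions import Fraction

def omega_lower_bounds(programs, rounds):
    """Approximate Omega from below: dovetail all the programs
    (generators keyed by prefix-free bit strings) and add 1/2^|p| to a
    running lower bound whenever a program p halts, reporting one bound
    per round.  The bounds converge to Omega, but nothing in the
    computation ever certifies how close the current bound is."""
    running = {p: g() for p, g in programs.items()}
    lower = Fraction(0)
    for _ in range(rounds):
        for p in list(running):
            try:
                next(running[p])                  # one more step of p
            except StopIteration:                 # p has halted
                lower += Fraction(1, 2 ** len(p))
                del running[p]
        yield lower
```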
Will mathematics always be able to advance in this way, or will we eventually hit an insurmountable obstacle? Who knows! What is clear is that Ω can never be known in its entirety, but if the growth of our mathematical knowledge continues unabated, each individual bit of Ω can eventually be known.
I hope that this note gives some idea why Ω is of interest. For more on Ω, please see my article in Scientific American or my book. A more recent paper is my Enriques lecture at the University of Milan in 2006.