Papers on Algorithmic Information Theory
Second Edition

G J Chaitin, IBM Research

This collection of papers was published by World Scientific in Singapore.
The first edition appeared in 1987, and the second edition appeared in 1990.
It may be ordered from the publisher or from Barnes & Noble and Amazon.
Available in hardcover & softcover.

ISBN: 981-02-0154-0 (2nd ed., hardcover), 981-02-0171-0 (2nd ed., softcover),
9971-50-479-0 (1st ed., hardcover), 9971-50-480-4 (1st ed., softcover).

324 pages (2nd ed.), 284 pages (1st ed.)

Papers on Algorithmic Information Theory — Second Edition
World Scientific Series in Computer Science — Vol. 8
by Gregory J Chaitin (IBM)

This book contains in easily accessible form all the main ideas of the creator and principal architect of algorithmic information theory. This expanded second edition has added thirteen abstracts, a 1988 SCIENTIFIC AMERICAN article, a transcript of a EUROPALIA 89 lecture, and an essay on biology. Its new larger format makes it easier to read. Chaitin's ideas are a fundamental extension of those of Gödel and Turing and have exploded some basic assumptions of mathematics and thrown new light on the scientific method, epistemology, probability theory, and of course computer science and information theory.

Readership: Computer scientists, mathematicians, physicists, philosophers and biologists.

``Many of Chaitin's results are discussed in a delightful collection of his published articles.'' — Joseph Ford, American Scientist

``Chaitin advances the cause of truths whose time has come; he is preparing a roadmap to ease our voyage into a truly uncertain future. Those who embark on this great adventure will most assuredly find sustenance in the books reviewed here.'' — Joseph Ford, Foundations of Physics

``One will find [in Information, Randomness & Incompleteness] all kinds of articles which are popularizations or epistemological reflections and presentations which permit one to rapidly obtain a precise idea of the subject and of some of its applications (in particular in the biological domain). Very complete, it is recommended to anyone who is interested in algorithmic information theory.'' (translated) — Jean-Paul Delahaye, La Recherche

About the author

Gregory J Chaitin is a member of the theoretical physics group at the IBM Thomas J Watson Research Center in Yorktown Heights, New York. He created algorithmic information theory in the mid-1960s when he was a teenager. In the two decades since, he has been the principal architect of the theory. His contributions include: the definition of a random sequence via algorithmic incompressibility, the reformulation of program-size complexity in terms of self-delimiting programs, the definition of the relative complexity of one object given a minimal-size program for another, the discovery of the halting probability Ω and its significance, the information-theoretic approach to Gödel's incompleteness theorem, the discovery that the question of whether an exponential diophantine equation has finitely or infinitely many solutions is in some cases absolutely random, and the theory of program size for Turing machines and for LISP. He is the author of the monograph Algorithmic Information Theory published by Cambridge University Press in 1987.
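The definition of randomness via algorithmic incompressibility mentioned above rests on a simple counting argument: there are more strings of a given length than there are shorter programs, so most strings cannot be compressed. The following Python sketch (my illustration, not code from the book) makes the count explicit:

```python
# Counting argument behind incompressibility (illustrative sketch).
# There are 2**n binary strings of length n, but only 2**n - 1 binary
# programs of length < n, so at least one string of each length n has
# no description shorter than itself.

def strings_of_length(n):
    """Number of binary strings of length exactly n."""
    return 2 ** n

def programs_shorter_than(n):
    """Number of binary programs of length 0, 1, ..., n-1."""
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

n = 20
assert programs_shorter_than(n) == strings_of_length(n) - 1

# More generally, a string of length n that compresses by c or more bits
# has a description of length < n - c + 1, so fewer than 2**(n - c + 1)
# such strings exist; the fraction of compressible strings is tiny.
c = 10
bound = programs_shorter_than(n - c + 1)
print(f"At most {bound / strings_of_length(n):.4%} "
      f"of length-{n} strings compress by {c} or more bits")
```

Since the fraction of strings compressible by c bits falls off like 2**(-c), almost all long strings are incompressible, i.e., random in Chaitin's sense.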


God not only plays dice in quantum mechanics, but even with the whole numbers! The discovery of randomness in arithmetic is presented in my book Algorithmic Information Theory published by Cambridge University Press. There I show that to decide if an algebraic equation in integers has finitely or infinitely many solutions is in some cases absolutely intractable. I exhibit an infinite series of such arithmetical assertions that are random arithmetical facts, and for which it is essentially the case that the only way to prove them is to assume them as axioms. This extreme form of Gödel's incompleteness theorem shows that some arithmetical truths are totally impervious to reasoning.

The papers leading to this result were published over a period of more than twenty years in widely scattered journals, but because of their unity of purpose they fall together naturally into the present book, intended as a companion volume to my Cambridge University Press monograph. I hope that it will serve as a stimulus for work on complexity, randomness and unpredictability, in physics and biology as well as in metamathematics.

For the second edition, I have added the article ``Randomness in arithmetic'' (Part I), a collection of abstracts (Part VII), and, as an Epilogue, two essays which have not been published elsewhere that assess the impact of algorithmic information theory on mathematics and biology, respectively. I should also like to point out that it is straightforward to apply to LISP the techniques used in Part VI to study bounded-transfer Turing machines. A few footnotes have been added to Part VI, but the subject richly deserves book-length treatment, and I intend to write a book about LISP in the near future. [LISP program-size complexity is discussed at length in my book Information-Theoretic Incompleteness published by World Scientific in 1992.]



Part I—Introductory/Tutorial/Survey Papers

Randomness and mathematical proof, 1975
Randomness in arithmetic, 1988
On the difficulty of computations, 1970
Information-theoretic computational complexity, 1974
Algorithmic information theory, 1982
Algorithmic information theory, 1977

Part II—Applications to Metamathematics

Gödel's theorem and information, 1982
Randomness and Gödel's theorem, 1986
An algebraic equation for the halting probability, 1988
Computing the busy beaver function, 1987

Part III—Applications to Biology

To a mathematical definition of ``life,'' 1970
Toward a mathematical definition of ``life,'' 1979

Part IV—Technical Papers on Self-Delimiting Programs

A theory of program size formally identical to information theory, 1975
Incompleteness theorems for random reals, 1987
Algorithmic entropy of sets, 1976

Part V—Technical Papers on Blank-Endmarker Programs

Information-theoretic limitations of formal systems, 1974
A note on Monte Carlo primality tests and algorithmic information theory, 1978
Information-theoretic characterizations of recursive infinite strings, 1976
Program size, oracles and the jump operation, 1977

Part VI—Technical Papers on Turing Machines & LISP

On the length of programs for computing finite binary sequences, 1966
On the length of programs for computing finite binary sequences: Statistical considerations, 1969
On the simplicity and speed of programs for computing infinite sets of natural numbers, 1969

Part VII—Abstracts

On the length of programs for computing finite binary sequences by bounded-transfer Turing machines, 1966
On the length of programs for computing finite binary sequences by bounded-transfer Turing machines II, 1966
Computational complexity and Gödel's incompleteness theorem, 1970
Computational complexity and Gödel's incompleteness theorem, 1971
Information-theoretic aspects of the Turing degrees, 1972
Information-theoretic aspects of Post's construction of a simple set, 1972
On the difficulty of generating all binary strings of complexity less than N, 1972
On the greatest natural number of definitional or information complexity ≤ N, 1973
A necessary and sufficient condition for an infinite binary string to be recursive, 1973
There are few minimal descriptions, 1973
Information-theoretic computational complexity, 1973
A theory of program size formally identical to information theory, 1974
Recent work on algorithmic information theory, 1977


Epilogue

Undecidability and randomness in pure mathematics, 1989
Algorithmic information and evolution, 1991


Information, Randomness & Incompleteness, 2nd edition, World Scientific, 1990. Errata:

  1. On page 26, line 25, ``quickly that'' should read ``quickly than''.

  2. On page 31, line 19, ``Here one'' should read ``Here once''.

  3. On page 55, line 17, ``RI, p. 35'' should read ``RI, 1962, p. 35''.

  4. On page 85, line 14, ``1. The problem'' should read ``1. The Problem''.

  5. On page 88, line 13, ``4. What is life?'' should read ``4. What is Life?''.

  6. On page 108, line 13, ``the table in'' should read ``in the table in''.

  7. On page 117, Theorem 2.3(q), ``HC (s, t)'' should read ``HC (s | t)''.

  8. On page 134, line 7, ``#{ n | H(n) ≤ n } ≤ 2^n'' should read ``#{ k | H(k) ≤ n } ≤ 2^n''.

  9. On page 274, bottom line, ``n^(4p+4)'' should read ``n^(4p'+4)''.