I am tutoring CS 350, Mathematical Foundations of Computer Science, in Semester 1, 2007: handouts and model answers for the tutorials.
Universal semimeasures, a type of function derived from probability and computability theory, describe extremely powerful sequence predictors. They outperform every other predictor up to a penalty no greater than that predictor's complexity, and they learn to predict any computable sequence with error no more than the sequence's complexity. Universal semimeasures work by modelling the sequence as if it were generated by an unknown program running on a universal computer. Although these predictors are uncomputable, and so cannot be implemented in practice, they describe an ideal: an existence proof for systems that predict better than humans.
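The penalty mentioned above can be illustrated in a finite toy setting. The sketch below, with an invented three-member hypothesis class of Bernoulli sources, shows the dominance property behind universal prediction: a Bayesian mixture never accrues more than $-\log_2$ (prior weight of the best member) extra bits of log-loss over that member. The true universal mixture runs over all enumerable semimeasures and is uncomputable; everything concrete here is illustrative.

```python
# Toy illustration of the dominance property behind universal prediction.
# A Bayesian mixture over a finite class of predictors never does much
# worse than the best member: its extra log-loss is bounded by the
# negative log of that member's prior weight (its "complexity" penalty).
import math

def bernoulli_seq_prob(p, bits):
    """Probability that a Bernoulli(p) source emits the given bit string."""
    prob = 1.0
    for b in bits:
        prob *= p if b == 1 else 1.0 - p
    return prob

hypotheses = [0.1, 0.5, 0.9]        # three candidate Bernoulli sources
weights    = [1/3, 1/3, 1/3]        # uniform prior over the class

seq = [1, 1, 0, 1, 1, 1, 0, 1]      # observed sequence

mixture = sum(w * bernoulli_seq_prob(p, seq)
              for w, p in zip(weights, hypotheses))
best = max(bernoulli_seq_prob(p, seq) for p in hypotheses)

# Log-loss of the mixture exceeds the best hypothesis's log-loss by
# at most -log2(1/3) = log2(3) bits, the prior weight penalty.
excess = -math.log2(mixture) - (-math.log2(best))
print(f"excess log-loss: {excess:.3f} bits (bound: {math.log2(3):.3f})")
```

The same argument, applied to a mixture over all programs weighted by $2^{-\text{length}}$, gives the complexity penalty stated in the abstract.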
We review the mathematics behind universal semimeasures and discuss some of its implications. Our approach differs from previous ones in several respects. We show that semimeasures correspond to probability measures over the set of finite and infinite sequences, establishing a rigorous footing. We demonstrate the existence of universal semimeasures using a novel proof of the equivalence between enumerable semimeasures and processes. We take into account the possibility of sequence termination, leading to a stronger definition of universality. To make hidden constants explicit we define a novel complexity measure, simulation complexity, which generalises monotone complexity. Finally, we emphasise the use of logarithmic scoring rules to measure prediction error.
We outline a method for quantifying the error of a sequence prediction. With sequence predictions represented by semimeasures $\nu$, we define their error to be $-\log_2 \nu(x)$. We note that the enumerable semimeasures are those which model the sequence as the output of a computable system given unknown input. Using this we define the simulation complexity of a computable system $C$ relative to another $U$, giving an exact bound on their difference in error. This error in turn gives an exact upper bound on the number of incorrect predictions $\nu$ makes.
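A minimal sketch of the log-loss error measure and the mistake bound it implies: whenever the predicted most-probable next bit is wrong, that step's probability is at most one half, so it contributes at least one bit to the total log-loss. The Laplace-rule predictor and the sequence below are invented for illustration, not taken from the paper.

```python
# Sketch of the error measure -log2 nu(x) for a predictor nu, and why
# it upper-bounds the number of wrong "most likely next bit" guesses:
# a wrong guess means the realised bit had probability <= 0.5, which
# costs at least one bit of log-loss.
import math

def predictor(history):
    """Toy predictor: P(next bit = 1 | history), Laplace's rule of succession."""
    return (sum(history) + 1) / (len(history) + 2)

seq = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

log_loss = 0.0
mistakes = 0
for i, bit in enumerate(seq):
    p1 = predictor(seq[:i])
    p_bit = p1 if bit == 1 else 1.0 - p1
    log_loss += -math.log2(p_bit)          # this step's error contribution
    if (p1 >= 0.5) != (bit == 1):          # most-probable-bit guess wrong?
        mistakes += 1

# Each mistake contributes >= 1 bit, so mistakes <= total log-loss.
print(f"log-loss: {log_loss:.2f} bits, mistakes: {mistakes}")
```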
Presented at the Dagstuhl seminar on Kolmogorov Complexity.
Artificial intelligence can be seen as the study of agents which achieve what we want, an agent being an interactive system. This can be mathematically modelled through decision theory, using probability distributions to capture what we know and utility functions to capture what we want. Optimal agents, those that maximally achieve what we want, are agents with maximal expected utility.
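The decision-theoretic picture above can be sketched in a few lines: a probability distribution over states captures what we know, a utility function captures what we want, and the optimal choice maximises expected utility. The states, actions, and numbers below are invented for illustration.

```python
# Minimal sketch of expected-utility maximisation: pick the action
# with the highest utility averaged over the agent's beliefs.
belief = {"sunny": 0.7, "rainy": 0.3}         # P(state): what we know

utility = {                                    # U(action, state): what we want
    ("picnic", "sunny"): 10, ("picnic", "rainy"): -5,
    ("cinema", "sunny"):  4, ("cinema", "rainy"):  6,
}

def expected_utility(action):
    return sum(p * utility[(action, s)] for s, p in belief.items())

actions = ["picnic", "cinema"]
best = max(actions, key=expected_utility)
print(best, expected_utility(best))            # picnic scores 5.5 vs 4.6
```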
We describe this theory of optimal agents. We detail how these optimal agents behave, giving an explicit formula for their actions. We discuss applications of this theory, in particular Marcus Hutter's AIXI. Finally, we suggest possible directions for future research extending Hutter's work on universal AI.
A recent trend in AI centres on the design of agents: autonomous systems that interact with environments. For example, the controlling software for a robot is an agent which interacts with the environment through the robot. We describe a definition of an optimal AI as one which maximises expected utility, and describe how this agent works. This generalises AIXI, Marcus Hutter's model of optimal AI based on reinforcement learning and algorithmic information theory.
The complication is that the obvious algorithm is at best intractable and at worst uncomputable. Furthermore, the model has free parameters that are not easily chosen. We suggest further research into determining these parameters.
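The "obvious algorithm" referred to above is an expectimax-style recursion: at each step the agent picks the action maximising expected cumulative utility under its environment model, out to some horizon. The sketch below uses an invented two-action, two-state environment with a known model; the real construction mixes over all computable environments, which is precisely what makes it uncomputable, and even this finite version grows exponentially in the horizon.

```python
# Hedged sketch of finite-horizon expectimax planning: the value of a
# state is the best expected sum of rewards achievable from it, and
# the optimal agent takes the maximising action at each step.
def transition(state, action):
    """Toy known model: list of (next_state, reward, probability)."""
    if action == "explore":
        return [(state, 0.0, 0.5), ("goal", 1.0, 0.5)]  # risky, big payoff
    return [(state, 0.2, 1.0)]                          # safe small reward

def value(state, horizon):
    """Maximal expected total reward over `horizon` remaining steps."""
    if horizon == 0 or state == "goal":
        return 0.0
    return max(
        sum(p * (r + value(s2, horizon - 1))
            for s2, r, p in transition(state, a))
        for a in ("explore", "exploit"))

print(value("start", 3))
```

Even here the recursion branches over every action and outcome at every step, which is why the full model, with the environment itself unknown, is at best intractable.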