 
  
  
  
  
 
-  a learning algorithm is a consistent learner if it commits zero
errors over the training examples
-  every consistent learner outputs a MAP hypothesis if 1) we assume a
uniform prior probability distribution over H and 2) we assume
deterministic, noise-free training data (the derivation is sketched
after this list)
-  Find-S & Candidate-Elimination are consistent learners, so each
outputs a MAP hypothesis (a minimal Find-S sketch also follows below)
-  the Bayesian perspective can be used to characterize learning
algorithms even when they do not explicitly manipulate probabilities
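A sketch of the standard argument behind the second point, writing
VS_{H,D} for the version space (the subset of H consistent with the
training data D):

\[
P(h) = \frac{1}{|H|}, \qquad
P(D \mid h) =
\begin{cases}
1 & \text{if } h \text{ is consistent with } D\\
0 & \text{otherwise}
\end{cases}
\]
\[
P(D) = \sum_{h \in H} P(D \mid h)\,P(h) = \frac{|VS_{H,D}|}{|H|},
\qquad
P(h \mid D) = \frac{P(D \mid h)\,P(h)}{P(D)} =
\begin{cases}
\dfrac{1}{|VS_{H,D}|} & \text{if } h \text{ is consistent with } D\\
0 & \text{otherwise}
\end{cases}
\]

Every consistent hypothesis attains the same maximal posterior
1/|VS_{H,D}|, so whichever hypothesis a consistent learner returns
maximises P(h|D), i.e. it is a MAP hypothesis.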
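To make "consistent learner" concrete, here is a minimal Python sketch
of Find-S over conjunctive attribute-value hypotheses. The attribute
values and training data are purely illustrative, and the encoding
("0" = no value allowed, "?" = any value allowed) follows the usual
convention; this is a sketch, not the definitive implementation.

def find_s(examples):
    """Return the most specific hypothesis consistent with the positive examples.

    examples: list of (attribute_tuple, label) pairs, label True for positive.
    """
    n = len(examples[0][0])
    h = ["0"] * n                      # start with the most specific hypothesis
    for x, positive in examples:
        if not positive:
            continue                   # Find-S ignores negative examples
        for i, value in enumerate(x):
            if h[i] == "0":
                h[i] = value           # first positive example: adopt its values
            elif h[i] != value:
                h[i] = "?"             # conflicting value: generalise to "any"
    return h

# Illustrative (hypothetical) data: the returned hypothesis covers every
# positive example, so Find-S commits zero errors on the training data
# whenever the target concept is in H and the data are noise-free.
train = [
    (("sunny", "warm", "normal"), True),
    (("sunny", "warm", "high"),   True),
    (("rainy", "cold", "high"),   False),
]
print(find_s(train))                   # -> ['sunny', 'warm', '?']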
 
Patricia Riddle 
Fri May 15 13:00:36 NZST 1998