
Machine Learning
Department of Computer Science
Iowa State University

Study Guide
Week 1 (January 12, 2004)
Overview of the course. Overview of machine learning. Why should machines learn?
Operational definition of learning. Taxonomy of machine learning.
Specification of a computational model of learning. Example: specification
and analysis of a model for conjunctive concept learning. Role of
representational and inferential bias in learning.
Inductive Learning of Pattern Classifiers. Decision Tree Induction from
Examples. Occam's razor.
Brief digression on probability theory and information theory. Review of elements of probability: probability spaces; ontological and epistemological commitments of probabilistic representations of knowledge; the Bayesian (subjective) view of probability, with probabilities as measures of belief conditioned on the agent's knowledge; the possible-world interpretation of probability; axioms of probability; conditional probability; Bayes theorem. Random variables: discrete random variables as functions from event spaces to value sets; the possible-world interpretation of random variables; joint probability distributions; conditional probability distributions; conditional independence of random variables; pairwise independence versus independence. Entropy of random variables, information, mutual information. Measures of distance between probability distributions: Kullback-Leibler divergence.
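The information-theoretic quantities listed above can be sketched in a few lines of Python (a minimal illustration for discrete distributions; the function names are ours, not from the readings):

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete distribution given as probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) in bits.
    Assumes q[i] > 0 wherever p[i] > 0 (absolute continuity)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

For example, a fair coin has entropy 1 bit, and the divergence of a distribution from itself is zero.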
Required readings
Recommended Readings
Strongly Recommended Java Readings for those unfamiliar with Java.
Week 2 (January 19, 2004)
Learning Decision Tree Classifiers. Evaluation of classifiers: estimation of performance measures; confidence interval calculation for estimates; cross-validation based estimates of hypothesis performance; leave-one-out and bootstrap estimates of performance; comparing two hypotheses; hypothesis testing and the null hypothesis; comparing two learning algorithms.
Overfitting and methods to avoid overfitting: dealing with small sample sizes; pre-pruning and post-pruning. Pitfalls of entropy as a splitting criterion for multi-valued splits. Alternative splitting strategies: two-way versus multi-way splits. Alternative split criteria: Gini impurity, entropy, etc. Cost-sensitive decision tree induction: incorporating attribute measurement costs and misclassification costs into decision tree induction.
Algorithms for Learning Decision Trees (continued). Dealing with categorical, numeric, and ordinal attributes. Dealing with missing attribute values during tree induction and instance classification.
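The entropy-based split criterion discussed above can be sketched as follows (a minimal illustration of ID3-style information gain over examples stored as dictionaries; the attribute names in the usage example are hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, label):
    """Entropy reduction from splitting `examples` (a list of dicts) on `attribute`.
    The attribute with the largest gain is chosen as the split at a tree node."""
    labels = [e[label] for e in examples]
    before = entropy(labels)
    after = 0.0
    for v in set(e[attribute] for e in examples):
        subset = [e[label] for e in examples if e[attribute] == v]
        # weight each branch's entropy by the fraction of examples it receives
        after += (len(subset) / len(examples)) * entropy(subset)
    return before - after
```

On a toy dataset where "outlook" perfectly predicts "play", the gain equals the full 1 bit of label entropy.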
Required readings
 Chapter 3 from Mitchell's Machine Learning Textbook.
 Nils Nilsson. Decision Trees
 Chapter 5 from Mitchell's Machine Learning Textbook.
 Vasant Honavar, Lecture Slides. (Note: Monday, Jan 19 was a holiday)

Lecture Slides. (Additional Material on Evaluation of Learning Algorithms). Vasant Honavar.

Supplementary Slides (Cover material from some of the assigned readings). Vasant Honavar

Caragea, D., Silvescu, A., and Honavar, V. (2004). A Framework for Learning from Distributed Data Using Sufficient Statistics and its Application to Learning Decision Trees. International Journal of Hybrid Intelligent Systems. Invited Paper. Vol. 1. pp. 80-89.

Zhang, J. and Honavar, V. (2003). Learning Decision Tree Classifiers from Attribute Value Taxonomies and Partially Specified Data. In: Proceedings of the International Conference on Machine Learning (ICML-03). Washington, DC. pp. 880-887.

Atramentov, A., Leiva, H., and Honavar, V. (2003). A Multi-Relational Decision Tree Learning Algorithm: Implementation and Experiments. In: Proceedings of the Thirteenth International Conference on Inductive Logic Programming. Berlin: Springer-Verlag. Lecture Notes in Computer Science. Vol. 2835, pp. 38-56.

Silvescu, A., and Honavar, V. (2001). Temporal Boolean Network Models of Genetic Networks and Their Inference from Gene Expression Time Series. Complex Systems. Vol. 13. No. 1. pp. 54.

Wang, X., Schroeder, D., Dobbs, D., and Honavar, V. (2003). Automated Data-Driven Discovery of Motif-Based Protein Function Classifiers. Information Sciences. Vol. 155. pp. 1-18.

Wang, H. and Zaniolo, C. (2000). CMP: A Fast Decision Tree Classifier Using Multivariate Predictions. In: Proceedings of the International Conference on Data Engineering.

Domingos, P. (1999). The Role of Occam's Razor in Knowledge Discovery. Preprint.

Dietterich, T. and Kong, E. (1995). Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms. Technical Report, Department of Computer Science, Oregon State University.
Recommended Readings

Codrington, C. W. and Brodley, C. E. On the Qualitative Behavior of Impurity-Based Splitting Rules: The Minima-Free Property. Tech. Rep. 97-05. Dept. of Computer Science. Cornell University.

Brodley, C. and Utgoff, P. (1995). Multivariate Decision Trees. Machine Learning 19: 45-77.

Martin, K.J. (1997). An Exact Probability Metric for Decision Tree Splitting and Stopping. Machine Learning 28: 257-291.

Mehta, M., Rissanen, J., and Agrawal, R. (1995). MDL-based Decision Tree Pruning. In: Proceedings of KDD-95.

J. Bradford, C. Kunz, R. Kohavi, C. Brunk, and C.E. Brodley. Pruning decision trees with misclassification costs. In Proceedings of the Tenth European Conference on Machine Learning (ECML-98), pages 131-136, Berlin, 1998.

Elomaa, T. and Rousu, J. (1999). General and Efficient Multisplitting of Numerical Attributes. Machine Learning, 1999.

W.Y. Loh and Y.S. Shih. Split selection methods for classification trees. Statistica Sinica, vol. 7, pp. 815-840, 1997.

J. E. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT: Optimistic Decision Tree Construction. In Proceedings of the 1999 SIGMOD Conference, Philadelphia, Pennsylvania, 1999.

Johannes E. Gehrke, Raghu Ramakrishnan, and Venkatesh Ganti. RAINFOREST: A Framework for Fast Decision Tree Construction of Large Datasets. Data Mining and Knowledge Discovery, Volume 4, Issue 2/3, July 2000, pp. 127-162.

Mansour, Y. and McAllester, D. (2000). Generalization Bounds for Decision Trees. In: Proceedings of COLT-2000.

M. Kearns and Y. Mansour. A Fast, Bottom-Up Decision Tree Pruning Algorithm with Near-Optimal Generalization. In: Proceedings of the 15th International Conference on Machine Learning, 1998, Morgan Kaufmann.
Week 3 (Beginning January 26, 2004)
Bayesian Framework for Learning.
Learning maximum a posteriori (MAP) and maximum likelihood (ML) hypotheses from data. The relationship between MAP hypothesis learning, the minimum description length principle (Occam's razor), and the role of priors.
Equivalence of the ML hypothesis learner and a consistent learner for classification tasks. Equivalence of ML hypothesis learning and minimization of mean squared error for function approximation problems.
ML estimation of probabilities. Bayes optimal classification and how it differs from the maximum a posteriori (MAP) classifier. Gibbs classifier.
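As a concrete instance of the ML-versus-MAP contrast above, consider estimating the bias of a coin from observed flips (a minimal sketch; the Beta(a, b) prior and the function names are our choices for illustration):

```python
def ml_estimate(heads, tails):
    """Maximum likelihood estimate of P(heads): the relative frequency."""
    return heads / (heads + tails)

def map_estimate(heads, tails, a=2, b=2):
    """MAP estimate under a Beta(a, b) prior: the prior acts like (a - 1) extra
    heads and (b - 1) extra tails, pulling the estimate toward a / (a + b)."""
    return (heads + a - 1) / (heads + tails + a + b - 2)
```

With 3 heads and 1 tail, the ML estimate is 0.75, while the MAP estimate under a Beta(2, 2) prior is pulled toward 0.5, illustrating the role of the prior when data are scarce.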
Required readings
Recommended Readings
Week 4 (Beginning February 2, 2004)
Introduction to Artificial Neural Networks. Threshold logic unit (perceptron) and the associated hypothesis space. Connection with logic and geometry. Weight space and pattern space representations of perceptrons. Linear separability and related concepts. Perceptron learning algorithm and its variants. Convergence properties of the perceptron algorithm. Winner-Take-All networks. Multiplicative update algorithms (e.g., Winnow and Balanced Winnow).
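The perceptron learning algorithm mentioned above can be sketched as follows (a minimal illustration; the learning-rate and epoch parameters are our choices, and convergence is guaranteed only for linearly separable data):

```python
def perceptron_train(examples, epochs=10, lr=1.0):
    """Perceptron learning rule for inputs x (tuples) with labels y in {-1, +1}.
    Returns (weights, bias)."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            # predict with the current hyperplane; update only on a mistake
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b
```

For example, trained on the four examples of the boolean AND function (a linearly separable concept), the learned hyperplane classifies all of them correctly.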
Required Readings
Recommended Readings

Littlestone, N., and Warmuth, M. (1994). The Weighted Majority Algorithm. Information and Computation Vol. 108: 212-261.

Blum, A. (1995). Empirical Support for Winnow and Weighted Majority Algorithms. In: Proceedings of the Twelfth International Conference on Machine Learning, pages 64-72. Morgan Kaufmann, 1995.

Golding, A. and Roth, D. (1999). Applying Winnow to Context-Sensitive Spelling Correction. Machine Learning, 34(1-3):107-130.

Yang, J., Parekh, R. & Honavar, V. (2001). Comparison of Performance of Variants of Single-Layer Perceptron Algorithms on Non-Separable Data. Neural, Parallel, and Scientific Computation.

Adam J. Grove, Nick Littlestone, and Dale Schuurmans. General convergence results for linear discriminant updates. In COLT-97, pages 171-183, 1997.

Parekh, R., Yang, J., and Honavar, V. (2000). Constructive Neural Network Learning Algorithms for Multi-Category Pattern Classification. IEEE Transactions on Neural Networks. Vol. 11. No. 2. pp. 436-451.

Nilsson, N. J. Mathematical Foundations of Learning Machines. Palo Alto, CA: Morgan Kaufmann (1992).

Minsky, M. and Papert, S. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press (1988).

McCulloch, W. Embodiments of Mind. Cambridge, MA: MIT Press.
Week 5 (beginning February 9, 2004)
Introduction to neural networks as trainable function approximators. Function approximation from examples. Least Mean Squared (LMS) error criterion. Minimization of error functions. Review of relevant mathematics (limits, continuity and differentiability of functions, local minima and maxima, derivatives, partial derivatives, Taylor series approximation, multivariate Taylor series approximation).
Derivation of a Learning Rule for Minimizing Mean Squared Error Function for a Simple Linear Neuron.
Momentum modification for speeding up learning.
Introduction to neural networks for nonlinear function approximation.
Nonlinear function approximation using multilayer neural networks.
Universal function approximation theorem. Derivation of the generalized delta rule (GDR) (the backpropagation learning algorithm).
Generalized delta rule (backpropagation algorithm) in practice: avoiding overfitting, choosing neuron activation functions, choosing the learning rate, choosing initial weights, speeding up learning, improving generalization, circumventing local minima, using domain-specific constraints (e.g., translation invariance in visual pattern recognition), exploiting hints, using neural networks for function approximation and pattern classification. Relationship between neural networks and Bayesian pattern classification. Variations: radial basis function networks. Learning nonlinear functions by searching the space of network topologies as well as weights.
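The learning rule for the simple linear neuron discussed above amounts to stochastic gradient descent on the squared error (the Widrow-Hoff / delta rule); a minimal sketch, with learning rate and epoch counts chosen by us for illustration:

```python
def lms_train(examples, lr=0.05, epochs=200):
    """Stochastic gradient descent on squared error for a linear neuron
    y_hat = w . x + b. `examples` is a list of (x, y) with x a tuple."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            y_hat = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = y - y_hat
            # each weight moves opposite its error gradient: dE/dw_i = -err * x_i
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b
```

Trained on points sampled from y = 2x + 1, the learned weight and bias approach 2 and 1 respectively.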
Required readings

Mitchell, T. Chapter 4, Machine Learning.

Lecture Slides. Vasant Honavar

Honavar, V. Function Approximation from Examples.

Honavar, V. Multilayer networks.

Honavar, V. Radial Basis Function Networks.

T. G. Dietterich, H. Hild, and G. Bakiri. A comparative study of ID3 and backpropagation for English text-to-speech mapping. In Proceedings of the 7th IMLW, Austin, 1990. Morgan Kaufmann.

Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Handwritten digit recognition with a backpropagation neural network. In D. Touretzky, editor, Advances in Neural Information Processing Systems 2, pages 396-404. Morgan Kaufmann, San Mateo, CA, 1990.

S. Thrun and T.M. Mitchell. Learning one more thing. Technical Report CMU-CS-94-184, Carnegie Mellon University, Pittsburgh, PA 15213, 1994.

Fahlman, S. and Lebiere, C. 1991. The Cascade-Correlation Architecture. Technical Report CMU-CS-90-100, Carnegie Mellon University, August 1991.

Poggio et al. 1989. A Theory of Networks for Approximation and Learning. Technical Report 1140, MIT Artificial Intelligence Laboratory, 1989.
Recommended Readings

Caruana, R. Learning Many Related Tasks through Backpropagation. In: NIPS, MIT Press, 1995.

R. Williams and D. Zipser. Gradient-Based Learning Algorithms for Recurrent Networks and Their Computational Complexity. In Backpropagation: Theory, Architectures, and Applications, Chauvin and Rumelhart, Eds., LEA, 1995, pp. 433-485.

Solomon, R. and J. L. van Hemmen (1996). Accelerating backpropagation through dynamic self-adaptation. Neural Networks 9 (4), 589-601.

Craven, M. and Shavlik, J. Using Neural Networks for Data Mining. Future Generation Computer Systems 13:211-229.

R. Setiono, W.K. Leow and J.M. Zurada. Extraction of rules from artificial neural networks for nonlinear regression. IEEE Transactions on Neural Networks, 2002.

Poggio et al. 1995. Regularization Theory and Neural Network Architectures. Neural Computation, 7:219-269, 1995.
Week 6 (Beginning February 16, 2004)
Bayesian Classifiers and Bayesian Networks.
Minimal Error Bayes Classifier, Minimum Risk Bayes Classifier, Conditional Independence revisited. Naive Bayes Classifier and its relation to Linear Discriminant Functions. Estimation of probabilities from data.
Maximum Likelihood and Bayesian estimation of parameters from data; detailed treatment of estimation of parameters for multinomial distributions using conjugate Dirichlet priors. Example: Naive Bayes text classification. Bayesian Networks: d-separation and compact representation of joint probability distributions in Bayes Networks.
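The naive Bayes text classifier with Dirichlet (Laplace) smoothing described above can be sketched as follows (a minimal multinomial implementation; the function names and the spam/ham usage example are our illustrative choices):

```python
from collections import Counter, defaultdict
import math

def train_naive_bayes(docs, alpha=1.0):
    """Multinomial naive Bayes for text. `docs` is a list of (word_list, label)
    pairs; alpha is the Dirichlet/Laplace smoothing pseudocount."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab, alpha, len(docs)

def classify(model, words):
    """Return the class maximizing log P(c) + sum_w log P(w | c)."""
    class_counts, word_counts, vocab, alpha, n_docs = model
    best, best_score = None, -math.inf
    for c in class_counts:
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / n_docs)
        for w in words:
            # smoothed ML estimate of P(w | c) with pseudocount alpha
            score += math.log((word_counts[c][w] + alpha) / (total + alpha * len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```

Working in log-probabilities avoids the underflow that multiplying many small per-word probabilities would cause.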
Required readings

Chapter 6 from: Mitchell, T. 1997.
Machine Learning. New York: McGraw Hill.

Lecture Slides. Vasant Honavar

D. D. Lewis. Naive Bayes at forty: The independence assumption in information retrieval. In ECML-98: Proceedings of the Tenth European Conference on Machine Learning, pages 4-15, Chemnitz, Germany, April 1998. Springer.

P. Domingos and M. Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103-130, 1997.

McCallum, A. and Nigam, K. A Comparison of Event Models for Naive Bayes Text Classification. In AAAI/ICML-98 Workshop on Learning for Text Categorization, pp. 41-48. Technical Report WS-98-05. AAAI Press. 1998.

Jason D. M. Rennie, Lawrence Shih, Jaime Teevan and David R. Karger. Tackling the Poor Assumptions of Naive Bayes Text Classifiers. Proceedings of the Twentieth International Conference on Machine Learning. 2003.

Susana Eyheramendy, David Lewis, and David Madigan. On the Naive Bayes Model for Text Categorization. In: Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. Bishop, C.M. and Frey, B. (Ed). 2003.

Thorsten Joachims. A probabilistic analysis of the Rocchio algorithm with TF-IDF for text categorization. Technical Report CMU-CS-96-118, School of Computer Science, Carnegie Mellon University, March 1996.
Recommended Readings

Langley, P., Iba, W., and Thompson, K. (1992). An Analysis of Bayesian Classifiers. In: Proceedings of AAAI-92.

Langley, P. and Sage, S. (1999). Tractable average-case analysis of naive Bayesian classifiers. Proceedings of the Sixteenth International Conference on Machine Learning (pp. 220-228). Bled, Slovenia: Morgan Kaufmann.

Rish, I. An Empirical Study of the Naive Bayes Classifier. In: Proc. ICML 2001.

Yang, Y. and G. I. Webb (2003). On Why Discretization Works for Naive-Bayes Classifiers. In Proceedings of the 16th Australian Conference on AI (AI 03), Lecture Notes in AI 2903, pages 440-452. Berlin: Springer-Verlag.

An Introduction to Graphical Models. Kevin Murphy. 2001.

Bayesian Networks and Decision-Theoretic Reasoning for Artificial Intelligence. Daphne Koller and Jack Breese. Tutorial given at AAAI-97.

George H. John and Pat Langley. Estimating Continuous Distributions in Bayesian Classifiers. Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995.
Week 7 (Beginning February 23, 2004)
Bayesian Networks. Conditional Independence Revisited, d-separation, and compact representation of joint probability distributions in Bayes Networks.
Reasoning with Bayes Networks: some algorithms for exact inference, and approximate inference of relevant probabilities from a Bayesian network using stochastic simulation (sampling).
Learning Bayesian Networks from Data. Learning of parameters (conditional probability tables) from fully specified instances (when no attribute values are missing) in a network of known structure: Maximum Likelihood and Bayesian estimation of parameters from data.
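With known structure and fully observed data, the maximum likelihood estimate of each conditional probability table (CPT) entry is just a normalized count; a minimal sketch (the record/variable names in the usage example are hypothetical, and a positive pseudocount gives the smoothed Bayesian/Dirichlet estimate):

```python
from collections import Counter

def estimate_cpt(records, child, parents, pseudocount=0.0):
    """Estimate P(child | parents) from a list of dict records.
    Returns {(parent_values, child_value): probability}. With pseudocount=0 this
    is the ML estimate; pseudocount > 0 gives a smoothed (Dirichlet) estimate."""
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in records)
    child_values = set(r[child] for r in records)
    cpt = {}
    for pa in set(pv for pv, _ in joint):
        # normalize counts within each parent configuration
        total = sum(joint[(pa, cv)] for cv in child_values) + pseudocount * len(child_values)
        for cv in child_values:
            cpt[(pa, cv)] = (joint[(pa, cv)] + pseudocount) / total
    return cpt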
Required readings

Chapter 6 from: Mitchell, T. 1997.
Machine Learning. New York: McGraw Hill.

Lecture Slides. Vasant Honavar

Graphical Models  Probabilistic Inference. M. Jordan and Y. Weiss.

A Tutorial on Learning with Bayesian Networks, David Heckerman. Tech. Rep. MSRTR9506. Microsoft Research.

Approximating Discrete Probability Distributions with Dependence Trees. Chou, C.K. and Liu, C.N. IEEE Transactions on Information Theory. 14(3), 1968. pp. 462467.

Learning Bayesian belief networks: An approach based on the MDL principle., W. Lam and F. Bacchus, Computational Intelligence, 10(4), 1994.

Bayesian Network Classifiers Friedman, N., Geiger, D., and Goldszmidt, M. Machine Learning 29: pp. 131163. 1997.

Learning
Naive Bayes Classifiers from Attribute Value Taxonomies and Partially Specifid Data, Zhang, J. and Honavar, V. (2004). Technical Report ISUCSTR0403. Department of Computer Science, Iowa State University.
Recommended Readings

Bayesian Networks and DecisionTheoretic Reasoning for Artificial Intelligence, Daphne Koller and Jack Breese. Tutorial Given at AAAI97.

Inference in Bayesian Networks  A Procedural Guide Huang, C., A. Darwiche. Journal of Approximate Reasoning. Vol 15. pp. 225263.

Learning Bayesian Belief Networks Based on the Minimum Description Length Principle: Basic Properties.J. Suzuki, IEICE Transactions on Fundamentals, vol. E82, No. 10., pp. 2237 2245

Comparing Model Selection Criteria for Belief Networks Tim Van Allen, Russ Greiner, 2000.

Using bayesian networks to analyze expression data.
Friedman, M. Linial, I. Nachman, and D. Per. Proceedings of the 4th Annual International Conference on Computational Molecular Biology (RECOMB00), pages 127135, N.Y., April 811 2000. ACM Press.

An Introduction to MCMC for Machine Learning, Andrieu et al., Machine Learning, 2001.

Learning Bayesian Network Classifiers for Credit Scoring Using Markov Chain Monte Carlo Search, B. Baesens et al., 2001.

Operations for Learning with Graphical Models Wray Buntine. Journal of Artificial Intelligence Research. Vol. 2. pp. 159225. 1994.

Friedman, N 1998. The Bayesian Structural EM Algorithm Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence Morgan Kaufmann. 1998.

Being Bayesian about Network Structure  A Bayesian Approach To Structure Discovery in Bayesian Networks N. Friedman and D. Koller. Machine Learning. To appear.
Week 8 (beginning March 1, 2004)
Learning Bayesian networks with unknown structure  searching the space of network topologies using suitable scoring functions to guide the search, structure learning in practice
 learning low order Bayesian networks (where each random variable directly depends on a small number of others), Bayesian approach to structure discovery,
examples. Learning Bayesian network parameters in the presence of missing attribute values (using Expectation Maximization) when the structure is known;
Learning networks of unknown structure in the presence of missing attribute values.
Required readings

Chapter 6 from: Mitchell, T. 1997.
Machine Learning. New York: McGraw Hill.

Lecture Slides. Vasant Honavar

Graphical Models  Probabilistic Inference. M. Jordan and Y. Weiss.

A Tutorial on Learning with Bayesian Networks, David Heckerman. Tech. Rep. MSRTR9506. Microsoft Research.


Approximating Discrete Probability Distributions with Dependence Trees. Chou, C.K. and Liu, C.N. IEEE Transactions on Information Theory. 14(3), 1968. pp. 462467.

Learning Bayesian belief networks: An approach based on the MDL principle., W. Lam and F. Bacchus, Computational Intelligence, 10(4), 1994.

Bayesian Network Classifiers Friedman, N., Geiger, D., and Goldszmidt, M. Machine Learning 29: pp. 131163. 1997.

Learning
Naive Bayes Classifiers from Attribute Value Taxonomies and Partially Specifid Data, Zhang, J. and Honavar, V. (2004). Technical Report ISUCSTR0403. Department of Computer Science, Iowa State University.
Recommended Readings

Bayesian Networks and DecisionTheoretic Reasoning for Artificial Intelligence, Daphne Koller and Jack Breese. Tutorial Given at AAAI97.

Inference in Bayesian Networks  A Procedural Guide Huang, C., A. Darwiche. Journal of Approximate Reasoning. Vol 15. pp. 225263.

Learning Bayesian Belief Networks Based on the Minimum Description Length Principle: Basic Properties.J. Suzuki, IEICE Transactions on Fundamentals, vol. E82, No. 10., pp. 2237 2245

Comparing Model Selection Criteria for Belief Networks Tim Van Allen, Russ Greiner, 2000.

Using bayesian networks to analyze expression data.
Friedman, M. Linial, I. Nachman, and D. Per. Proceedings of the 4th Annual International Conference on Computational Molecular Biology (RECOMB00), pages 127135, N.Y., April 811 2000. ACM Press.

An Introduction to MCMC for Machine Learning, Andrieu et al., Machine Learning, 2001.

Learning Bayesian Network Classifiers for Credit Scoring Using Markov Chain Monte Carlo Search, B. Baesens et al., 2001.

Operations for Learning with Graphical Models Wray Buntine. Journal of Artificial Intelligence Research. Vol. 2. pp. 159225. 1994.

Friedman, N 1998. The Bayesian Structural EM Algorithm Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence Morgan Kaufmann. 1998.

Being Bayesian about Network Structure  A Bayesian Approach To Structure Discovery in Bayesian Networks N. Friedman and D. Koller. Machine Learning. To appear.
Week 9 (Beginning March 8 2004)
Mistake bound analysis of learning algorithms. Mistake bound analysis of online algorithms for learning Conjunctive Concepts. Optimal Mistake Bounds. Version Space Halving Algorithm. Randomized Halving Algorithm. Learning monotone disjunctions in the presence of irrelevant attributes  the Winnow and Balanced Winnow Algorithms. Multiplicative Update Algorithms for concept learning and function approximation. Weighted majority algorithm. Applications.
Probably Approximately Correct (PAC) Learning Model. Efficient PAC learnability. Sample Complexity of PAC Learning in terms of cardinality of hypothesis space (for finite hypothesis classes). Some Concept Classes that are easy to learn within the PAC setting.
Required readings

Lecture Slides. V. Honavar

Chapter 7, Mitchell, T. 1997. Machine Learning. New York: McGraw Hill.

The weighted Majority Algorithm, Littlestone, N., and Warmuth, M. Information and Computation Vol. 108: 212261. 1994.

Empirical Support for Winnow and Weighted Majority Algorithms, Blum, A. In: Proceedings of the
Twelfth International Conference on Machine Learning, pages 6472. Morgan Kaufmann, 1995.

Applying Winnow to ContextSensitive Spelling Correction Golding, A., and Roth, D.
Machine Learning, 34(13):107130, 1999.

M.H. Yang, D. Roth, and N. Ahuja. A SNoWbased face detector. NIPS(12), pages 855861, 2000.


A Tutorial on Computational Learning Theory. V. Honavar

Overview of the Probably Approximately Correct (PAC) Learning Framework. D. Haussler, 1995.
Recommended Readings

Computational Learning Theory Sally Goldman.

A.J. Grove, N. Littlestone, and D. Schuurmans. General convergence results for linear discriminant updates. In Proc. 10th Annu. Conf. on Comput. Learning Theory, pages 171183, 1997.

Tong Zhang. Regularized winnow methods. In Advances in Neural Information Processing Systems 13, pages 703709, 2001.

Helmbold, D.P., Schapire, R.E., Singer, Y., Warmuth, M.K. Online portfolio selection using multiplicative updates. Mathematical Finance, vol. 8 (4), pp.325347, 1998.
Spring Break
Week 10 (Beginning March 22 2004)
Efficiently PAC learnable concept classes. Sufficient conditions for efficient PAC learnability. Some concept classes that are not efficiently learnable in the PAC setting. Making hardtolearn concept classes efficiently learnable  transforming instance representation and hypothesis representation. Occam Learning Algorithms. PAC Learnability of infinite concept classes. VapnikChervonenkis (VC) dimension. Properties of VC dimension, VC dimension and learnability, Learning from Noisy examples, Transforming weak learners into PAC learners through accuracy and confidence boosting, Learning under helpful distributions  Kolmogorov Complexity, Conditional Kolmogorov Complexity, Universal distributions, Learning Simple Concepts, Learning from Simple Examples
Required readings

Lecture Slides. V. Honavar

Chapter 7, Mitchell, T. 1997. Machine Learning. New York: McGraw Hill.

Overview of the Probably Approximately Correct (PAC) Learning Framework. D. Haussler, 1995.

Kearns, M. 1998. Efficient Noise Tolerant Learning from Statistical Queries. Journal of the ACM. Vol. 45, pp. 9831006.

Parekh, R. and Honavar, V. (2000). On the Relationships between Models of Learning in Helpful Environments. In: Proceedings of the Fifth International Conference on Grammatical Inference. Lisbon, Portugal.

Parekh, R. and Honavar, V. (2001). DFA Learning from Simple Examples. Machine Learning. Vol. 44. pp. 935.

Polikar, R., Udpa, S., Udpa, L., and Honavar, V. Learn++: An Incremental Learning Algorithm for MultiLayer Perceptron Networks. IEEE Transactions on Systems, Man, and Cybernetics. Vol. 31, No. 4. pp. 497508.
Recommended Readings

Computational Learning Theory Sally Goldman.

CesaBianchi, N., Dichterman, E., Fischer, P., Shamir, E., Simon, H. 1999. SampleEfficient Strategies for Learning in the Presence of Noise. Journal of the ACM. Vol. 46. pp. 684719.

Goldreich, O. and Goldwasser, S. 1998. Property testing and its connection to Learning and approximation. Journal of the ACM. Vol. 45. pp. 653750.

Khardon, R. and Roth, D. 1997. Learning to Reason. Journal of the ACM. Vol, 44. pp. 697725.

Valiant, L. 2000. A Neuroidal Architecture for Cognitive Computation. Journal of the ACM. Vol. 47. pp. 854882.

Maass, W. 1994. Efficient Agnostic PAC Learning With Simple Hypotheses. . Proceedings of the Seventh Annual Conference on Computational Learning Theory. 1994. pp. 6775.

Benedek. G. and Itai, A. Dominating Distributions and Learnability. In: Annual Workshop on Computational Learning Theory. 1992.
Week 11 (Beginning March 29, 2004)
Support Vector Machines. Background: Dual representation of Perceptrons.
A learning algorithm using dual representation of perceptrons.
Margin and geometric margin. Maximal Margin Separating Hyperplanes  Why?
Maximal Margin separating hyperplanes  How?
Introduction to Lagrange/KarushKuhnTucker Optimization Theory. Optimization
problems. Linear, quadratic, and convex optimization problems. Primal and
dual representations of optimization problems.
Convex Quadratic programming formulation of the
maximal margin separating hyperplane finding problem. Characteristics of the
maximal margin separating hyperplane. Kernel functions for classification of
non linearly separable data. Soft margin SVM algorithms.
Properties of Kernel functions. Implementation
of SVM.
Required readings

Lecture Slides. V. Honavar

B. E. Boser, I. M. Guyon, and V. N. Vapnik.
A training algorithm for optimal margin classifiers. 5th Annual ACM Workshop on COLT, pp. 144152, Pittsburgh, PA, 1992. ACM Press.

A Tutorial on Support Vector Machines. Nello Christianini, International Conference on Machine Learning (ICML 2001).

J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121167, 1998.

M. A. Hearst, B. Schvlkopf, S. Dumais, E. Osuna, and J. Platt. Trends and controversies  support vector machines. IEEE Intelligent Systems, 13(4):1828, 1998.

Platt, J. Fast training of support vector machines using sequential minimal optimization.
In B. Scholkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in
Kernel Methods  Support Vector Learning, pages 185208, Cambridge, MA, 1999. MIT Press.

Edgar Osuna, Robert Freund, and Federico Girosi. Support vector machines: Training and applications. Technical Report AIM1602, 1997.

Brown, M. P., Grundy, W. N., Lin, D., Cristianini, N., Sugnet, C. W., Furey, T. S., Ares, M. Jr, and Haussler, D. Knowledgebased analysis of microarray gene expression data by using support vector machines.
Proc. Natl. Acad. Sci. USA 97: 262267: 2000.

T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In European Conference on Machine Learning (ECML98), 1998.

Yan, C., Dobbs, D., and Honavar, V. A TwoStage Classifier for Identification of ProteinProtein Interface Residues. In: Proceedings of the Conference on Intelligent Systems in Molecular Biology, 2004. (also Bioinformatics In Press., 2004).
Recommended readings

S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, and K.R.K. Murthy. Improvements to platt's SMO algorithm for SVM classifier design. Technical report, Dept of CSA, IISc, Bangalore, India, 1999.

S.S. Keerthi and E.G. Gilbert, Convergence of a generalized SMO algorithm for SVM classifier design, Technical Report CD0001, Dept. of Mechanical and Production Eng., National University of Singapore, 2000.

S.S. Keerthi, S. Shevade, C. Bhattacharya and K. Murthy. A fast iterative nearest point algorithm for support vector machine classifier design. (Technical Report TRISL9903). Dept. Comp. Sci. and Auto., Indian Inst. of Science, Bangalor, India, 1999.

Yi Li and Philip M. Long, The Relaxed Online Maximum Margin Algorithm, Machine Learning Vol. 46,
pp. 361, 2002.

Graepel, T., Herbrich, R., & Williamson, R. C. (2001). From margin to sparsity. In Advances in Neural Information System Processing 13.

J. Platt, N. Cristianini, J. ShaweTaylor, Large Margin DAGs for Multiclass Classification, in: Advances in Neural
Information Processing Systems 12, pp. 547553, MIT Press, (2000).
Week 12 (Beginning April 5, 2004)
Lazy Learning Algorithms. Instance based Learning, Knearest neighbor classifiers, distance functions, locally weighted regression,
sample application to document classification using TFIDF representation. Relative advantages and disadvantages of lazy learning and eager
learning.
Learning Sequence Classifiers. Learning Sequence Classifiers Using FeatureBased Representations of Sequences  e.g., bag of words representations for text classification. Ngrams for text analysis and statistical natural language processing (NLP). Markov models and Hidden Markov Models (HMM). Some special classes of HMM (ergodic, lefttoright, etc.) Fundamental HMM problems  computing the probability of a given observation sequence given a model; computing the most likely hidden state sequence given an observation sequence and a model; computing the most likely model given observation sequence(s). Forward, Backward, ForwardBackward and Viterbi Algorithms; Algorithms for Learning a HMM from data.
Required readings

Lecture Slides, V. Honavar

Lecture Slides. V. Honavar

Chapter 8 from: Mitchell, T. 1997.
Machine Learning. New York: McGraw Hill.

C. G. Atkeson, S. A. Schaal and Andrew W, Moore, Locally Weighted Learning, AI Review,Volume 11, Pages 1173 (Kluwer
Publishers) 1997

J. Kleinberg. Two algorithms for nearestneighbor search in high dimensions. In Proceedings of the Twentyninth ACM Symposium on Theory of Computing, 1997.
in IEEE Intelligent Systems Vol. 13, No. 2. pp. 4449.

Dugad, R., and Desai, U.B.
A Tutorial on Hidden Markov Models, Tech. Rep. 96.1, Department of Electrical Engineering, Indian Institute of Technology (IIT) Bombay, India.

J. Bilmes, A Gentle Tutorial on the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models, Tech. Rep. TR97021, International Computer Science Institute, Berkeley, CA. 1997.

A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition, L. R. Rabiner, Proc. IEEE, Vol. 77, No. 2, pp. 257286, February 1989. Errata.

An Introduction to Hidden Markov Models for Biological Sequences. In: Computational Methods in Molecular Biology, Salzberg, S.L., Searls, D.B and Kasif, S., Elsevier, 1998.

Christina Warrender, Stephanie Forrest, and Barak A. Pearlmutter. Detecting intrusions using system calls: Alternative data models.. In IEEE Symposium on Security and Privacy, pages 133145, 1999.

Alan Poritz, Hidden Markov Models: A guided tour., ICASSP 1988.
Software
Recommended Readings

P.N. Yianilos. Data structures and algorithms for nearest neighbor search in general metric spaces. In Fourth ACMSIAM Symposium on Discrete Algorithms, pages 311 321, January 1993.

Yang, J. and Honavar, V. (1998). Feature Subset Selection Using a Genetic Algorithm. In: Feature Extraction, Construction, and Subset Selection: A Data Mining Perspective. Motoda, H. and Liu, H. (Eds.) New York: Kluwer. 1998. In press. A shorter version of this paper appeared in IEEE Intelligent Systems Vol. 13, No. 2, pp. 44-49.

S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. JACM: Journal of the ACM, 45, 1998.

A. Hinneburg, C. Aggarwal, and D. Keim (2000). What is the nearest neighbor in high dimensional spaces?, International Conference on Very Large Data Bases, Cairo, Egypt, 2000, pages 506-515.

S. Eddy, Profile Hidden Markov Models, Bioinformatics 14: 755-763, 1998.

A. Krogh, M. Brown, S. Mian, K. Sjolander, and D. Haussler. Hidden Markov models in computational biology: Applications to protein modeling. J. Mol. Biol., 235:1501-1531, 1994.

P. Baldi, S. Brunak, P. Frasconi, G. Pollastri and G. Soda. Bidirectional Dynamics for Protein Secondary Structure Prediction, In Sequence Learning: Paradigms, Algorithms, and Applications (R. Sun and C.L. Giles eds.), pp. 80–104. Springer, 2000.

K. Seymore, A. McCallum, and R. Rosenfeld. Learning Hidden Markov model structure for information extraction. In: AAAI 99 Workshop on Machine Learning for Information Extraction, 1999.

G. Sigletos, G. Paliouras, and V. Karkaletsis,
Role Identification from Free Text Using Hidden Markov Models,
Proceedings of the Panhellenic Conference in Artificial Intelligence (SETN), Lecture Notes in Artificial Intelligence, no. 2308, Springer Verlag, pp. 167-178, 2002.

Charniak, Eugene (1993). Statistical Language Learning. MIT Press.
Week 13 (April 12, 2004)
Unsupervised or self-supervised learning. Clustering. Learning Mixture Models from Data; Identifiability of Mixture Models; Maximum Likelihood approach to Mixture Model Learning: Expectation Maximization (EM) algorithms. K-means clustering algorithm and variants. Adaptive Resonance Theory (ART) family of clustering algorithms. Distance measures; Clustering Criteria: Intra-Cluster and Inter-Cluster distances. Hierarchical Agglomerative Clustering Algorithm. Distributional Clustering; Applications to Learning Attribute Value Taxonomies from Data and Phylogeny Construction. Latent Semantic Indexing, Principal Component Analysis, and related methods.
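As a concrete illustration of the K-means procedure, here is a minimal Python sketch of Lloyd's algorithm (alternating assignment and centroid-update steps until the assignments stop changing); the two-cluster toy data and the simple first-k-points initialization are assumptions for illustration, not part of the course materials:

```python
def kmeans(points, k, iters=100):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    centroids = list(points[:k])  # simple deterministic initialization
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        new = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:  # converged: assignments can no longer change
            break
        centroids = new
    return centroids, clusters

# Two well-separated groups (assumed toy data)
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts, 2)
```

Each iteration can only decrease the within-cluster sum of squared distances, which is why the algorithm terminates; the result, however, depends on initialization, hence the many restart and seeding variants discussed in the readings.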
Required readings

Lecture Slides. V. Honavar

Pavel Berkhin. A Survey of Clustering Algorithms. 2002.

Why so many Clustering Algorithms?

ART I and Pattern Clustering. In: Proceedings of the 1988 Connectionist Models Summer School, Touretzky, D., Hinton, G., and Sejnowski, T. (Eds.). Palo Alto, CA: Morgan Kaufmann.

Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183-190.

Pitt, L. and Reinke, R. 1988. Criteria for Polynomial Time (Conceptual) Clustering. Machine Learning 2:371-396.

Barbara, D. 2002. Requirements for Clustering Data Streams. SIGKDD Explorations. Vol. 3, pp. 23-27.
Recommended Readings

Escobar, M.D., and West, M. (1995), Bayesian Density Estimation and Inference Using Mixtures. Journal of the American Statistical Association, 90, 577-588.

Mishra, N., Oblinger, D., and Pitt, L. Sublinear Time Approximate Clustering. 12th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 439-447, January 2001.

Ramgopal R. Mettu and C. Greg Plaxton. Optimal Time Bounds for Approximate Clustering. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, August 2002, pages 344-351.

P. Indyk, 1999. Sublinear Time Algorithms for Metric Space Problems, Symposium on Theory of Computing (STOC '99).

N. Slonim and N. Tishby. Agglomerative information bottleneck. In Proc. of NIPS-12, pages 617-623. MIT Press, 2000.

Slonim, N. and Tishby, N. (2000) Document clustering using word clusters via the information bottleneck method. In Proceedings of the 23rd International Conference on Research and Development in Information Retrieval (SIGIR), pp. 208-215.

T. Hofmann, Probabilistic latent semantic indexing, in Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1999, pp. 50-57.

Additional Information

Charniak, Eugene (1993). Statistical Language Learning. MIT Press.

Hartigan, J. A. (1975). Clustering Algorithms. New York: Wiley.
Week 14
Reinforcement Learning. Agents that learn by exploration of environments, using environmental reward and punishment. Examples of reinforcement learning problems. Credit assignment problem and its implications. Exploration-exploitation dilemma. Some approaches to exploration-exploitation tradeoffs. Markov decision processes. Learning optimal policies from interaction with the environment: deterministic, stochastic, stationary, and non-stationary environments. Value functions and action-value functions. Bellman equations and the dynamic programming approach to learning optimal policies when the transition and reward functions are known (i.e., the agent has a model of the environment). Q-learning algorithm for learning optimal policies when an accurate model of the environment is not known. Temporal difference methods. Scaling up reinforcement learning to large state spaces: function approximation methods for compact representation of action-value functions; state abstraction methods for hierarchical reinforcement learning. Multi-agent reinforcement learning. Applications.
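The core ideas above (temporal-difference updates, epsilon-greedy handling of the exploration-exploitation tradeoff) fit in a short tabular Q-learning sketch; the four-state corridor environment below is an assumed toy MDP, not an example from the course materials:

```python
import random

def q_learning(n_states, actions, step, episodes=500,
               alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0  # each episode starts in state 0
        while s is not None:  # None marks the terminal state
            # Exploration-exploitation: random action with probability eps
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s2, r = step(s, a)
            best_next = 0.0 if s2 is None else max(Q[(s2, b)] for b in actions)
            # Temporal-difference update toward the one-step lookahead target
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# Assumed toy MDP: a 4-state corridor; moving right from state 3 reaches
# the goal (reward 1, episode ends); every other move gives reward 0.
def step(s, a):
    if a == "right":
        return (None, 1.0) if s == 3 else (s + 1, 0.0)
    return (max(s - 1, 0), 0.0)

Q = q_learning(4, ("right", "left"), step)
# The learned greedy policy prefers "right" in every state.
```

Note that the update uses only sampled transitions, so no model of the transition or reward function is required, which is exactly the distinction drawn above between Q-learning and the dynamic programming approach.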
Required readings
 Chapter 13 from the Mitchell Text.

Lecture Slides. V. Honavar

Reinforcement Learning Lecture Slides. F. Bromberg

Multiagent Reinforcement Learning Lecture Slides. F. Bromberg

Harmon, M. and Harmon, S. (1997). Reinforcement Learning: A Tutorial.
 Kaelbling, L., Littman, M., and Moore, A. (1996). Reinforcement Learning: A Survey.

J. Rennie and A. McCallum. Using Reinforcement Learning to Spider the Web Efficiently. Proceedings of the Sixteenth International Conference on Machine Learning, 1999.

M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. In Proc. of the 15th Int. Conf. on Machine Learning, pages 260-268. Morgan Kaufmann, 1998.

Barto, A., and Mahadevan, S. Hierarchical Reinforcement Learning. Draft.

Schultz, W., Dayan, P., and Montague, P.R. (1997). A neural substrate of prediction and reward. Science, 275, 1593-1599.

Even-Dar, E. and Mansour, Y. (2001). Convergence of Optimistic and Incremental Q-Learning. NIPS 2001.

D. Precup, R. S. Sutton, and S. Singh. Theoretical results on reinforcement learning with temporally abstract options. In Proceedings of the 10th European Conference on Machine Learning, ECML-98, pages 382-393. Springer Verlag, 1998.
Recommended readings
 Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.

Mahadevan, S. Spatio-Temporal Abstraction of Stochastic Sequential Processes.

P. Marbach, O. Mihatsch, and J. N. Tsitsiklis, Call admission control and routing in integrated services networks using neuro-dynamic programming. IEEE Journal on Selected Areas in Communications, vol. 18, no. 2, pp. 197-208, 2000.

T. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR, 13:227-303, 2000.

J. Baxter and P. Bartlett. Direct gradientbased reinforcement learning. Technical report, Australian National University, Research School of Information Sciences and Engineering, July 1999.


Bartlett and J. Baxter. Estimation and approximation bounds for gradientbased reinforcement learning. Journal of Computer and Systems Sciences, 2002.

Mark Humphrys. Action Selection methods using Reinforcement Learning. PhD thesis, University of Cambridge, June 1997.
Week 15
Review, Summary of the Course, and Discussion of Some Current Research Problems
General Background Material of Interest

Computing Machinery and Intelligence, Alan Turing.
 AI Overview page from the American Association for Artificial Intelligence
 AI Applications page from the American Association for Artificial Intelligence
 AI in the News page from the American Association for Artificial Intelligence
 What is AI? (by J. McCarthy)
 History and Promise of AI (by D. Waltz)
 Report on 21st Century Intelligent Systems (by B. Grosz and R. Davis)
 The Role of Intelligent Systems in the National Information Infrastructure (by D. Weld)

Tutorial on Intelligent Agents and Multiagent systems (by V. Honavar)

Nwana, H.
Software Agents: An Overview. Knowledge Engineering Review. vol. 11, no. 3, pp. 1-40. 1996.

Wooldridge, M. and Jennings, N.
Intelligent Agents: Theory and Practice. Knowledge Engineering Review. vol. 10, no. 2, pp. 115-152. 1995.

Jennings, N. R., Sycara, K., and Wooldridge, M.,
A Roadmap of Agent Research and Development. International Journal of Autonomous Agents and Multi-Agent Systems 1(1): 7-38, 1998.

AgentWeb by Tim Finin

Additional AI Links by V. Honavar
 AI on the web by Russell and Norvig
Copyright © 1999-2003, Vasant Honavar, Department of Computer Science, Iowa State University. All rights reserved.
Dr. Vasant Honavar
Professor
Department of Computer Science
Iowa State University
Atanasoff Hall, Ames, IA 50011-1040 USA
phone: +1-515-294-4377, fax: +1-515-294-0258
Additional Information

AAAI Machine Learning Topics Page

Jaynes, E.T. Probability Theory: The Logic of Science, Cambridge University Press, 2003.

Cox, R.T. The Algebra of Probable Inference, The Johns Hopkins Press, 1961.

Boole, G. The Laws of Thought, (First published: 1854). Prometheus Books, 2003.

Feller, W. An Introduction to Probability Theory and its Applications. Vols 1, 2. New York: Wiley. 1968.

Russell, S. and Norvig, P. 2003. Artificial Intelligence: A Modern Approach. Prentice Hall.

Duda, R., Hart, P., and Stork, D. (2000). Pattern Classification. New York: Wiley.

Langley, P. (1995). Elements of Machine Learning. Palo Alto, CA: Morgan Kaufmann.

Bishop, C. M. Neural Networks for Pattern Recognition. New York: Oxford University Press (1995).

Baldi, P. and Brunak, S. (2003). Bioinformatics  A Machine Learning Approach. Cambridge, MA: MIT Press.
 Cohen, P.R. Empirical Methods for Artificial Intelligence. Cambridge, MA: MIT Press.

Chakrabarti, S. (2003). Mining the Web, Morgan Kaufmann.

Baldi, P., Frasconi, P., Smyth, P. (2003). Modeling the Internet and the Web  Probabilistic Methods and Algorithms. New York: Wiley.

Quinlan, J.R. (1993). C4.5: Programs for Machine Learning. Palo Alto, CA: Morgan Kaufmann.

Sestito, S. and Dillon, T. (1994). Automated Knowledge Acquisition. Sydney, Australia: Prentice-Hall.

Gallant, S. Neural Network Learning and Expert Systems. Cambridge, MA: MIT Press. 1993.

Theodoridis, S., and Koutroumbas, K. Pattern Recognition. Elsevier. 2003.

Pearl, J. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1990.

Ripley, B. (1995). Pattern Recognition and Neural Networks. Cambridge University Press.

Fukunaga, K. Introduction to Statistical Pattern Recognition. New York: Academic Press (1990).

Jensen, F. Bayesian Networks and Decision Graphs. Berlin: Springer-Verlag. 2001.

Hastie, T., Tibshirani, R., and Friedman, J. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer.

Casella and Berger (2001). Statistical Inference. New York: Duxbury Press.

Kearns, M. J. & Vazirani, U. V. An Introduction to Computational Learning Theory. Cambridge, MA: MIT Press. (1994).

Natarajan, B. Machine Learning: A Theoretical Approach. Morgan Kaufmann. 1992.

Machine Learning Resources by David Aha.

WEKA Machine Learning Algorithms in Java