Lecture Slides
Instructor: Vibhav Gogate (Email: vgogate at hlt dot utdallas dot edu)
Introduction. Readings: Mitchell, Chapter 1.
Inductive Learning. Readings: Mitchell, Chapter 2.
Decision Tree Induction. Readings: Mitchell, Chapter 3.
Point Estimation. Readings: Bishop, Chapters 1 and 2; Murphy, Chapters 2 and 3. Review Probability Basics here.
Naive Bayes. Readings: Mitchell, Sections 6.9-6.10; Murphy, Section 3.5.
Logistic Regression. Readings: Tom Mitchell book chapter; Andrew Ng's notes on Linear Regression.
Error-Driven Linear Classifiers. Readings: Mitchell, Chapter 4. Optional: Perceptron proofs.
Neural Networks. Readings: Mitchell, Chapter 4; Bishop, Chapter 5; Deep Learning book, Chapter 6.
Support Vector Machines. Readings: SVM tutorial by Chris Burges (Sections 3 and 4); Bishop, Chapter 7; Andrew Ng's notes.
Useful Optimization Theory for SVMs.
Instance-Based Learning. Readings: Mitchell, Chapter 8. Tutorial on KD-trees.
Bias/Variance Tradeoff, Boosting, and Bagging. Readings: Bishop, Chapter 14; Duda, Hart, and Stork, Chapter 9.
Model Selection
Midterm Review
Unsupervised Learning and Clustering. Readings: Bishop, Chapter 9. Duda, Hart and Stork, Chapter 10.
Bayesian networks: Representation. Readings: Murphy, Chapter 10; My Notes on Bayesian networks.
Bayesian networks: Inference. Readings: Bishop, Chapter 8; Murphy, Chapter 20; My Notes on Bayesian networks.
Parameter Learning in Bayesian networks. Readings: Murphy, Chapters 10 and 11.
Hidden Markov models. Readings: Bishop, Chapter 13. Kevin Murphy's book chapter on HMMs (Sections 15.1-15.3).
Computational Learning Theory. Readings: Mitchell, Chapter 7.
RECAP. Readings: A Few Useful Things to Know about Machine Learning by Pedro Domingos.
Some material on the slides is courtesy of Pedro Domingos, Carlos Guestrin, Luke Zettlemoyer, Raymond Mooney, Dan Weld, and Vincent Ng.