Instructor | Feng Chen |
Office | ECSS 3.901 |
Number | (518) 442-4270 |
Email | feng.chen@utdallas.edu |
Office Hours | Wednesday 2:00PM to 3:00PM; Thursday 2:00PM to 3:00PM |

TA | Chen Zhao |
Office | ECSS 2.114 |
Email | Chen.Zhao@utdallas.edu |
Office Hours | 2:00PM-3:00PM Thursdays; 10:00AM-11:00AM Fridays |

Class Time and Location | 4:00PM-5:15PM, SOM 2.107 |
Course Description:
A course in artificial intelligence (AI) introducing basic concepts and techniques. Topics include statistics, optimization, first-order logic, probabilistic soft logic, Markov models and hidden Markov models, Markov random fields, and artificial neural networks.
Textbooks & References
There is no required textbook, but the following books may serve as useful references for different parts of the course.
Course Schedule:
Part | Lecture | Lecture Topics | Reading Materials |
Introduction | 1 | Syllabus; Introduction | AI (Chapter 1); PRML (Chapter 1); DL (Chapter 1) |
A: Statistics | 2 | Basic Distributions (Binomial, Poisson, Gaussian) | PRML (Chapter 2; Appendix B) |
| 3 | Parameter Estimation (Maximum Likelihood Estimation) | PRML (Chapter 2; Appendix B) |
| 4 | Linear Regression, Logistic Regression | PRML (Section 3.1; Section 4.1); Parts I and II in Andrew Ng's Lecture Notes; Lectures 2.1 to 4.6 and 6.1 to 7.4 in Andrew Ng's short videos |
B: Numerical Optimization Techniques | 5 | Gradient Descent, Stochastic Gradient Descent, Mini-batch Gradient Descent, Momentum, Nesterov Momentum, AdaGrad, RMSProp, Adam | DL (Chapter 8); Lectures 17.1 to 17.4 in Andrew Ng's short videos; The Evolution of Gradient Descent; How to Implement Linear Regression Using Gradient Descent; Interpretation of Bias Correction in the Adam Algorithm |
G: Artificial Neural Networks | 6 | Introduction: neurons, activation functions, architectures, loss functions, algorithms | DL (Chapters 6, 7); Introduction to Deep Learning Part 1; Introduction to Deep Learning Part 2; Activation Functions; Why Non-linear Activation Functions; Train/Dev/Test Sets; Parameters vs Hyperparameters; Hyperparameter Tuning in Practice; Nuts and Bolts of Applying Deep Learning; Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization; Online demo of NN |
| 7 | Convolutional Neural Networks | DL (Chapter 9); Lecture by Andrej Karpathy; A Short Course on Convolutional Neural Networks by Andrew Ng |
| 8 | Deep Reinforcement Learning | Lecture by Serena Yeung; Applications of Deep Reinforcement Learning; Demo of Deep Reinforcement Learning |
C: First Order Logic | 9 | Syntax and Semantics; Learning and Inference | |
D: Probabilistic Soft Logic | 9 | Syntax and Semantics; Learning and Inference | |
E: Markov Models and Hidden Markov Models | 10 | TBD | |
F: Markov Random Fields | 10 | TBD | |
| 9 | TBD | |
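To connect the Part A regression topic with the Part B optimization techniques, here is a minimal sketch of batch gradient descent fitting a linear regression model. It is illustrative only and not from the course materials; the toy data, learning rate, and iteration count are all assumptions.

```python
import numpy as np

def gradient_descent(X, y, lr=0.5, n_iters=1000):
    """Minimize the mean squared error (1/2n)*||Xw - y||^2 by gradient descent.

    Hypothetical helper for illustration; lr and n_iters are arbitrary choices.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n  # gradient of the MSE loss at w
        w -= lr * grad                # step in the direction of steepest descent
    return w

# Toy data: y = 1 + 2x, with a column of ones for the intercept.
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x

w = gradient_descent(X, y)
print(np.round(w, 2))  # recovers weights close to [1, 2]
```

Stochastic and mini-batch variants (lecture 5) replace the full-data gradient above with a gradient computed on one example or a small batch per step.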
Examinations and Grading:
Homework
Course Project Requirement
Policy on Cheating:
Cheating in an exam will result in an E grade for the course. Further, the students involved will be referred to the Dean's office for disciplinary action.
Homework problems are meant to be individual exercises; you must do them by yourself. Any of the following actions will be considered cheating.
Cheating in a homework exercise will result in the following penalty for all students involved.
Students who cheat in two or more homework assignments will receive an E grade for the course. The names of such students will also be forwarded to the Dean's office for disciplinary action.
For a complete list of UTD policies and procedures, see here.