Adversarial Machine Learning (CS 7301.005) 


Time and Location: Friday 10:00am-12:45pm @ SLC 1.202

Instructor: Murat Kantarcioglu
Office Hours & Location: Friday 9:00-10:00am

Teaching Assistant: N/A
Office Hours & Location: N/A

Prerequisites: Machine Learning

Grading:

  •   Project                      50%
  •   Paper presentation   40% (Please use the provided PowerPoint template (ppt))
  •   Class Discussion      10%

 

Course Topics (tentative):

Increasingly, detecting and preventing cyber attacks requires sophisticated use of machine learning tools. This seminar class will cover the theory and practice of adversarial machine learning in the context of applications such as cybersecurity, where we must deal with intelligent adversaries who try to fool machine learning algorithms.
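
To make this concrete, here is a minimal sketch of an evasion attack in the spirit of the fast gradient sign method (FGSM): a single signed-gradient step that pushes an input toward the model's decision boundary. The two-layer PyTorch model and the random data below are hypothetical stand-ins, not code from the readings.

    import torch
    import torch.nn as nn

    # Hypothetical victim: a small classifier on 20-dimensional feature vectors.
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_attack(x, y, epsilon=0.1):
        """One FGSM step: perturb x by epsilon in the direction that
        increases the classification loss, trying to flip the prediction."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Toy demonstration on random data standing in for real features.
    x, y = torch.randn(5, 20), torch.randint(0, 2, (5,))
    x_adv = fgsm_attack(x, y)
    print("clean predictions:", model(x).argmax(dim=1).tolist())
    print("adversarial preds:", model(x_adv).argmax(dim=1).tolist())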

         
Textbook: We will cover selected theoretical and practical papers on the topic, along with our book on the subject. You do not have to buy the book; it is available for free to UT Dallas students.

Adversarial Machine Learning
Synthesis Lectures on Artificial Intelligence and Machine Learning

Yevgeniy Vorobeychik, Murat Kantarcioglu

               

Course Outline:

 

08.23.19


  • Overview of the Class
  • Survey of the topics.
    • Please read the following survey for an overview (pdf)
    • Slides (pdf)
    • Our papers on the topic 
      • M. Kantarcioglu, B. Xi, and C. Clifton, "Classifier evaluation and attribute selection against active adversaries," Data Min. Knowl. Discov., vol. 22, pp. 291-335, January 2011. (pdf)
      • Y. Zhou, M. Kantarcioglu, B. Thuraisingham, and B. Xi, "Adversarial support vector machine learning," in SIGKDD '12. (pdf)
      • Y. Zhou, M. Kantarcioglu, and B. Thuraisingham, "Sparse Bayesian adversarial learning using relevance vector machine ensembles," in ICDM 2012, pp. 1206-1211. (pdf)

08.30.19

09.06.19

09.13.19

09.20.19

  • Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. In ICLR 2019. (pdf)
    • Presented by Murat
  • Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., and Madry, A. Adversarially robust generalization requires more data. In NIPS 2018.  (pdf)
    • Presented by Murat
  • Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In ICLR 2018, Conference Track Proceedings. (pdf) (See the PGD sketch after this list.)
    • Presented by Murat
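
The Madry et al. paper above frames robustness as a min-max problem: projected gradient descent (PGD) searches for a worst-case perturbation inside an epsilon-ball, and training then minimizes the loss on those perturbed points. Below is a minimal sketch under the same assumptions as the FGSM example earlier (a hypothetical two-layer model on random data); it is an illustration, not the authors' reference implementation.

    import torch
    import torch.nn as nn

    # Hypothetical victim model, as in the FGSM sketch.
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    loss_fn = nn.CrossEntropyLoss()

    def pgd_attack(x, y, epsilon=0.1, alpha=0.02, steps=10):
        """PGD: repeated signed-gradient ascent steps on the loss, each
        followed by projection back into the l_inf ball around x."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -epsilon, epsilon)  # project
        return x_adv.detach()

    # One step of adversarial training: inner maximization, outer minimization.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
    optimizer.zero_grad()
    loss_fn(model(pgd_attack(x, y)), y).backward()
    optimizer.step()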

09.27.19

  • Athalye, A., Carlini, N., and Wagner, D. A. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML 2018 (pdf)
    • Presented by Murat
  • He, W., Wei, J., Chen, X., Carlini, N., and Song, D. Adversarial example defense: Ensembles of weak defenses are not strong. In 11th USENIX Workshop on Offensive Technologies, WOOT 2017. (pdf)
    • Presented by Murat
  • Trojaning Attack on Neural Networks (pdf)
    • Presented by Murat

10.04.19

  • Elsayed, G. F., Shankar, S., Cheung, B., Papernot, N., Kurakin, A., Goodfellow, I., and Sohl-Dickstein, J. Adversarial examples that fool both computer vision and time-limited humans. In NIPS 2018.  (pdf)
    • Presented by Yu
  • Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., and Song, D. Robust physical-world attacks on deep learning visual classification. In IEEE CVPR June 2018. (pdf)
    • Presented by Yu
  • Carlini, N., Athalye, A., Papernot, N., Brendel, W., Rauber, J., Tsipras, D., Goodfellow, I. J., Madry, A., and Kurakin, A. On evaluating adversarial robustness. CoRR abs/1902.06705 (2019). (pdf)
    • Presented by  Yifan

10.11.19


  • Sharif, M., Bhagavatula, S., Bauer, L., and Reiter, M. K. A general framework for adversarial examples with objectives. ACM Trans. Priv. Secur. 22, 3 (2019)  (pdf)
    • Presented by Vibha
  • Huang, S., Papernot, N., Goodfellow, I. J., Duan, Y., and Abbeel, P. Adversarial attacks on neural network policies. In ICLR 2017 Workshop Track Proceedings.  (pdf)
    • Presented by Vibha
  • Hosseini, H., and Poovendran, R. Semantic adversarial examples. In The IEEE CVPR 2018 Workshop. (pdf)
    • Presented by Yifan

10.18.19


  • Carlini, N., and Wagner, D. A. Audio adversarial examples: Targeted attacks on speech-to-text. In 2018 IEEE Security and Privacy Workshops, SP Workshops 2018 (pdf)
    • Presented by Mustafa
  • "Discrete Adversarial Attacks and Submodular Optimization with Aplications to Text Classification" SysML (pdf)
    • Presented by Pooneh
  • Alzantot, M., Sharma, Y., Elgohary, A., Ho, B.-J., Srivastava, M., and Chang, K.-W. Generating natural language adversarial examples. In EMNLP (short) (2018). (pdf)
    • Presented by Rushabh

10.25.19


  • Qin, Y., Carlini, N., Cottrell, G. W., Goodfellow, I. J., and Raffel, C. Imperceptible, robust, and targeted adversarial examples for automatic speech recognition. In ICML 2019 (pdf)
    • Presented by Rushabh
  • Athalye, A., Engstrom, L., Ilyas, A., and Kwok, K. Synthesizing robust adversarial examples. In ICML 2018 (pdf)
    • Presented by Pooneh
  • Athalye, A., and Carlini, N. On the robustness of the CVPR 2018 white-box adversarial example defenses. (pdf) 
    • Presented by Yu

11.01.19

  • Carlini, N., and Wagner, D. A. Adversarial examples are not easily detected: Bypassing ten detection methods. In 10th ACM Workshop on Artificial Intelligence and Security, AISec@CCS 2017 (pdf)
    • Presented by Pooneh
  • ImageNet-trained CNNs are biased towards texture. In ICLR 2019. (pdf)
    • Presented by Vibha
  • Bhagoji, A. N., Chakraborty, S., Mittal, P., and Calo, S. Analyzing federated learning through an adversarial lens. In ICML 2019. (pdf)
    • Presented by Yifan

11.08.19

  • Gilmer, J., Adams, R. P., Goodfellow, I. J., Andersen, D., and Dahl, G. E. Motivating the rules of the game for adversarial example research. CoRR abs/1807.06732 (2018) (pdf)
    • Presented by Rushabh
  • Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks (pdf)
    • Presented by Aref
  • Programmable neural network trojan for pre-trained feature extractor (pdf)
    • Presented by Aref

11.15.19

  • PoTrojan: powerful neural-level trojan designs in deep learning models (pdf)
    • Presented by Mustafa
  • STRIP: A Defence Against Trojan Attacks on Deep Neural Networks (pdf)
    • Presented by Aref
  • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (pdf)
    • Presented by Mustafa

11.22.19

  • Project Presentations

11.29.19

  • FALL BREAK

12.06.19

  • Project Presentations