NSF IIS-1815696: A novel paradigm for detecting complex anomalous patterns in multi-modal, heterogeneous, and high-dimensional multi-source data sets

Decision-making in real-world problems is hampered by uncertainty arising from a lack of information, conflicting information, and/or noisy observations. Because how to interpret this uncertainty has not been carefully investigated, critical safety concerns remain: misinterpreted uncertainty leads to unnecessary risk. For example, a self-driving car may fail to detect a person in the road, an artificial intelligence-based medical assistant may misdiagnose cancer as a benign tumor, and a phishing email may be classified as legitimate. Such misdetections and misclassifications, each caused by a different type of uncertainty, add risk and potential adverse events. Artificial intelligence (AI) researchers have actively explored how to solve decision-making problems under uncertainty; however, no prior research has examined how the different approaches to studying uncertainty in AI can leverage one another. This project studies how to measure uncertainty arising from different root causes and how to use those measurements to solve diverse decision-making problems more effectively, helping to develop trustworthy AI algorithms for many real-world decision-making problems. The project is highly transdisciplinary, encouraging broader, newer, and more diverse approaches. To magnify its impact on research and education, the project leverages multicultural, diversity, and STEM programs for students from diverse backgrounds and under-represented populations, and includes seminar talks, workshops, short courses, and research projects for high school and community college students.

This project aims to develop a suite of deep learning (DL) techniques that account for multiple types of uncertainty arising from different root causes, and to employ them to maximize the effectiveness of decision-making in the presence of highly intelligent adversarial attacks. The project makes a synergistic and transformative research effort to study: (1) how different types of uncertainty can be quantified based on belief theory; (2) how estimates of these uncertainty types can be incorporated into DL-based approaches; and (3) how multiple types of uncertainty influence the effectiveness and efficiency of decision-making in high-dimensional, complex problems. The project advances the state of the art by: (1) proposing a scalable, robust, unified DL-based framework to effectively infer predictive multidimensional uncertainty caused by heterogeneous root causes in adversarial environments; (2) dealing with multidimensional uncertainty based on neural networks; (3) enhancing both decision effectiveness and efficiency through multidimensional uncertainty-aware designs; and (4) testing the proposed approaches, using both simulation models and visualization tools, to ensure their robustness against intelligent adversarial attackers with advanced deception tactics.
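
To make item (1) concrete: in the evidential neural network line of work listed under Publications, uncertainty is quantified in the subjective logic framework, where a K-class prediction is represented as a Dirichlet distribution and different uncertainty dimensions are read off its parameters. The minimal NumPy sketch below computes two such dimensions, vacuity (uncertainty from a lack of evidence) and dissonance (uncertainty from conflicting evidence), under the standard subjective-logic formulation; the function name and the example evidence values are illustrative, not taken from the papers.

```python
import numpy as np

def vacuity_and_dissonance(evidence):
    """Subjective-logic uncertainty from a non-negative evidence vector.

    evidence[k] is the evidence for class k (e.g., the output of a
    non-negative activation such as softplus in an evidential network).
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0          # Dirichlet parameters
    S = alpha.sum()                 # Dirichlet strength
    belief = evidence / S           # belief mass per class
    vacuity = K / S                 # uncertainty from lack of evidence

    # Dissonance: belief spread across classes conflicts with itself.
    diss = 0.0
    for k in range(K):
        others = np.delete(belief, k)
        denom = others.sum()
        if denom > 0:
            # Relative balance Bal(b_j, b_k) = 1 - |b_j - b_k| / (b_j + b_k)
            bal = 1.0 - np.abs(others - belief[k]) / (others + belief[k] + 1e-12)
            diss += belief[k] * (others * bal).sum() / denom
    return vacuity, diss

# Little total evidence -> high vacuity; balanced evidence -> high dissonance.
print(vacuity_and_dissonance([0.0, 0.0, 0.0]))    # (1.0, 0.0): pure vacuity
print(vacuity_and_dissonance([10.0, 10.0, 0.0]))  # low vacuity, high dissonance
print(vacuity_and_dissonance([20.0, 0.0, 0.0]))   # low vacuity, low dissonance
```

The distinction matters for decision-making: an all-zero evidence vector yields maximal vacuity (as for an out-of-distribution input), while strong but evenly split evidence yields low vacuity and high dissonance (a genuinely ambiguous input), and the two cases call for different responses.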


Publications

2021

  1. Multidimensional Uncertainty-Aware Evidential Neural Networks.
    Yibo Hu, Yuzhe Ou, Xujiang Zhao, Jin-Hee Cho, and Feng Chen.
    Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI), 2021 (To Appear). [PDF]

2020

  1. Uncertainty Aware Semi-Supervised Learning on Graph Data.
    Xujiang Zhao, Feng Chen, Shu Hu, and Jin-Hee Cho.
    Thirty-Fourth Conference on Neural Information Processing Systems (NeurIPS), 2020. (Spotlight acceptance rate: 4%) [PDF]
  2. Multifaceted Uncertainty Estimation for Label-Efficient Deep Learning.
    Weishi Shi, Xujiang Zhao, Feng Chen, and Qi Yu.
    Thirty-Fourth Conference on Neural Information Processing Systems (NeurIPS), 2020. [PDF]