Welcome to the AI Safety Laboratory (AISL) at UT Dallas!
Our lab advances AI safety and ethics through the development of responsible machine learning methods. We aim to reliably detect and manage ethical risks, biases, and anomalies in complex, large-scale datasets. By prioritizing transparency, fairness, and accountability in our AI systems, we collaborate with public-sector partners not only to push the boundaries of AI technology but also to safeguard public health, safety, and security. Our commitment to ethical AI practice guides our research and applications, fostering trust and promoting the responsible use of AI in society.
The lab is currently directed by Prof. Feng Chen. Please feel free to reach out!
Research at AISL
The lab's three main research areas are:
- Robust AI for Out-of-Distribution Detection and Generalization. This area focuses on developing AI systems that can identify out-of-distribution data and anomalies and generalize to new, unseen scenarios. Using meta-learning, few-shot learning, and domain adaptation techniques, we aim to build models that adapt to dynamic, evolving environments and maintain consistent performance even on data that diverges from their training experience, enhancing the safety and reliability of AI applications under real-world uncertainty.
- Uncertainty Quantification and Reasoning in AI Safety. This area develops methodologies and frameworks for quantifying and reasoning about uncertainty in AI systems, with a particular focus on safety-critical applications. We explore probabilistic models, Bayesian inference, and other statistical approaches to assess and manage the uncertainty associated with AI predictions and decisions. The goal is to improve the transparency and trustworthiness of AI systems by providing clear metrics and reasoning for their outputs, especially in high-stakes environments where understanding and mitigating uncertainty is crucial for safety and reliability.
- Algorithmic Fairness and Equity in AI. This area focuses on creating AI systems that promote fairness and equity, detecting and mitigating biases to ensure equitable outcomes across diverse user groups. The research spans fairness-aware algorithms, bias detection and correction techniques, and the integration of ethical principles into AI design and deployment, so that AI-driven decisions are just, equitable, and free from discriminatory bias across a range of applications.
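To give a flavor of the first area, here is a minimal, self-contained sketch of one classic baseline for out-of-distribution detection: scoring inputs by their maximum softmax probability and flagging low-confidence inputs as potentially OOD. The function names, example logits, and threshold are illustrative choices of ours, not methods or values specific to the lab's research.

```python
import numpy as np

def max_softmax_score(logits):
    """Maximum softmax probability (MSP): higher means more in-distribution-looking."""
    z = logits - logits.max(axis=-1, keepdims=True)  # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def flag_ood(logits, threshold=0.7):
    """Flag inputs whose top-class confidence falls below the threshold as OOD."""
    return max_softmax_score(logits) < threshold

# One confidently classified input vs. one near-uniform (ambiguous) input.
logits = np.array([[5.0, 0.1, 0.2],    # clear winner: likely in-distribution
                   [0.4, 0.5, 0.45]])  # near-tie: possibly out-of-distribution
print(flag_ood(logits))  # [False  True]
```

In practice the threshold is chosen on held-out data, and more sophisticated scores (energy-based, distance-based, or learned detectors) build on this same score-and-threshold pattern.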
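For the second area, a simple way to quantify predictive uncertainty is to aggregate multiple stochastic forward passes (e.g. from MC dropout or a deep ensemble) and compute the entropy of the mean predictive distribution. The sketch below is an illustrative toy with hand-picked probability vectors, not an output of the lab's models.

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution over stochastic forward
    passes (rows). Higher entropy indicates greater predictive uncertainty."""
    mean_probs = np.mean(prob_samples, axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12))

# Ensemble members that agree -> low entropy; members that disagree -> high entropy.
agree = np.array([[0.90, 0.05, 0.05],
                  [0.88, 0.07, 0.05],
                  [0.92, 0.04, 0.04]])
disagree = np.array([[0.90, 0.05, 0.05],
                     [0.10, 0.80, 0.10],
                     [0.05, 0.15, 0.80]])
print(predictive_entropy(agree) < predictive_entropy(disagree))  # True
```

Decomposing this quantity further (into aleatoric and epistemic components) is one of the questions this research area studies.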
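For the third area, one widely used fairness metric is the demographic parity difference: the gap in positive-prediction rates between two demographic groups. The sketch below uses made-up loan-approval decisions purely for illustration; the data and function name are ours, not the lab's.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.
    0 indicates parity; larger values suggest potential disparate impact."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical binary loan-approval decisions for two demographic groups.
y_pred = [1, 1, 0, 1,  0, 1, 0, 0]   # 1 = approved
group  = [0, 0, 0, 0,  1, 1, 1, 1]   # group membership indicator
print(demographic_parity_difference(y_pred, group))  # 0.5
```

Metrics like this only diagnose disparity; fairness-aware training and bias-correction techniques, which this area develops, are what reduce it.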
Key application areas include:
- Healthcare and Medical Diagnosis: Implementing AI to enhance diagnostic accuracy and patient care, with a focus on detecting out-of-distribution cases in medical imaging and patient data. Ensuring equitable healthcare outcomes by addressing biases in diagnostic algorithms and promoting fairness in treatment recommendations.
- Autonomous Systems and Robotics: Developing robust AI systems for autonomous vehicles and robots that can safely navigate and make decisions in unpredictable environments. Emphasizing out-of-distribution generalization to handle novel scenarios and ensure safety in diverse operational contexts.
- Financial Services: Applying AI to detect anomalous transactions and prevent fraud, while ensuring fairness in credit scoring and lending practices. Addressing biases in financial algorithms to promote equitable access to financial products and services.
- Environmental Monitoring and Climate Change: Utilizing AI for detecting unusual environmental patterns and changes, contributing to more effective climate change mitigation and adaptation strategies. Ensuring that AI-driven environmental policies are fair and do not disproportionately affect vulnerable communities.
- Social Media and Content Moderation: Employing AI to identify and manage out-of-distribution content, such as emerging forms of harmful content, while maintaining fairness in content moderation practices to avoid bias against certain groups or viewpoints.
- Criminal Justice and Public Safety: Using AI to enhance predictive policing and recidivism risk assessments, with a strong emphasis on eliminating biases to ensure fairness in the criminal justice system. Developing systems that can adapt to new crime patterns while upholding ethical standards.
- Education and Personalized Learning: Implementing AI to provide personalized learning experiences, ensuring the systems can adapt to diverse learning styles and needs. Addressing fairness to prevent biases in educational content and recommendations, ensuring equal opportunities for all learners.
Additional details on many of these projects will be posted soon; in the meantime, please feel free to browse some of our recent publications.