Explainable Artificial Intelligence (XAI) – Tractable Probabilistic Logic Models

  • Objective: The overall objective of the UTD-led effort (team: UTD, UCLA, IIT-Delhi, and TAMU), titled Tractable Probabilistic Logic Models (TPLM): A New, Deep Explainable Representation, is to develop a unified approach to explainable AI. TPLMs are a powerful family of representations that includes decision trees, binary decision diagrams, cutset networks [6], sum-product networks [5], arithmetic circuits [1], sentential decision diagrams [2], first-order arithmetic circuits [8], and tractable Markov logic [3]. The UTD system extends TPLMs to: generate explanations of query results (called explanation queries); handle continuous variables, complex constraints, and unseen entities; compactly represent complex objects such as parse trees, lists, and shapes; and enable efficient representation of, and reasoning about, time. To address the need for scalable inference, the system answers complex explanation queries with novel algorithms that draw on a wide variety of techniques, including lifted inference [3, 8], variational inference, and their combination. To address the need for fast and more accurate learning, the system uses discriminative techniques, derives learning algorithms that compose neural networks and SVMs with TPLMs, uses interpretability as a learning bias to learn more interpretable models, and extends these approaches to handle real-world issues. The UTD explanation interface provides a visualization component that displays interpretable representations and multiple related explanations, and an interactive component that allows the user to debug the model and suggest alternative explanations. UTD is addressing the analytics challenge problem area for the program and has demonstrated the TPLM-based system for recognizing and inferring human activities in multimodal data (video and text), such as the Wetlab [4] (biology experiments) and TACoS [7] (cooking) datasets.
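
The tractability the objective above attributes to TPLMs — answering both marginal and explanation (most-probable-explanation, MPE) queries in time linear in the size of the model — can be sketched with a toy sum-product network [5]. This is an illustrative sketch only, not the UTD system; the network structure, variable names, and class design are assumptions made for the example.

```python
# Minimal sum-product network (SPN) sketch: one bottom-up pass answers a
# marginal query; a max-product pass with backtracking answers an MPE
# ("explanation") query. Structure and names are illustrative assumptions.

class Leaf:
    """Indicator leaf for binary variable `var` taking value `val`."""
    def __init__(self, var, val):
        self.var, self.val = var, val

    def value(self, evidence):
        # Unobserved variables are marginalized out (indicator set to 1).
        if self.var not in evidence:
            return 1.0
        return 1.0 if evidence[self.var] == self.val else 0.0

    def mpe(self, evidence):
        # For an unobserved variable, the maximizing completion sets it
        # to this leaf's value.
        assign = {} if self.var in evidence else {self.var: self.val}
        return self.value(evidence), assign


class Product:
    """Product node; children have disjoint scopes (decomposability)."""
    def __init__(self, children):
        self.children = children

    def value(self, evidence):
        p = 1.0
        for c in self.children:
            p *= c.value(evidence)
        return p

    def mpe(self, evidence):
        p, assign = 1.0, {}
        for c in self.children:
            v, a = c.mpe(evidence)
            p *= v
            assign.update(a)  # scopes are disjoint, so no conflicts
        return p, assign


class Sum:
    """Sum node; weighted mixture over children with identical scope."""
    def __init__(self, weighted_children):
        self.weighted = weighted_children  # list of (weight, child)

    def value(self, evidence):
        return sum(w * c.value(evidence) for w, c in self.weighted)

    def mpe(self, evidence):
        # Max-product: keep the best weighted branch and its assignment.
        best_v, best_a = -1.0, {}
        for w, c in self.weighted:
            v, a = c.mpe(evidence)
            if w * v > best_v:
                best_v, best_a = w * v, a
        return best_v, best_a


# Toy distribution over two binary variables A and B:
# P(A=1, B=1) = 0.6 and P(A=0, B=0) = 0.4.
spn = Sum([(0.6, Product([Leaf("A", 1), Leaf("B", 1)])),
           (0.4, Product([Leaf("A", 0), Leaf("B", 0)]))])

print(spn.value({"A": 1}))  # marginal P(A=1) → 0.6
print(spn.mpe({"A": 1}))    # explanation of A=1 → (0.6, {'B': 1})
```

Both queries touch each node exactly once, which is the sense in which the representation is "tractable"; the MPE pass also yields the assignment itself, the seed of an explanation query.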

References:

[1] A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM, 50(3):280–305, 2003.

[2] A. Darwiche. SDD: A new canonical representation of propositional knowledge bases. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 819–826, 2011.

[3] V. Gogate and P. Domingos. Probabilistic theorem proving. Communications of the ACM, 59(7):107–115, 2016.

[4] I. Naim, Y.C. Song, Q. Liu, H. Kautz, J. Luo, and D. Gildea. Unsupervised alignment of natural language sentences with video segments. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1558–1564, 2014.

[5] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pages 337–346. AUAI Press, 2011.

[6] T. Rahman, P. Kothalkar, and V. Gogate. Cutset networks: A simple, tractable, and scalable approach for improving the accuracy of Chow-Liu trees. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part II, pages 630–645, 2014.

[7] M. Regneri, M. Rohrbach, D. Wetzel, S. Thater, B. Schiele, and M. Pinkal. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics (TACL), 1:25–36, 2013.

[8] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2178–2185, 2011.