
Explanation Ontology: A General-Purpose, Semantic Representation for Supporting User-Centered Explanations

A website for navigating the resources open-sourced for the Explanation Ontology. Use the side navigation panel to explore the different sections of the website, and click an add (+) symbol to reveal further navigation options under some sections.


Abstract

In the past decade, trustworthy Artificial Intelligence (AI) has emerged as a focus for the AI community to ensure better adoption of AI models, and explainable AI is a cornerstone of this area. Over the years, the focus has shifted from building transparent AI methods to making recommendations on how to make black-box or opaque machine learning models and their results more understandable to expert and non-expert users.
In our previous work, to support user-centered explanations that make model recommendations more understandable, we developed the Explanation Ontology (EO), a general-purpose representation that helps system designers, the intended users of the EO, connect explanations to their underlying data and knowledge.
We now address the apparent need for improved interoperability to support a wider range of use cases. We expand the EO, mainly in the system attributes that contribute to explanations, by introducing new classes and properties to support a broader range of state-of-the-art explainer models. We present the expanded ontology model, highlighting the classes and properties that are important for modeling the expanded set of fifteen literature-backed explanation types supported within the EO.
We build on these explanation type descriptions to show how the EO model can be used to represent explanations in five use cases spanning the domains of finance, food, and healthcare. We include competency questions that evaluate the EO's capabilities and provide guidance for system designers on how to apply our ontology to their own use cases. This guidance includes allowing system designers to query the EO directly and providing them with exemplar queries for exploring the content of the use cases represented in the EO; a minimal query sketch follows this abstract.
We have released this significantly expanded version of the Explanation Ontology at https://purl.org/heals/eo, with supporting documentation updated here on our resource website.
Overall, through the EO model, we aim to help system designers make better-informed choices about explanations and to support explanations that can be composed from their systems' outputs, which may come from a mix of machine learning, logical, and explainer models, together with the different types of data and knowledge available to their systems.
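To make the direct-querying guidance above concrete, here is a minimal sketch of loading the EO from its PURL and listing the classes it defines, using Python with the rdflib library. This is an illustrative starting point rather than one of the released competency-question queries, and it assumes the PURL resolves to an RDF/XML serialization and that classes carry rdfs:label annotations; adjust both if the published file differs.

    from rdflib import Graph

    # Load the Explanation Ontology from its PURL. We assume the PURL
    # resolves to an RDF/XML serialization; change format= if it differs.
    g = Graph()
    g.parse("https://purl.org/heals/eo", format="xml")

    # List every named OWL class with its label: a simple first query
    # before moving on to the exemplar competency-question queries.
    QUERY = """
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls ?label WHERE {
        ?cls a owl:Class ;
             rdfs:label ?label .
    }
    ORDER BY ?label
    """
    for cls, label in g.query(QUERY):
        print(f"{label}: {cls}")

From here, the same pattern extends to the competency questions, for example by restricting ?cls to subclasses of the EO's explanation class once its IRI is known from the documentation.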


List of Resources 


Tools Used during Development


Team

Current Contributors

Shruthi Chari1, Oshani Seneviratne1, Mohamed Ghalwash2, Sola Shirai1, Daniel M. Gruen1, Pablo Meyer2, Prithwish Chakraborty2, Deborah L. McGuinness1

Past Contributors

Morgan Foreman2, Amar K. Das2

1Rensselaer Polytechnic Institute | 2IBM Research

Publications

  • Chari, Shruthi, Oshani Seneviratne, Mohamed Ghalwash, Sola Shirai, Daniel M. Gruen, Pablo Meyer, Prithwish Chakraborty, and Deborah L. McGuinness. "Explanation Ontology: A General-Purpose, Semantic Representation for Supporting User-Centered Explanations."
  • Chari, Shruthi, Prasant Acharya, Daniel M. Gruen, Olivia Zhang, Elif K. Eyigoz, Mohamed Ghalwash, Oshani Seneviratne et al. "Informing Clinical Assessment by Contextualizing Post-hoc Explanations of Risk Prediction Models in Type-2 Diabetes." Artificial Intelligence in Medicine 137 (2023): 102498.
  • Chari, Shruthi, Prithwish Chakraborty, Mohamed Ghalwash, Oshani Seneviratne, Elif K. Eyigoz, Daniel M. Gruen, Fernando Suarez Saiz, Ching-Hua Chen, Pablo Meyer Rojas, and Deborah L. McGuinness. "Leveraging Clinical Context for User-Centered Explainability: A Diabetes Use Case." arXiv preprint arXiv:2107.02359 (2021).
  • Padhiar, Ishaan, Oshani Seneviratne, Shruthi Chari, Daniel M. Gruen, and Deborah L. McGuinness. "Semantic Modeling for Food Recommendation Explanations." In 2021 IEEE 37th International Conference on Data Engineering Workshops (ICDEW), pp. 13-19. IEEE, 2021.
  • [Best Paper] Chari, Shruthi, Prithwish Chakraborty, Oshani Seneviratne, Mohamed Ghalwash, Daniel M. Gruen, Daby Sow, and Deborah L. McGuinness. "Towards Clinically Relevant Explanations for Type-2 Diabetes Risk Prediction with the Explanation Ontology." AMIA, 2021.
  • Gruen, Daniel M., Shruthi Chari, Morgan A. Foreman, Oshani Seneviratne, Rachel Richesson, Amar K. Das, and Deborah L. McGuinness. "Designing for AI Explainability in Clinical Context." AAAI, 2021.
  • [Best Resource Paper] Chari, Shruthi, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, and Deborah L. McGuinness. "Explanation Ontology: A Model of Explanations for User-Centered AI." Resource Track, 19th International Semantic Web Conference (ISWC), 2020.
  • Chari, Shruthi, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, and Deborah L. McGuinness. "Explanation Ontology in Action: A Clinical Use-Case." Posters and Demos Track, 19th International Semantic Web Conference (ISWC), 2020.
  • Chari, Shruthi, Oshani Seneviratne, Daniel M. Gruen, and Deborah L. McGuinness. "Foundations of Explainable Knowledge-Enabled Systems." In Ilaria Tiddi, Freddy Lecue, and Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI: Foundations, Applications and Challenges. Studies on the Semantic Web, pp. 23-48, 2020.
  • Chari, Shruthi, Oshani Seneviratne, Daniel M. Gruen, and Deborah L. McGuinness. "Directions for Explainable Knowledge-Enabled Systems." In Ilaria Tiddi, Freddy Lecue, and Pascal Hitzler (eds.), Knowledge Graphs for eXplainable AI: Foundations, Applications and Challenges. Studies on the Semantic Web, pp. 245-261, 2020.