Example Competency Questions
We have crafted a set of competency questions that showcase how our explanation ontology can support system designers, both when they plan which explanation types to include in a system and when, at run time, the system must decide which explanation best suits the user's question given its capabilities. We first present a table listing the competency questions, the setting each corresponds to, and candidate answers; a short sketch after the table illustrates how such questions can be run programmatically, and we then present SPARQL query implementations for each question.
| Setting | Example Competency Question | Candidate Answer |
| --- | --- | --- |
| System Design | (Q1) Which AI model(s) is capable of generating this explanation type? E.g., which AI model is capable of generating a trace-based explanation? | Knowledge-based systems; machine learning model: decision trees |
| System Design | (Q2) What example questions have been identified for counterfactual explanations? | "What other factors about the patient does the system know of?"; "What if the major problem was a fasting plasma glucose?" |
| System Design | (Q3) What are the components of a scientific explanation? | Generated by an AI task, based on a recommendation, and based on evidence from a study or a basis in the scientific method |
| Real-time | (Q4) Given the system has ranked specific recommendations by comparing different medications, what explanations can be provided for that recommendation? | Contrastive explanation |
| Real-time | (Q5) Which explanation type best suits a user question asking about numerical evidence for patients on a drug, and how will the system generate the answer? | Explanation type: statistical. System: run an 'Inductive' AI task with a 'Clustering' method to generate numerical evidence |
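All of the queries in the next section can be issued from any SPARQL engine loaded with the EO. As a rough illustration, the sketch below uses Python's rdflib to load the ontology and run the first design-time question (Q1); the file name `explanation-ontology.owl` is an assumption, so substitute the path or URL of the EO OWL file you are working with.

    # Minimal sketch: running a competency question against the Explanation Ontology
    # with rdflib. The file name below is an assumption -- point it at your local
    # copy (or the published URL) of the EO OWL file.
    from rdflib import Graph

    eo = Graph()
    eo.parse("explanation-ontology.owl", format="xml")  # assumed RDF/XML serialization

    Q1 = """
    prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    prefix owl:  <http://www.w3.org/2002/07/owl#>
    prefix ep:   <http://linkedu.eu/dedalo/explanationPattern.owl#>

    select ?class ?property ?taskObject where {
      ?class (rdfs:subClassOf|owl:equivalentClass)/owl:onProperty ep:isBasedOn .
      ?class (rdfs:subClassOf|owl:equivalentClass)/owl:someValuesFrom ?object .
      ?object owl:intersectionOf ?collections .
      ?collections rdf:rest*/rdf:first ?comps .
      ?comps rdf:type owl:Restriction .
      ?comps owl:onProperty ?property .
      ?comps owl:someValuesFrom ?taskObject .
      ?class rdfs:label "Trace Based Explanation" .
    }
    """

    # Each row binds the explanation class, the restricting property, and the
    # AI task/method object named in the restriction.
    for row in eo.query(Q1):
        print(row["class"], row["property"], row["taskObject"])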
SPARQL Queries
- Which AI models can generate trace-based explanations?
- Query:
    prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    prefix owl:  <http://www.w3.org/2002/07/owl#>
    prefix ep:   <http://linkedu.eu/dedalo/explanationPattern.owl#>
    prefix prov: <http://www.w3.org/ns/prov#>

    select ?class ?property ?taskObject where {
      ?class (rdfs:subClassOf|owl:equivalentClass)/owl:onProperty ep:isBasedOn .
      ?class (rdfs:subClassOf|owl:equivalentClass)/owl:someValuesFrom ?object .
      ?object owl:intersectionOf ?collections .
      ?collections rdf:rest*/rdf:first ?comps .
      ?comps rdf:type owl:Restriction .
      ?comps owl:onProperty ?property .
      ?comps owl:someValuesFrom ?taskObject .
      ?class rdfs:label "Trace Based Explanation" .
    }
- Answer
| Class | Property | Restriction |
| --- | --- | --- |
| Trace Based Explanation | wasGeneratedBy | 'Artificial Intelligence Task' and (used some ('Decision Tree' or 'Knowledge based System')) |
- What example questions have been identified for counterfactual explanations?
- Query:
    prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    prefix owl:  <http://www.w3.org/2002/07/owl#>
    prefix ep:   <http://linkedu.eu/dedalo/explanationPattern.owl#>
    prefix prov: <http://www.w3.org/ns/prov#>
    prefix eo:   <https://purl.org/heals/eo#>
    prefix sio:  <http://semanticscience.org/resource/>

    select ?questionLabel where {
      ?explanation a eo:CounterfactualExplanation .
      ?explanation eo:addresses ?question .
      ?question a sio:SIO_000085 .
      ?question rdfs:label ?questionLabel .
    }
- Answer
| Question Label |
| --- |
| What other factors about the patient does the system know of? |
| What if the major problem was a fasting plasma glucose? |
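The same lookup can be reused at design time for any explanation type. Below is a minimal sketch, assuming a local copy of the EO, that passes the explanation-type class through rdflib's initBindings instead of hard-coding eo:CounterfactualExplanation; eo:ContrastiveExplanation is an assumed class IRI, so substitute any explanation-type class from the ontology.

    # Sketch: reusing the example-question lookup for any explanation type.
    from rdflib import Graph, Namespace, RDFS

    EO  = Namespace("https://purl.org/heals/eo#")
    SIO = Namespace("http://semanticscience.org/resource/")

    eo_graph = Graph()
    eo_graph.parse("explanation-ontology.owl", format="xml")  # assumed local copy

    QUESTIONS_FOR_TYPE = """
    select ?questionLabel where {
      ?explanation a ?expType .
      ?explanation eo:addresses ?question .
      ?question a sio:SIO_000085 .
      ?question rdfs:label ?questionLabel .
    }
    """

    # eo:ContrastiveExplanation is an assumed IRI -- swap in any explanation-type
    # class defined in the ontology.
    rows = eo_graph.query(
        QUESTIONS_FOR_TYPE,
        initNs={"eo": EO, "sio": SIO, "rdfs": RDFS},
        initBindings={"expType": EO.ContrastiveExplanation},
    )
    for row in rows:
        print(row["questionLabel"])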
- What are the components of a scientific explanation?
- Query:
    prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    prefix owl:  <http://www.w3.org/2002/07/owl#>

    select ?class ?restriction where {
      ?class (rdfs:subClassOf|owl:equivalentClass) ?restriction .
      ?class rdfs:label "Scientific Explanation" .
    }
- Answer
| Class | Restriction |
| --- | --- |
| Scientific Explanation | (ep:isBasedOn some (eo:'Scientific Knowledge' and ((prov:wasGeneratedBy some 'Study') or (prov:wasAssociatedWith some 'Scientific Method'))) and (isBasedOn some 'System Recommendation')) or (ep:isBasedOn some ('System Recommendation' and (prov:used some (eo:'Scientific Knowledge' and ((prov:wasGeneratedBy some 'Study') or (prov:wasAssociatedWith some 'Scientific Method')))))) |
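As an alternative to spelling out the OWL list structure in SPARQL, an OWL-aware library can return the same class definition directly. Below is a minimal sketch using owlready2, assuming the EO loads cleanly from a local file (the path is a placeholder).

    # Sketch: inspecting the same class definition with an OWL-aware library
    # instead of SPARQL. The file path is an assumption -- point it at a local
    # copy of the Explanation Ontology.
    from owlready2 import get_ontology

    onto = get_ontology("file:///path/to/explanation-ontology.owl").load()

    # Look the class up by its rdfs:label and print the axioms that define it
    # (the isBasedOn / wasGeneratedBy restrictions listed in the answer above).
    sci = onto.search_one(label="Scientific Explanation")
    for axiom in list(sci.equivalent_to) + list(sci.is_a):
        print(axiom)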
- Given the system was performing abductive reasoning and has ranked specific recommendations by comparing different medications, what explanations can be provided for that recommendation?
- Query:
    prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    prefix owl:  <http://www.w3.org/2002/07/owl#>
    prefix ep:   <http://linkedu.eu/dedalo/explanationPattern.owl#>
    prefix prov: <http://www.w3.org/ns/prov#>

    select DISTINCT ?expType where {
      ?expType owl:equivalentClass/owl:intersectionOf/rdf:rest*/rdf:first/owl:someValuesFrom ?compObject .
      ?compObject a owl:Class ;
                  owl:intersectionOf [ rdf:rest*/rdf:first ?otherClass ] .
      ?expType rdfs:subClassOf* ep:Explanation .
      ?expType rdfs:subClassOf [ a owl:Restriction ;
                                 owl:onProperty prov:wasGeneratedBy ;
                                 owl:someValuesFrom ?tasker ] .
      ?otherClass rdfs:label "System Recommendation" .
      ?tasker rdfs:label "Abductive Task" .
    }
- Answer
| Explanation Type |
| --- |
| Contrastive Explanation |
- Which explanation type best suits the user question, "Which explanation type can expose numerical evidence about patients that did well on this drug?", and how will the system generate the answer?
- Query:
    prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    prefix owl:  <http://www.w3.org/2002/07/owl#>
    prefix eo:   <https://purl.org/heals/eo#>
    prefix ep:   <http://linkedu.eu/dedalo/explanationPattern.owl#>
    prefix prov: <http://www.w3.org/ns/prov#>

    select ?class ?object where {
      {
        select DISTINCT ?classLabel ?object where {
          ?class (rdfs:subClassOf|owl:equivalentClass)/owl:onProperty ep:isBasedOn .
          ?class (rdfs:subClassOf|owl:equivalentClass)/owl:someValuesFrom ?object .
          ?object owl:intersectionOf ?collections .
          ?collections rdf:rest*/rdf:first ?comps .
          ?comps a owl:Restriction .
          ?comps owl:onProperty prov:wasGeneratedBy .
          ?comps owl:someValuesFrom ?compObject .
          ?compObject owl:intersectionOf ?subCollections .
          ?subCollections rdf:rest*/rdf:first ?subComps .
          ?subComps a owl:Restriction .
          ?subComps owl:someValuesFrom ?subCompObject .
          ?subCompObject owl:intersectionOf ?innerCollections .
          ?innerCollections rdf:rest*/rdf:first ?innerComps .
          ?innerComps a owl:Class .
          ?innerComps rdfs:label "Numerical Evidence" .
          ?class rdfs:label ?classLabel .
          ?class rdfs:subClassOf* ep:Explanation .
        }
      }
      ?class rdfs:label ?classLabel .
    }
- Answer
| Class | Object |
| --- | --- |
| Statistical Explanation | 'System Recommendation' and (wasGeneratedBy some ('Inductive Task' and ('has output' some ('Numerical Evidence' and ('in relation to' some 'Object Record'))))) |
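In a real-time setting, this answer can drive system behaviour: once the user's question has been resolved to a statistical explanation generated by an inductive task, the system dispatches its own implementation of that task. Below is a rough sketch of that wiring, in which the mapping table and the run_clustering helper are illustrative assumptions rather than part of the ontology.

    # Sketch of the real-time flow for Q5: resolve the explanation type for a user
    # question (via the query above), then dispatch the AI task/method the
    # ontology says can generate it. GENERATION_PLAN and run_clustering() are
    # illustrative assumptions, not part of the EO.

    def run_clustering(patient_records, drug):
        """Placeholder for the system's own 'Inductive Task' / 'Clustering' method."""
        raise NotImplementedError

    # Explanation type -> (AI task named in the ontology, local method realizing it)
    GENERATION_PLAN = {
        "Statistical Explanation": ("Inductive Task", run_clustering),
    }

    def generate_explanation(explanation_type, patient_records, drug):
        task, method = GENERATION_PLAN[explanation_type]
        print(f"Generating a {explanation_type} via the {task}")
        return method(patient_records, drug)

    # e.g. generate_explanation("Statistical Explanation", records, "drug-x")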
Call for Competency Questions
We have put together a set of competency questions that we consider representative of the questions system designers would want to address with the Explanation Ontology (EO). We vetted the validity and usefulness of these questions with a small expert panel within our lab. In addition, given that explainability needs are evolving and use cases are diverse, we are actively looking for, and are excited to receive, other competency questions that the EO can help address.
If you would like to contribute questions, please reach out to the project authors, Shruthi (charis@rpi.edu), Prof. Oshani Seneviratne (senevo@rpi.edu), and Prof. Deborah L. McGuinness (dlm@cs.rpi.edu), with the subject line "Competency Questions for the Explanation Ontology".