Modeling of Explanation Types

Explanation Types

In this section, we show how our Explanation Ontology can be used to represent the generational needs of fifteen different explanation types. For more detail about the explanation types themselves, refer to our paper. On this website, we present modeling snippets that use classes and properties from our ontology.

Catalog of Explanation Types

We identified nine explanation types, each with different foci and generational needs, from a literature review we conducted in computer science and the adjacent explanation-science domains of philosophy and the social sciences. Additionally, we support six explanation types detailed in a recent paper by Zhou et al. Utilizing the schema provided by our Explanation Ontology, we can encode the generational needs of these explanation types as OWL restrictions. Below, for each explanation type, we present our description, a prototypical question it can address in different settings (mainly clinical), and the logical formalization of the explanation type. The explanation types are:

1. Case Based
2. Contextual
3. Contrastive
4. Counterfactual
5. Data
6. Everyday
7. Fairness
8. Impact
9. Rationale
10. Responsibility
11. Safety and Performance
12. Scientific
13. Simulation Based
14. Statistical
15. Trace Based

Explanation Types

We depict the logical formalization of our encoding of the generational needs for each explanation type in Manchester OWL syntax; classes in the OWL restrictions are referred to by their labels, and the color highlights are similar to those that can be viewed in Protégé. The logical formalizations, presented under the OWL Restriction label of each explanation type, are a representation of the sufficiency conditions listed just before them.
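
To illustrate how these restrictions attach to the ontology, the snippet below declares the case-based restriction as an equivalent-class axiom on its explanation class. This is a minimal sketch assuming the class and property labels used on this page; the exact axiom shape in the released ontology may differ.

       Class: 'Case Based Explanation'
           SubClassOf: Explanation
           EquivalentTo:
               Explanation
               and (isBasedOn some
                  (Explanation
                   and (isBasedOn some
                      ('System Recommendation'
                       and (prov:wasGeneratedBy some
                          ('Artificial Intelligence Task'
                           and ('has input' some 'Object Record')))))))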

  1. Case Based Explanation
    • Definition: Provides solutions that are based on actual prior cases that can be presented to the user to provide compelling support for the system’s conclusions, and may involve analogical reasoning, relying on similarities between features of the case and of the current situation.
    • Prototypical Question: To what other situations has this recommendation been applied?
    • Sufficiency Condition:
      Is there at least one other prior ‘case’ similar to this situation that requires an ‘explanation’?
      Is there a similarity between this case and that other case?
    • OWL Restriction:
           isBasedOn some 
          (Explanation
           and (isBasedOn some
              ('System Recommendation'
               and (prov:wasGeneratedBy some
                  ('Artificial Intelligence Task'
                   and ('has input' some 'Object Record'))))))
            
  2. Contextual Explanation
    • Definition: Refers to information about items other than the explicit inputs and output, such as information about the user, situation, and broader environment that affected the computation.
    • Prototypical Question: What broader information about the current situation prompted the suggestion of this recommendation?
    • Sufficiency Condition:
      Are there any extra inputs that are not contained in the ‘situation’ description itself?
      And by including those, can better insights be provided in the ‘explanation’?
    • OWL Restriction:
           (
           (isBasedOn some  
          ('Contextual Knowledge'
           and ('in relation to' some Situation))) 
           or 
           (isBasedOn some ('Contextual Knowledge'
           and ('in relation to' some  'Object Record'))))
       and (isBasedOn some  'System Recommendation')
            
  3. Contrastive Explanation
    • Definition: Answers the question “Why this output instead of that output,” making a contrast between the given output and the facts that led to it (inputs and other considerations), and an alternate output of interest and the foil (facts that would have led to it).
    • Prototypical Question: Why drug A over drug B, the one I am typically prescribed?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ that was made (let’s call it A)?
      What ‘facts’ led to it?
      Is there another ‘system recommendation’ that could have happened or did occur (let’s call it B)?
      What was the ‘foil’ that led to B?
      Can A and B be compared?
    • OWL Restriction:
          (isBasedOn some 
          ('System Recommendation'
           and (used some Fact)))
       and 
       (isBasedOn some
          ('System Recommendation'
           and (used some Foil)))
            
  4. Counterfactual Explanation
    • Definition: Addresses the question of what solutions would have been obtained with a different set of inputs than those used.
    • Prototypical Question: What if input A was over 1000?
    • Sufficiency Condition:
      Is there a different set of inputs that can be considered?
      If so, what is the alternate ‘system recommendation’?
    • OWL Restriction:
          isBasedOn some 
          ('System Recommendation'
           and (used some 
              (Knowledge
               and ('in relation to' some 'Object Record'))))
            
  5. Data Explanation
    • Definition: Focuses on what the data is and how it has been used in a particular decision, as well as what data has been used to train and test the ML model and how. This type of explanation can help users understand the influence of data on decisions.
    • Prototypical Question: What is the data? How has the data been used in a particular decision? How has the data been used to train the ML model?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ from an ‘AI method’ that has as input, a ‘dataset’ or part of it?
      Is there a ‘system recommendation’ that includes ‘object records’ that are used to train / test the ‘AI method’?
    • OWL Restriction:
          isBasedOn some 
          ('System Recommendation'
           and (used some 
              (Knowledge
               and ('in relation to' some 'Object Record'))))
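
    • Illustrative Sketch: The restriction above follows the counterfactual pattern; a sketch closer to the data-oriented sufficiency conditions, assuming a Dataset class alongside the labels used on this page (the exact axiom in the released ontology may differ), could read:
           isBasedOn some 
           ('System Recommendation'
            and (prov:wasGeneratedBy some 
               ('Artificial Intelligence Task'
                and ('has input' some (Dataset or 'Object Record')))))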
            
  6. Everyday Explanation
    • Definition: Uses accounts of the real world that appeal to the user, given their general understanding and knowledge.
    • Prototypical Question: Why does Option A make sense?
    • Sufficiency Condition:
      Can accounts of the real world be simplified to appeal to the user, based on their general understanding and ‘knowledge’?
    • OWL Restriction:
          (
      ((isBasedOn some  
          (Situation
           and ('in relation to' some User))) or (isBasedOn some  
           ('Experiential Knowledge'
           and ('in relation to' some  User))))
       and (isBasedOn some 'System Recommendation')) 
      or
      (isBasedOn some  
          ('System Recommendation'
           and ((used some 
              (Situation
               and ('in relation to' some User))) or (used some  
               ('Experiential Knowledge'
               and ('in relation to' some User)))))
      )
            
  7. Fairness Explanation
    • Definition: Provides the steps taken across the design and implementation of an ML system to ensure that the decisions it assists are generally unbiased, and whether or not an individual has been treated equitably. Fairness explanations are key to increasing individuals’ confidence in an AI system and can foster meaningful trust by explaining to an individual how bias and discrimination in decisions are avoided.
    • Prototypical Question: Is there a bias consequence of this system recommendation? What data was used to arrive at this decision?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ from an ‘AI method’ that has a ‘statement of consequence’?
      Is there a ‘dataset’ in the ‘system recommendation’ the explanation is based on?
    • OWL Restriction:
          (
      ((isBasedOn some  
          (Situation
           and ('in relation to' some User))) or (isBasedOn some  
           ('Experiential Knowledge'
           and ('in relation to' some  User))))
       and (isBasedOn some 'System Recommendation')) 
      or
      (isBasedOn some  
          ('System Recommendation'
           and ((used some 
              (Situation
               and ('in relation to' some User))) or (used some  
               ('Experiential Knowledge'
               and ('in relation to' some User)))))
      )
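
    • Illustrative Sketch: A sketch that follows the fairness sufficiency conditions above more directly, assuming a Dataset class and a hypothetical 'has consequence' property linking the recommendation to a 'Statement of Consequence' (the exact labels in the released ontology may differ), could read:
           isBasedOn some 
           ('System Recommendation'
            and (prov:wasGeneratedBy some 
               ('Artificial Intelligence Task'
                and ('has input' some Dataset)))
            and ('has consequence' some 'Statement of Consequence'))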
            
  8. Impact Explanation
    • Definition: Concerns the impact that the use of a system and its decisions has or may have on an individual and on wider society. Impact explanations give individuals some power and control over their involvement in ML-assisted decisions. By understanding the possible consequences of a decision, an individual can better assess their participation in the process and how the outcomes of the decision may affect them.
    • Prototypical Question: What is the impact of a system recommendation? How will the recommendation affect me?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ from an ‘AI method’ that has a ‘statement of consequence’?
    • OWL Restriction:
          (
      ((isBasedOn some  
          (Situation
           and ('in relation to' some User))) or (isBasedOn some  
           ('Experiential Knowledge'
           and ('in relation to' some  User))))
       and (isBasedOn some 'System Recommendation')) 
      or
      (isBasedOn some  
          ('System Recommendation'
           and ((used some 
              (Situation
               and ('in relation to' some User))) or (used some  
               ('Experiential Knowledge'
               and ('in relation to' some User)))))
      )
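
    • Illustrative Sketch: A sketch that follows the impact sufficiency condition above more directly, again assuming a hypothetical 'has consequence' property and the 'Statement of Consequence' class named in the condition, could read:
           isBasedOn some 
           ('System Recommendation'
            and (prov:wasGeneratedBy some 'Artificial Intelligence Task')
            and ('has consequence' some 'Statement of Consequence'))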
            
  9. Rationale Explanation
    • Definition: Concerns the “why” of an ML decision, provides the reasons that led to the decision, and is delivered in an accessible and understandable way, especially for lay users. If the ML decision was not what users expected, rationale explanations allow users to assess whether they believe the reasoning behind the decision is flawed and, if so, support them in formulating reasonable arguments for why they think this is the case.
    • Prototypical Question: Why was this ML decision made, and what reasons led to it?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ from an ‘AI method’ that has a ‘system trace’?
      Is there a ‘local explanation’ output that an ‘explanation’ is based on?
    • OWL Restriction:
          (
      ((isBasedOn some  
          (Situation
           and ('in relation to' some User))) or (isBasedOn some  
           ('Experiential Knowledge'
           and ('in relation to' some  User))))
       and (isBasedOn some 'System Recommendation')) 
      or
      (isBasedOn some  
          ('System Recommendation'
           and ((used some 
              (Situation
               and ('in relation to' some User))) or (used some  
               ('Experiential Knowledge'
               and ('in relation to' some User)))))
      )
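
    • Illustrative Sketch: A sketch that follows the rationale sufficiency conditions above more directly, reusing the 'has output' property from the trace-based restriction and assuming a 'Local Explanation' class as named in the conditions, could read:
           (isBasedOn some 
           ('System Recommendation'
            and (prov:wasGeneratedBy some 
               ('Artificial Intelligence Task'
                and ('has output' some 'System Trace')))))
        and (isBasedOn some 'Local Explanation')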
            
  10. Responsibility Explanation
    • Definition: Concerns “who” is involved in the development, management, and implementation of an ML system, and “who” to contact for a human review of a decision. Responsibility explanations help by directing the individual to the person or team responsible for a decision, and they make accountability traceable.
    • Prototypical Question: Who is involved in the development, management, and implementation of an ML system? Who can be contacted for a human review of a decision?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ from an ‘AI method’ that was part of a ‘system’, whose ‘system developer’ is known?
    • OWL Restriction:
          (
      ((isBasedOn some  
          (Situation
           and ('in relation to' some User))) or (isBasedOn some  
           ('Experiential Knowledge'
           and ('in relation to' some  User))))
       and (isBasedOn some 'System Recommendation')) 
      or
      (isBasedOn some  
          ('System Recommendation'
           and ((used some 
              (Situation
               and ('in relation to' some User))) or (used some  
               ('Experiential Knowledge'
               and ('in relation to' some User)))))
      )
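
    • Illustrative Sketch: A sketch that follows the responsibility sufficiency condition above more directly, assuming a hypothetical 'part of' property linking the task to its System and PROV's wasAttributedTo linking the system to its 'System Developer' (placeholder labels, not confirmed ontology terms), could read:
           isBasedOn some 
           ('System Recommendation'
            and (prov:wasGeneratedBy some 
               ('Artificial Intelligence Task'
                and ('part of' some 
                   (System
                    and (prov:wasAttributedTo some 'System Developer'))))))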
            
  11. Safety and Performance Explanation
    • Definition: Deals with the steps taken across the design and implementation of an ML system to maximise the accuracy, reliability, security, and robustness of its decisions and behaviours. Safety and performance explanations help to assure individuals that an ML system is safe and reliable by explaining how the accuracy, reliability, security, and robustness of the ML model are tested and monitored.
    • Prototypical Question: What steps were taken to ensure the robustness and reliability of the system? How has the data been used to train the ML model? What steps were taken to ensure the robustness and reliability of the AI method? What were the plans for the system development?
    • Sufficiency Condition:
      Is there a ‘system recommendation’ from an ‘AI method’ that is part of a ‘system’ that exposes its design ‘plans’?
      Is there a ‘system recommendation’ that includes ‘object records’ that are used to train / test the ‘AI method’?
    • OWL Restriction:
          (
      ((isBasedOn some  
          (Situation
           and ('in relation to' some User))) or (isBasedOn some  
           ('Experiential Knowledge'
           and ('in relation to' some  User))))
       and (isBasedOn some 'System Recommendation')) 
      or
      (isBasedOn some  
          ('System Recommendation'
           and ((used some 
              (Situation
               and ('in relation to' some User))) or (used some  
               ('Experiential Knowledge'
               and ('in relation to' some User)))))
      )
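
    • Illustrative Sketch: A sketch that follows the safety-and-performance sufficiency conditions above more directly, assuming a hypothetical 'part of' property linking the task to its System and a Plan class that the system's design used (placeholder labels, not confirmed ontology terms), could read:
           isBasedOn some 
           ('System Recommendation'
            and (prov:wasGeneratedBy some 
               ('Artificial Intelligence Task'
                and ('has input' some 'Object Record')
                and ('part of' some 
                   (System
                    and (used some Plan))))))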
            
  12. Scientific Explanation
    • Definition: References the results of rigorous scientific methods, observations, and measurements.
    • Prototypical Question: What studies have backed this recommendation?
    • Sufficiency Condition:
      Are there results of rigorous ‘scientific methods’ to explain the situation?
      Is there ‘evidence’ from the literature to explain this ‘situation’?
    • OWL Restriction:
          (
      (isBasedOn some 'System Recommendation')
       and (isBasedOn some 
          ('Scientific Knowledge'
            and ((wasGeneratedBy some Study) or (wasGeneratedBy some 'Scientific Method'))))) 
      or
      (isBasedOn some 
          ('System Recommendation'
           and (used some 
              ('Scientific Knowledge'
                and ((wasGeneratedBy some Study) or (wasGeneratedBy some 'Scientific Method')))))
      )
            
  13. Simulation Based Explanation
    • Definition: Uses an imagined or implemented imitation of a system or process and the results that emerge from similar inputs.
    • Prototypical Question: What would happen if this recommendation were followed?
    • Sufficiency Condition:
      Is there an ‘implemented’ imitation of the ‘situation’ at hand?
      Does that other scenario have inputs similar to the current ‘situation’?
    • OWL Restriction:
          isBasedOn some 
          ('System Recommendation'
           and (wasGeneratedBy some 
              ('AI Task'
               and ('has input' some 
                  ('Object Record'
                   and (hasSetting some Situation))))))
            
  14. Statistical Explanation
    • Definition: Presents an account of the outcome based on data about the occurrence of events under specified (e.g., experimental) conditions. Statistical explanations refer to numerical evidence on the likelihood of factors or processes influencing the result.
    • Prototypical Question: What percentage of people with this condition have recovered?
    • Sufficiency Condition:
      Is there ‘numerical evidence’ or a likelihood account of the ‘system recommendation’, based on data about the occurrence of the outcome described in the recommendation?
    • OWL Restriction:
           isBasedOn some 
          ('System Recommendation'
           and (used some 
              ('Numerical Evidence'
               and ('in relation to' some 'Object Record'))))
            
  15. Trace Based Explanation
    • Definition: Provides the underlying sequence of steps used by the system to arrive at a specific result, containing the line of reasoning per case and addressing the question of why and how the application did something.
    • Prototypical Question: What steps were taken by the system to generate this recommendation?
    • Sufficiency Condition:
      Is there a record of the underlying sequence of steps (a ‘system trace’) used by the ‘system’ to arrive at a specific ‘recommendation’?
    • OWL Restriction:
           isBasedOn some 
          ('System Recommendation'
           and (wasGeneratedBy some  
              ('AI Task'
               and (used some  
                  (('Decision Tree' or 'Knowledge based Systems')
                   and ('has output' some 'System Trace'))))))