Argumentation Competency Questions

Competency questions for modeling arguments, claims, assumptions, evidence, and justification structures in scholarly discourse.

CQ-ARG-01: What arguments are made in a document?

Intent

Retrieve all argumentative structures asserted in a given document.

Natural Language Question

What arguments does this paper make?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?arg
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument .
}

Expected Result


Semantic modeling of research findings improves transparency and comparison
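
If argument individuals carry rdfs:label annotations (an assumption; the dataset may expose readable text differently), a variant of the query above can return a label next to each argument IRI, reusing the same amo, idea, and po prefixes:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT DISTINCT ?arg ?label
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument .
  # label is optional so unlabeled arguments still appear
  OPTIONAL { ?arg rdfs:label ?label }
}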

CQ-ARG-02: What claims are made with respect to an argument?

Intent

Retrieve the claims put forward by a given argument.

Natural Language Question

What claims are made with respect to the argument?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?claim
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasClaim ?claim .
}

Expected Result


Explicit semantic representation of research findings enhances transparency.
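
To ask the question of one particular argument rather than all arguments in the document, the argument variable can be pinned with a VALUES clause. The IRI idea:argument-semantic-modeling below is a hypothetical placeholder, not an identifier taken from the dataset; the prefixes are those of the query above.

SELECT DISTINCT ?claim
WHERE {
  # hypothetical argument IRI used for illustration only
  VALUES ?arg { idea:argument-semantic-modeling }
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasClaim ?claim .
}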

CQ-ARG-03: How is an argument realized as an approach?

Intent

Identify the methodological approach that operationalizes an argument.

Natural Language Question

How is the argument realized in the paper as an approach?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   

SELECT ?arg ?app
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument ;
       idea:realizes ?app .
}

Expected Result


Semantic modeling of research findings improves transparency and comparison | SemSur ontology-based semantic representation pipeline
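
The query above requires the idea:realizes triple, so arguments that are not linked to any approach are silently dropped. A sketch of a more tolerant variant, reusing the same prefixes, wraps that pattern in OPTIONAL:

SELECT ?arg ?app
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument .
  # ?app remains unbound for arguments without a modeled approach
  OPTIONAL { ?arg idea:realizes ?app }
}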

CQ-ARG-04: What assumptions underlie the approach?

Intent

Retrieve assumptions underlying a methodological approach.

Natural Language Question

What assumptions are made in the approach?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   
PREFIX owl:  

SELECT DISTINCT ?assumption ?type
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument ;
       idea:hasAssumption ?assumption .
  ?assumption a ?type .
  FILTER (?type != owl:NamedIndividual)
}

Expected Result


Structured representations facilitate accessibility and comparability | Assumption
Machine-readable semantic models enable automated reasoning | TheoreticalAssumption
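
When a work asserts many assumptions, aggregating them by type can be more informative than listing each one. A sketch using the same prefixes as the query above:

SELECT ?type (COUNT(DISTINCT ?assumption) AS ?count)
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument ;
       idea:hasAssumption ?assumption .
  ?assumption a ?type .
  FILTER (?type != owl:NamedIndividual)
}
GROUP BY ?type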

CQ-ARG-05: What artifacts are used or introduced by the approach?

Intent

Retrieve scholarly artifacts employed or introduced by an approach.

Natural Language Question

What artifacts are used or introduced in the approach?

SPARQL Query


PREFIX idea: 
PREFIX po:   
PREFIX owl:  

SELECT DISTINCT ?artifact ?type
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg idea:realizes ?app .

  { ?app idea:uses ?artifact }
  UNION
  { ?app idea:introduces ?artifact }

  ?artifact a ?type .
  FILTER (?type != owl:NamedIndividual)
}

Expected Result


FaBiO and CiTO ontologies for bibliographic and citation representation | Model
Document Component Ontology (DoCO) | Model
SemSur ontology for semantic representation of research findings | Model
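
The UNION in the query above discards whether an artifact was reused or newly introduced. A variant can capture that relation in an extra variable by binding a marker in each branch, again with the prefixes declared above:

SELECT DISTINCT ?artifact ?relation
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg idea:realizes ?app .

  # record which property matched for each artifact
  { ?app idea:uses ?artifact .       BIND("uses" AS ?relation) }
  UNION
  { ?app idea:introduces ?artifact . BIND("introduces" AS ?relation) }
}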

CQ-ARG-06: What hypotheses are evaluated during evidence generation?

Intent

Retrieve hypotheses evaluated during the evidence generation process.

Natural Language Question

What experimental hypotheses are evaluated?

SPARQL Query


PREFIX amo:  
PREFIX expo: 
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?evidence ?hypothesis
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasEvidence ?evidence .
  ?evidence expo:hasHypothesis ?hypothesis .
}

Expected Result


Semantic representations improve cross-paper querying | Hypothesis

CQ-ARG-07: What results are produced as evidence?

Intent

Retrieve experimental or evaluative results produced as evidence.

Natural Language Question

What experimental results are generated?

SPARQL Query


PREFIX amo:  
PREFIX expo: 
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?evidence ?result
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasEvidence ?evidence .
  ?evidence expo:hasResult ?result .
}

Expected Result


Semantic representations enabled complex cross-document queries | Result

CQ-ARG-08: What is the evidence generation design?

Intent

Retrieve the experimental or evaluation design used to generate evidence.

Natural Language Question

What is the evidence generation design?

SPARQL Query


PREFIX amo:  
PREFIX expo: 
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?evidence ?design
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasEvidence ?evidence .
  ?evidence expo:hasExperimentDesign ?design .
}

Expected Result


Create semantic representations of research findings | ExperimentalDesign
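
CQ-ARG-06 through CQ-ARG-08 each inspect a single facet of evidence generation. A sketch that gathers hypothesis, result, and design per evidence node in one pass, with OPTIONAL blocks so that partially described evidence still appears, reusing the prefixes from the queries above:

SELECT DISTINCT ?evidence ?hypothesis ?result ?design
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasEvidence ?evidence .
  OPTIONAL { ?evidence expo:hasHypothesis ?hypothesis }
  OPTIONAL { ?evidence expo:hasResult ?result }
  OPTIONAL { ?evidence expo:hasExperimentDesign ?design }
}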

CQ-ARG-09: What evidence supports a given claim?

Intent

Retrieve evidence that directly supports a specific claim.

Natural Language Question

What evidence supports this claim?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?evidence
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasClaim ?claim ;
       amo:hasEvidence ?evidence .
}

Expected Result


Explicit semantic representation enables transparent comparison and reuse.
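
To restrict the question to one specific claim rather than every claim in the document, the claim can be pinned with VALUES. The IRI idea:claim-transparency below is a hypothetical placeholder, not an identifier from the dataset; the prefixes are those of the query above.

SELECT DISTINCT ?evidence
WHERE {
  # hypothetical claim IRI used for illustration only
  VALUES ?claim { idea:claim-transparency }
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasClaim ?claim ;
       amo:hasEvidence ?evidence .
}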

CQ-ARG-11: Which cited works are used as backing?

Intent

Retrieve cited scholarly works that provide backing for claims.

Natural Language Question

Which cited works are used as backing?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?backing
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasBacking ?backing .
}

Expected Result


Prior semantic publishing efforts demonstrate feasibility of structured scholarly representation.
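
Since backings justify warrants in Toulmin-style argumentation, it can help to see each backing alongside the warrants of the same argument. A sketch that joins the two through their shared argument, reusing the prefixes above:

SELECT DISTINCT ?backing ?warrant
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasBacking ?backing ;
       amo:hasWarrant ?warrant .
}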

CQ-ARG-12: What warrants connect evidence to claims?

Intent

Retrieve warrants that justify the connection between evidence and claims.

Natural Language Question

What warrants connect evidence to claims?

SPARQL Query


PREFIX amo:  
PREFIX idea: 
PREFIX po:   

SELECT DISTINCT ?warrant
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg amo:hasWarrant ?warrant .
}

Expected Result


If research contributions are represented as linked entities, they become comparable across publications.
Machine-readable representations enable automated analysis beyond human reading.
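
The component questions above can also be answered in a single pass. The following sketch assembles the full argumentation profile of each argument, with OPTIONAL blocks so that missing components do not suppress a row; it reuses the prefixes declared in the queries above.

SELECT DISTINCT ?arg ?claim ?evidence ?warrant ?backing
WHERE {
  idea:work-semsur-2018 po:contains ?arg .
  ?arg a amo:Argument .
  OPTIONAL { ?arg amo:hasClaim ?claim }
  OPTIONAL { ?arg amo:hasEvidence ?evidence }
  OPTIONAL { ?arg amo:hasWarrant ?warrant }
  OPTIONAL { ?arg amo:hasBacking ?backing }
}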