Scientific Fact-Checking: A Survey of Resources and Approaches

Annotations

Annotation

« it has become difficult to discern reliable sources from dubious content »()

Annotation

« We define scientific fact-checking as a variation of the fact-checking task that deals with assessing claims rooted in scientific knowledge. »(2)

Annotation

« working with highly complex scientific language and specific terminology. »(3)

Annotation

« The task of Natural Language Inference (NLI), commonly equated with Recognizing Textual Entailment (RTE), is the task of inferring whether a premise entails or contradicts a given hypothesis. This task is a crucial component of automated fact-checking since predicting the final veracity of the claim is modeled as entailment recognition between a claim and found evidence. »(3)
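
To make this concrete, here is a minimal sketch of verdict prediction as premise-hypothesis entailment. It assumes the publicly available roberta-large-mnli checkpoint (any MNLI-style model would work the same way); the claim and evidence strings are invented for illustration.

```python
# Minimal sketch: verdict prediction as premise-hypothesis entailment.
# roberta-large-mnli is one public NLI checkpoint; any MNLI-style model works.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

evidence = "The trial found a 90% reduction in infections among vaccinated participants."  # premise
claim = "The vaccine is effective at preventing infection."                                # hypothesis

inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # CONTRADICTION / NEUTRAL / ENTAILMENT
```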

Annotation

« For the scientific domain, there are datasets like MedNLI, which features medical claims rooted in the medical history of patients (Romanov and Shivade, 2018); SciNLI, which has claims from the domain of computational linguistics (Sadat and Caragea, 2022); and NLI4CT, with claims and evidence that originate from clinical trial reports of breast cancer patients (Vladika and Matthes, 2023). »(3)

Annotation

« with the search string ("scientific" OR "biomedical") AND ("fact checking" OR "fact verification" OR "claim verification"). »(4)

Annotation

« SUPPORTED, REFUTED, and NOT ENOUGH INFORMATION (NEI). »(5)
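
These three verdict classes line up with the standard NLI label set (entailment / contradiction / neutral), which is what lets the verdict step reuse NLI models. A small sketch of that mapping follows; the correspondence shown is the conventional one, and individual datasets may define NEI differently.

```python
from enum import Enum

class Verdict(Enum):
    SUPPORTED = "SUPPORTED"
    REFUTED = "REFUTED"
    NEI = "NOT ENOUGH INFORMATION"

# Assumed, conventional mapping from MNLI-style classes to fact-checking
# verdicts; exact annotation guidelines vary per dataset.
NLI_TO_VERDICT = {
    "ENTAILMENT": Verdict.SUPPORTED,
    "CONTRADICTION": Verdict.REFUTED,
    "NEUTRAL": Verdict.NEI,
}
```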

Annotation

« The standard framework usually consists of three major components that can all be modeled as well-established NLP tasks: document retrieval, evidence (rationale) selection, and verdict prediction »(5)
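
A toy end-to-end sketch of those three components, under loose assumptions: BM25 (via the rank_bm25 package) stands in for both document retrieval and rationale selection, and the same MNLI checkpoint as above handles verdict prediction. Real systems replace each stage with trained neural components; the documents and claim here are invented.

```python
# Toy three-stage pipeline: retrieval -> rationale selection -> verdict prediction.
# BM25 stands in for the two retrieval stages; production systems train these.
import torch
from rank_bm25 import BM25Okapi
from transformers import AutoTokenizer, AutoModelForSequenceClassification

documents = [
    "Vitamin C has been studied extensively. Supplementation does not prevent "
    "the common cold in the general population.",
    "Regular exercise is associated with a reduced risk of cardiovascular disease.",
]
claim = "Taking vitamin C prevents colds."
query = claim.lower().split()

# Stage 1: document retrieval over the whole corpus.
doc_index = BM25Okapi([d.lower().split() for d in documents])
top_doc = documents[doc_index.get_scores(query).argmax()]

# Stage 2: evidence (rationale) selection over the document's sentences.
sentences = [s.strip() for s in top_doc.split(". ") if s.strip()]
sent_index = BM25Okapi([s.lower().split() for s in sentences])
evidence = sentences[sent_index.get_scores(query).argmax()]

# Stage 3: verdict prediction as claim-evidence entailment (see sketch above).
tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
inputs = tok(evidence, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    label = nli.config.id2label[nli(**inputs).logits.argmax(dim=-1).item()]
print(evidence, "->", label)  # e.g. CONTRADICTION, i.e. REFUTED
```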

Annotation

« ParagraphJoint, ARSJoint, and MultiVerS are so-called joint models because they all use multi-task learning to jointly learn the tasks of rationale selection and verdict prediction »(7)
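
As an illustration of the joint (multi-task) setup only, and not of the actual ParagraphJoint, ARSJoint, or MultiVerS architectures, the sketch below shares one encoder between a per-sentence rationale head and a verdict head and sums the two losses during training. The GRU is a stand-in for a transformer encoder; all shapes and names are assumptions.

```python
# Schematic of joint rationale selection + verdict prediction via a shared
# encoder and two heads, trained with a summed multi-task loss.
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    def __init__(self, hidden=768, num_verdicts=3):
        super().__init__()
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)  # stand-in for a transformer
        self.rationale_head = nn.Linear(hidden, 1)    # per-sentence relevance logit
        self.verdict_head = nn.Linear(hidden, num_verdicts)

    def forward(self, sentence_embs):
        # sentence_embs: (batch, num_sentences, hidden) claim-aware sentence vectors
        states, _ = self.encoder(sentence_embs)
        rationale_logits = self.rationale_head(states).squeeze(-1)  # (batch, num_sentences)
        verdict_logits = self.verdict_head(states.mean(dim=1))      # pool over sentences
        return rationale_logits, verdict_logits

model = JointFactChecker()
embs = torch.randn(2, 5, 768)                       # toy batch: 2 abstracts, 5 sentences each
rationale_gold = torch.randint(0, 2, (2, 5)).float()
verdict_gold = torch.randint(0, 3, (2,))

r_logits, v_logits = model(embs)
loss = (nn.BCEWithLogitsLoss()(r_logits, rationale_gold)
        + nn.CrossEntropyLoss()(v_logits, verdict_gold))  # jointly learned objectives
loss.backward()
```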

