Scientific Fact-Checking: A Survey of Resources and Approaches
Summary
- A survey on scientific fact-checking
- Automated fact-checking can be framed as the RTE (Recognizing Textual Entailment) task: predicting a verdict amounts to deciding whether the retrieved evidence entails the claim (a minimal sketch follows this list)
- MedNLI: medical claims rooted in patients' medical history
- SciNLI: claims from the domain of computational linguistics
- NLI4CT: claims and evidence rooted in clinical trial reports
- ParagraphJoint, ARSJoint, and MultiVerS are so-called joint models because they all use multi-task learning to jointly learn the tasks of rationale selection and verdict prediction.
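To make the RTE framing concrete, here is a minimal sketch using the HuggingFace transformers text-classification pipeline. The checkpoint name (roberta-large-mnli) and the example texts are assumptions for illustration; any NLI-finetuned model would slot in the same way.

```python
# Minimal sketch: claim verification framed as NLI/RTE with an off-the-shelf
# entailment model. The evidence plays the role of the premise and the claim
# the role of the hypothesis.
from transformers import pipeline

# Assumed checkpoint; any MNLI-style NLI model works the same way.
nli = pipeline("text-classification", model="roberta-large-mnli")

evidence = "The trial reported a significant reduction in cold duration."
claim = "Vitamin C shortens the duration of colds."

# The pipeline accepts a (text, text_pair) dictionary for sentence pairs.
result = nli({"text": evidence, "text_pair": claim})
print(result)  # e.g. {'label': 'ENTAILMENT', 'score': ...}
```

The ENTAILMENT / CONTRADICTION / NEUTRAL labels of such a model map naturally onto the SUPPORTED / REFUTED / NEI verdicts used in fact-checking datasets.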
Annotations
« We define scientific fact-checking as a variation of the fact-checking task that deals with assessing claims rooted in scientific knowledge. »(2)
« working with highly complex scientific language and specific terminology. »(3)
« The task of Natural Language Inference (NLI), commonly equated with Recognizing Textual Entailment (RTE), is the task of inferring whether a premise entails or contradicts a given hypothesis. This task is a crucial component of automated fact-checking since predicting the final veracity of the claim is modeled as entailment recognition between a claim and found evidence. »(3)
« For the scientific domain, datasets like MedNLI, which features medical claims rooted in the medical history of patients (Romanov and Shivade, 2018); SciNLI, which has claims from the domain of computational linguistics (Sadat and Caragea, 2022); and NLI4CT, with claims and evidence that originate from clinical trials reports of breast cancer patients (Vladika and Matthes, 2023). »(3)
« with the search string ("scientific" OR "biomedical") AND ("fact checking" OR "fact verification" OR "claim verification"). »(4)
« SUPPORTED, REFUTED, and NOT ENOUGH INFORMATION (NEI). »(5)
« The standard framework usually consists of three major components that can all be modeled as well-established NLP tasks: document retrieval, evidence (rationale) selection, and verdict prediction »(5)
« ParagraphJoint, ARSJoint, and MultiVerS are so-called joint models because they all use multi-task learning to jointly learn the tasks of rationale selection and verdict prediction »(7)
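The three-component framework quoted above can be sketched end to end. This is an illustrative toy under stated assumptions, not a reference implementation: TF-IDF similarity stands in for real document retrieval and rationale selection, the abstaining verdict stub stands in for an NLI model, and all function names are invented for this sketch.

```python
# Sketch of the survey's standard framework: (1) document retrieval,
# (2) evidence/rationale selection, (3) verdict prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

LABELS = ("SUPPORTED", "REFUTED", "NOT ENOUGH INFORMATION")

def retrieve_documents(claim, corpus, k=3):
    """Stage 1: rank documents by TF-IDF similarity to the claim."""
    vec = TfidfVectorizer().fit(corpus + [claim])
    sims = cosine_similarity(vec.transform([claim]), vec.transform(corpus))[0]
    return [corpus[i] for i in sims.argsort()[::-1][:k]]

def select_rationales(claim, documents, k=2):
    """Stage 2: pick the sentences most similar to the claim as rationales."""
    sentences = [s.strip() for d in documents for s in d.split(".") if s.strip()]
    vec = TfidfVectorizer().fit(sentences + [claim])
    sims = cosine_similarity(vec.transform([claim]), vec.transform(sentences))[0]
    return [sentences[i] for i in sims.argsort()[::-1][:k]]

def predict_verdict(claim, rationales):
    """Stage 3: verdict prediction, modeled as entailment recognition
    between the claim and the rationales. Placeholder: always abstains;
    a real system would call an NLI model here (see sketch above)."""
    return "NOT ENOUGH INFORMATION"

corpus = [
    "Vitamin C intake reduced cold duration in children. Effects were modest.",
    "The vaccine trial met its primary endpoint. No safety signals emerged.",
]
claim = "Vitamin C shortens the duration of colds."
docs = retrieve_documents(claim, corpus)
rationales = select_rationales(claim, docs)
print(predict_verdict(claim, rationales))
```

To make the "joint model" idea from the last annotation concrete: a shared encoder feeds two task heads whose losses are summed, so rationale selection and verdict prediction are learned together. This PyTorch sketch shows only the general multi-task pattern; it is not the architecture of ParagraphJoint, ARSJoint, or MultiVerS, and the GRU encoder, dimensions, and random inputs are placeholder assumptions.

```python
# Schematic multi-task model: one shared encoder, two heads, one summed loss.
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    def __init__(self, hidden=256, num_verdicts=3):
        super().__init__()
        # Placeholder encoder; real joint models use a pretrained transformer.
        self.encoder = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
        self.rationale_head = nn.Linear(hidden, 1)           # per sentence: rationale or not
        self.verdict_head = nn.Linear(hidden, num_verdicts)  # SUPPORTED / REFUTED / NEI

    def forward(self, sentence_embs):
        # sentence_embs: (batch, num_sentences, 128) claim-conditioned sentence vectors
        states, _ = self.encoder(sentence_embs)              # (batch, num_sentences, hidden)
        rationale_logits = self.rationale_head(states).squeeze(-1)
        verdict_logits = self.verdict_head(states.mean(dim=1))  # pooled document vector
        return rationale_logits, verdict_logits

model = JointFactChecker()
x = torch.randn(4, 10, 128)                 # 4 claims, 10 evidence sentences each
rationale_logits, verdict_logits = model(x)
rationale_gold = torch.randint(0, 2, (4, 10)).float()
verdict_gold = torch.randint(0, 3, (4,))
# Multi-task objective: both tasks are learned jointly through a summed loss.
loss = nn.BCEWithLogitsLoss()(rationale_logits, rationale_gold) \
     + nn.CrossEntropyLoss()(verdict_logits, verdict_gold)
loss.backward()
```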
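Sharing the encoder lets the verdict head benefit from the rationale supervision and vice versa, which is the motivation the survey gives for these joint architectures over pipelined ones.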
Date: 07-01-2023
Authors: Juraj Vladika, Florian Matthes
Paper Link: https://aclanthology.org/2023.findings-acl.387/
Zotero Link: Full Text PDF
Tags: ##p1
Citation: @inproceedings{Vladika_Matthes_2023, address={Toronto, Canada}, title={Scientific Fact-Checking: A Survey of Resources and Approaches}, url={https://aclanthology.org/2023.findings-acl.387/}, DOI={10.18653/v1/2023.findings-acl.387}, abstractNote={The task of fact-checking deals with assessing the veracity of factual claims based on credible evidence and background knowledge. In particular, scientific fact-checking is the variation of the task concerned with verifying claims rooted in scientific knowledge. This task has received significant attention due to the growing importance of scientific and health discussions on online platforms. Automated scientific fact-checking methods based on NLP can help combat the spread of misinformation, assist researchers in knowledge discovery, and help individuals understand new scientific breakthroughs. In this paper, we present a comprehensive survey of existing research in this emerging field and its related tasks. We provide a task description, discuss the construction process of existing datasets, and analyze proposed models and approaches. Based on our findings, we identify intriguing challenges and outline potential future directions to advance the field.}, booktitle={Findings of the Association for Computational Linguistics: ACL 2023}, publisher={Association for Computational Linguistics}, author={Vladika, Juraj and Matthes, Florian}, editor={Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki}, year={2023}, month=jul, pages={6215–6230} }