PubMedQA: A Dataset for Biomedical Research Question Answering
Summary
In this paper, the authors present a dataset with three subsets (a loading sketch follows the list):
- Human annotated (PQA-L): 1k instances, used for cross-validation and testing
- Artificially labeled (PQA-A): 211.3k instances, labeled by a simple deterministic heuristic
- Unlabeled (PQA-U): 61.2k instances
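A minimal loading sketch, assuming the Hugging Face `datasets` mirror of PubMedQA; the config names `pqa_labeled`, `pqa_artificial`, and `pqa_unlabeled` are how that mirror exposes the subsets, not terminology from the paper:

```python
# Sketch: inspect the three PubMedQA subsets via the Hugging Face mirror.
# Assumption: the "pubmed_qa" dataset with these config names is available.
from datasets import load_dataset

for config in ("pqa_labeled", "pqa_artificial", "pqa_unlabeled"):
    ds = load_dataset("pubmed_qa", config, split="train")  # each config ships a single split
    print(config, len(ds))  # roughly 1k / 211.3k / 61.2k instances
```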
The heuristic the authors use to build the artificially labeled subset (a sketch follows the list):
- Identify statement titles with the POS structure NP-(VBP/VBZ) [1]
- Move the copular verb ("is", "are") to the front, or prepend "does"/"do" for other verbs, to convert the statement into a yes/no question
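A minimal sketch of this conversion, using NLTK's POS tagger as a stand-in for the Stanford CoreNLP parser the authors used; the lemmatization and recapitalization details are my own simplifications, and the paper's yes/no labeling rules are not reproduced here:

```python
import nltk
from nltk.stem import WordNetLemmatizer

# Download name may be "averaged_perceptron_tagger_eng" on newer NLTK versions.
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("wordnet", quiet=True)

lemmatize = WordNetLemmatizer().lemmatize

def statement_to_question(title: str) -> str | None:
    tokens = title.rstrip(".").split()
    tagged = nltk.pos_tag(tokens)
    for i, (word, tag) in enumerate(tagged):
        # First present-tense verb (VBZ/VBP) preceded by a noun phrase.
        if tag in ("VBZ", "VBP") and i > 0:
            if word.lower() in ("is", "are"):
                aux, rest = word.capitalize(), tokens[:i] + tokens[i + 1:]
            else:
                aux = "Does" if tag == "VBZ" else "Do"
                rest = tokens[:i] + [lemmatize(word, "v")] + tokens[i + 1:]
            rest[0] = rest[0].lower()  # crude recapitalization; proper nouns suffer
            return f"{aux} {' '.join(rest)}?"
    return None  # no NP-(VBZ/VBP) structure: the title is skipped

print(statement_to_question("Preoperative statins reduce atrial fibrillation after CABG."))
# -> Do preoperative statins reduce atrial fibrillation after CABG?
```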
Annotations
« Interestingly, more than half of the question titles of PubMed articles can be briefly answered by yes/no/maybe, »(2)
« manually labeled 1k of them for cross-validation and testing »(2)
« The rest of yes/no/answerable QA instances compose of the unlabeled subset which can be used for semisupervised learning. »(2)
« we automatically convert statement titles of 211.3k PubMed articles to questions and label them with yes/no answers using a simple heuristic. »(2)
« PubMed articles which have i) a question mark in the titles and ii) a structured abstract with conclusive part are collected and denoted as pre-PQA-U. Now each instance has 1) a question which is the original title 2) a context which is the structured abstract without the conclusive part and 3) a long answer which is the conclusive part of the abstract »(3)
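A minimal sketch of this pre-PQA-U collection step; the input record layout (title plus section-labeled abstract) is a hypothetical stand-in for a parsed PubMed entry:

```python
# Sketch: build one pre-PQA-U instance from a parsed PubMed article.
# The dict layout of `article` is assumed, not from the paper.
def to_pre_pqa_u(article: dict) -> dict | None:
    title = article["title"]
    sections = article["abstract_sections"]  # e.g. {"BACKGROUND": ..., "CONCLUSIONS": ...}
    conclusion_keys = [k for k in sections if k.upper().startswith("CONCLUSION")]
    # Keep only articles with i) a question-mark title and
    # ii) a structured abstract that has a conclusive part.
    if not title.endswith("?") or not conclusion_keys:
        return None
    return {
        "question": title,  # 1) the original title
        "context": {k: v for k, v in sections.items()
                    if k not in conclusion_keys},  # 2) abstract minus conclusion
        "long_answer": " ".join(sections[k] for k in conclusion_keys),  # 3) conclusive part
    }
```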
« We denote prediction using question and context as a reasoning-required setting, »(4)
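A minimal sketch of input construction for this reasoning-required setting (question + context, no long answer); the BioBERT checkpoint name and the plain sequence-pair encoding are assumptions, not the paper's exact multi-phase fine-tuning setup:

```python
# Sketch: encode a (question, context) pair for a BERT-style classifier.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

def encode_instance(question: str, context_sections: dict) -> dict:
    # Flatten the structured abstract (minus its conclusion) into one context string.
    context = " ".join(context_sections.values())
    # Standard [CLS] question [SEP] context [SEP] encoding; a 3-way
    # yes/no/maybe head would sit on top of the [CLS] representation.
    return tokenizer(question, context, truncation=True, max_length=512)
```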
Date : 09-13-2019
Authors : Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, Xinghua Lu
Paper Link : http://arxiv.org/abs/1909.06146
Zotero Link: Preprint PDF
Tags : #/unread
Citation : @article{Jin_Dhingra_Liu_Cohen_Lu_2019, title={PubMedQA: A Dataset for Biomedical Research Question Answering}, url={http://arxiv.org/abs/1909.06146}, DOI={10.48550/arXiv.1909.06146}, abstractNote={We introduce PubMedQA, a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expert-annotated, 61.2k unlabeled and 211.3k artificially generated QA instances. Each PubMedQA instance is composed of (1) a question which is either an existing research article title or derived from one, (2) a context which is the corresponding abstract without its conclusion, (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and (4) a yes/no/maybe answer which summarizes the conclusion. PubMedQA is the first QA dataset where reasoning over biomedical research texts, especially their quantitative contents, is required to answer the questions. Our best performing model, multi-phase fine-tuning of BioBERT with long answer bag-of-word statistics as additional supervision, achieves 68.1% accuracy, compared to single human performance of 78.0% accuracy and majority-baseline of 55.2% accuracy, leaving much room for improvement. PubMedQA is publicly available at https://pubmedqa.github.io.}, note={arXiv:1909.06146 [cs]}, number={arXiv:1909.06146}, publisher={arXiv}, author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William W. and Lu, Xinghua}, year={2019}, month=sep }
[1] Using the Stanford CoreNLP parser.