Piecing It All Together: Verifying Multi-Hop Multimodal Claims
Summary
3+ Most Important Things
1+ Deficiencies
3+ New Ideas
Annotations
« Our pipeline first uses LLMs to reformulate multi-hop multimodal question-answer pairs into atomic multi-hop claims and generate a set of candidate claims. »(2)
« One approach to achieving this is to transform multimodal question-answering pairs into atomic claims and refine them to incorporate additional reasoning steps, making them more natural. »(3)
« we develop a pipeline that leverages the emerging capabilities of large language models to generate text and learn from feedback, with human input to ensure the quality of the final output. »(3)
« we employ a modify-then-refine approach that iteratively enhances the quality of the modified claim candidate based on feedback from LLMs »(4)
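The modify-then-refine loop quoted above can be sketched as a simple feedback cycle: an LLM first modifies a claim candidate (e.g., to add a reasoning hop), then repeatedly refines it using LLM-generated critiques until the feedback passes or a round budget runs out. This is a minimal illustrative sketch, not the authors' implementation; `llm`, the prompts, and the stopping test are all assumptions standing in for whatever model calls and criteria the paper actually uses.

```python
def modify_then_refine(claim: str, llm, max_rounds: int = 3) -> str:
    """Iteratively improve a modified claim candidate via LLM feedback.

    `llm` is any callable prompt -> text (a stand-in for a real model call);
    prompts and the 'acceptable' stopping check are illustrative only.
    """
    # Step 1: modify the original claim (e.g., add one more reasoning hop).
    candidate = llm(f"Rewrite this claim to add one more reasoning hop: {claim}")

    # Step 2: refine iteratively based on feedback until it passes or budget ends.
    for _ in range(max_rounds):
        feedback = llm(
            f"Critique this multi-hop claim for naturalness and verifiability: "
            f"{candidate}"
        )
        if "acceptable" in feedback.lower():  # hypothetical pass criterion
            break
        candidate = llm(
            f"Improve the claim using this feedback.\n"
            f"Claim: {candidate}\nFeedback: {feedback}"
        )
    return candidate
```

In the paper this loop is followed by human input on the final output; here that last quality gate is omitted for brevity.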
Date : 01-01-2025
Authors : Haoran Wang, Aman Rangapur, Xiongxiao Xu, Yueqing Liang, Haroon Gharwi, Carl Yang, Kai Shu
Paper Link : https://aclanthology.org/2025.coling-main.498/
Zotero Link : Full Text PDF
Citation :
@inproceedings{Wang_Rangapur_Xu_Liang_Gharwi_Yang_Shu_2025,
  address = {Abu Dhabi, UAE},
  title = {Piecing It All Together: Verifying Multi-Hop Multimodal Claims},
  url = {https://aclanthology.org/2025.coling-main.498/},
  abstractNote = {Existing claim verification datasets often do not require systems to perform complex reasoning or effectively interpret multimodal evidence. To address this, we introduce a new task: multi-hop multimodal claim verification. This task challenges models to reason over multiple pieces of evidence from diverse sources, including text, images, and tables, and determine whether the combined multimodal evidence supports or refutes a given claim. To study this task, we construct MMCV, a large-scale dataset comprising 15k multi-hop claims paired with multimodal evidence, generated and refined using large language models, with additional input from human feedback. We show that MMCV is challenging even for the latest state-of-the-art multimodal large language models, especially as the number of reasoning hops increases. Additionally, we establish a human performance benchmark on a subset of MMCV. We hope this dataset and its evaluation task will encourage future research in multimodal multi-hop claim verification.},
  booktitle = {Proceedings of the 31st International Conference on Computational Linguistics},
  publisher = {Association for Computational Linguistics},
  author = {Wang, Haoran and Rangapur, Aman and Xu, Xiongxiao and Liang, Yueqing and Gharwi, Haroon and Yang, Carl and Shu, Kai},
  editor = {Rambow, Owen and Wanner, Leo and Apidianaki, Marianna and Al-Khalifa, Hend and Di Eugenio, Barbara and Schockaert, Steven},
  year = {2025},
  month = jan,
  pages = {7453--7469}
}