G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
Summary
- In this work, the authors use an auto chain-of-thought (auto-CoT) prompting method to evaluate natural language generation
- They experiment with two generation tasks:
    - text summarization
    - dialogue generation
- They use probability-based scoring rather than directly asking the model for a score, because:
    - for some tasks, one digit dominates the score distribution (e.g., 3 on a 1-5 scale)
    - LLMs usually output integer scores even when asked for decimal values, which leads to many ties
- LLM-based metrics prefer LLM-generated text even when human-written texts are better
- First, the LLM generates CoT evaluation steps based on the task and the evaluation criteria
- Next, the LLM scores the target text based on the task, the evaluation criteria, and the generated CoT
- The final score is the probability-weighted sum of the candidate scores, as shown below
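Concretely, the paper's form-filling scoring takes the expectation of the score under the LLM's output-token probabilities, where p(s_i) is the probability the model assigns to candidate score s_i out of the n possible scores:

$$\mathrm{score} = \sum_{i=1}^{n} p(s_i) \times s_i$$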
Example Prompt
You will be given one summary written for a news article.
Your task is to rate the summary on one metric.
Please make sure you read and understand these instructions carefully. Please keep this document open while reviewing, and refer to it as needed.
Evaluation Criteria:
Coherence (1-5) - the collective quality of all sentences. We align this dimension with the DUC quality question of structure and coherence whereby "the summary should be well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic."
Evaluation Steps:
1. Read the news article carefully and identify the main topic and key points.
2. Read the summary and compare it to the news article. Check if the summary covers the main topic and key points of the news article, and if it presents them in a clear and logical order.
3. Assign a score for coherence on a scale of 1 to 5, where 1 is the lowest and 5 is the highest based on the Evaluation Criteria.
Example:
Source Text:
{{Document}}
Summary:
{{Summary}}
Evaluation Form (scores ONLY):
- Coherence:
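A minimal sketch of the probability-weighted scoring step (not the authors' released code). It assumes a token-to-log-probability mapping has already been retrieved, e.g. the API's top logprobs for the first token the model emits after "Coherence:"; `expected_score` is a hypothetical helper name:

```python
import math

def expected_score(top_logprobs: dict[str, float],
                   min_score: int = 1, max_score: int = 5) -> float:
    """Probability-weighted score over the candidate score tokens.

    top_logprobs maps candidate output tokens to their log-probabilities,
    e.g. taken from the first token generated for the evaluation form.
    """
    # Keep only tokens that parse as in-range scores; convert logprobs to probabilities.
    probs = {
        int(tok): math.exp(lp)
        for tok, lp in top_logprobs.items()
        if tok.strip().isdigit() and min_score <= int(tok) <= max_score
    }
    # Renormalize over the surviving score tokens and take the expectation.
    total = sum(probs.values())
    return sum(score * p / total for score, p in probs.items())

# Example with made-up logprobs: 0.5 on "3", 0.3 on "4", 0.2 on "2".
print(expected_score({"3": math.log(0.5), "4": math.log(0.3), "2": math.log(0.2)}))
# ≈ 3.1, instead of a hard integer score of 3
```

When token probabilities are not exposed by the API, the paper estimates them by sampling the evaluator multiple times (n = 20 for GPT-4) and averaging, which has the same tie-breaking effect.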
Annotations
« LLM-based metrics have a potential issue of preferring LLM-generated texts over human-written texts, which may lead to the self-reinforcement of LLMs if LLM-based metrics are used as the reward signal for improving themselves. »(2)
« We find that LLM can generate such evaluation steps by itself »(3)
« The scoring function calls the LLM with the designed prompt, auto CoT, the input context and the target text that needs to be evaluated »(3)
« However, we notice this direct scoring function has two issues »(3)
« For some evaluation tasks, one digit usually dominates the distribution of the scores, such as 3 for a 1-5 scale. »(3)
« LLMs usually only output integer scores, even when the prompt explicitly requests decimal values. This leads to many ties in evaluation scores »(3)
Date : 05-23-2023
Authors : Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu
Paper Link : http://arxiv.org/abs/2303.16634
Zotero Link: Preprint PDF
Tags : ##p1
Citation : @article{Liu_Iter_Xu_Wang_Xu_Zhu_2023, title={G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment}, url={http://arxiv.org/abs/2303.16634}, DOI={10.48550/arXiv.2303.16634}, abstractNote={The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose preliminary analysis on the behavior of LLM-based evaluators, and highlight the potential issue of LLM-based evaluators having a bias towards the LLM-generated texts. The code is at https://github.com/nlpyang/geval}, note={arXiv:2303.16634 [cs]}, number={arXiv:2303.16634}, publisher={arXiv}, author={Liu, Yang and Iter, Dan and Xu, Yichong and Wang, Shuohang and Xu, Ruochen and Zhu, Chenguang}, year={2023}, month=may }