PHD

The dataset and code for paper: A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection (https://arxiv.org/abs/2310.06498), which has been accepted by EMNLP2023 findings.

Introduction

Large Language Models (LLMs) have shown their ability to collaborate effectively with humans in real-world scenarios. However, LLMs are prone to generating hallucinations, i.e., making up incorrect text and unverified information, which can cause significant damage when deployed for mission-critical tasks. In this paper, we propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion. To facilitate future studies and assess different methods, we construct a hallucination detection benchmark named PHD, which is generated by ChatGPT and annotated by human annotators. In contrast to previous studies of zero-resource hallucination detection, our method and benchmark concentrate on passage-level detection instead of sentence-level detection. We empirically evaluate our method and existing zero-resource detection methods on two datasets. The experimental results demonstrate that the proposed method considerably outperforms the baselines while costing fewer tokens and less time. Furthermore, we manually analyze some hallucination cases that the LLM failed to capture, revealing a shared limitation of zero-resource methods.

Motivation for Studying Passage-level Hallucination Detection

Previous studies suffer from the following two disadvantages:

Suffering from noise and counterfactual content

Retrieval-Augmented Generation (RAG) is an effective strategy for mitigating hallucination. However, the retrieved knowledge does not always help and can even have a negative impact, since the retrieved context may be misleading. In addition, retrieving external knowledge often involves complex pipelines and notable latency. A self-check hallucination detection method can help RAG adaptively call for external resources, enhancing the robustness of RAG.

Only focusing on sentence-level hallucination detection

LLMs tend to furnish users with comprehensive and informative answers instead of a single sentence. Hence, real-world applications often require passage-level hallucination detection rather than sentence-level detection. In many scenarios, a judgment about the entire passage is enough, which enables a quick decision on whether to activate the retrieval module and generate a new response.

Files

  • LMvsLM_replicate/*: We replicate LMvsLM (https://aclanthology.org/2023.emnlp-main.778/) and adapt it to passage-level hallucination detection.
  • SelfCheckBERTScore/*: We use the implementation of SelfCheckGPT (https://github.com/potsawee/selfcheckgpt) released by its authors.
  • Zero-shot_Baseline/*: We prompt the LLM to detect hallucinations in a zero-shot fashion.
  • Reverse_Validation/*: This folder contains the implementation of the two variants of our RV method.
  • Ablation_Study_Llama2-7b/*: This folder contains scripts for deploying a Llama-2-7b-chat-hf API using Flask.
  • Construct_benchmark/*: This folder contains code and records for crawling Google search results, which can be used to expand the PHD benchmark.

Running Instructions

Detect hallucinations using the RV method

python detect_out_dataset.py    #test on PHD benchmark
python detect_wikibio.py        #test on WikiBio-GPT3 dataset
python cal_metric.py            #get detection results and calculate metrics
  • If you want to use the RV-QG variant:
from detection_components import question_generation_pipeline
from cal_metric import get_qg_predict
  • If you want to use the RV-EM variant:
from detection_components import entity_matching_pipeline
from cal_metric import get_em_predict

The RV method is tailored for entity-based QA scenarios. If you want to apply it to your own data, please extract the entity from the query or response first.

  • You can prompt the LLM to extract the entity in a zero-shot style; we provide the following template:
Which entity is the following passage mainly describing? Only extract the entity name.
Passage: {Passage}
  • In practical application scenarios, we can extract the entity directly from the user's query rather than from the model's response to save token costs:
Which entity is the following query mainly asking? Only extract the entity name. 
Query: {query}
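
As an illustration only (not part of the repository), the passage template above could be sent to an OpenAI chat model roughly as follows; the SDK version, model name, and function name are assumptions to adapt to your own environment:

import os
from openai import OpenAI

# A minimal sketch, assuming the OpenAI Python SDK (>=1.0) and an API key in
# OPENAI_API_KEY; the model name below is an example, not the paper's setting.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

ENTITY_TEMPLATE = (
    "Which entity is the following passage mainly describing? "
    "Only extract the entity name.\n"
    "Passage: {passage}"
)

def extract_entity(passage: str, model: str = "gpt-3.5-turbo") -> str:
    """Zero-shot entity extraction with the template above."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": ENTITY_TEMPLATE.format(passage=passage)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

The query-based template works the same way; only the prompt string changes.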

Deploy the Llama-2-7b-chat-hf API

python llama2_flask_api.py
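
For reference, a minimal sketch of such a Flask server is shown below; it is not necessarily identical to llama2_flask_api.py, and the endpoint name, port, and generation parameters are assumptions:

import torch
from flask import Flask, request, jsonify
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # gated model; requires access on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

app = Flask(__name__)

@app.route("/generate", methods=["POST"])  # endpoint name is an assumption
def generate():
    prompt = request.json["prompt"]
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens and return only the newly generated text.
    text = tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return jsonify({"response": text})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)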

Crawl Google search results

python plus_search_result.py
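
As a rough sketch of what such a crawler can look like (the repository's plus_search_result.py may differ; Google's markup changes frequently and automated requests may be blocked, so the request headers and CSS selectors below are assumptions):

import requests
from bs4 import BeautifulSoup

def google_search_items(query: str, num: int = 10) -> list[dict]:
    """Fetch a Google results page and extract title/snippet pairs."""
    headers = {"User-Agent": "Mozilla/5.0"}  # browser-like UA; assumption
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": query, "num": num},
        headers=headers,
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    items = []
    # The result-container selector changes over time; "div.g" is an assumption.
    for result in soup.select("div.g"):
        title = result.find("h3")
        snippet = result.find("span")
        if title:
            items.append({
                "title": title.get_text(),
                "snippet": snippet.get_text() if snippet else "",
            })
    return items

For larger-scale expansion of the benchmark, an official search API is usually more reliable than scraping result pages.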

Cite our Work

@inproceedings{yang-etal-2023-new-benchmark,
    title = "A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection",
    author = "Yang, Shiping  and
      Sun, Renliang  and
      Wan, Xiaojun",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-emnlp.256",
    doi = "10.18653/v1/2023.findings-emnlp.256",
    pages = "3898--3908",
    abstract = "Large Language Models (LLMs) have shown their ability to collaborate effectively with humans in real-world scenarios. However, LLMs are apt to generate hallucinations, i.e., makeup incorrect text and unverified information, which can cause significant damage when deployed for mission-critical tasks. In this paper, we propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion. To facilitate future studies and assess different methods, we construct a hallucination detection benchmark named PHD, which is generated by ChatGPT and annotated by human annotators. Contrasting previous studies of zero-resource hallucination detection, our method and benchmark concentrate on passage-level detection instead of sentence-level. We empirically evaluate our method and existing zero-resource detection methods on two datasets. The experimental results demonstrate that the proposed method considerably outperforms the baselines while costing fewer tokens and less time. Furthermore, we manually analyze some hallucination cases that LLM failed to capture, revealing the shared limitation of zero-resource methods.",
}
