
QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization

This is the official code repository for the NAACL 2022 paper QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization by Alexander R. Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong.

In our paper, we conduct an extensive comparison of the components of QA-based metrics for factual consistency evaluation in summarization. Our optimized metric builds on QAEval with question consistency filtering and an improved answer overlap metric, leading to a 14% average improvement over previous QA-based metrics on the SummaC factual consistency benchmark.

Table of Contents

  1. Updates
  2. Using QAFactEval
  3. Citation
  4. License

Updates

5/2/2022 - Initial commit! :)

Using QAFactEval

You can install qafacteval via pip:

pip install qafacteval

You can also install from source:

git clone https://github.com/salesforce/QAFactEval
cd QAFactEval
pip install -e .

For use in scripts

Download the required pretrained models using download_models.sh.
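
For example, from the repository root (assuming a bash-compatible shell):

bash download_models.sh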

See run.py for an example of using the QAFactEval metric:

from qafacteval import QAFactEval

# Inference settings: GPU device, LERC-QUIP answer-overlap scoring, verbosity,
# and batch sizes for question generation, question answering, and LERC.
kwargs = {
    "cuda_device": 0,
    "use_lerc_quip": True,
    "verbose": True,
    "generation_batch_size": 32,
    "answering_batch_size": 32,
    "lerc_batch_size": 8,
}

model_folder = ""  # path to models downloaded with download_models.sh
metric = QAFactEval(
    lerc_quip_path=f"{model_folder}/quip-512-mocha",
    generation_model_path=f"{model_folder}/generation/model.tar.gz",
    answering_model_dir=f"{model_folder}/answering",
    lerc_model_path=f"{model_folder}/lerc/model.tar.gz",
    lerc_pretrained_model_path=f"{model_folder}/lerc/pretraining.tar.gz",
    **kwargs
)

# Score a batch: a list of source documents and, for each source, a list of summaries.
results = metric.score_batch_qafacteval(
    ["This is a source document"],
    [["This is a summary."]],
    return_qa_pairs=True,
)
score = results[0][0]['qa-eval']['lerc_quip']
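
The scorer operates on batches: the first argument is a list of source documents, and the second is a parallel list holding one list of candidate summaries per source. The sketch below is a minimal extension of the snippet above, assuming the same metric object and the same results[source_idx][summary_idx] indexing; it scores two candidate summaries of a single source and prints their lerc_quip scores.

# A minimal sketch, assuming the metric object constructed above.
sources = ["The company reported record revenue of 10 billion dollars in 2021."]
summaries = [[
    "The company reported record revenue in 2021.",
    "The company reported a record loss in 2021.",
]]

results = metric.score_batch_qafacteval(sources, summaries, return_qa_pairs=True)

# results is assumed to be indexed as [source_idx][summary_idx], as in the example above.
for summary_idx, summary in enumerate(summaries[0]):
    score = results[0][summary_idx]['qa-eval']['lerc_quip']
    print(f"{score:.3f}\t{summary}")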

Citation

When referencing this repository, please cite this paper:

@misc{fabbri-etal-2022-qafacteval,
  title={QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization},
  author={Alexander R. Fabbri and Chien-Sheng Wu and Wenhao Liu and Caiming Xiong},
  year={2022},
  eprint={2112.08542},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2112.08542},
}

License

This repository is released under the BSD 3-Clause License.
