
Argumentative LLMs

This is the official code repository for the paper "Argumentative Large Language Models for Explainable and Contestable Claim Verification". Argumentative LLMs (ArgLLMs) augment large language models with a formal reasoning layer based on computational argumentation. This approach enables ArgLLMs to generate structured and faithful explanations of their reasoning, while also allowing users to challenge and correct any identified issues.
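For intuition about the reasoning layer, below is a minimal, self-contained sketch of evaluating a tree-shaped quantitative bipolar argumentation framework under DF-QuAD-style gradual semantics, one of the semantics considered in the paper. This is an illustration only, not the repository's actual API: the class, function names, and the toy arguments and scores are all invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    """A node in a tree-shaped quantitative bipolar argumentation framework."""
    claim: str
    base_score: float  # intrinsic strength in [0, 1]
    attackers: List["Argument"] = field(default_factory=list)
    supporters: List["Argument"] = field(default_factory=list)

def aggregate(strengths: List[float]) -> float:
    """DF-QuAD aggregation of child strengths: 1 - prod(1 - s_i); 0 if empty."""
    product = 1.0
    for s in strengths:
        product *= 1.0 - s
    return 1.0 - product

def dfquad_strength(arg: Argument) -> float:
    """Recursively compute an argument's final strength under DF-QuAD semantics."""
    va = aggregate([dfquad_strength(a) for a in arg.attackers])
    vs = aggregate([dfquad_strength(s) for s in arg.supporters])
    if va >= vs:
        # Attacks dominate: pull the base score towards 0.
        return arg.base_score - arg.base_score * (va - vs)
    # Supports dominate: pull the base score towards 1.
    return arg.base_score + (1.0 - arg.base_score) * (vs - va)

# Toy example: a claim with one supporting and one attacking argument.
claim = Argument(
    "The claim under verification", base_score=0.5,
    supporters=[Argument("Supporting evidence", 0.8)],
    attackers=[Argument("Counter-evidence", 0.4)],
)
print(f"Final strength: {dfquad_strength(claim):.3f}")  # 0.700
```

Here the supporting argument outweighs the attack, so the claim's final strength rises above its 0.5 base score. In ArgLLMs, the arguments and their base scores are produced and estimated by the LLM itself, while the evaluation step is purely formal, which is what makes the resulting decision explainable and contestable.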

Getting Started

To run the main experiments, please follow these steps:

  1. Install the required dependencies listed in `requirements.txt`. Note that, depending on the versions of the HuggingFace models you use, you may need to update the `transformers` library to a more recent version.
  2. Run the experiments with `python3 main.py <OPTIONS>`. For the list of available options, run `python3 main.py -h`, as in the example below.
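For example, a typical session (assuming `pip` and a working Python 3 environment; only the `-h` flag is shown, since the remaining options depend on your setup) might look like:

```
pip install -r requirements.txt   # install the pinned dependencies
python3 main.py -h                # list the available experiment options
```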

Reproducibility Information

The experiments in our paper were run using the package versions in `requirements.txt` on a locally customized distribution of Ubuntu 22.04.2. The machine was equipped with two NVIDIA RTX 4090 (24 GB) GPUs and an Intel(R) Xeon(R) w5-2455X processor.

Acknowledgements

We thank Nico Potyka and the other contributors to the Uncertainpy package, which we adapted for use in our code.

Citation

If you find our paper and code useful, please consider citing the original work:

@article{freedman-2025-argumentative-llms,
    title = {Argumentative Large Language Models for Explainable and Contestable Claim Verification},
    author = {Freedman, Gabriel and Dejl, Adam and Gorur, Deniz and Yin, Xiang and Rago, Antonio and Toni, Francesca},
    journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
    year = {2025},
    month = apr,
    volume = {39},
    number = {14},
    pages = {14930--14939},
    doi = {10.1609/aaai.v39i14.33637},
    url = {https://ojs.aaai.org/index.php/AAAI/article/view/33637},
    abstractnote = {The profusion of knowledge encoded in large language models (LLMs) and their ability to apply this knowledge zero-shot in a range of settings makes them promising candidates for use in decision-making. However, they are currently limited by their inability to provide outputs which can be faithfully explained and effectively contested to correct mistakes. In this paper, we attempt to reconcile these strengths and weaknesses by introducing argumentative LLMs (ArgLLMs), a method for augmenting LLMs with argumentative reasoning. Concretely, ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. The interpretable nature of these argumentation frameworks and formal reasoning means that any decision made by ArgLLMs may be explained and contested. We evaluate ArgLLMs' performance experimentally in comparison with state-of-the-art techniques, in the context of the decision-making task of claim verification. We also define novel properties to characterise contestability and assess ArgLLMs formally in terms of these properties.}
}
