Paper: https://arxiv.org/abs/2401.08574
Project page: https://lingo-mit.github.io/deductive-closure/
Afra Feyza Akyürek, Ekin Akyürek, Leshem Choshen, Derry Wijaya, and Jacob Andreas
While language models (LMs) can sometimes generate factually correct text and estimate truth values of individual claims, these generally do not reflect a globally coherent, manipulable model of the world. As a consequence, current LMs also generate incorrect or nonsensical content, and are difficult to edit and bring up to date. We present a method called Deductive Closure Training (DCT) that uses LMs themselves to identify implications of (and contradictions within) the text that they generate, yielding an efficient self-supervised procedure for improving LM factuality. Given a collection of seed documents, DCT prompts LMs to generate additional text implied by these documents, reason globally about the correctness of this generated text, and finally fine-tune on text inferred to be correct. Given seed documents from a trusted source, DCT provides a tool for supervised model updating; if seed documents are sampled from the LM itself, DCT enables fully unsupervised fine-tuning for improved coherence and accuracy. Across the CREAK, MQUaKE, and Reversal Curse datasets, supervised DCT improves LM fact verification and text generation accuracy by 3-26%; on CREAK fully unsupervised DCT improves verification accuracy by 12%. These results show that LMs' reasoning capabilities during inference can be leveraged during training to improve their reliability.
Create an environment and install the required packages in `requirements.txt`. The data is available on Google Drive.
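A minimal setup might look like the sketch below; the environment name and Python version are illustrative, and any other virtual-environment tool works equally well:

```bash
# Create and activate a fresh environment (name and Python version are illustrative)
conda create -n dct python=3.9 -y
conda activate dct

# Install the project's dependencies
pip install -r requirements.txt
```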
To generate graphs, check out the sample scripts under `scripts`, e.g. use `sh scripts/run_generate_graphs_mquake.sh`. Graphs will appear under the `dumped` directory. For Unsupervised DCT on CREAK, check out `scripts/run_creak_unsupervised_generate_graphs.sh`.
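For example, assuming the commands are run from the repository root:

```bash
# Supervised DCT: generate deductive-closure graphs for MQuAKE
sh scripts/run_generate_graphs_mquake.sh

# Unsupervised DCT: generate graphs for CREAK
sh scripts/run_creak_unsupervised_generate_graphs.sh

# Generated graphs are written under the dumped/ directory
ls dumped/
```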
To convert statements to questions, check out the sample scripts under `scripts`, e.g. use `sh scripts/run_question_conversion_mquake.sh`. This script calls `create_question_conversions_mquake_llama.py`; make sure to point it to the correct LLaMA checkpoint and tokenizer paths. Within the script you can specify the paths to the graphs whose statements you want to convert to questions. The converted files will appear in the same directories as the graphs. For Unsupervised DCT on CREAK, check out `scripts/run_creak_unsupervised_generate_finetuning.sh`.
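As a sketch, the conversion step might look like the following; the paths are placeholders rather than real defaults, and the exact variable names inside the script may differ:

```bash
# Edit scripts/run_question_conversion_mquake.sh first so that the LLaMA
# checkpoint/tokenizer paths and the graph paths point at your local files,
# e.g. (placeholder paths, not actual defaults):
#   /path/to/llama/checkpoint
#   /path/to/llama/tokenizer.model
#   dumped/<your_graph_run>

# Then run the conversion; converted files are written next to the graphs
sh scripts/run_question_conversion_mquake.sh
```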
Use `sh scripts/run_finetune_eval_mquake.sh` to fine-tune LLaMA with the `peft` library. This script requires the Hugging Face version of LLaMA; if you don't have it, you can convert the original weights with the conversion script shipped with the `transformers` library, or just let the script download the model for you. The adapter checkpoints and test-set predictions will appear in the same directories as the graphs. For test examples, see the Google Drive link above. For Unsupervised DCT, check out `scripts/run_creak_unsupervised_finetune_eval.sh`.
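Putting it together, a run might look like the sketch below; `dumped/<your_graph_run>` is a placeholder for whichever graph directory you generated earlier:

```bash
# Fine-tune LLaMA with PEFT adapters and evaluate on MQuAKE
sh scripts/run_finetune_eval_mquake.sh

# Unsupervised DCT variant for CREAK
sh scripts/run_creak_unsupervised_finetune_eval.sh

# Adapter checkpoints and test-set predictions appear alongside the graphs
ls dumped/<your_graph_run>/
```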