bigai-ai/ICE

The code for Consistent In-Context Editing (ICE), an approach for tuning language models through contextual distributions, overcoming the limitations of traditional fine-tuning methods that learn towards one-hot targets.


About

This repository is the official implementation of the paper "In-Context Editing: Learning Knowledge from Self-Induced Distributions". The main idea of this paper is to use an in-context distribution to guide the learning process of knowledge editing for language models.
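As a rough illustration of this idea, the sketch below uses the model's own next-token distribution under a prepended context as a soft target for the bare query. It is a minimal sketch, not the repository's actual implementation; `model`, `tokenizer`, `context`, and `query` are placeholders for a Hugging Face causal LM and its inputs.

```python
import torch
import torch.nn.functional as F

def in_context_editing_loss(model, tokenizer, context, query):
    """Illustrative sketch: KL divergence between the model's next-token
    distribution on the bare query and its self-induced distribution
    when the editing context is prepended."""
    plain = tokenizer(query, return_tensors="pt")
    ctx = tokenizer(context + " " + query, return_tensors="pt")

    # The distribution induced by the context acts as a (detached) soft target.
    with torch.no_grad():
        target_logits = model(**ctx).logits[:, -1, :]

    # The distribution on the query alone is what gets tuned.
    logits = model(**plain).logits[:, -1, :]

    return F.kl_div(
        F.log_softmax(logits, dim=-1),
        F.softmax(target_logits, dim=-1),
        reduction="batchmean",
    )
```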

This project is built on top of EasyEdit; please refer to that repository for details on other methods and for an overview of knowledge editing. Related repositories:

  • EasyEdit: an open-source knowledge-editing framework.
  • ROME: a related locate-and-edit method.
  • MEMIT: a related locate-and-edit method.

Table of Contents

  • Dataset
  • Requirements and Installation
  • Evaluation
  • Training
  • Citation

🤗Dataset

We evaluate our method on four datasets: WikiData_recent, ZsRE, WikiBio, and WikiData_counterfact. Together they cover the two knowledge-editing tasks below, testing the generalization of our method.

| Dataset | Task | Type | # Train | # Test |
| --- | --- | --- | --- | --- |
| WikiData_recent | Knowledge Insertion | Fact | 570 | 1,266 |
| ZsRE | Knowledge Modification | Question Answering | 10,000 | 1,230 |
| WikiBio | Knowledge Modification | Hallucination | 592 | 1,392 |
| WikiData_counterfact | Knowledge Modification | Counterfact | 1,455 | 885 |

You can download the data from the 🤗 Hugging Face Dataset. The expected file structure is:

```
ICE
|-- data
|   |-- wikibio.json
|   |-- wikidata_counterfact.json
|   |-- wikidata_recent.json
|   |-- zsre.json
```
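To sanity-check a download, you can load one of the files and peek at its first record. This is a minimal sketch; it assumes each file is a top-level JSON list of edit records, and the field names follow the KnowEdit format.

```python
import json

# Load one of the downloaded datasets and inspect its structure.
with open("data/zsre.json") as f:
    records = json.load(f)  # assumption: a top-level JSON list of edit records

print(len(records), "records")
print(json.dumps(records[0], indent=2)[:500])  # first record, truncated for display
```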

🛠️Requirements and Installation

```bash
# clone ICE
git clone https://github.com/Yofuria/ICE.git
cd ICE

# create conda env
conda create -n ICE python=3.10
conda activate ICE

# install packages
pip install -r requirements.txt
```
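After installation, a quick import check can confirm the core dependencies resolved correctly. This assumes requirements.txt includes torch, transformers, and nltk, which the steps below rely on.

```python
# Sanity check for the ICE conda environment.
import torch
import transformers
import nltk

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("nltk:", nltk.__version__)
```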

Lines 32 and 33 of examples/run_knowedit_llama2.py download the NLTK punkt package.

  • If your internet connection is fast enough, you can simply run the script as-is:

```python
if __name__ == "__main__":
    # If you have a slow internet connection and cannot download nltk quickly,
    # comment out these two lines and use the manual method described below.
    import nltk
    nltk.download('punkt')
```

  • If your connection is slow, comment out lines 32 and 33 and manually download the punkt package from punkt. Place it in the ICE environment directory you created, create an nltk_data/tokenizers folder, and extract the contents of punkt into that directory; a verification snippet follows this list.
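Either way, you can verify that NLTK resolves the tokenizer before running the editing script. The nltk_data path below is an example, so adjust it to wherever you extracted punkt.

```python
import nltk

# If punkt was extracted manually, point NLTK at the directory that
# contains tokenizers/punkt (example path; adjust as needed).
nltk.data.path.append("/path/to/conda/envs/ICE/nltk_data")

# Raises LookupError if the punkt tokenizer cannot be found.
print(nltk.data.find("tokenizers/punkt"))
```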

🤖Evaluation

You can get the evaluation results using eval.py.

The data used for PPL_r is the file of model-generated sentences saved during the edit operation, e.g. ICE_zsre_Llama-2-7b-chat-hf_gen_sentence.json.

```bash
# --model_name_or_path : path to the pre-trained model
# --output_file        : generated-sentences file (xxx.json)
# --result_file        : result file (xxx.json)
python eval.py \
    --model_name_or_path='' \
    --output_file='./FT-M_counterfact_gpt2-xl_gen_sentence.json' \
    --result_file='./FT-M_counterfact_gpt2-xl_results.json'
```

You will get the following metrics:

```
Edit_Succ: 30.262626262626263
Portability: 7.3802393354053
Portability (Subject_Aliasing_acc): 6.939620928384972
Portability (reasoning_acc): 3.511697773992855
Portability (Logical_Generalization_acc): 9.11111111111111
Locality: 33.95236461069794
Fluency: 557.8193009507412
ppl_r:  tensor(9.9633, device='cuda:0')
```
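For reference, ppl_r is a perplexity over the saved generated sentences. The sketch below shows how such a perplexity can be computed with Hugging Face transformers; it is an illustration rather than eval.py's exact procedure, and the model name and sentence list are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_perplexity(sentences, model_name="gpt2"):
    """Average per-sentence perplexity under a causal LM."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    ppls = []
    for s in sentences:
        ids = tok(s, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
        ppls.append(loss.exp())
    return torch.stack(ppls).mean()

print(mean_perplexity(["The capital of France is Paris."]))
```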

💥Training

We provide the training hyperparameters for five methods in ./hparams.

For ICE, we update GPT2-xl using layers 13 to 17 and Llama2-7b-chat using layers 4 to 8.

Both FT-L and FT-M use the same hparams, located in ./hparams/FT.

For FT-L, set objective_optimization to prompt_last; for FT-M, set it to target_new (a small helper for switching this key is sketched after the command list below). For details on other methods, please refer to EasyEdit. You can run the following commands to obtain results:

For ICE:

```bash
python examples/run_knowedit_llama2.py \
    --editing_method=ICE \
    --hparams_dir=./hparams/ICE/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/ICE
```

For FT-L:

```bash
python examples/run_knowedit_llama2.py \
    --editing_method=FT-L \
    --hparams_dir=./hparams/FT/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/FT-L
```

For FT-M:

```bash
python examples/run_knowedit_llama2.py \
    --editing_method=FT-M \
    --hparams_dir=./hparams/FT/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/FT-M
```

For MEMIT:

```bash
python examples/run_knowedit_llama2.py \
    --editing_method=MEMIT \
    --hparams_dir=./hparams/MEMIT/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/MEMIT
```

For ROME:

```bash
python examples/run_knowedit_llama2.py \
    --editing_method=ROME \
    --hparams_dir=./hparams/ROME/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/ROME
```
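As noted above, FT-L and FT-M differ only in the objective_optimization key of the shared FT hparams. A small helper can switch it programmatically; this is a sketch assuming PyYAML is installed and that the key sits at the top level of ./hparams/FT/gpt2-xl.yaml.

```python
import yaml

def set_objective(path, objective):
    """Switch between 'prompt_last' (FT-L) and 'target_new' (FT-M)."""
    with open(path) as f:
        hparams = yaml.safe_load(f)
    hparams["objective_optimization"] = objective  # key per the FT hparams
    with open(path, "w") as f:
        yaml.safe_dump(hparams, f, sort_keys=False)

set_objective("./hparams/FT/gpt2-xl.yaml", "prompt_last")  # configure FT-L
```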

Valid values for --datatype are ['zsre', 'recent', 'counterfact', 'wikibio'].
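To sweep a method over all four datasets, a small driver can map each datatype to its data file. File names follow the Dataset section above; treat this as a convenience sketch rather than a supported entry point.

```python
import subprocess

# datatype -> data file, following the layout in the Dataset section
DATA_FILES = {
    "zsre": "zsre.json",
    "recent": "wikidata_recent.json",
    "counterfact": "wikidata_counterfact.json",
    "wikibio": "wikibio.json",
}

for datatype, filename in DATA_FILES.items():
    subprocess.run(
        [
            "python", "examples/run_knowedit_llama2.py",
            "--editing_method=ICE",
            "--hparams_dir=./hparams/ICE/gpt2-xl.yaml",
            f"--data_dir=./data/{filename}",
            f"--datatype={datatype}",
            "--metrics_save_dir=./results/gpt2-xl/ICE",
        ],
        check=True,
    )
```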

✏️Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and citation 📝.

```bibtex
@article{qi2024ice,
      title={In-Context Editing: Learning Knowledge from Self-Induced Distributions},
      author={Siyuan Qi and Bangcheng Yang and Kailin Jiang and Xiaobo Wang and Jiaqi Li and Yifan Zhong and Yaodong Yang and Zilong Zheng},
      year={2024},
      eprint={2406.11194},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.11194},
}
```

