Implicit meta-learning may lead language models to trust more reliable sources

This repository contains code for the language model experiments from the paper "Implicit meta-learning may lead language models to trust more reliable sources" (presented as a poster at ICML 2024).

Steps to get started:

1. Clone the repository

git clone https://github.com/krasheninnikov/internalization.git
cd internalization

2. Configure your Python environment

  • Step 1. Create and activate a new Conda environment:

    conda create --name internalization python=3.11
    conda activate internalization
  • Step 2. Install the dependencies and download the datasets:

    pip install -r requirements.txt
    # download the datasets from Google Drive
    gdown --folder 'https://drive.google.com/drive/folders/1KQDClI3cbFzPhzfknF2xmtqE-aIW1EDf?usp=sharing'
  • Step 3 (Optional). Configure wandb:

    wandb login
    wandb init --entity=your-entity --project=your-project
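
(Optional) A quick sanity check of the setup, a minimal sketch assuming the gdown command above placed the dataset folder in the repository root (its exact name depends on the Drive share):

    pip check                                             # verify the installed dependencies are consistent
    ls -d */                                              # the downloaded dataset folder should be listed here
    python -c "import wandb; print(wandb.__version__)"    # only needed if you ran the optional wandb step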

3. Run the experiment

To run the experiment with the default configuration (configs/current_experiment.yaml), use the following command:

python -m src.run

Choosing, modifying, or creating an experiment configuration. Browse the configs directory to select an existing configuration or create a new one. Descriptions of some parameters can be found in the configs readme.
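
For example, one way to create a new configuration is to start from a copy of the default one (the filename my_experiment.yaml below is just a placeholder; see the configs readme and the existing configs for the actual parameters):

    cp configs/current_experiment.yaml configs/my_experiment.yaml
    # edit configs/my_experiment.yaml to change the parameters you care about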

Once the configuration is ready, run the experiment with the following command:

python -m src.run -cp <your-config-path>
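
For instance, with the placeholder config from the sketch above, and assuming -cp takes the path to a config YAML file (as <your-config-path> suggests), the command would be:

    python -m src.run -cp configs/my_experiment.yaml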
