Commit

Merge branch 'adithyare/dpo_data_refac' of https://github.com/NVIDIA/NeMo-Aligner into adithyare/dpo_data_refac
arendu committed Dec 4, 2024
2 parents a76c29a + 613a63a commit e3d1192
Showing 28 changed files with 2,198 additions and 130 deletions.
1 change: 1 addition & 0 deletions .github/workflows/cicd-main.yml
@@ -90,6 +90,7 @@ jobs:
matrix:
test_case:
- ppo-llama3-pp2-reshard
- reinforce-llama3-pp2-reshard
- dpo-llama3
- kd-llama3
- sft-llama3
9 changes: 8 additions & 1 deletion .github/workflows/release.yaml
@@ -20,10 +20,15 @@ on:
description: Ref (SHA or branch name) to release
required: true
type: string
dry-run:
description: Do not publish a wheel and GitHub release.
required: true
default: true
type: boolean

jobs:
release:
uses: NVIDIA/NeMo-FW-CI-templates/.github/workflows/_release_library.yml@v0.12.3
uses: NVIDIA/NeMo-FW-CI-templates/.github/workflows/_release_library.yml@v0.15.0
with:
release-ref: ${{ inputs.release-ref }}
image-name: nemo_aligner_container
@@ -36,8 +41,10 @@ jobs:
python-package: nemo_aligner
container-workdir: /opt/NeMo-Aligner
library-name: NeMo-Aligner
dry-run: ${{ inputs.dry-run }}
secrets:
TWINE_USERNAME: ${{ secrets.TWINE_USERNAME }}
TWINE_PASSWORD: ${{ secrets.TWINE_PASSWORD }}
SLACK_RELEASE_ENDPOINT: ${{ secrets.SLACK_RELEASE_ENDPOINT }}
PAT: ${{ secrets.PAT }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
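
For context, a hypothetical way to exercise the new `dry-run` input from the command line, assuming the GitHub CLI (`gh`) is installed and authenticated; the `release-ref` value is a placeholder:

```bash
# Trigger the release workflow without publishing a wheel or GitHub release
# (dry-run defaults to true). Pass -f dry-run=false to perform a real publish.
gh workflow run release.yaml -f release-ref=<sha-or-branch> -f dry-run=true
```
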
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -36,6 +36,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
durations = timer.consume_durations()
```
- Add code and instructions for replicating Reward Modeling training in HelpSteer2 and HelpSteer2-Preference
- Implement REINFORCE algorithm.

### Breaking Changes
- Upgrade TRTLLM dependency from v0.10.0 to v0.12.0 and migrate from `GPTSession` cpp runtime to `ModelRunner` python runtime. Please use the latest Dockerfile.
2 changes: 2 additions & 0 deletions README.md
@@ -23,6 +23,8 @@ The toolkit is currently in it's early stages. We are committed to improving the
* **Reward Model Training**
* **Reinforcement Learning from Human Feedback using the [PPO](https://arxiv.org/pdf/1707.06347.pdf) Algorithm**
* [Llama3-70B-PPO-Chat](https://huggingface.co/nvidia/Llama3-70B-PPO-Chat) aligned with NeMo-Aligner using TRT-LLM.
* **Reinforcement Learning from Human Feedback using the REINFORCE Algorithm**
* [Llama-3.1-Nemotron-70B-Instruct](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct) aligned with NeMo-Aligner using TRT-LLM.
* **Direct Preference Optimization** as described in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://arxiv.org/pdf/2305.18290)
* [Llama3-70B-DPO-Chat](https://huggingface.co/nvidia/Llama3-70B-DPO-Chat) aligned with NeMo Aligner.
* **Self-Play Fine-Tuning (SPIN)** as described in [Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models](https://arxiv.org/pdf/2401.01335)
4 changes: 4 additions & 0 deletions docs/user-guide/aligner-algo-header.rst
@@ -0,0 +1,4 @@
.. important::
Before starting this tutorial, be sure to review the :ref:`introduction <nemo-aligner-getting-started>` for tips on setting up your NeMo-Aligner environment.

If you run into any problems, refer to NeMo's `Known Issues page <https://docs.nvidia.com/nemo-framework/user-guide/latest/knownissues.html>`__. The page enumerates known issues and provides suggested workarounds where appropriate.
78 changes: 37 additions & 41 deletions docs/user-guide/cai.rst
@@ -1,6 +1,8 @@
.. include:: /content/nemo.rsts

.. _model-aligner-cai:
.. include:: aligner-algo-header.rst

.. _nemo-aligner-cai:

Constitutional AI: Harmlessness from AI Feedback
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@ -14,12 +16,12 @@ CAI allows training a harmless, but non-evasive AI assistant that engages with h
.. _Constitutional AI (CAI): https://arxiv.org/abs/2212.08073

CAI
###############
The basic steps of CAI are described in this section and illustrated in the figure below (`Figure 1 <https://arxiv.org/abs/2212.08073>`_).
###
The basic steps of CAI are described in this section and illustrated in the `figure below <nemo-aligner-cai-flow-diagram>`_.

(Supervised Stage) Critique → Revision → Supervised Learning: The AI generates responses to harmfulness prompts using a helpful-only AI assistant, then critiques and revises its own responses according to a principle in the constitution, and then fine-tunes the original model on the revised responses.

(RL Stage) AI Comparison Evaluations → Reward Model → Reinforcement Learning: The AI generates pairs of responses to harmfulness prompts using the finetuned model, then evaluates which response is better according to a principle in the constitution, and then trains a reward model based on this dataset of AI preferences and a human helpfulness preferences. The AI then trains with RL using the learned reward model.
(RL Stage) AI Comparison Evaluations → Reward Model → Reinforcement Learning: The AI generates pairs of responses to harmfulness prompts using the fine-tuned model, then evaluates which response is better according to a principle in the constitution, and then trains a reward model based on this dataset of AI preferences and human helpfulness preferences. The AI then trains with RL using the learned reward model.

.. image:: ../assets/cai_diagram.png
:alt: basic steps of the CAI process
@@ -29,25 +31,22 @@ The basic steps of CAI are described in this section and illustrated in the figu
Critiques, revisions, and AI harmlessness feedback are steered by a small set of principles drawn from a ‘constitution’. The supervised stage significantly improves the initial model. It gives some control over the initial behavior at the start of the RL phase, while addressing potential exploration problems. The RL stage significantly improves performance and reliability.

Motivation
###############
##########
The motivation behind Constitutional AI is to design AI systems whose objectives and behaviors are guided by a set of predefined rules or principles. This includes the following:

Scaling supervision: using AI to help humans supervise other AIs more efficiently and effectively, especially for tasks where AI capabilities may exceed human ones.

A harmless but non-evasive assistant: reducing the tension between helpfulness and harmlessness, and avoiding evasive responses that reduce transparency and helpfulness.

Simplicity and transparency: encoding the training goals in a simple list of natural language instructions or principles, and using chain-of-thought reasoning to make AI decision making explicit and understandable.
- Scaling Supervision: Use AI to assist humans in supervising other AIs more efficiently and effectively, particularly for tasks where AI capabilities may surpass human ones.
- A Harmless but Non-Evasive Assistant: Minimize the tension between helpfulness and harmlessness, and avoid evasive responses that reduce transparency and helpfulness.
- Simplicity and Transparency: Encode training goals in a straightforward list of natural language instructions or principles, and employ chain-of-thought reasoning to make AI decision-making explicit and understandable.
- Reducing Iteration Time: Eliminate the need to collect new human feedback labels when modifying objectives or testing different behaviors.

Reducing iteration time: obviating the need to collect new human feedback labels when altering the objective or testing different behaviors.

Train a CAI model
#####################
Train a CAI Model
#################

This section is a step-by-step tutorial that walks you through how to run a full CAI pipeline with a ``Mistral-7B`` LLM model. It includes the following:

1. Data download and preprocessing.
1. Download the models and datasets.

2. Generate responses to harmfulness prompts using a helpful-only AI assistant. Ask the model to critique its response according to a principle in the constitution, and then revise the original response in light of the critique.
2. Generate and revise responses to harmful prompts creating the SL-CAI dataset. Ask the model to critique its response according to a principle in the constitution, and then revise the original response in light of the critique.

3. Fine-tune ``Mistral-7B`` with SFT on the revised responses to create a ``Mistral-7B-SL-CAI`` model.

@@ -56,24 +55,22 @@ This section is a step-by-step tutorial that walks you through how to run a full
b. Formulate each prompt and pair into a multiple choice question, where we ask ``Mixtral-8x7B`` which response is best according to the constitution.
c. Blend the AI feedback preference dataset (prompts and pairs) with the human feedback helpfulness dataset.

5. Train a Reward Model (RM).
5. Train the Reward Model (RM).

6. Fine-tune the ``Mistral-7B-SL-CAI`` with Proximal Policy Optimization (PPO) and the RM to train a ``Mistral-7B-RL-CAI`` model.

7. Run inference.

.. note::
Before starting this tutorial, be sure to review the :ref:`introduction <model-aligner-intro>` for tips on setting up your NeMo-Aligner environment.

If you run into any problems, refer to NeMo's `Known Issues page <https://docs.nvidia.com/nemo-framework/user-guide/latest/knownissues.html>`__. The page enumerates known issues and provides suggested workarounds where appropriate.
.. _nemo-aligner-cai-flow-diagram:

.. image:: ../assets/cai_flow.png

Step 1: Download models and datasets
#############################################################################
1. Download ``Mistral-7B-Instruct`` and ``Mistral-7B`` LLM models from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 and https://huggingface.co/mistralai/Mistral-7B-v0.1 into the models folder.
Step 1: Download the models and datasets
########################################

1. Download the ``Mistral-7B-Instruct`` and ``Mistral-7B`` LLM models from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1 and https://huggingface.co/mistralai/Mistral-7B-v0.1 into the models folder.
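
   One possible way to fetch these checkpoints is with the Hugging Face CLI. This is a hedged sketch, not part of the original tutorial; the ``--local-dir`` paths are placeholders for your models folder.

   .. code-block:: bash

      # Assumes huggingface_hub is installed (pip install -U "huggingface_hub[cli]").
      # The target directories below are placeholders.
      huggingface-cli download mistralai/Mistral-7B-Instruct-v0.1 --local-dir /models/mistral-7b-instruct-hf
      huggingface-cli download mistralai/Mistral-7B-v0.1 --local-dir /models/mistral-7b-hf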

Then, convert into .nemo format:
Then, convert them into .nemo format:

.. code-block:: bash
@@ -92,7 +89,7 @@ Step 1: Download models and datasets
This command will download the dataset to ``/path/to/anthropic_red_team_attempts_train.json``


3. Download SFT helpfulness dataset:
3. Download the SFT helpfulness dataset:

.. code-block:: bash
@@ -101,7 +98,7 @@ Step 1: Download models and datasets
This command will download the dataset to ``/path/to/nvidia_sft_datablend_v1_train.json``


4. Download and process preference helpfulness dataset:
4. Download and process the preference helpfulness dataset:

.. code-block:: bash
@@ -112,7 +109,7 @@ Step 1: Download models and datasets
Step 2: Generate and revise responses to harmful prompts creating the SL-CAI dataset
###################################################################################################
####################################################################################

Run an inference server in the background using the following command:

@@ -158,16 +155,16 @@ Please wait for the server to be ready before proceeding.
--apply_chat_template False \
--response_extract_pattern "[/INST]"
This will generate an SL-CAI dataset of prompts and revised responses as ``cai_revisions_aligner_chat_template.json``
This will generate an SL-CAI dataset of prompts and revised responses as ``cai_revisions_aligner_chat_template.json``.
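
As an optional sanity check (not part of the original tutorial), you can count how many revised examples were produced. This assumes the output file is a top-level JSON list; adjust accordingly if the script emits a different layout.

.. code-block:: bash

   # Hypothetical check: print the number of prompt/revision records in the SL-CAI dataset.
   python -c "import json; print(len(json.load(open('cai_revisions_aligner_chat_template.json'))))"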

The few-shot samples should be provided following the template in ``few_shot_samples_example.json`` (filling in the `content` tags, and choosing how many samples to use), and should include a red teaming prompt, a response from the helpful model (e.g. ``Mistral-7B`` in this tutorial), critique and revision requests and responses. An example is shown in the `Anthropic repo <https://github.com/anthropics/ConstitutionalHarmlessnessPaper/blob/main/prompts/CritiqueRevisionFewShotPrompts.json>`_.
The few-shot samples should be provided following the template in ``few_shot_samples_example.json``. Fill in the `content` tags and choose how many samples to use. The samples should include a red teaming prompt, a response from the helpful model (e.g., ``Mistral-7B`` in this tutorial), critique and revision requests, and responses. An example is shown in the `Anthropic repo <https://github.com/anthropics/ConstitutionalHarmlessnessPaper/blob/main/prompts/CritiqueRevisionFewShotPrompts.json>`_.

*NOTE: The tokenizer file can be found by extracting the .nemo checkpoint using `tar -xf /models/mistral/mistral-7b-Instruct.nemo`.
There are 2 tokenizer files that end with `.model` in the model checkpoint and they are the same, so you can use either one for data processing.*
.. note::
The tokenizer file can be found by extracting the .nemo checkpoint using `tar -xf /models/mistral/mistral-7b-Instruct.nemo`. There are two tokenizer files that end with `.model` in the model checkpoint, and they are identical. You can use either one for data processing.
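
For example, a minimal sketch of extracting the checkpoint and locating the tokenizer files (the extraction directory is arbitrary):

.. code-block:: bash

   # Unpack the .nemo archive into a scratch directory and list the tokenizer files.
   mkdir -p /tmp/mistral_7b_instruct_extracted
   tar -xf /models/mistral/mistral-7b-Instruct.nemo -C /tmp/mistral_7b_instruct_extracted
   find /tmp/mistral_7b_instruct_extracted -name "*.model"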


Step 3: Fine-tune Mistral-7B on the revised responses to create a Mistral-7B-SL-CAI model
######################################################################################################
#########################################################################################

Note that you would need to set up a multi-node training run in your cluster environment, depending on the type of cluster you use. For details, please refer to https://lightning.ai/docs/pytorch/stable/clouds/cluster.html.
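
As an illustration only, a skeleton of what a Slurm submission for this step might look like. The partition, node count, job name, and container image are placeholders, the Pyxis ``--container-image`` plugin is an assumption about your cluster, and the actual SFT command used by the tutorial is the one given below.

.. code-block:: bash

   #!/bin/bash
   #SBATCH --nodes=2
   #SBATCH --gpus-per-node=8
   #SBATCH --partition=<your_partition>
   #SBATCH --job-name=sl-cai-sft
   # Swap in your NeMo-Aligner container image and the SFT training command from this step.
   srun --container-image=<nemo_aligner_container> \
        bash -c 'cd /opt/NeMo-Aligner && <sft training command from this step>'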

@@ -199,10 +196,9 @@ Note that you would need to set up multi-node training run in your cluster env,
Step 4: Generate the RL-CAI (preference) dataset for RM and PPO training
##############################################################################################################
########################################################################

The following section runs an inference server with the SL-CAI model that we've previously trained, and queries it with red teaming prompts asking for several responses per prompt.
The responses will then be ranked by a judge LLM being run from NVIDIA's NGC. An NGC API key can be acquired `here`_.
The following section runs an inference server with the SL-CAI model that we've previously trained. It queries the server with red teaming prompts, requesting several responses per prompt. These responses will then be ranked by a judge LLM running from NVIDIA's NGC. You can acquire an NGC API key `here`_.
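
If the judge step needs your NGC credentials, one hypothetical way to provide them is through an environment variable. The variable name below is an assumption; check the dataset-generation script for the flag or variable it actually reads.

.. code-block:: bash

   # Hypothetical: expose the NGC API key to the RL-CAI dataset-generation script.
   export NGC_API_KEY="<your NGC API key>"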

The following command will run the inference server:

@@ -257,8 +253,8 @@ Using a different terminal, run the following command to start the RL-CAI datase
This command will create the ``rl-cai`` dataset files in the defined output folder with the given output filename prefix.


Step 5: Train the RM
#####################
Step 5: Train the Reward Model (RM)
###################################

Run the following command to train the RM:

@@ -285,7 +281,7 @@ Run the following command to train the RM:
The trained RM checkpoint will be saved to the output dir given by ``exp_manager.explicit_log_dir``.

Step 6: Fine-tune Mistral-7B-SL-CAI with PPO and the RM to train a Mistral-7B-RL-CAI model
Step 6: Fine-tune the Mistral-7B-SL-CAI with PPO and the RM to train a Mistral-7B-RL-CAI model
##############################################################################################
Run the following command in the background to launch an RM and PPO critic training server:

@@ -329,8 +325,8 @@ Run the following command to launch actor training and a reference policy server
The trained LLM policy checkpoint will be saved to the output dir given by ``exp_manager.explicit_log_dir``.

Step 7: Inference
##################
Step 7: Run inference
#####################
To start inference, run an inference server in the background using the following command:

.. code-block:: bash