Commit f76c3fe

Merge branch 'main' into juberti/weights
juberti committed Oct 18, 2024
2 parents: c27715f + b8835e9
Showing 3 changed files with 6 additions and 9 deletions.

README.md (1 addition, 1 deletion)

@@ -77,7 +77,7 @@ We're using Poetry to manage the Python virtual environment.

### Mosaic Environment Setup (Fixie Internal)

-If you want to use [Mosaic](https://docs.mosaicml.com/projects/mcli/en/latest/quick_start/getting_started.html) for trainig , you need to setup a few things to run on the Mosaic Platform.
+If you want to use [Mosaic](https://docs.mosaicml.com/projects/mcli/en/latest/quick_start/getting_started.html) for training, you need to setup a few things to run on the Mosaic Platform.

1. Install & login to the Mosaic CLI

ultravox/model/ultravox_model.py (2 additions, 8 deletions)

@@ -34,14 +34,8 @@ class UltravoxModel(transformers.LlamaPreTrainedModel):

    config_class = UltravoxConfig
    config: UltravoxConfig  # for type hinting
-    # We minimize the weights in state_dict() in order to reduce the size of the checkpoint.
-    # The issue is that from_pretrained() uses state_dict() keys to know what keys are expected,
-    # so we have to tell it to ignore some keys that are not always in the model.
-    _keys_to_ignore_on_load_unexpected = ["audio_tower.*", "language_model.*"]
-    # Usually we load encoder weights from a pretrained model, so we don't want to load the decoder weights.
-    # Technically we never hit this issue because these keys are already removed from state_dict(),
-    # but there's no harm in keeping it here for when we change that behavior.
-    _keys_to_ignore_on_load_missing = ["audio_tower.*"]
+    # Usually we load encoder and LLM weights from a pretrained model separately, so they are allowed to be missing.
+    _keys_to_ignore_on_load_missing = ["audio_tower.*", "language_model.*"]

    def __init__(self, config: UltravoxConfig):
        super().__init__(config)
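For context on the changed attribute: `_keys_to_ignore_on_load_missing` (and its counterpart `_keys_to_ignore_on_load_unexpected`) are class attributes on any `transformers.PreTrainedModel` subclass, holding regex patterns that `from_pretrained()` uses to filter its missing-key and unexpected-key warnings. A minimal sketch of the mechanism, using a toy model rather than the actual Ultravox classes:

```python
import torch.nn as nn
import transformers

class ToyModel(transformers.PreTrainedModel):
    config_class = transformers.PretrainedConfig

    # Regex patterns: keys the model expects but that are absent from the
    # loaded checkpoint are skipped without a "missing keys" warning.
    _keys_to_ignore_on_load_missing = ["audio_tower.*", "language_model.*"]

    def __init__(self, config: transformers.PretrainedConfig):
        super().__init__(config)
        self.proj = nn.Linear(4, 4)  # stand-in for real submodules
```

With `language_model.*` added to the list, a checkpoint that omits the LLM weights loads cleanly, which matches the commit's intent of loading encoder and LLM weights separately from pretrained models.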

ultravox/training/configs/llama_70b.yaml (3 additions, 0 deletions)

@@ -8,6 +8,9 @@ exp_name: "ultravox-v0_4-llama3.1-70B-whisper_m"
text_model: "meta-llama/Meta-Llama-3.1-70B-Instruct"
audio_model: "openai/whisper-medium"

+# We need to shard the model in order to fit in GPU memory
+enable_fsdp: True

batch_size: 5
# We increase the number of steps by 2x, but with a lower batch_size, we still won't be training on as many samples as the 8B model
# We would revisit this later on when
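For readers unfamiliar with the flag: `enable_fsdp` turns on Fully Sharded Data Parallelism, which shards the 70B parameters across GPUs instead of replicating them, so the model fits in aggregate GPU memory. A rough sketch of what this implies with PyTorch's FSDP API (illustrative only; `build_model` is a hypothetical stand-in for the trainer's actual model construction):

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Launched with one process per GPU, e.g. via torchrun.
dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank())

model = build_model()  # hypothetical: construct the (large) model
model = FSDP(model)    # parameters are sharded; each rank holds one shard
```

With plain data parallelism every GPU holds a full copy of the weights, which is infeasible for a 70B model on typical 80 GB cards; FSDP trades extra communication for that memory headroom.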
