In awsome-distributed-training/3.test_cases/10.FSDP, running sbatch 1.distributed-training.sbatch produces a number of warnings that look like:
1: Token indices sequence length is longer than the specified maximum sequence length for this model (2522 > 2048). Running this sequence through the model will result in indexing errors
How to reproduce:
No changes were made to any of the training Python scripts. The only change to the 1.distributed-training.sbatch file was switching the model from Llama2-7B to Llama2-13B; everything else was kept the same. Just run everything as is, following the instructions in the workshop.
There's some discussion in this issue about altering max_length or making some adjustments to the tokenizer, which could be helpful in fixing the warning.
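For illustration, here is a minimal sketch of the kind of tokenizer change that discussion points at: passing truncation and max_length when encoding. This assumes the script tokenizes raw text with a Hugging Face AutoTokenizer; the model id and sample text below are placeholders, not taken from the FSDP test case itself, and the gated Llama-2 checkpoint requires Hugging Face access to download.

```python
# Minimal sketch: silence the "Token indices sequence length is longer than
# the specified maximum sequence length" warning by truncating at encode time.
# Model id and text are illustrative assumptions, not from the test case.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

long_text = "some document much longer than the model's context window ..."

# Without truncation, encoding anything longer than tokenizer.model_max_length
# emits the warning shown above (2522 > 2048 in this report).
encoded = tokenizer(
    long_text,
    truncation=True,                        # drop tokens past max_length
    max_length=tokenizer.model_max_length,  # e.g. 2048, per the warning
)
print(len(encoded["input_ids"]))
```

Whether truncation is actually desirable depends on how the training script chunks its data; if it deliberately tokenizes long documents and splits them afterwards, the warning may be benign noise rather than a correctness problem.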