Hi, thank you for your work. I'm trying to use the CodeT5+ model types `plus-16B` and `plus-6B`. However, when running, I get this error:

```
ValueError: CodeT5pEncoderDecoderModel does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.
```
The code I'm using is the same as provided in the examples:
```python
from codetf.models import load_model_pipeline

code_generation_model = load_model_pipeline(model_name="codet5", task="pretrained",
                                            model_type="plus-6B", is_eval=True,
                                            load_in_8bit=True, load_in_4bit=False,
                                            weight_sharding=False)

result = code_generation_model.predict(["def print_hello_world():"])
print(result)
```
Any ideas on how the issue could be resolved?
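For what it's worth, the only workaround I've found so far is to bypass CodeTF and load the checkpoint directly with Transformers in fp16, so that 8-bit quantization (and therefore `device_map='auto'`) is never requested. This is a minimal sketch following the snippet on the `Salesforce/codet5p-6b` model card, not a fix for the CodeTF pipeline itself:

```python
# Workaround sketch: load CodeT5+ 6B directly with Transformers in fp16,
# avoiding load_in_8bit / device_map='auto' entirely. Mirrors the
# Salesforce/codet5p-6b model card snippet.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Salesforce/codet5p-6b"
device = "cuda"  # needs a GPU with room for the fp16 weights (roughly 12 GB for 6B)

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,  # CodeT5+ 6B/16B ship custom modeling code
).to(device)

encoding = tokenizer("def print_hello_world():", return_tensors="pt").to(device)
# The CodeT5+ encoder-decoder checkpoints expect explicit decoder_input_ids
encoding["decoder_input_ids"] = encoding["input_ids"].clone()
outputs = model.generate(**encoding, max_length=15)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

This sidesteps the error rather than resolving it, so a proper fix (implementing `_no_split_modules` on `CodeT5pEncoderDecoderModel`) would still be needed for 8-bit loading.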