Can a pretrained model be fine-tuned, or not? #6185
Hi @kgoderis
When you are using Spark NLP pre-trained models inside a pipeline or a pre-trained pipeline, the weights are constant; they cannot be fine-tuned nor overwritten. If the entire pipeline consists of pre-trained models and rule-based annotators, then the `.fit(df)` stage will be skipped and it goes directly to `.transform()`.
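For example, a minimal sketch of such a fully pre-trained pipeline (assuming the `spark-nlp` and `pyspark` packages are installed; `glove_100d` and `ner_dl` are commonly published English pre-trained models, substitute whatever is available in your environment):

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, WordEmbeddingsModel, NerDLModel
from pyspark.ml import Pipeline

spark = sparknlp.start()

document = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")

# *Model annotators download fixed, pre-trained weights.
embeddings = WordEmbeddingsModel.pretrained("glove_100d") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")
ner = NerDLModel.pretrained("ner_dl") \
    .setInputCols(["document", "token", "embeddings"]) \
    .setOutputCol("ner")

pipeline = Pipeline(stages=[document, tokenizer, embeddings, ner])
df = spark.createDataFrame([["John works at Google in London."]]).toDF("text")

# Nothing in this pipeline is trainable, so fit() learns nothing (the
# weights stay exactly as downloaded) and transform() just runs inference.
result = pipeline.fit(df).transform(df)
result.select("ner.result").show(truncate=False)
```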
In Spark NLP, the trainable annotators have `Approach` in their names, and the pre-trained models are only accessible through annotators with `Model` in their names (99% of the time). This means that if you have a pre-trained model, let's say a NerDLModel, it won't be fine-tuned nor overwritten, since it's not trainable. The trainable annotator for NER is called `NerDLApproach`.

The short answer: there is no fine-tuning nor zeroing of any of the pre-trained models/pipelines inside Spark NLP.
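For contrast, here is a hedged sketch of what training with an `Approach` annotator looks like (assuming the `spark` session from the sketch above; `train.conll` is a placeholder path to a CoNLL-2003-style NER corpus):

```python
from sparknlp.annotator import NerDLApproach, WordEmbeddingsModel
from sparknlp.training import CoNLL

# The CoNLL helper reads a token-per-line NER corpus into the annotation
# schema the trainer expects ("train.conll" is a placeholder path).
training_data = CoNLL().readDataset(spark, "train.conll")

embeddings = WordEmbeddingsModel.pretrained("glove_100d") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

ner_trainer = NerDLApproach() \
    .setInputCols(["sentence", "token", "embeddings"]) \
    .setLabelColumn("label") \
    .setOutputCol("ner") \
    .setMaxEpochs(10)

# Here fit() really trains: NerDLApproach produces a brand-new NerDLModel;
# it does not load or update the weights of any pre-trained NerDLModel.
ner_model = ner_trainer.fit(embeddings.transform(training_data))
```

The result of `fit()` is itself a `NerDLModel`, which you can save and then use in pipelines like any other pre-trained model.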