I have a domain-adapted LLM saved in S3. Is it possible to use that model's S3 path to then fine-tune other downstream models (e.g., for text classification) in separate SageMaker pipelines?
Currently those downstream tasks use a HuggingFace Hub model ID as a model_id hyperparameter in our SageMaker pipeline's huggingface_estimator. I am thinking I could just swap in the model_data argument instead, pointing it at the tar.gz saved in S3. Is that the right approach? Or would I need to use the
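Roughly what I have in mind, as a minimal sketch (no AWS calls; the bucket path, Hub ID, and helper function are hypothetical placeholders, not our real setup). One alternative I've seen is passing the S3 tarball as a training input channel via estimator.fit, in which case SageMaker makes it available inside the container under /opt/ml/input/data/<channel_name>:

```python
# Current setup: the downstream task pulls weights from the HuggingFace Hub
# via a model_id hyperparameter (hypothetical values).
current_hyperparameters = {
    "model_id": "distilbert-base-uncased",  # hypothetical Hub ID
    "epochs": 3,
}

# Proposed setup: point at the domain-adapted tarball in S3 instead
# (hypothetical path). If passed as an input channel, e.g.
# estimator.fit({"model": adapted_model_s3_uri}), the tarball would be
# extracted/mounted under /opt/ml/input/data/model in the training container.
adapted_model_s3_uri = "s3://my-bucket/domain-adapted-llm/model.tar.gz"

def resolve_model_source(hyperparameters, model_channel_uri=None):
    """Hypothetical helper: prefer the S3 artifact when provided,
    otherwise fall back to the Hub model ID."""
    if model_channel_uri is not None:
        return {"source": "s3_channel", "uri": model_channel_uri}
    return {"source": "hub", "model_id": hyperparameters["model_id"]}

print(resolve_model_source(current_hyperparameters))
print(resolve_model_source(current_hyperparameters, adapted_model_s3_uri))
```

The idea is that the training script itself would branch the same way: load from the mounted channel path when it exists, else call from_pretrained with the Hub ID.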