@philschmid another question, if you could help me. I'm running the model on my local machine with a language model set up. My directory structure:

Locally (loading the model using the pipeline object), the language model works fine during inference, but when deployed to SageMaker it apparently does not use the LM (I'm comparing the inference results). Everything is the same as locally: the pipeline, the model, and the transformers version (4.17.0).
Did I forget something?