Transformers pipeline: load a local model

Hi. I have fine-tuned a model and saved it to local disk. But when I load my local model with pipeline, it looks like pipeline is trying to find the model in the online repositories. How can I fix it? Please help.

My code for training and saving the model to local disk:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./my_local_disk",
    per_device_train_batch_size=16,
    max_steps=4000,
    fp16=True,
    logging_steps=25,
    report_to=["tensorboard"],
    load_best_model_at_end=True,
    greater_is_better=False,
    push_to_hub=False,
)

Then I load it with pipeline:

from transformers import pipeline
pipe = pipeline(model="./my_local_disk")

But I get this error when creating the pipeline object:

OSError: ./my_local_disk does not appear to have a file named config.json. 
Checkout 'https://huggingface.co/./my_local_disk/None' for available files.

It looks like pipeline is loading from the online repositories, not my local folder. Please help me fix it.

You can use an absolute path.

Sorry, but I still get the same error.
I am running the notebook code in Colab, so my absolute path is /content/my_local_disk.
But even when I use the absolute path, it still shows the error:

OSError: /content/my_local_disk does not appear to have a file named config.json. Checkout 'https://huggingface.co//content/my_local_disk/None' for available files.

Sorry, but can anyone help?

Please, can anyone help me?
I am stuck at this step.

The “output_dir” in the training args is where the checkpoints are saved during fine-tuning. You need to call save_model on your Trainer instance to actually save the model.
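For example, a minimal sketch (assuming you already built a Seq2SeqTrainer instance called trainer from these training_args and your datasets):

```python
# after training finishes, write config.json and the model weights
# into the folder you want to load from later
trainer.train()
trainer.save_model("./my_local_disk")
```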

@adam-zettafi
thank you for your help. And after that, how can I load saved model ?
Do I still need to define Trainer again ? In this case, I think using pipeline will be better, because we don’t need to duplicated code to define Trainer again

Once you save the model, you can provide the path to the saved model just as you would provide the path to any model. Your code sample was the correct way to load it.
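As a quick sanity check (a small sketch, using the Colab path mentioned earlier), you can list the folder and confirm that config.json is now there:

```python
import os

# after trainer.save_model(...), the folder should contain config.json,
# the model weights, and (if you saved it) the tokenizer files
print(os.listdir("/content/my_local_disk"))
```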

So this is what worked for me:

from transformers import pipeline
pipe = pipeline(task="your-task-name", model="./my_local_disk", tokenizer=original_tokenizer)

where original_tokenizer is the tokenizer you used while creating the training/test data.
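As a side note, a small sketch (assuming original_tokenizer is a regular transformers tokenizer and "your-task-name" is the same placeholder as above): if you also save the tokenizer into the same folder, you shouldn't need to pass it explicitly, since pipeline can pick it up from the local directory:

```python
from transformers import pipeline

# save the tokenizer next to the model weights and config.json
original_tokenizer.save_pretrained("./my_local_disk")

# pipeline can then load the tokenizer from the local folder as well
pipe = pipeline(task="your-task-name", model="./my_local_disk")
```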