Inference problem after loading a fine-tuned T5 model for seq2seq question answering

The spiece.model file is not present among the files saved after training.

I used this command to run run_seq2seq_qa.py from the transformers GitHub examples:

python run_seq2seq_qa.py --model_name_or_path t5-small --dataset_name squad_v2 --context_column context --question_column question --answer_column answers --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 1 --max_seq_length 384 --doc_stride 128 --output_dir /trained_model/debug_seq2seq_squad/ --max_train_samples 50 --max_eval_samples 50

The training completes, but the saved model cannot be used for inference.

Files saved after training:

generation_config.json
config.json
all_results.json
train_results.json
special_tokens_map.json
eval_results.json
tokenizer.json
tokenizer_config.json
runs
pytorch_model.bin
training_args.bin
trainer_state.json
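
Note that tokenizer.json and tokenizer_config.json are saved, but spiece.model is not. As far as I understand, the slow T5Tokenizer needs spiece.model (the SentencePiece file), while the fast tokenizer can be rebuilt from tokenizer.json, so loading via AutoTokenizer might sidestep the missing file. A minimal sketch of what I mean, assuming the --output_dir from the command above:

from transformers import AutoTokenizer

# AutoTokenizer should pick the fast tokenizer built from tokenizer.json,
# so spiece.model would not be required; the path is the --output_dir above
tokenizer = AutoTokenizer.from_pretrained("/trained_model/debug_seq2seq_squad/")
print(tokenizer.is_fast)  # True if the tokenizer.json-based tokenizer was loaded

Is that the expected way to load the tokenizer here, or should spiece.model have been saved as well?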

Code used:
from transformers import T5ForConditionalGeneration, T5Tokenizer

def run_t5_squad_example(model_dir, question, context):
    # Load the fine-tuned T5 model and tokenizer from the output directory
    model = T5ForConditionalGeneration.from_pretrained(model_dir)
    tokenizer = T5Tokenizer.from_pretrained(model_dir)

    # Prepare the input in the T5 question-answering format
    input_text = f"question: {question} context: {context}"
    input_ids = tokenizer.encode(input_text, return_tensors="pt")

    # Generate the answer and decode it to text
    output = model.generate(input_ids)
    answer = tokenizer.decode(output[0], skip_special_tokens=True)

    return answer

model_directory = "t5_model"
print(type(model_directory))
question = "What is the capital of France?"
context = "Paris is the capital of France."

answer = run_t5_squad_example(model_directory, question, context)
print("Answer:", answer)

ERROR received:
model directory is not a string.

I have checked that the model directory is of string type.
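
For reference, a minimal way to also confirm that the directory exists and see what it contains (standard library only, using the same "t5_model" path as above):

import os

print(os.path.isdir("t5_model"))  # does the path exist?
print(os.listdir("t5_model"))     # which of the saved files are in it?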