How to stop Optuna from saving checkpoints during a hyperparameter search

Hello, I am running a hyperparameter search using Optuna.
Since I am on Colab, I have limited disk space, so I was wondering how to stop the Trainer from saving checkpoints. I only care about the final result and don't need all the intermediate steps saved.
I tried the following arguments in my TrainingArguments, but it's not working:

from transformers import TrainingArguments

# Define the training arguments
training_args = TrainingArguments(
    output_dir='./results',          # output directory
    seed=0,
    num_train_epochs=5,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_steps=22,                 # number of warmup steps for the learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    learning_rate=5e-5,              # initial learning rate for the AdamW optimizer
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    do_train=True,                   # perform training
    do_eval=True,                    # perform evaluation
    logging_dir='./logs',            # directory for storing logs
    logging_steps=10,
    gradient_accumulation_steps=2,   # number of steps to accumulate before back propagation
    fp16=True,                       # use mixed precision
    fp16_opt_level="O2",             # mixed precision mode (Apex opt level, letter O not zero)
    evaluation_strategy="epoch",     # evaluate at the end of each epoch
    save_strategy='no',              # checkpoint save strategy; set to 'no' because I don't want checkpoints saved during the HP search
    save_steps=100000,
    save_total_limit=1,              # trying this to stop Optuna from saving
)
Any help would be appreciated, thank you!

OK, after reading the documentation carefully, it turns out that setting load_best_model_at_end=True overrides the save strategy. I removed it and now no checkpoints are saved.
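
For anyone who lands on this later, here is a minimal sketch of the configuration that worked for me, using the same hyperparameter values as in my question (only the checkpointing-related arguments matter here); the key change is dropping load_best_model_at_end so that save_strategy="no" takes effect:

from transformers import TrainingArguments

# Same setup as in my question, minus load_best_model_at_end.
# With save_strategy="no" the Trainer writes no checkpoints, so the
# Optuna trials no longer fill up the Colab disk.
training_args = TrainingArguments(
    output_dir='./results',
    seed=0,
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-5,
    logging_dir='./logs',
    logging_steps=10,
    evaluation_strategy="epoch",     # still evaluate every epoch
    save_strategy="no",              # no intermediate checkpoints
)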