Checkpoints not saved

Hi, I’m training a model with the following parameters:

--model_name_or_path bert-base-uncased \
--model_type bert --tokenizer_name bert-base-uncased \
--eval_steps 1000 \
--save_steps 1000 \
--evaluation_strategy "steps" \
--save_strategy "steps" \
--cache_dir cache_dir_bert_og \
--train_file "./original-train2.txt" \
--validation_file "./original-test2.txt" \
--line_by_line True \
--do_train True \
--do_eval True \
--train_adapter True \
--num_train_epochs 2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--fp16 True \
--adapter_config "pfeiffer+inv" \
--output_dir "./not-augmented-model" \
--overwrite_output_dir False \
--load_best_model_at_end True \
--learning_rate 3e-5 \
--dataloader_pin_memory True \
--eval_bias True \
--eval_accumulation_steps 10
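For context on what I'd expect: with --save_strategy "steps" and --save_steps 1000, a checkpoint-&lt;step&gt; directory should appear in the output directory every 1000 optimizer steps. A rough pure-Python sketch of where those checkpoints should land (the dataset size of 100,000 lines is a made-up placeholder, not my real file size):

```python
import math

def expected_checkpoint_steps(num_examples, per_device_batch_size,
                              num_epochs, save_steps, num_devices=1):
    """Estimate the global steps at which checkpoint-<step> dirs should appear."""
    steps_per_epoch = math.ceil(num_examples / (per_device_batch_size * num_devices))
    total_steps = steps_per_epoch * num_epochs
    return list(range(save_steps, total_steps + 1, save_steps))

# Hypothetical dataset size -- substitute the real line count of original-train2.txt.
print(expected_checkpoint_steps(100_000, 16, 2, 1000))
# -> [1000, 2000, ..., 12000]
```

So if training really has produced more than 1000 steps by now, something in my setup must be preventing the save.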

The problem is that no checkpoints are being saved. I don't fully understand when a checkpoint is actually written, but training has been running for over 2 days now and the output directory is still empty. I can only run jobs for 3 days, so I would need to resume training from a checkpoint before the job is killed.
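Once checkpoints do appear, my plan for resuming after the 3-day limit would be something like the following (a sketch only: the script name run_mlm.py and the checkpoint number 1000 are assumptions, and the checkpoint directory has to actually exist):

```shell
python run_mlm.py \
  --model_name_or_path bert-base-uncased \
  --output_dir ./not-augmented-model \
  --resume_from_checkpoint ./not-augmented-model/checkpoint-1000
```

But of course this only works if a checkpoint-&lt;step&gt; directory gets written in the first place.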
Can someone help me? Maybe I'm doing something wrong in the parametrization. @sgugger, maybe you can help me? I would really appreciate it!
Thanks very much in advance