Checkpoints and disk storage

@sgugger @BramVanroy
import transformers

training_args = transformers.TrainingArguments(
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,
    warmup_steps=100,
    num_train_epochs=2,
    learning_rate=2e-5,
    fp16=True,
    logging_steps=1,
    output_dir="lora-alpaca",
    save_total_limit=3,
)
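
To see where these arguments put the checkpoints, I print the resolved output path and the save cadence right after building them (a quick sketch using the `training_args` defined above; checkpoint-500 comes from the default save_steps=500):

import os

# output_dir above is a relative path, so it resolves against the current
# working directory of whichever notebook runs the training code.
print("save strategy:", training_args.save_strategy)       # defaults to "steps"
print("save every N steps:", training_args.save_steps)     # defaults to 500
print("checkpoints written under:", os.path.abspath(training_args.output_dir))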
With this, I'm able to generate a checkpoint-500 directory with the following structure:
i. optimizer.pt
ii. pytorch_model.bin
iii. rng_state.pth
iv. scaler.pt
v. scheduler.pt
vi. trainer_state.json
vii. training_args.bin

But the real problem starts here: when I run the same code in a different notebook file, I can't see any checkpoint-500 folder.
What's the issue, and how can I resolve it?
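
For reference, this is roughly what I run in the second notebook to look for the folder (a quick sketch; "lora-alpaca" is the output_dir from the arguments above):

import glob
import os

# The second notebook may have a different working directory, so check both
# where we are and whether any checkpoint-* folders exist under output_dir.
print("working directory:", os.getcwd())
print("checkpoints found:", glob.glob(os.path.join("lora-alpaca", "checkpoint-*")))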

Please help me out!