This is what helped me.
Although this command appears in the Google Colab linked from the blog, I believe the author accidentally left it out of the blog post itself.
Before starting training, we need to save the processor; since it is not trainable, it does not change during training, so saving it once up front is sufficient.
processor.save_pretrained(training_args.output_dir)
This writes the tokenizer and processor files to the output directory, so whether you load the model locally or push it to the Hub, all the necessary files will be included.
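For context, here is a minimal sketch of where this call fits in a typical Hugging Face Transformers fine-tuning script. The checkpoint name, output directory, and hyperparameters below are placeholders, so substitute the ones from the blog/Colab you are following.

from transformers import AutoProcessor, TrainingArguments

# Placeholder checkpoint; use the one from the blog/Colab.
processor = AutoProcessor.from_pretrained("openai/whisper-small")

training_args = TrainingArguments(
    output_dir="./my-finetuned-model",  # placeholder output directory
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

# Save the processor (feature extractor + tokenizer) once, before training:
# it is not trainable, so it will not change during training.
processor.save_pretrained(training_args.output_dir)

# ... then build the Trainer and call trainer.train() as usual; the saved
# checkpoint directory will now contain the processor files alongside the
# model weights, ready for local use or push_to_hub.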