How to continue training without Trainer

I am trying to continue training my model from a checkpoint, without Trainer. The resumed model is inconsistent with the previously saved model in both loss and evaluation results. I am wondering if I didn't save the model correctly and am missing the optimizer state. Here is how I save the model:

accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save)
if accelerator.is_main_process:
    tokenizer.save_pretrained(args.output_dir)
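The snippet above only persists the model weights and tokenizer, not the optimizer state (e.g. Adam's moment estimates), which would explain a loss jump on resume. For reference, here is a minimal sketch of saving and restoring optimizer state with plain `torch.save`; the model, optimizer, and `"checkpoint"` directory are illustrative stand-ins, and with Accelerate the `accelerator.save_state(output_dir)` / `accelerator.load_state(output_dir)` pair can bundle model, optimizer, and scheduler for you.

```python
import os
import torch

# Stand-in model and optimizer for illustration.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Take one step so the optimizer actually has state (exp_avg, exp_avg_sq).
loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()

output_dir = "checkpoint"
os.makedirs(output_dir, exist_ok=True)

# Save optimizer state alongside the model weights.
torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))

# On resume: rebuild the optimizer over the same parameters,
# then restore its state before continuing training.
new_optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
new_optimizer.load_state_dict(
    torch.load(os.path.join(output_dir, "optimizer.pt"))
)
```

Under distributed training, the save call would typically be guarded with `accelerator.is_main_process`, just like the tokenizer save above.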