Can't load tokenizer with added special tokens

Hi! I have an annoying issue that I can't find answers to anywhere online...
Hope someone can help.

I am adding a few special tokens to a gpt2 tokenizer using the following code:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

special_tokens = {
    'eos_token': "<|endoftext|>",
    'bos_token': "<|startoftext|>",
    'additional_special_tokens': ["<|speaker1|>", "<|speaker2|>"]
}

tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_special_tokens(special_tokens)
vocab = tokenizer.get_vocab()
# resize the embedding matrix to cover the newly added tokens
model.resize_token_embeddings(len(vocab))
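
In case it helps, here is a small sanity check (my addition, not part of the training code) that just prints the vocabulary size and the ids the new tokens get in memory, using only the tokenizer defined above:

print(len(tokenizer))
print(tokenizer.convert_tokens_to_ids(["<|startoftext|>", "<|speaker1|>", "<|speaker2|>"]))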

I later save the tokenizer using:

model_save_name = 'SARC_gpt2_prefinetune_2.0'
tokenizer.save_pretrained(f"/content/drive/MyDrive/Colab Notebooks/saved_models/{model_save_name}")
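
And just to show what ends up on disk, a generic way to list the files that save_pretrained wrote (same path as above, nothing model-specific assumed):

import os
save_dir = f"/content/drive/MyDrive/Colab Notebooks/saved_models/{model_save_name}"
print(os.listdir(save_dir))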

But when I load the tokenizer in a different script using:

tokenizer_1 = AutoTokenizer.from_pretrained('/content/drive/MyDrive/Colab Notebooks/saved_models/SARC_gpt2_prefinetune_2.0')

I get this error:

AssertionError: Non-consecutive added token '<|startoftext|>' found. Should have index 50260 but has index 50257 in saved vocabulary.

Does anyone know what I'm doing wrong here?
Thanks in advance!