Tokenizers offset issue

Hi, I am new to Transformers and have a problem understanding token offsets in Tokenizers.
I want to use a third-party tokenizer in spaCy and load a pre-trained tokenizer directly from its "vocab.json" file. The tokenizer I want to use is "gottbert-base", a German language model.
Can someone tell me why the offset information differs between the two code snippets below? I want to use the code in Variation 1, but to me the offsets in Variation 2 are the correct ones. How can I solve this issue?
Variation 1:

from tokenizers import ByteLevelBPETokenizer

# Build the byte-level BPE tokenizer directly from the raw vocab/merges files
bpe = ByteLevelBPETokenizer("gottbert/vocab.json", "gottbert/merges.txt")
text = "Ich habe keine Rückmeldung von euch! Schicken Sie mir die Terminbestätigung."
tokens = bpe.encode(text)
print(tokens.tokens)
print(tokens.offsets)
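
To check what each offset pair actually covers, I slice the input text with it (this only reuses the variables from the snippet above):

for token, (start, end) in zip(tokens.tokens, tokens.offsets):
    print(token, repr(text[start:end]))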

Variation 2:

from transformers import AutoTokenizer

# from_slow=True forces the fast tokenizer to be rebuilt from the slow one
tokenizer_pretrained = AutoTokenizer.from_pretrained("uklfr/gottbert-base", from_slow=True)
text = "Ich habe keine Rückmeldung von euch! Schicken Sie mir die Terminbestätigung."
tokenized = tokenizer_pretrained(text, return_offsets_mapping=True)
print(tokenized.encodings[0].tokens)
print(tokenized.offset_mapping)
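
My guess is that the difference comes from the ByteLevel post-processor: the fast tokenizer in Variation 2 seems to have trim_offsets enabled, which removes the leading space from each token's offset pair, while the bare ByteLevelBPETokenizer does not set any post-processor. This is only an assumption on my part; here is a sketch of what I tried, using the lower-level Tokenizer API so I can set the post-processor explicitly (same vocab/merges files as above):

from tokenizers import Tokenizer, pre_tokenizers, processors, decoders
from tokenizers.models import BPE

# Rebuild the same byte-level BPE tokenizer from the raw files
tok = Tokenizer(BPE.from_file("gottbert/vocab.json", "gottbert/merges.txt"))
tok.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tok.decoder = decoders.ByteLevel()
# trim_offsets=True should cut the leading space out of each token's offsets
tok.post_processor = processors.ByteLevel(trim_offsets=True)

text = "Ich habe keine Rückmeldung von euch! Schicken Sie mir die Terminbestätigung."
enc = tok.encode(text)
print(enc.tokens)
print(enc.offsets)

Is this the right way to get the offsets from Variation 1 to match Variation 2, or am I missing something else (e.g. the special tokens that the pretrained tokenizer adds)?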