`save_pretrained()` on a tokenizer does not generate a tokenizer.json file

From transformers/src/transformers/tokenization_utils_base.py at main · huggingface/transformers · GitHub

import re

# Slow tokenizers used to be saved in three separate files
SPECIAL_TOKENS_MAP_FILE = "special_tokens_map.json"
ADDED_TOKENS_FILE = "added_tokens.json"
TOKENIZER_CONFIG_FILE = "tokenizer_config.json"

# Fast tokenizers (provided by HuggingFace's tokenizers library) can be saved in a single file
FULL_TOKENIZER_FILE = "tokenizer.json"
_re_tokenizer_file = re.compile(r"tokenizer\.(.*)\.json")

Could you check whether your new tokenizer is a fast tokenizer? `tokenizer.is_fast` should return `True`. As the snippet above shows, only fast tokenizers are serialized to a single tokenizer.json file; slow tokenizers are saved as separate JSON files instead.
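As a minimal sketch of this check, the following builds a tiny fast tokenizer from scratch (the one-word vocabulary is purely illustrative, not from the original post), verifies `is_fast`, and confirms that `save_pretrained()` writes a tokenizer.json:

```python
import os
import tempfile

from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from transformers import PreTrainedTokenizerFast

# Build a toy Rust-backed tokenizer; the vocab here is a stand-in,
# not the tokenizer from the original question.
backend = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1}, unk_token="[UNK]"))
tokenizer = PreTrainedTokenizerFast(tokenizer_object=backend, unk_token="[UNK]")

print(tokenizer.is_fast)  # a fast tokenizer reports True here

with tempfile.TemporaryDirectory() as save_dir:
    tokenizer.save_pretrained(save_dir)
    # A fast tokenizer emits the single-file serialization alongside
    # tokenizer_config.json and special_tokens_map.json.
    print("tokenizer.json" in os.listdir(save_dir))
```

If `is_fast` is `False`, reloading the tokenizer with `AutoTokenizer.from_pretrained(..., use_fast=True)` will give you the fast variant when one is available for that model.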