Loading SentencePiece tokenizer

When I use SentencePieceTrainer.train(), it produces a .model and a .vocab file. However, when I try to load the result with AutoTokenizer.from_pretrained(), it expects a .json file. How would I get a .json file from the .model and .vocab files?
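
For reference, a minimal sketch of the training step I'm doing (the corpus file and model prefix here are placeholders):

```python
import sentencepiece as spm

# SentencePiece writes its output files next to the given prefix rather than
# returning a tokenizer object: this produces my_sp.model and my_sp.vocab.
spm.SentencePieceTrainer.train(
    input="corpus.txt",    # placeholder training corpus, one sentence per line
    model_prefix="my_sp",  # output prefix for the .model and .vocab files
    vocab_size=8000,
)
```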

Did you save the tokenizer first? You can do that with the save_pretrained() function, and then load it by passing the directory where all the necessary files were stored to the from_pretrained() function.
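
For example, one way this can look, assuming the SentencePiece .model file is first wrapped in a SentencePiece-backed tokenizer class (T5Tokenizer is just an illustrative choice, and the file/directory names are placeholders):

```python
from transformers import AutoTokenizer, T5Tokenizer

# Wrap the trained SentencePiece model in a slow tokenizer class; its
# save_pretrained() writes the config files from_pretrained() looks for.
slow_tok = T5Tokenizer(vocab_file="my_sp.model")
slow_tok.save_pretrained("my_tokenizer")

# Loading the directory through AutoTokenizer converts it to the fast
# (tokenizers-backed) version; saving that one also writes tokenizer.json.
tok = AutoTokenizer.from_pretrained("my_tokenizer")
tok.save_pretrained("my_tokenizer")
print(tok.tokenize("Hello world"))
```

The second save_pretrained() call on the fast tokenizer should give you the tokenizer.json file you were after.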