Convert a Hugging Face tokenizer into SentencePiece format

I have a Hugging Face tokenizer for the BERT model (google-bert/bert-base-cased), which consists of three files: tokenizer.json, tokenizer_config.json, and vocab.txt. I would like to convert this tokenizer into the SentencePiece format, which uses a single .model file.
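
For context, here is a minimal sketch of how I currently load the tokenizer and how I would expect to use the converted artifact (the filename bert.model is just a hypothetical placeholder for the conversion output):

```python
from transformers import AutoTokenizer
import sentencepiece as spm

# What I have: a WordPiece-based BERT tokenizer stored as
# tokenizer.json / tokenizer_config.json / vocab.txt
hf_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
print(hf_tokenizer.tokenize("Converting tokenizers is fun"))

# What I want: a single SentencePiece .model file I can load like this
# ("bert.model" is the hypothetical output of the conversion I'm asking about)
sp = spm.SentencePieceProcessor(model_file="bert.model")
print(sp.encode("Converting tokenizers is fun", out_type=str))
```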
How can I perform this conversion?