Word ids of BioGPT model

As there is no fast tokenizer available for BioGPT, I am not able to get the word_ids. Does anyone have an idea of how to get them?

Hi,

BioGPT doesn鈥檛 have a fast tokenizer implementation yet: Unable to convert BioGpt slow tokenizer to fast: token out of vocabulary 路 Issue #21838 路 huggingface/transformers 路 GitHub. To contribute this, a new BioGPTConverter class would have to be defined here: transformers/convert_slow_tokenizer.py at main 路 huggingface/transformers 路 GitHub.