Deploying to the Model Hub for inference with a custom tokenizer

Hello everyone, I have a working model for a text generation task, and I would like to use the Hugging Face Inference API to simplify calling this model to generate new text. However, I used a custom WordLevel tokenizer due to the nature of my domain, and the docs aren’t very clear on how to make this work since I didn’t use a PreTrainedTokenizer. Does anyone have documentation or a reference on how I can still use the Inference API, or know whether this is possible at all? Currently, with my model deployed on the Model Hub, I receive a “route not found” error when calling it with example text, following the example here: Overview — API Inference documentation.
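For context, here is a minimal sketch of the kind of setup I mean (the vocab and special tokens below are placeholders): the tokenizer is built with the `tokenizers` library’s WordLevel model rather than loaded from a checkpoint, and my best guess so far is to wrap it in `PreTrainedTokenizerFast` so that `save_pretrained` produces a `tokenizer.json` the Hub’s pipeline loader might pick up — though I don’t know if the Inference API actually honors this:

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from transformers import PreTrainedTokenizerFast

# Placeholder vocab -- in practice this is built from my domain corpus.
vocab = {"[UNK]": 0, "[PAD]": 1, "hello": 2, "world": 3}

# Custom WordLevel tokenizer from the `tokenizers` library,
# not a transformers PreTrainedTokenizer.
tokenizer = Tokenizer(WordLevel(vocab, unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Wrap it so transformers can save/load it alongside the model.
wrapped = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="[UNK]",
    pad_token="[PAD]",
)
wrapped.save_pretrained("my-model")  # writes tokenizer.json etc.
```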


@jodiak Did you ever find a solution to this?