{
  "error": {
    "message": "The requested model 'dicta-il/dictalm2.0-instruct' is not supported by any provider you have enabled.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_supported"
  }
}
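Since the router returns an OpenAI-style error body, a client can detect this specific failure and fall back or surface a clearer message. Below is a minimal, illustrative helper (not an official SDK feature) that checks for the `model_not_supported` code:

```python
import json

def is_model_not_supported(body: str) -> bool:
    """Return True if an OpenAI-style error body carries code=model_not_supported."""
    try:
        err = json.loads(body).get("error", {})
    except json.JSONDecodeError:
        return False
    return err.get("code") == "model_not_supported"

# The error body from the router, verbatim:
body = json.dumps({
    "error": {
        "message": "The requested model 'dicta-il/dictalm2.0-instruct' is not "
                   "supported by any provider you have enabled.",
        "type": "invalid_request_error",
        "param": "model",
        "code": "model_not_supported",
    }
})

print(is_model_not_supported(body))  # True
```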
Your options: ask providers to onboard the model and wait, or host it yourself on a paid dedicated endpoint. In practice, do both:
On the model page, click Ask for provider support and post a short request. Mention the repo ID and your use case. This pings providers watching the Hub. (Hugging Face)
Also file a public request in the HF provider-request thread so others can upvote. (Hugging Face)
Optionally, contact specific providers listed in the Providers docs and ask them to onboard this exact repo ID for the router. (Hugging Face)
Need it now? Host the model yourself and keep your OpenAI-style client code: deploy it via Inference Endpoints with TGI, or run TGI or vLLM on your own hardware. Both expose an OpenAI-compatible /v1/chat/completions (Messages) API. (Hugging Face)
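Because the self-hosted server speaks the same protocol, the client side barely changes. A stdlib-only sketch of building such a request is below; the base URL is an assumption (vLLM defaults to port 8000, TGI and Inference Endpoints will give you a different address), so point it at wherever your server actually listens:

```python
import json
import urllib.request

# ASSUMPTION: a TGI or vLLM server (or an Inference Endpoint) is reachable here.
BASE_URL = "http://localhost:8000/v1"

def chat_request(model: str, user_msg: str) -> urllib.request.Request:
    """Build an OpenAI-style /v1/chat/completions request (not yet sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("dicta-il/dictalm2.0-instruct", "Hello, how are you?")
# With a live server, uncomment to actually send the request:
# resp = urllib.request.urlopen(req)
# print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

The official `openai` Python SDK works the same way: construct the client with `base_url` pointing at your server and keep the rest of your code unchanged.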