The requested model is not supported by any provider

Hi,
I am getting this error:

{
  "error": {
    "message": "The requested model 'dicta-il/dictalm2.0-instruct' is not supported by any provider you have enabled.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_supported"
  }
}

while calling this:

curl --location 'https://router.huggingface.co/v1/chat/completions' \
--header 'Authorization: Bearer …' \
--header 'Content-Type: application/json' \
--data '{
  "model": "dicta-il/dictalm2.0-instruct",
  "messages": [
    { "role": "system", "content": "You are a helpful Hebrew language assistant." },
    { "role": "user", "content": "who are you?" }
  ],
  "temperature": 0.7,
  "max_tokens": 300
}'

Can you please help?


You can either apply for provider support and wait, or host the model yourself on a paid dedicated endpoint.


Do both:

  1. On the model page, click Ask for provider support and post a short request. Mention the repo ID and your use case. This pings providers watching the Hub.
  2. Also file a public request in the HF provider-request thread so others can upvote it.

Optionally, contact specific providers listed in the Inference Providers docs and ask them to onboard this exact repo ID for the router.
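While you wait, you can poll whether the model has become available. If the router follows the usual OpenAI-compatible convention, GET /v1/models returns the enabled models; the helper below checks a listing for a repo ID. This is a sketch based on the OpenAI API convention (the endpoint path and {"data": [{"id": ...}]} response shape are assumptions, not confirmed router behavior):

```python
import json
from urllib import request


def model_available(models_payload: dict, repo_id: str) -> bool:
    """Return True if repo_id appears in an OpenAI-style /v1/models listing.

    Assumes the conventional response shape: {"data": [{"id": "..."}, ...]}.
    """
    return any(entry.get("id") == repo_id for entry in models_payload.get("data", []))


def fetch_models(base_url: str, token: str) -> dict:
    """Fetch the model list from an OpenAI-compatible router (hypothetical endpoint)."""
    req = request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {token}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Offline example with a mocked payload (no network call):
payload = {"data": [{"id": "meta-llama/Llama-3.1-8B-Instruct"}]}
print(model_available(payload, "dicta-il/dictalm2.0-instruct"))  # → False
```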

Need it now? Host it yourself and keep your OpenAI-style code: deploy via Inference Endpoints with TGI, or run TGI/vLLM on your own hardware. Both expose an OpenAI-compatible /v1/chat/completions Messages API.
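Because the self-hosted endpoint is OpenAI-compatible, the request is identical to the curl call above; only the base URL and token change. A minimal stdlib sketch (the base URL and token below are placeholders for your own endpoint, not real values):

```python
import json
from urllib import request


def build_chat_request(base_url: str, token: str, model: str, messages: list) -> request.Request:
    """Build an OpenAI-style chat completion request for a self-hosted TGI/vLLM endpoint."""
    body = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": 0.7,
        "max_tokens": 300,
    }).encode("utf-8")
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request(
    "https://my-endpoint.example",   # placeholder: your TGI/vLLM endpoint URL
    "hf_xxx",                        # placeholder token
    "dicta-il/dictalm2.0-instruct",
    [
        {"role": "system", "content": "You are a helpful Hebrew language assistant."},
        {"role": "user", "content": "who are you?"},
    ],
)
print(req.full_url)  # → https://my-endpoint.example/v1/chat/completions
# Send it with: request.urlopen(req) once the endpoint is live.
```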