I am getting this error in LangChain:

from langchain_community.llms import HuggingFaceHub

hf = HuggingFaceHub(
    repo_id="google/flan-t5-large",
    task="text2text-generation",  # explicitly specify the task
    model_kwargs={"temperature": 0.7, "max_length": 100},
    huggingfacehub_api_token="hf_... Ue",
)

text = "What is the capital of Turkey"

output = hf.invoke(text)
print(output)

ValueError: Task text2text-generation has no recommended model. Please specify a model explicitly. Visit https://huggingface.co/tasks for more info.

Why am I getting this error? I have checked my API token multiple times and I still get this error.


It seems that this class has been deprecated:
https://python.langchain.com/api_reference/community/llms/langchain_community.llms.huggingface_hub.HuggingFaceHub.html
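
The replacement lives in the langchain-huggingface partner package; a minimal sketch of the new import, assuming that package is installed:

# HuggingFaceHub (community) is deprecated; HuggingFaceEndpoint replaces it
from langchain_huggingface import HuggingFaceEndpoint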

I tried HuggingFaceEndpoint instead, but I am still getting the same error. Could you give me a code example?


Try this one:

from langchain_huggingface.llms.huggingface_endpoint import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    task="text-generation",           # set the task explicitly
    model="deepseek-ai/DeepSeek-R1",  # a model must be named explicitly
    max_new_tokens=100,
    temperature=0.7,
    huggingfacehub_api_token="your_api",
)
print(llm.invoke("What is the capital of Turkey"))

The token has been leaked. Please invalidate the token as soon as possible.


I had already invalidated the token …


Bro, can you give a solution for this too? It is showing the same error:
ValueError: Task text2text-generation has no recommended model. Please specify a model explicitly. Visit https://huggingface.co/tasks for more info.


If the same solution as above is okay, this should work. The error message says that you can simply override the task.

from langchain.chains import LLMChain
from langchain_core.prompts import PromptTemplate
from langchain_huggingface.llms.huggingface_endpoint import HuggingFaceEndpoint

prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
chain = LLMChain(
    llm=HuggingFaceEndpoint(
        task="text-generation",  # override the task
        model="deepseek-ai/DeepSeek-R1",
        temperature=0,
        huggingfacehub_api_token="hf_*****",
    ),
    prompt=prompt,
)
print(chain.invoke({"question": "What is the capital of Turkey"}))

Hi @mahmutc,
I am a student trying to learn this stuff. This snippet didn't work for me for some reason.

Here is the error log I received:
Traceback (most recent call last):
  File "C:\Users\SS\Desktop\Camp_langchain_models\2.ChatModels\2_chatmodel_hf_api.py", line 9, in <module>
    print(llm.invoke("What is the capital of Turkey"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 387, in invoke
    self.generate_prompt(
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 764, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 971, in generate
    return self._generate_helper(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 790, in _generate_helper
    self._generate(
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 1545, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\langchain_huggingface\llms\huggingface_endpoint.py", line 312, in _call
    response_text = self.client.text_generation(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\huggingface_hub\inference\_client.py", line 2297, in text_generation
    provider_helper = get_provider_helper(self.provider, task="text-generation", model=model_id)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\SS\Desktop\Camp_langchain_models\venv\Lib\site-packages\huggingface_hub\inference\_providers\__init__.py", line 169, in get_provider_helper
    raise ValueError(
ValueError: Provider 'featherless-ai' not supported. Available values: 'auto' or any provider from ['black-forest-labs', 'cerebras', 'cohere', 'fal-ai', 'fireworks-ai', 'hf-inference', 'hyperbolic', 'nebius', 'novita', 'openai', 'replicate', 'sambanova', 'together']. Passing 'auto' (default value) will automatically select the first provider available for the model, sorted by the user's order in https://hf.co/settings/inference-providers.

I look forward to hearing from you.

Thanks,
SS


It seems to be an ongoing issue.


Actually, you can change the provider as follows:

llm = HuggingFaceEndpoint(
    model="deepseek-ai/DeepSeek-R1",
    provider="sambanova",
)

However, I still encounter this strange behavior, even when I change the task to “conversational”:

ValueError: Task 'text-generation' not supported for provider 'sambanova'. Available tasks: ['conversational', 'feature-extraction']

$ pip freeze | grep -P '(langchain|langchain-huggingface|huggingface-hub)'
huggingface-hub==0.31.1
langchain==0.3.25
langchain-anthropic==0.3.10
langchain-community==0.3.20
langchain-core==0.3.59
langchain-huggingface==0.2.0
langchain-ollama==0.3.0
langchain-text-splitters==0.3.8
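
As a sanity check, the provider can be exercised outside LangChain via huggingface_hub's InferenceClient directly; a minimal sketch (token placeholder assumed), using the chat-completion API since 'conversational' is the task sambanova reports as supported:

from huggingface_hub import InferenceClient

# bypass LangChain to confirm the provider itself responds
client = InferenceClient(provider="sambanova", api_key="hf_*****")  # placeholder token
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "What is the capital of Turkey"}],
    max_tokens=100,
)
print(resp.choices[0].message.content)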


About the feature-extraction API URL:

Your token has been leaked. Make sure to disable (or refresh) the token so that it is safe.


Probably, if you look for a way to avoid using the raw post call, you'll be able to make it work, but you may need to wait for support on the LangChain side.
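
For instance, huggingface_hub's task-specific helper sidesteps the deprecated raw post call; a minimal sketch, where the model name and token placeholder are only assumptions:

from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_*****")  # placeholder token
# task helper instead of the deprecated InferenceClient.post
embedding = client.feature_extraction(
    "What is the capital of Turkey",
    model="sentence-transformers/all-MiniLM-L6-v2",  # assumed example model
)
print(embedding.shape)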

Still getting the error.


Hmm… If it hasn't been completely removed, it might still work with an older huggingface_hub:

pip install huggingface_hub==0.30.0
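
After pinning, it may be worth confirming the environment actually picked up that version (a quick check):

# sanity check that the pinned version is the one imported
import huggingface_hub
print(huggingface_hub.__version__)  # expect 0.30.0 after the pin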

Or raise an issue.