Hey, I am facing issues with HuggingFaceEndpoint in LangChain.

Please help.

Here is the start of the code:

And the code after it:

Update: I tried wrapping the call in a try/except block to find out more about the error.


I just had a quick look, and your HF token is visible in your uploaded image. Before I look into the problem, please go revoke the leaked token first. Once it is invalidated it can no longer be used, so it won't matter that it leaked.

Also, it would be easier for me to respond if you could paste the code as text.
You can quote code nicely by enclosing it in ```, like this:

```python
import os
```

Probably the same problem as this one.

Actually, I deleted the token before posting here.


```python
import os

from langchain.prompts import PromptTemplate
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.schema.runnable import RunnableSequence

# Set the Hugging Face API token (placeholder; never paste a real token)
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'my_api_token'

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="Tell me a good company name that makes this product: {product}"
)

# Initialize the HuggingFace endpoint
llm = HuggingFaceEndpoint(repo_id='google/flan-t5-large', temperature=0, max_new_tokens=250)

# Create a runnable sequence with the prompt and LLM
chain = RunnableSequence(prompt, llm)

# Invoke the chain and handle potential errors
try:
    result = chain.invoke({'product': 'coffee'})
    print(result)
except Exception as e:
    print("Error occurred:", e)
```

```
{'product': 'coffee'}
```

The error message is cut off in the middle, but it suggests the request is malformed somewhere. Perhaps an option is deprecated, or an option name is wrong.

Still, no matter how I look at it, this code seems fine...
I wonder if an option in one of the other functions in the chain is broken.

I removed that line, but it is still showing the same error: "HTTPError".

```python
import os

from langchain.prompts import PromptTemplate
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.schema.runnable import RunnableSequence

# Set the Hugging Face API token (placeholder)
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'my_api_token'

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="Tell me a good company name that makes this product: {product}"
)

# Initialize the HuggingFace endpoint
llm = HuggingFaceEndpoint(repo_id='google/flan-t5-large', temperature=0, max_new_tokens=250)

# Create a runnable sequence with the prompt and LLM
chain = RunnableSequence(prompt, llm)

chain.invoke('coffee')
```

It's an HTTPError, but the server is telling you that the request itself is wrong.
That means the endpoint is working on the server side, and some option in the request is being misinterpreted.
The prompt is what I find suspicious; I think the LLM setup is correct.

https://python.langchain.com/v0.2/api_reference/core/prompts/langchain_core.prompts.prompt.PromptTemplate.html

But I don't see anything wrong with the prompt, either.

I'm a little suspicious of the missing `task` parameter, because an LLM repo can serve many tasks, especially models that are closer to VLMs.

```python
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # the official repo is gated
    task="text-generation",
    max_new_tokens=100,
    do_sample=False,
)
llm.invoke("Hugging Face is")
```

Also, I think you have to set `do_sample=True` for `temperature` to take effect. Otherwise, remove `temperature` entirely.


Yes, the LLM model had the issue. Thank you, John!


```python
# Initialize the HuggingFace endpoint with correct parameters (no extra model_kwargs)
llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    task="text-generation",
    max_new_tokens=100,
    do_sample=False,
)

# Create a runnable sequence with the prompt and LLM
chain = RunnableSequence(prompt, llm)

chain.invoke("Hugging Face")
```

Glad that worked. So dropping `temperature` fixed it, and adding a `task` as well; the rest should be correct.

As for

`do_sample=False,`

that's just the default value, so you could omit it too.


Yep, it is working without adding the temperature.


This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.