Please help.
Before I look at the problem itself: I just had a quick look, and your HF token is visible in your uploaded image.
Please revoke the leaked token first. Once it's deleted, the token no longer works, so it doesn't matter that it leaked.
Also, it would be easier for me to respond if you posted your code as text rather than as an image.
You can format code nicely by enclosing it in ```, like this:
```
import os
```
Probably the same problem as this one.
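On the token point: rather than hardcoding a fresh token into the script, it's safer to read it from the environment. A minimal sketch of what I mean (`HUGGINGFACEHUB_API_TOKEN` is the variable LangChain's HuggingFace integrations check; the explicit `login()` call is optional):

```
import os
from huggingface_hub import login

# Token comes from the shell environment, never from source code
login(token=os.environ["HUGGINGFACEHUB_API_TOKEN"])
```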
I actually deleted the token before posting here.
```
import os
from langchain.prompts import PromptTemplate
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.schema.runnable import RunnableSequence
from huggingface_hub import login

# Set HuggingFace API token
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'my_api_token'

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="Tell me a good company name that makes this product: {product}"
)

# Initialize the HuggingFace endpoint
llm = HuggingFaceEndpoint(repo_id='google/flan-t5-large', temperature=0, max_new_tokens=250)

# Create a runnable sequence with the prompt and LLM
chain = RunnableSequence(prompt, llm)

# Invoke the chain and handle potential errors
try:
    result = chain.invoke({'product': 'coffee'})
    print(result)
except Exception as e:
    print("Error occurred:", e)
```
```
{'product': 'coffee'}
```
The error message is cut off in the middle, but the request seems to be malformed somewhere around here. Perhaps one of the options is obsolete, or an option name is incorrect.
But that's what it says, no matter how you look at it…
I wonder if the options passed to the other functions in the chain are broken.
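If it helps, you can print the full server response instead of the truncated message; the body usually names the rejected parameter. A rough sketch, assuming the error is a requests-style HTTPError (huggingface_hub's errors subclass it):

```
from requests import HTTPError

try:
    result = chain.invoke({'product': 'coffee'})
    print(result)
except HTTPError as e:
    # The response body usually says which option the server rejected
    print(e.response.status_code, e.response.text)
```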
I removed that line, but it is still showing the same error, 'HTTPError'.
```
import os
from langchain.prompts import PromptTemplate
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.schema.runnable import RunnableSequence
from huggingface_hub import login

# Set HuggingFace API token
os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'my_api_token'

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="Tell me a good company name that makes this product: {product}"
)

# Initialize the HuggingFace endpoint
llm = HuggingFaceEndpoint(repo_id='google/flan-t5-large', temperature=0, max_new_tokens=250)

# Create a runnable sequence with the prompt and LLM
chain = RunnableSequence(prompt, llm)
chain.invoke('coffee')
```
It's an HTTPError, but the server is telling you that the request is wrong.
This means the endpoint itself is reachable; the server is just rejecting or misinterpreting some of the options.
I thought the prompt was the suspicious part and the LLM setup was correct.
But I don't think there's anything wrong with the prompt, either.
I'm a little suspicious of the missing task, because an LLM can do a lot of things, especially the ones that are closer to a VLM.
```
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # since the official one is gated
    task="text-generation",
    max_new_tokens=100,
    do_sample=False,
)
llm.invoke("Hugging Face is")
```
Also, I think you have to set do_sample=True for temperature to take effect. Or delete temperature.
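For example, something like this (a sketch, not tested here; my understanding is that the hosted text-generation backend also rejects temperature=0, since it must be strictly positive):

```
from langchain_huggingface import HuggingFaceEndpoint

# Sampling enabled so temperature actually has an effect;
# temperature must be > 0 on the hosted backend (as far as I know)
llm = HuggingFaceEndpoint(
    repo_id="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    task="text-generation",
    do_sample=True,
    temperature=0.7,
    max_new_tokens=100,
)
```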
Yes, the LLM model was the issue. Thank you, John.
```
# Initialize the HuggingFace endpoint with correct parameters (without extra model_kwargs)
llm = HuggingFaceEndpoint(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    task="text-generation",
    max_new_tokens=100,
    do_sample=False,
)

# Create a runnable sequence with the prompt and LLM
chain = RunnableSequence(prompt, llm)
chain.invoke("Hugging Face")
```
I did that. So if I turn off temperature, maybe it will work? And I'll add a task. The rest should be correct.
```
do_sample=False,
```
It's just the default value.
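In other words, leaving it out should behave the same as writing it explicitly (assuming the default hasn't changed):

```
# These two should be equivalent: do_sample defaults to False
llm_a = HuggingFaceEndpoint(repo_id="google/flan-t5-large", task="text-generation")
llm_b = HuggingFaceEndpoint(repo_id="google/flan-t5-large", task="text-generation", do_sample=False)
```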
Yep, it is working without adding the temperature.