Install the required libraries if they aren't already installed (the code below imports from `langchain_community`, so that package is needed too):

```shell
pip install langchain langchain-community openai transformers huggingface_hub
```
```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFaceHub
import os

# Set your Hugging Face API token
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"

# Define the prompt template
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?"
)

# Load the FLAN-T5 model from the Hugging Face Hub
llm = HuggingFaceHub(
    repo_id="google/flan-t5-large",
    model_kwargs={"temperature": 0.7, "max_length": 64}
)

# Create the LLM chain with the prompt
chain = LLMChain(llm=llm, prompt=prompt)

# Invoke the chain with your input variable
response = chain.invoke({"product": "soap"})

# invoke() returns a dict; the generated text is under the "text" key
print("Suggested company name:", response["text"])
```
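Under the hood, `PromptTemplate` substitutes the input variables into the template string much like Python's built-in `str.format`. A minimal pure-Python sketch of that behavior (no LangChain required; `format_prompt` is a hypothetical helper for illustration):

```python
# Sketch of what PromptTemplate.format does conceptually: substitute
# named variables into a template string using str.format.
template = "What is a good name for a company that makes {product}?"

def format_prompt(template: str, **variables: str) -> str:
    # Fill each {name} placeholder with the matching keyword argument
    return template.format(**variables)

print(format_prompt(template, product="soap"))
# → What is a good name for a company that makes soap?
```

This is why the keys passed to `chain.invoke` must match the `input_variables` declared on the template.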
Notes:

- Changed `chain.run()` → `chain.invoke()`, the current standard in LangChain ≥ 0.1.0.
- `HuggingFaceHub` now lives under `langchain_community.llms`, not `langchain.llms`.
- Added `max_length` so responses don't get clipped.
- The token must be a real one (`hf_...`), not the placeholder.
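One practical difference from `chain.run()`: `chain.invoke()` returns a dict rather than a bare string, containing the inputs plus the generated text under `"text"` (the `LLMChain` default output key). A sketch of that shape, with an illustrative value standing in for the model output:

```python
# Illustrative shape of the dict returned by chain.invoke() for an
# LLMChain: the input variables plus the generation under "text".
response = {"product": "soap", "text": "<model output>"}  # value is a stand-in

# Pull out just the generated string before printing or post-processing
company_name = response["text"]
print("Suggested company name:", company_name)
```

If you only ever need the string, indexing `["text"]` immediately after `invoke` keeps the rest of the code identical to the old `run`-based version.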
Response generated by Triskel Data Deterministic Ai.
Also, your token is visible in the snippet, so be sure to disable or rotate it.