When using large language models (LLMs) from Hugging Face, how can we reduce the chance of the model making things up (hallucinating) when giving answers?
It’s nearly impossible (or at least very inefficient) to reliably prevent hallucinations with current LLMs alone, so for that purpose I’d personally recommend exploring RAG or RAG-like approaches: retrieve trusted documents first and have the model answer only from that retrieved context. Using an existing framework like LangChain with Hugging Face models would likely require the least effort; a rough sketch follows below.
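For reference, here is a minimal sketch of that kind of pipeline, assuming the `langchain-huggingface`, `langchain-community`, `faiss-cpu`, `transformers`, and `sentence-transformers` packages are installed. The model IDs (`all-MiniLM-L6-v2` for embeddings, `Qwen/Qwen2.5-0.5B-Instruct` as the generator) and the example documents are just placeholders, so swap in whatever fits your data and hardware:

```python
# Minimal RAG sketch: retrieve relevant snippets from your own documents,
# then ask a Hugging Face model to answer ONLY from that context.
# pip install langchain-huggingface langchain-community faiss-cpu transformers sentence-transformers
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_huggingface import HuggingFaceEmbeddings, HuggingFacePipeline

# 1. Index your trusted documents (placeholder texts here).
docs = [
    "Our internal handbook grants 25 vacation days per year.",
    "Support tickets must be answered within 48 hours.",
]
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_texts(docs, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

# 2. Load a small local Hugging Face model as the generator (placeholder model ID).
llm = HuggingFacePipeline.from_model_id(
    model_id="Qwen/Qwen2.5-0.5B-Instruct",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 200},
)

# 3. Prompt that restricts the model to the retrieved context.
prompt = PromptTemplate.from_template(
    "Answer using ONLY the context below. "
    "If the answer is not in the context, say you don't know.\n\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

def format_docs(retrieved):
    return "\n\n".join(d.page_content for d in retrieved)

# 4. Wire it together: retrieve -> format -> prompt -> generate.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke("How many vacation days do we get?"))
```

The prompt line telling the model to say it doesn’t know when the context lacks the answer matters as much as the retrieval itself, since it gives the model an explicit alternative to inventing something. It won’t eliminate hallucinations entirely, but grounding answers in retrieved text usually reduces them a lot compared to asking the model from its parametric memory alone.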