Need help fine-tuning Llama3 for log anomaly detection

Hi,

I’m not sure whether I fully understand your question.
If you want to reduce hallucination, it usually helps to have an external “oracle” that can efficiently validate the LLM’s answer, even though the oracle cannot generate the answer itself. For example, you could look at one of our works, Assuring LLM-Enabled Cyber-Physical Systems, to get a sense of how this works. I’m happy to discuss more with you.
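
To make the idea concrete for your log use case, here is a minimal sketch in Python. It is purely illustrative and not the method from our paper: the patterns, function names, and retry behavior are all hypothetical stand-ins. The fine-tuned model flags candidate anomalies, and a cheap rule-based oracle confirms or rejects each claim before it is reported.

```python
import re
from typing import Callable

# Hypothetical oracle: cheap, rule-based checks that can *validate* a claimed
# anomaly, even though on their own they could not search millions of lines
# or generate an explanation the way the LLM can.
ANOMALY_PATTERNS = [
    re.compile(r"ERROR|FATAL"),
    re.compile(r"connection (refused|reset|timed out)", re.IGNORECASE),
]

def oracle_validates(log_line: str) -> bool:
    """Return True if the line matches at least one known anomaly pattern."""
    return any(p.search(log_line) for p in ANOMALY_PATTERNS)

def detect_anomalies(log_lines: list[str],
                     llm_flags_anomaly: Callable[[str], bool]) -> list[str]:
    """Keep only LLM-flagged lines that the oracle independently confirms.

    llm_flags_anomaly is a placeholder for a call to your fine-tuned Llama3.
    """
    confirmed = []
    for line in log_lines:
        if llm_flags_anomaly(line) and oracle_validates(line):
            confirmed.append(line)  # accepted: the LLM's claim passed the oracle
        # otherwise the claim is dropped (or could be re-queried / sent for review)
    return confirmed
```

The point of the pattern is the asymmetry: validating a flagged line is much cheaper than finding it, so even a weak oracle can filter out a large share of hallucinated detections.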

Best,
Mengyu