Llama 3.2 3B performs great when I download and run it via Ollama, but when I download the model manually or use the GGUF model from Unsloth, it gives me irrelevant responses. Please help me out

Hi, I'm very new to this but quite interested in LLMs. I'm working on a project that requires me to fine-tune an LLM, but the issue is that the GGUF models of Llama 3.2 3B that I download and try to run give me weird outputs, like the one below.

But when I use the one from Ollama itself (command: `ollama run llama3.2`), it works fine.


It seems that Ollama internally loads the Instruct model instead of the Base one.
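
You can check this yourself: `ollama show` prints the Modelfile behind a library model, including the chat template it applies. A minimal sketch, assuming Ollama is installed:

```bash
# Print the Modelfile that "ollama run llama3.2" actually uses.
# Note the TEMPLATE block: a manually downloaded Base GGUF has no such
# chat template applied, which is why its replies look irrelevant.
ollama show llama3.2 --modelfile
```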


Thank you for replying. Could you please help me find the Instruct model that works as a chatbot?


Okay.

Official one (gated; requires authorization)

Unofficial ones (ungated)

GGUF files for llama.cpp (one way to download these is sketched below)
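
If it helps, here is a rough sketch of grabbing a single Instruct GGUF with the huggingface_hub CLI. The repo and file names below are just examples; pick whichever quantization fits your hardware:

```bash
# Example only: fetch one quantized Instruct GGUF into the current directory.
# Q4_K_M is a common size/quality trade-off; other quants live in the same repo.
huggingface-cli download bartowski/Llama-3.2-3B-Instruct-GGUF \
  Llama-3.2-3B-Instruct-Q4_K_M.gguf --local-dir .
```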

Thank you! I tried Unsloth and bartowski but no luck :joy:. They give me the same kind of weird output.

How about this?

Thank you so much @John6666 for taking the time to help me out. I realized where I went wrong: I had to include a template in my Modelfile before running it in Ollama. Your first comment, where you linked to a similar problem, helped me out; I found this link in the comments…

and it said I had to include this template in my Modelfile:

```
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
```
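
For anyone who lands on this later, the rest of the Modelfile is just a FROM line pointing at the downloaded GGUF above the TEMPLATE block, and then you register and run the model. A rough sketch (the model name and file path are examples from my setup):

```bash
# Modelfile in the current directory contains:
#   FROM ./Llama-3.2-3B-Instruct-Q4_K_M.gguf
#   TEMPLATE """..."""   <- the template shown above
# Register the local model under a name, then chat with it interactively.
ollama create llama3.2-chat -f Modelfile
ollama run llama3.2-chat
```

(As I understand it, the `<|im_start|>`/`<|im_end|>` markers are ChatML-style tokens from the thread I found; Llama 3.2's own Instruct template uses different special tokens, and `ollama show llama3.2 --modelfile` prints the exact one if you'd rather copy that.)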
This solved the problem, and now my models work like a charm. Big thanks to you for helping me out!


So it was a template issue.
Good to have it resolved!


Yuppp, thank you!


This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.