Is Unsloth fine-tuning suitable for loading the model in full precision?

Hello All,

I recently fine-tuned a Llama 3 model using the Unsloth script. Here is the script link.

The model works fine when loaded with load_in_4bit, because it picks up the Unsloth 4-bit quantized tensors.

When I run inference with this model, it gives really good responses.

Note: I used the Llama 3 Instruct model.
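For reference, the 4-bit load looks roughly like this (a minimal sketch, not my exact script; the adapter folder name "lora_model" is a placeholder for wherever the fine-tuned adapters were saved):

```python
# Sketch: load the fine-tuned adapter with Unsloth in 4-bit and run inference.
# "lora_model" is a placeholder path, not the real save location.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lora_model",   # folder produced by model.save_pretrained(...)
    max_seq_length=2048,
    dtype=None,                # auto-detect (bf16 on newer GPUs, fp16 otherwise)
    load_in_4bit=True,         # load the 4-bit quantized base weights
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```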

When I try to load the model in full precision (here is the code I used: CLICK HERE), it pulls the model safetensors from meta-llama/Meta-Llama-3-8B-Instruct.
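This is roughly what the full-precision path does (a sketch, assuming the LoRA adapter is attached to the fp16/bf16 base with transformers + peft; "lora_model" is again a placeholder for my adapter folder):

```python
# Sketch: load the full-precision base from the Hub, then attach the LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # downloads the sharded safetensors
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

model = PeftModel.from_pretrained(base, "lora_model")  # fine-tuned adapter (placeholder path)
model.eval()
```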

I can't add more media right now, but if you check the official model page you can see the list of 4 safetensors files.
My issue: when the model is loaded in 4-bit it gives 74% accuracy, but the same model loaded in full precision gives only 4% accuracy.

Could someone please help me resolve this, or suggest a way to fine-tune a Llama 3 model without Unsloth? Thank you.


https://www.reddit.com/r/unsloth/comments/1ea4ep2/comment/leoex4k/?context=3