I need support to train a model. I am trying to train unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF on the dataset Enderchef/ICONN-1-BasicChat-Data-SuperLite. I keep getting errors and was wondering if somebody could either write the code for me or do the training for me.
I don’t do coding or training for you, but…
GGUF is like corned beef: convenient to use, but not suitable for training. It’s better to fine-tune the original, unquantized model and then convert it back to GGUF afterwards.
Thanks for letting me know.
Direct answer and script:
You cannot train directly on a GGUF file.
Start from the model in a supported PyTorch/Hugging Face format (the original unquantized checkpoint), train, then quantize and convert the result to GGUF.
Example:
- Download or export the model in Hugging Face/transformers format, not GGUF.
- Train/fine-tune with your chosen framework (e.g. Unsloth, transformers).
- Convert the resulting checkpoint to GGUF after training.
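The steps above can be sketched roughly as below. This is a hedged outline, not a tested recipe: it assumes the unquantized base repo `unsloth/Llama-4-Scout-17B-16E-Instruct` (note: no `-GGUF` suffix) exists on the Hub, and that the dataset exposes `user`/`assistant`-style columns — check the actual column names and adjust. A 17B MoE model also needs serious GPU memory even with LoRA, so treat this as a shape of the workflow, not a script to paste:

```python
# Sketch: fine-tune the unquantized model with LoRA, then convert to GGUF afterwards.
# Assumptions (verify before running): repo id, dataset column names, and that your
# hardware can hold the model. All APIs used (transformers, peft, datasets) are real.

def format_example(user_msg: str, assistant_msg: str) -> str:
    """Flatten one chat turn into a plain training string (simple illustrative template)."""
    return f"### User:\n{user_msg}\n### Assistant:\n{assistant_msg}"

def main():
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "unsloth/Llama-4-Scout-17B-16E-Instruct"  # unquantized repo, NOT the -GGUF one
    tok = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
    # LoRA keeps the trainable parameter count small; r/alpha here are placeholders.
    model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

    ds = load_dataset("Enderchef/ICONN-1-BasicChat-Data-SuperLite", split="train")
    # Hypothetical column names "user"/"assistant" — rename to match the real dataset.
    ds = ds.map(lambda row: {"text": format_example(row["user"], row["assistant"])})
    ds = ds.map(lambda batch: tok(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="scout-lora", per_device_train_batch_size=1,
                               num_train_epochs=1, learning_rate=2e-4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM, no masking
    )
    trainer.train()
    trainer.save_model("scout-lora")  # adapter weights; merge before GGUF conversion

# main() is not called here: invoke it only on a machine with enough GPU memory.
```

After training, merge the LoRA adapter into the base weights and convert with llama.cpp’s `convert_hf_to_gguf.py`, then optionally quantize the resulting file (e.g. with `llama-quantize`).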
For Unsloth, see: Llama 4 - Finetune & Run with Unsloth
Solution provided by Triskel Data Deterministic AI.