Can you use a fine-tuned model for an assistant?

Hello, I have a model that works well for my assistant. I want to fine-tune this model. However, since fine-tuning can get a bit pricey, I don't want to waste money on it only to find out I won't be able to select the result as one of the model options for my assistant.

So, is it possible to use a fine-tuned model for an assistant?

Hi,

If you would like to fine-tune a reasonably sized model at a lower cost, especially from a notebook on Google Colab, I invite you to take a look at how to use Unsloth.ai.

From the README.md:

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama 3 (8B) | Start for free | 2x faster | 60% less |
| Mistral v0.3 (7B) | Start for free | 2.2x faster | 73% less |
| Phi-3 (medium) | Start for free | 2x faster | 50% less |
| Phi-3 (mini) | Start for free | 2x faster | 50% less |
| Gemma (7B) | Start for free | 2.4x faster | 71% less |
| ORPO | Start for free | 1.9x faster | 43% less |
| DPO Zephyr | Start for free | 1.9x faster | 43% less |
| TinyLlama | Start for free | 3.9x faster | 74% less |

Link to GitHub: unslothai/unsloth (Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory)
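One cheap sanity check before paying for any fine-tuning run (with Unsloth or otherwise) is to verify that your training examples render into the prompt template correctly, since malformed data is an easy way to waste a training budget. Here is a minimal sketch assuming Alpaca-style formatting; the template wording and the field names are illustrative assumptions, not any library's exact defaults:

```python
# Sketch: render training examples into an Alpaca-style prompt string
# before fine-tuning. Template text and field names are assumptions;
# adapt them to whatever format your training pipeline expects.

ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input. Write a response that completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

def format_example(example: dict) -> str:
    """Render one training example into a single prompt string."""
    return ALPACA_TEMPLATE.format(
        instruction=example.get("instruction", ""),
        input=example.get("input", ""),
        output=example.get("output", ""),
    )

examples = [
    {
        "instruction": "Summarize the text.",
        "input": "Unsloth speeds up LLM fine-tuning on free Colab GPUs.",
        "output": "Unsloth makes fine-tuning faster.",
    },
]

# Inspect a few rendered prompts before launching the (paid) training job.
prompts = [format_example(e) for e in examples]
print(prompts[0])
```

Running this over a handful of rows and eyeballing the output catches missing fields or broken templates before any GPU time is spent.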
