I am new to Ollama. I am using the llama3 model and I would like to fine-tune it with Hugging Face tooling so it gives the responses I need. I am curious how to do that, and whether my data will be stored on Hugging Face's servers if I do.
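Something like the sketch below is what I have in mind: a minimal local LoRA fine-tuning run with `transformers` + `peft`. I am assuming the gated `meta-llama/Meta-Llama-3-8B` checkpoint (license accepted) and a local `my_data.jsonl` file as placeholders, and my understanding is that training locally like this keeps the data on my machine unless I explicitly push something to the Hub. Please correct me if any of this is off:

```python
# Minimal LoRA fine-tuning sketch (assumptions: gated meta-llama/Meta-Llama-3-8B
# access, a local my_data.jsonl with one {"text": "..."} example per line).
# Everything runs locally; nothing is uploaded unless push_to_hub() is called.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3-8B"  # assumption: license already accepted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
# Wrap the base model with LoRA adapters so only a small set of weights is trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# "my_data.jsonl" is a placeholder dataset path.
dataset = load_dataset("json", data_files="my_data.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./llama3-lora",          # checkpoints stay on local disk
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("./llama3-lora")  # adapters saved locally, not sent to the Hub
```

Is this roughly the right approach, and is it correct that the data only leaves my machine if I push the model or dataset to the Hub (or use a hosted service such as AutoTrain)?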