Need help running a fine-tuned model (paid work)

I have successfully fine-tuned a model, but I can’t get it to run properly and work with it in the app… No matter how many days I spend with the help of AI-written scripts, I can’t see the UI on the App tab… I want someone to make it work for me, and I’ll pay for it.

Thanks a lot in advance


Hmm… Without knowing the nature of the app, no one can answer…
(Which OS is it for? Is it a web app? Which framework? Does even the skeleton not run, or is it just the chatbot that doesn’t work? etc.)

This is the model: ilsp/Meltemi-7B-v1.5 on Hugging Face.
I have already fine-tuned it with my data, and I want to run the fine-tuned model in my Space so I can work with it on the App tab.

Sometimes I can load the base model, but I still can’t see the UI.


I’ve made a basic Chatbot sample for now. Feel free to copy and modify it.

Thanks, but I need help running the fine-tuned model I made.

  • Duplicate Space
    If possible, duplicate it to a Zero GPU Space, a paid GPU Space, or a local machine with a GPU. A model of that size will never run smoothly on CPU.
  • Set up HF access token in Space settings
    1. Go to your Hugging Face Space settings: https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME/settings
    2. Navigate to “Repository secrets” section
    3. Click “New secret”
    4. Add a new secret: enter a name and paste your Hugging Face access token as the value
    5. Click “Add secret”
  • Modify the source code of your Space
    In app.py, point MODEL_ID at your fine-tuned repo (a fuller sketch follows this list):
    MODEL_ID = "dark80naruto/dark80naruto_darnan3"
    

Roughly speaking, you can run that model (and others) using this procedure. For public models, the token settings aren’t needed.

Also, while the Transformers library I used this time is convenient, it isn’t particularly strong in terms of inference speed or memory efficiency. Therefore, I think it’s often more practical to build a chatbot using a backend other than Transformers (like TGI, vLLM, Ollama, Llama.cpp, or SGLang). You should be able to find implementation examples easily.
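
For example, this is roughly what the split looks like with one of those backends: the Space or local script only hosts the UI and forwards each request to the server’s OpenAI-compatible endpoint. The URL, port, and served model name in this sketch are assumptions you would adjust to your own setup.

```python
# Minimal sketch of the same chat UI backed by a separate inference server instead
# of Transformers. vLLM, TGI, Ollama, and llama.cpp's llama-server all expose an
# OpenAI-compatible /v1/chat/completions endpoint. The URL, port, and model name
# below are placeholders; adjust them to however you start the server, e.g.
#   vllm serve dark80naruto/dark80naruto_darnan3 --port 8000
import gradio as gr
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def respond(message, history):
    messages = [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})
    completion = client.chat.completions.create(
        model="dark80naruto/dark80naruto_darnan3",  # must match the name the server serves
        messages=messages,
        max_tokens=512,
    )
    return completion.choices[0].message.content

gr.ChatInterface(respond, type="messages").launch()
```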

Also, if you don’t have specific requirements for the UI or features, there should be several GUIs available for LLMs. (The GUI backend is often Llama.cpp.)
If you want to use one of those GUIs, you’ll likely need to convert the model weights to GGUF format.

Edit:
How to convert HF model to GGUF
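
Once you have the GGUF file, you can also drive it directly from Python with llama-cpp-python rather than a ready-made GUI. A small sketch, with a placeholder filename:

```python
# Minimal sketch of using a converted GGUF file from Python via llama-cpp-python.
# The filename and parameters are placeholders, not values from this thread.
from llama_cpp import Llama

llm = Llama(
    model_path="meltemi-finetune-q4_k_m.gguf",  # whatever name the conversion step produced
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Γειά σου! Ποιος είσαι;"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```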

You won’t believe how simple the solution to the problem that had been bothering me for days turned out to be. As I told you, the app was running but I didn’t have a UI to work with… It was a Chrome problem: I opened the site with Microsoft Edge and suddenly, bam, the UI was there and I could do my work without any problems!
