Llama/Mistral Finetuning for Inference API

I’m trying to fine-tune Llama (or Mistral) and host the result on the Inference API.

The code I found online for fine-tuning Llama uses PEFT, but I can’t seem to get the result onto the Inference API: the model repo has no widget or obvious way to make calls to it.

Is this because the code uses PEFT, and if so, can you point me to some code that would be Inference API compatible?