How to enable the Inference API for custom models?

Hello!

It seems like some text-to-text models have an Inference API and some don't. Why is that, and how can you enable the Inference API for your own custom model?

I uploaded a private model to Hugging Face just to test the Inference API. It's a fine-tuned version of the Llama 3 8B Instruct model, and the repo contains a config.json, a generation_config.json, several .safetensors shards, and a safetensors.index.json. But on the model's page I don't see the Inference option that other models show under "Deploy". Why is that, and what does a model need for the Inference API to be available?
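
For context, this is roughly how I was planning to query the model once the Inference API is enabled (a minimal sketch using huggingface_hub; the repo ID and token below are placeholders, not my actual values):

```python
from huggingface_hub import InferenceClient

# Placeholder repo ID and token for my private fine-tune
client = InferenceClient(
    model="my-username/my-llama3-8b-finetune",
    token="hf_xxx",
)

# Standard text-generation call against the hosted Inference API
response = client.text_generation(
    "Explain what the Inference API does.",
    max_new_tokens=64,
)
print(response)
```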

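One thing I wondered is whether the model card metadata matters here, e.g. whether a missing pipeline_tag could hide the widget. This is how I'd check and set it programmatically (again a sketch with placeholder names, I'm not certain this is the cause):

```python
from huggingface_hub import model_info, metadata_update

repo_id = "my-username/my-llama3-8b-finetune"  # placeholder repo ID

# Inspect what the Hub currently knows about the model
info = model_info(repo_id, token="hf_xxx")
print(info.pipeline_tag)   # e.g. None vs. "text-generation"
print(info.library_name)   # e.g. "transformers"

# If the tag is missing, add it to the model card metadata
metadata_update(
    repo_id,
    {"pipeline_tag": "text-generation"},
    token="hf_xxx",
)
```
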
Thanks