Hosted Inference API with custom model (YOLOv5)

Hi everyone!
I recently found out about the Hosted Inference API that one can enable in their model repository. I love the idea of being able to run a quick demo, so I was trying to set it up in my own model repo (joangog/pwmfd-yolov5 · Hugging Face). As I understand it, I need to upload two files: config.json and pytorch_model.bin. When I drop an image into the API widget, I get the following error:


The thing is, the config.json is required to specify a ‘model_type’. However, none of the available options fit my model. In my case, I trained the model from GitHub - ultralytics/yolov5: YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite with a custom dataset. This model is available on Torch Hub, and I can easily run inference in a Python script by loading it from there and importing my weights (.pt) and configs (.yaml). I renamed the weights file from .pt to .bin, and I’m still working on the config.json file. How exactly can I do inference on this kind of model using the Hosted Inference API feature? Is it impossible unless the model belongs to one of the model types in the image above? Is there a way to create a custom model type?
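For context, this is roughly how I run inference locally via Torch Hub (the weights and image paths below are placeholders for my own files):

```python
import torch


def load_model(weights: str = "best.pt"):
    """Load a custom-trained YOLOv5 model from Torch Hub.

    'best.pt' is a placeholder for the trained weights file;
    the 'custom' entry point is how ultralytics/yolov5 exposes
    user-trained checkpoints.
    """
    return torch.hub.load("ultralytics/yolov5", "custom", path=weights)


if __name__ == "__main__":
    model = load_model("best.pt")
    # Run inference on a single image (path is a placeholder)
    results = model("image.jpg")
    results.print()  # prints a summary of the detections
```

So the model itself works fine outside the Hub; my question is only about making it play nicely with the Hosted Inference API.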