Inference endpoint deployment with custom Dockerfile

Hello everyone,

I want to create an inference endpoint with a custom dockerfile.
The last two lines of the Dockerfile are:

```dockerfile
CMD ["uvicorn", "main:app", "--host", "", "--port", "7860"]
```

The deployment process has just failed.
Could you please guide me on how to modify the CMD line correctly, considering I have a single Dockerfile in the repo? What adjustments should be made to ensure a successful deployment of the inference endpoint?
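In case it helps others reading this, here is a minimal sketch of how I understand the corrected Dockerfile should look. Note the plain ASCII double quotes and two hyphens for the flags (the forum's smart quotes turned `--` into `–` above); the host value `0.0.0.0` and the `requirements.txt` contents are my assumptions, not something confirmed in this thread:

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# requirements.txt is assumed to list fastapi and uvicorn
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 7860

# Exec-form CMD with plain ASCII quotes and double-hyphen flags;
# "0.0.0.0" (my assumption) makes the server reachable from outside the container
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```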

Hi @nurcognizen! Did you find any solution to this problem of creating a custom Dockerfile?