Inference endpoint deployment with custom Dockerfile

Hello everyone,

I want to create an inference endpoint with a custom Dockerfile.
The last two lines of the Dockerfile are:
EXPOSE 7860

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]

With this, the deployment process fails.
Could you please guide me on how to modify the CMD line correctly, considering I have a single handler.py in the repo? What adjustments should be made to ensure a successful deployment of the inference endpoint with the Dockerfile?
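Not an official answer, but two things worth checking. First, `main:app` tells uvicorn to import an `app` object from a file named `main.py`; if the repo only contains `handler.py`, that import will fail at container start. Second, if `handler.py` defines an `EndpointHandler` class, Inference Endpoints' default container can usually pick it up without any custom Dockerfile at all. If you do need a custom image, a minimal sketch might look like this (the `main.py` and `requirements.txt` filenames are assumptions, and the exposed port must match the port configured in the endpoint's container settings):

```dockerfile
FROM python:3.10-slim

WORKDIR /app

# requirements.txt is assumed to list fastapi and uvicorn
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, including handler.py and the
# main.py module that defines the FastAPI `app` object.
COPY . .

EXPOSE 7860

# Exec-form CMD must be a valid JSON array: plain ASCII double
# quotes and double hyphens (--host, --port), not typographic ones.
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```

One common pitfall when pasting a Dockerfile into a forum or word processor is that straight quotes and `--` get auto-converted to curly quotes and en-dashes, which Docker cannot parse as exec-form JSON.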

Hi @nurcognizen! Did you find any solution to this problem of creating a custom Dockerfile?