How can I use the "Hosted inference API" in my model card?

I uploaded my model (MNIST) to this repository. I would like to be able to use the "Hosted inference API" as shown below, but I'm not sure how to configure it. Can you help me?
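From the Hub documentation, my understanding is that the inference widget is driven by the YAML metadata block at the top of the model card (README.md). Here is a sketch of what I have tried so far; the `pipeline_tag`, `library_name`, and tag values are my guesses for an MNIST digit classifier, so please correct me if they are wrong:

```yaml
# YAML front matter at the very top of README.md (between the --- markers).
# pipeline_tag tells the Hub which widget/inference pipeline to load;
# library_name tells it which framework to use to load the weights.
---
pipeline_tag: image-classification
library_name: keras   # my guess; use pytorch/transformers etc. as appropriate
tags:
  - image-classification
  - mnist
---
```

Do I also need the model files to be in a layout that the chosen library can load directly, or is the metadata above enough to enable the widget?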