On-demand GPU model hosting?

Does Inferless provide serverless GPUs? Are your demos intended for running inference?