Executing pinned inference model

Hi Team,
I cannot load my model for inference. This is our organization link:

Could you pin a GPU for our organization so we can check inference?
Best Regards,
Naman Pundir
Foundry Digital

Hi @namanpun,

Unfortunately, pinning models is no longer supported for new clients; we recommend looking into the Inference Endpoints solution instead.

Model pinning remains available only for existing customers.

If you’re interested in having a model that you can readily deploy for inference, take a look at our Inference Endpoints solution! It is a secure production environment with dedicated and autoscaling infrastructure, and you have the flexibility to choose between CPU and GPU resources.
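Once an endpoint is deployed, it is typically queried over HTTPS with a bearer token. Below is a minimal sketch, assuming a hypothetical endpoint URL and access token (both placeholders you would replace with the values shown on your endpoint's page); the actual network call is left commented out since it requires a live endpoint.

```python
import json

# Placeholders -- replace with your endpoint's URL and a User Access
# Token that has permission to call it.
ENDPOINT_URL = "https://example.endpoints.huggingface.cloud"
HF_TOKEN = "hf_xxx"

def build_request(inputs: str, token: str = HF_TOKEN):
    """Build the headers and JSON body for a simple inference query."""
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return headers, body

headers, body = build_request("Hello, world!")
print(headers["Content-Type"])  # application/json

# To actually send the request against a live endpoint:
# import urllib.request
# req = urllib.request.Request(ENDPOINT_URL, data=body, headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The same request shape works from any HTTP client; only the URL and token differ per endpoint.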
