Backend for the hub models executed by widgets

Hi, what backend is used when I run my model through the widget on the model card page? Is it a GPU or a CPU?

This uses the Inference API, which runs on CPU by default. Subscribers can get GPUs and pin their models so they load very quickly. You can read more about it at Inference API - Hugging Face and Overview — API Inference documentation.
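For what it's worth, the widget calls the same hosted Inference API endpoint you can hit yourself over HTTP. Here's a minimal sketch using `requests`; the model id and token are placeholders, and the exact response shape depends on the model's task:

```python
import requests

# The model-card widget queries the hosted Inference API under the hood.
# Model id and token below are placeholders -- substitute your own.
API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def query(payload):
    # POST the input payload; on the free (CPU) tier the first call may
    # return a 503 while the model loads. Pinned/GPU-backed models on a
    # paid plan stay warm and respond faster.
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Example fill-mask request (uncomment to actually call the API):
# result = query({"inputs": "Paris is the [MASK] of France."})
```

The first request after a cold start can take a while on CPU, which is exactly the latency difference pinning is meant to remove.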