What Are the Rate Limits for the Inference API?

Hello.

I am running inference on publicly available models via huggingface_hub.InferenceClient. I just upgraded my account to Pro, but I am still running into rate limits (HTTP status 429). What are the rate limits for each tier?

Free:
Pro:
Enterprise:
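
For reference, here is a minimal sketch of the kind of call that hits the limit; the model name and token below are placeholders, not my actual setup:

```python
from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

# Placeholder token; any authenticated Pro account shows the same behavior.
client = InferenceClient(token="hf_...")

try:
    result = client.text_generation(
        "Explain rate limiting in one sentence.",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder public model
        max_new_tokens=50,
    )
    print(result)
except HfHubHTTPError as err:
    # Under sustained load this fails with 429 (Too Many Requests).
    print(err.response.status_code, err)
```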

I haven’t seen the limits stated anywhere in the documentation, nor answered in similar forum threads.

Thank you in advance for providing this information.