Facing Rate Limit issues on the inference API

Hi,

I recently signed up for the Hugging Face Pro plan, but I seem to be running into API rate limit issues even though I've only made 5-6 API calls. I am passing the access token in the Authorization header, but the response I'm getting is '{"error":"Rate limit reached. Please log in or use your apiToken"}'.

The API endpoint and model I'm using is https://api-inference.huggingface.co/models/cardiffnlp/tweet-topic-19-multi
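For reference, this is roughly how I'm making the request (a minimal sketch in Python; the token value and input text below are placeholders, not my real ones):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/cardiffnlp/tweet-topic-19-multi"

# "hf_xxx" is a placeholder for my Pro account access token.
headers = {"Authorization": "Bearer hf_xxx"}

# Example tweet text just for illustration.
payload = {"inputs": "Just watched the game last night, what a finish!"}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.status_code, response.json())
```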

Could you please let me know if I'm missing a step? Any help would be appreciated.

Regards
Ken


Got the same issue right now :wink: