Rate limit lowered?

Hello, when I use the Inference API models, I'm hitting the rate limit sooner than before — after only 4 or 5 requests. Has the rate limit been lowered?