There have been a lot of reports of 429 errors for about two weeks, so it is possible that the rate limiting has been tightened. As an alternative, you can also download with the Python huggingface_hub library; a short sketch follows the linked issue below.
Issue opened 02:57AM - 21 Nov 24 UTC, closed 10:38AM - 22 Nov 24 UTC · label: bug
### System Info
Hi team,
From yesterday, we have been seeing 429 errors when trying to access different models.
Here's an example of the error we're seeing:
```
File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for URL: https://huggingface.co/meta-llama/Meta-Llama-3-8B/resolve/main/config.json
```
We suspect we are being rate-limited. We can share the IPs with you.
Could we look into resolving this issue?
Thank you so much!
cc @amyeroberts @philschmid @sgugger
### Who can help?
@amyeroberts @philschmid @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
We ran into the issues in our internal runs.
### Expected behavior
Run without hitting the 429 errors.
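Here is a minimal sketch of the huggingface_hub approach, assuming the library is installed (`pip install huggingface_hub`) and you have a valid read token. The repo_id and filename simply mirror the traceback in the linked issue; swap in your own model and token.
```
# Minimal sketch: download files via huggingface_hub instead of raw HTTP requests.
# repo_id/filename are taken from the traceback above; "hf_xxx" is a placeholder token.
from huggingface_hub import hf_hub_download, snapshot_download

# Download a single file (the config.json from the error message).
config_path = hf_hub_download(
    repo_id="meta-llama/Meta-Llama-3-8B",
    filename="config.json",
    token="hf_xxx",  # read token; gated repos like Meta-Llama require one
)
print(config_path)

# Or fetch the whole repository snapshot into the local cache.
snapshot_path = snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B",
    token="hf_xxx",
)
print(snapshot_path)
```
If you log in once with `huggingface_hub.login()` or `huggingface-cli login`, the token argument can be omitted.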
Greetings from the edge of the galaxy—or at least from my Windows 11 rig, where I’m battling an AutoTrain crisis worthy of a sci-fi blockbuster. I’m armed with a write token, PowerShell wizardry, and a dream of training models without the universe imploding, but alas, your API is throwing a 429 Too Many Requests tantrum, followed by a melodramatic “Invalid token” sob story. I’m pretty sure I didn’t sign up for this level of intergalactic drama, so let’s unpack this mess.
Here’s the saga in all …