Wget timed out in CI/CD pipeline

Hi. As part of a CI/CD pipeline, I run the following command:

wget https://huggingface.co/Mozilla/Llama-3.2-1B-Instruct-llamafile/resolve/main/Llama-3.2-1B-Instruct.Q6_K.llamafile -O /llm.llamafile

The model link works from my local machine and from two remote servers I have access to, but on the server where the pipeline runs it hangs indefinitely at "connecting to huggingface.co" and eventually fails with a timeout. Could it be that this server is blacklisted somewhere? It has not been used heavily before, and this is also the first time I am downloading something of this size from HF automatically.
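For context, the pipeline step is nothing more than this one download. Something like the variant below, with an explicit timeout and retry limit (standard wget options, not flags I currently set), would at least make the step fail quickly instead of hanging:

wget --tries=3 --timeout=30 \
  https://huggingface.co/Mozilla/Llama-3.2-1B-Instruct-llamafile/resolve/main/Llama-3.2-1B-Instruct.Q6_K.llamafile \
  -O /llm.llamafile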

Any help is appreciated. Thanks.


Hmmm, not sure whether this is a network routing/blacklist issue or a client-side configuration issue on that server…
But the fact that it hangs until the timeout instead of failing immediately is unusual…
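A few quick checks on the failing server might narrow it down (assuming dig and curl are available there; these are generic commands, nothing specific to your pipeline):

# what the server resolves the host to
dig huggingface.co A +short
dig huggingface.co AAAA +short

# whether it can reach the host over IPv4 and over IPv6
curl -4 -sI --max-time 15 https://huggingface.co | head -n 1
curl -6 -sI --max-time 15 https://huggingface.co | head -n 1

If the IPv4 check succeeds while the IPv6 one hangs, that points to broken IPv6 connectivity or routing rather than a blacklist.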

Thanks. Here is an update:

I saw that huggingface.co was being resolved to an IPv6 address. I had suspected something like this, so I added --inet4-only to wget to force IPv4. The error then changed to a certificate verification failure, so I also had to add --no-check-certificate, and the download finally worked!
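For completeness, the command that works now is just the original one with those two flags added (same URL and output path):

wget --inet4-only --no-check-certificate \
  https://huggingface.co/Mozilla/Llama-3.2-1B-Instruct-llamafile/resolve/main/Llama-3.2-1B-Instruct.Q6_K.llamafile \
  -O /llm.llamafile

Note that --no-check-certificate disables TLS certificate verification entirely, so I am treating it as a temporary workaround until the underlying resolver issue is sorted out.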

This seems to be something with the local DNS resolver: /etc/resolv.conf on the failing server points to 8.8.8.8, while the servers where wget worked out of the box use a different resolver.
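For anyone hitting the same thing, the comparison amounts to something like this (dig comes from the dnsutils/bind-utils package; 8.8.8.8 is the resolver configured on the failing server):

cat /etc/resolv.conf

# compare what the configured resolver returns for IPv4 vs IPv6
dig @8.8.8.8 huggingface.co A +short
dig @8.8.8.8 huggingface.co AAAA +short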

I will contact my provider to check with them. I thought this information may be helpful to someone else. Thanks.


If I remember correctly, 8.8.8.8 is Google's public DNS. It's not something I'd expect to fail, but well, that's the current suspect.
Anyway, it's good to have found what seems to be the cause of the problem.