Am I blacklisted by the Hugging Face Hub?

I have been testing my Hugging Face Space against many models from the Hugging Face Hub.
But lately I keep getting a connection error.
I suspect the problem might be caused by downloading too many models from the same computer, and by downloading the same models a few times (due to a bug that I have since fixed).
I also tried AutoModelForCausalLM.from_pretrained directly, but got the same error; a rough sketch of that attempt is below.
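For reference, the AutoModelForCausalLM attempt looked roughly like this (a minimal sketch; the exact arguments may have differed slightly):

from transformers import AutoModelForCausalLM

# One of the models that fails, taken from the traceback below.
model = AutoModelForCausalLM.from_pretrained("togethercomputer/GPT-JT-6B-v1")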
Thanks in advance!

The code:

from huggingface_hub import snapshot_download


def download_repository(name: str) -> None:
    """Downloads a repository from the Hugging Face Hub."""
    number_of_seconds_in_a_day: int = 86_400
    snapshot_download(
        repo_id=name,
        # A generous ETag timeout (24 hours) so the metadata request does not time out.
        etag_timeout=number_of_seconds_in_a_day,
        resume_download=True,
    )
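
Until I understand the root cause, one workaround I am considering is a simple retry around the call, since the failure below is a read timeout from the CDN that requests re-raises as a ConnectionError (a rough, untested sketch; the function name and back-off values are just placeholders):

import time

from huggingface_hub import snapshot_download
from requests.exceptions import ConnectionError as RequestsConnectionError


def download_repository_with_retries(name: str, max_attempts: int = 3) -> None:
    """Retries snapshot_download a few times when the CDN read times out."""
    for attempt in range(1, max_attempts + 1):
        try:
            snapshot_download(repo_id=name, resume_download=True)
            return
        except RequestsConnectionError:
            # requests wraps the urllib3 ReadTimeoutError in a ConnectionError,
            # which is what the traceback below ends with.
            if attempt == max_attempts:
                raise
            time.sleep(30 * attempt)  # back off a bit before retrying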

The full test output and traceback:

FAILED     [ 33%]
Starts downloading model: togethercomputer/GPT-JT-6B-v1 from the internet.
Fetching 10 files:   0%|          | 0/10 [00:00<?, ?it/s]
Downloading (…)a8051/.gitattributes: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.48k/1.48k [00:00<00:00, 1.87MB/s]
Fetching 10 files:  10%|β–ˆ         | 1/10 [00:01<00:12,  1.34s/it]
Downloading (…)51/added_tokens.json:   0%|          | 0.00/4.33k [00:00<?, ?B/s]
Downloading (…)51/added_tokens.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.33k/4.33k [00:00<00:00, 3.03MB/s]
Downloading (…)cial_tokens_map.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 438/438 [00:00<00:00, 371kB/s]
Downloading (…)8afdea8051/README.md:   0%|          | 0.00/6.84k [00:00<?, ?B/s]
Downloading (…)8afdea8051/README.md: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6.84k/6.84k [00:00<00:00, 4.94MB/s]
Downloading (…)fdea8051/config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.00k/1.00k [00:00<00:00, 407kB/s]
Downloading (…)a8051/tokenizer.json:   0%|          | 0.00/2.14M [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 763/763 [00:00<00:00, 972kB/s]
Downloading (…)afdea8051/vocab.json:   0%|          | 0.00/798k [00:00<?, ?B/s]
Downloading (…)afdea8051/merges.txt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 456k/456k [00:01<00:00, 320kB/s]
Fetching 10 files:  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     | 5/10 [00:02<00:02,  1.97it/s]
Downloading (…)afdea8051/vocab.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 798k/798k [00:01<00:00, 445kB/s]
Downloading (…)a8051/tokenizer.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.14M/2.14M [00:03<00:00, 606kB/s]
Downloading (…)"pytorch_model.bin";:   0%|          | 0.00/12.2G [00:00<?, ?B/s]
...
Downloading (…)"pytorch_model.bin";:  16%|β–ˆβ–Œ        | 1.92G/12.2G [03:28<27:59, 6.13MB/s]
Fetching 10 files:  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     | 5/10 [03:47<03:47, 45.43s/it]

tests.py:54 (test_create_pipeline[togethercomputer/GPT-JT-6B-v1])
self = <urllib3.response.HTTPResponse object at 0x7f83136e9ae0>

    @contextmanager
    def _error_catcher(self):
        """
        Catch low-level python exceptions, instead re-raising urllib3
        variants, so that low-level exceptions are not leaked in the
        high-level api.
    
        On exit, release the connection back to the pool.
        """
        clean_exit = False
    
        try:
            try:
>               yield

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/urllib3/response.py:444: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <urllib3.response.HTTPResponse object at 0x7f83136e9ae0>, amt = 10485760
decode_content = True, cache_content = False

    def read(self, amt=None, decode_content=None, cache_content=False):
        """
        Similar to :meth:`http.client.HTTPResponse.read`, but with two additional
        parameters: ``decode_content`` and ``cache_content``.
    
        :param amt:
            How much of the content to read. If specified, caching is skipped
            because it doesn't make sense to cache partial content as the full
            response.
    
        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
    
        :param cache_content:
            If True, will save the returned data such that the same result is
            returned despite of the state of the underlying file object. This
            is useful if you want the ``.data`` property to continue working
            after having ``.read()`` the file object. (Overridden if ``amt`` is
            set.)
        """
        self._init_decoder()
        if decode_content is None:
            decode_content = self.decode_content
    
        if self._fp is None:
            return
    
        flush_decoder = False
        fp_closed = getattr(self._fp, "closed", False)
    
        with self._error_catcher():
>           data = self._fp_read(amt) if not fp_closed else b""

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/urllib3/response.py:567: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <urllib3.response.HTTPResponse object at 0x7f83136e9ae0>, amt = 10485760

    def _fp_read(self, amt):
        """
        Read a response with the thought that reading the number of bytes
        larger than can fit in a 32-bit int at a time via SSL in some
        known cases leads to an overflow error that has to be prevented
        if `amt` or `self.length_remaining` indicate that a problem may
        happen.
    
        The known cases:
          * 3.8 <= CPython < 3.9.7 because of a bug
            https://github.com/urllib3/urllib3/issues/2513#issuecomment-1152559900.
          * urllib3 injected with pyOpenSSL-backed SSL-support.
          * CPython < 3.10 only when `amt` does not fit 32-bit int.
        """
        assert self._fp
        c_int_max = 2 ** 31 - 1
        if (
            (
                (amt and amt > c_int_max)
                or (self.length_remaining and self.length_remaining > c_int_max)
            )
            and not util.IS_SECURETRANSPORT
            and (util.IS_PYOPENSSL or sys.version_info < (3, 10))
        ):
            buffer = io.BytesIO()
            # Besides `max_chunk_amt` being a maximum chunk size, it
            # affects memory overhead of reading a response by this
            # method in CPython.
            # `c_int_max` equal to 2 GiB - 1 byte is the actual maximum
            # chunk size that does not lead to an overflow error, but
            # 256 MiB is a compromise.
            max_chunk_amt = 2 ** 28
            while amt is None or amt != 0:
                if amt is not None:
                    chunk_amt = min(amt, max_chunk_amt)
                    amt -= chunk_amt
                else:
                    chunk_amt = max_chunk_amt
                data = self._fp.read(chunk_amt)
                if not data:
                    break
                buffer.write(data)
                del data  # to reduce peak memory usage by `max_chunk_amt`.
            return buffer.getvalue()
        else:
            # StringIO doesn't like amt=None
>           return self._fp.read(amt) if amt is not None else self._fp.read()

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/urllib3/response.py:533: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <http.client.HTTPResponse object at 0x7f830e56b6d0>, amt = 10485760

    def read(self, amt=None):
        if self.fp is None:
            return b""
    
        if self._method == "HEAD":
            self._close_conn()
            return b""
    
        if self.chunked:
            return self._read_chunked(amt)
    
        if amt is not None:
            if self.length is not None and amt > self.length:
                # clip the read to the "end of response"
                amt = self.length
>           s = self.fp.read(amt)

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/http/client.py:465: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <socket.SocketIO object at 0x7f830e56b010>
b = <memory at 0x7f83100b1240>

    def readinto(self, b):
        """Read up to len(b) bytes into the writable buffer *b* and return
        the number of bytes read.  If the socket is non-blocking and no bytes
        are available, None is returned.
    
        If *b* is non-empty, a 0 return value indicates that the connection
        was shutdown at the other end.
        """
        self._checkClosed()
        self._checkReadable()
        if self._timeout_occurred:
            raise OSError("cannot read from timed out object")
        while True:
            try:
>               return self._sock.recv_into(b)

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/socket.py:705: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET6, type=SocketKind.SOCK_STREAM, proto=6>
buffer = <memory at 0x7f83100b1240>, nbytes = 3203072, flags = 0

    def recv_into(self, buffer, nbytes=None, flags=0):
        self._checkClosed()
        if buffer and (nbytes is None):
            nbytes = len(buffer)
        elif nbytes is None:
            nbytes = 1024
        if self._sslobj is not None:
            if flags != 0:
                raise ValueError(
                  "non-zero flags not allowed in calls to recv_into() on %s" %
                  self.__class__)
>           return self.read(nbytes, buffer)

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/ssl.py:1274: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET6, type=SocketKind.SOCK_STREAM, proto=6>
len = 3203072, buffer = <memory at 0x7f83100b1240>

    def read(self, len=1024, buffer=None):
        """Read up to LEN bytes and return them.
        Return zero-length string on EOF."""
    
        self._checkClosed()
        if self._sslobj is None:
            raise ValueError("Read on closed or unwrapped SSL socket.")
        try:
            if buffer is not None:
>               return self._sslobj.read(len, buffer)
E               TimeoutError: The read operation timed out

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/ssl.py:1130: TimeoutError

During handling of the above exception, another exception occurred:

    def generate():
        # Special case for urllib3.
        if hasattr(self.raw, "stream"):
            try:
>               yield from self.raw.stream(chunk_size, decode_content=True)

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/requests/models.py:816: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <urllib3.response.HTTPResponse object at 0x7f83136e9ae0>, amt = 10485760
decode_content = True

    def stream(self, amt=2 ** 16, decode_content=None):
        """
        A generator wrapper for the read() method. A call will block until
        ``amt`` bytes have been read from the connection or until the
        connection is closed.
    
        :param amt:
            How much of the content to read. The generator will return up to
            much data per iteration, but may return less. This is particularly
            likely when using compressed data. However, the empty string will
            never be returned.
    
        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
        """
        if self.chunked and self.supports_chunked_reads():
            for line in self.read_chunked(amt, decode_content=decode_content):
                yield line
        else:
            while not is_fp_closed(self._fp):
>               data = self.read(amt=amt, decode_content=decode_content)

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/urllib3/response.py:628: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <urllib3.response.HTTPResponse object at 0x7f83136e9ae0>, amt = 10485760
decode_content = True, cache_content = False

    def read(self, amt=None, decode_content=None, cache_content=False):
        """
        Similar to :meth:`http.client.HTTPResponse.read`, but with two additional
        parameters: ``decode_content`` and ``cache_content``.
    
        :param amt:
            How much of the content to read. If specified, caching is skipped
            because it doesn't make sense to cache partial content as the full
            response.
    
        :param decode_content:
            If True, will attempt to decode the body based on the
            'content-encoding' header.
    
        :param cache_content:
            If True, will save the returned data such that the same result is
            returned despite of the state of the underlying file object. This
            is useful if you want the ``.data`` property to continue working
            after having ``.read()`` the file object. (Overridden if ``amt`` is
            set.)
        """
        self._init_decoder()
        if decode_content is None:
            decode_content = self.decode_content
    
        if self._fp is None:
            return
    
        flush_decoder = False
        fp_closed = getattr(self._fp, "closed", False)
    
>       with self._error_catcher():

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/urllib3/response.py:566: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <contextlib._GeneratorContextManager object at 0x7f83136e9ed0>
typ = <class 'TimeoutError'>
value = TimeoutError('The read operation timed out')
traceback = <traceback object at 0x7f830fd7b540>

    def __exit__(self, typ, value, traceback):
        if typ is None:
            try:
                next(self.gen)
            except StopIteration:
                return False
            else:
                raise RuntimeError("generator didn't stop")
        else:
            if value is None:
                # Need to force instantiation so we can reliably
                # tell if we get the same exception back
                value = typ()
            try:
>               self.gen.throw(typ, value, traceback)

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/contextlib.py:153: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <urllib3.response.HTTPResponse object at 0x7f83136e9ae0>

    @contextmanager
    def _error_catcher(self):
        """
        Catch low-level python exceptions, instead re-raising urllib3
        variants, so that low-level exceptions are not leaked in the
        high-level api.
    
        On exit, release the connection back to the pool.
        """
        clean_exit = False
    
        try:
            try:
                yield
    
            except SocketTimeout:
                # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but
                # there is yet no clean way to get at it from this context.
>               raise ReadTimeoutError(self._pool, None, "Read timed out.")
E               urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/urllib3/response.py:449: ReadTimeoutError

During handling of the above exception, another exception occurred:

model_name = 'togethercomputer/GPT-JT-6B-v1'

    @pytest.mark.parametrize(
        "model_name",
        get_supported_model_names(
            min_number_of_downloads=1000,
            min_number_of_likes=100,
        )
    )
    def test_create_pipeline(model_name: str):
>       pipeline: GroupedSamplingPipeLine = create_pipeline(model_name, 5)

tests.py:63: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
hanlde_form_submit.py:24: in create_pipeline
    download_repository(model_name)
download_repo.py:7: in download_repository
    snapshot_download(
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn
    return fn(*args, **kwargs)
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py:215: in snapshot_download
    thread_map(
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/tqdm/contrib/concurrent.py:94: in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/tqdm/contrib/concurrent.py:76: in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/tqdm/std.py:1195: in __iter__
    for obj in iterable:
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/concurrent/futures/_base.py:621: in result_iterator
    yield _result_or_cancel(fs.pop())
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/concurrent/futures/_base.py:319: in _result_or_cancel
    return fut.result(timeout)
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/concurrent/futures/_base.py:458: in result
    return self.__get_result()
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py:194: in _inner_hf_hub_download
    return hf_hub_download(
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn
    return fn(*args, **kwargs)
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/huggingface_hub/file_download.py:1282: in hf_hub_download
    http_get(
../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/huggingface_hub/file_download.py:530: in http_get
    for chunk in r.iter_content(chunk_size=10 * 1024 * 1024):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    def generate():
        # Special case for urllib3.
        if hasattr(self.raw, "stream"):
            try:
                yield from self.raw.stream(chunk_size, decode_content=True)
            except ProtocolError as e:
                raise ChunkedEncodingError(e)
            except DecodeError as e:
                raise ContentDecodingError(e)
            except ReadTimeoutError as e:
>               raise ConnectionError(e)
E               requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.

../../miniconda3/envs/grouped-sampling-demo/lib/python3.10/site-packages/requests/models.py:822: ConnectionError