OSError: Unable to load model distil-whisper/distil-small.en

Runtime error
ocal/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1406, in hf_hub_download
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/user/app/app.py", line 10, in <module>
asr = pipeline(task="automatic-speech-recognition",
File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 782, in pipeline
config = AutoConfig.from_pretrained(
File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1100, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 634, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/user/.local/lib/python3.10/site-packages/transformers/configuration_utils.py", line 689, in _get_config_dict
resolved_config_file = cached_file(
File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 425, in cached_file
raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like distil-whisper/distil-small.en is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'Installation'.

Can anyone suggest how to resolve this error?
I am using transformers version 4.37.2, which was a common suggestion.

The pipeline calls I use to load the models:
import torch
from transformers import pipeline

# Audio to text
asr = pipeline(task="automatic-speech-recognition",
               model="distil-whisper/distil-small.en")

# Text to text
translator = pipeline(task="translation",
                      model="facebook/nllb-200-distilled-600M",
                      torch_dtype=torch.bfloat16)

# Text to audio
pipe = pipeline("text-to-speech", model="suno/bark-small",
                torch_dtype=torch.bfloat16)
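Since the traceback says the library could not reach https://huggingface.co and the model is not in the local cache, it can help to confirm network access before building the pipelines. Here is a minimal stdlib-only sketch (the `hub_reachable` helper name is my own, not part of any library):

```python
import urllib.request


def hub_reachable(url: str = "https://huggingface.co", timeout: float = 5.0) -> bool:
    """Return True if the given URL answers an HTTP request, else False."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except OSError:  # URLError, DNS failures, and timeouts all subclass OSError
        return False


if not hub_reachable():
    print("Cannot reach huggingface.co - pre-download the models "
          "on a connected machine or run transformers in offline mode.")
```

If the environment is offline by design, the usual approach is to populate the cache once from a connected machine and then set the `HF_HUB_OFFLINE=1` environment variable so transformers resolves everything locally instead of hitting the Hub.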