OSError: /repository does not appear to have a file named config.json. Checkout 'https://huggingface.co//repository/None' for available files

I get the following error when trying to deploy an endpoint from radames/stable-diffusion-2-1-unclip-img2img:

OSError: /repository does not appear to have a file named config.json. Checkout 'https://huggingface.co//repository/None' for available files.

Hi @aifreelancer, is your query resolved?
If not, @philschmid, can you help us out in this case? I'm getting the same issue when I load the "google/flan-ul2" model on SageMaker.

I was running my experiment on an ml.g5.xlarge SageMaker notebook instance with a 75 GB EBS volume.

Tried pip install transformers==4.29.0 bitsandbytes==0.39.1 accelerate==0.20

import torch, os
from transformers import pipeline
import pandas as pd

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device) #cuda

Loading in reduced precision (bfloat16):

generate_text = pipeline(model="google/flan-ul2", trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto", do_sample=False, max_length=350)

Error traceback:


OSError                                   Traceback (most recent call last)
Cell In[21], line 2
      1 # Half precision (FP16)
----> 2 generate_text = pipeline(model="google/flan-ul2", trust_remote_code=True, torch_dtype=torch.bfloat16,
      3                          device_map="auto", do_sample=False, max_length = 350)

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/pipelines/__init__.py:705, in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
    703     hub_kwargs["_commit_hash"] = config._commit_hash
    704 elif config is None and isinstance(model, str):
--> 705     config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)
    706     hub_kwargs["_commit_hash"] = config._commit_hash
    708 custom_tasks = {}

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:928, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    926 kwargs["name_or_path"] = pretrained_model_name_or_path
    927 trust_remote_code = kwargs.pop("trust_remote_code", False)
--> 928 config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
    929 if "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]:
    930     if not trust_remote_code:

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/configuration_utils.py:574, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    572 original_kwargs = copy.deepcopy(kwargs)
    573 # Get config dict associated with the base config file
--> 574 config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
    575 if "_commit_hash" in config_dict:
    576     original_kwargs["_commit_hash"] = config_dict["_commit_hash"]

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/configuration_utils.py:629, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    625 configuration_file = kwargs.pop("_configuration_file", CONFIG_NAME)
    627 try:
    628     # Load from local folder or from cache or download from model Hub and cache
--> 629     resolved_config_file = cached_file(
    630         pretrained_model_name_or_path,
    631         configuration_file,
    632         cache_dir=cache_dir,
    633         force_download=force_download,
    634         proxies=proxies,
    635         resume_download=resume_download,
    636         local_files_only=local_files_only,
    637         use_auth_token=use_auth_token,
    638         user_agent=user_agent,
    639         revision=revision,
    640         subfolder=subfolder,
    641         _commit_hash=commit_hash,
    642     )
    643     commit_hash = extract_commit_hash(resolved_config_file, commit_hash)
    644 except EnvironmentError:
    645     # Raise any environment error raised by cached_file. It will have a helpful error message adapted to
    646     # the original exception.

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/hub.py:388, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash)
    386 if not os.path.isfile(resolved_file):
    387     if _raise_exceptions_for_missing_entries:
--> 388         raise EnvironmentError(
    389             f"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout "
    390             f"'https://huggingface.co/{path_or_repo_id}/{revision}' for available files."
    391         )
    392     else:
    393         return None

OSError: google/flan-ul2 does not appear to have a file named config.json. Checkout 'https://huggingface.co/google/flan-ul2/None' for available files.

I’m getting the exact same error when trying to deploy the nvidia/stt_en_conformer_transducer_xlarge model to an Inference Endpoint.

Hi @taus-developer, that's expected, given that the repo you link to does not contain a checkpoint compatible with HF Transformers. The model is a NeMo model, so you'd need to create a custom handler if you want to create an endpoint for it.
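
In case it's useful: a custom handler for Inference Endpoints is a handler.py at the root of the repository that exposes an EndpointHandler class with __init__ and __call__. A minimal sketch for a NeMo ASR checkpoint like the one above could look as follows. This assumes nemo_toolkit[asr] is listed in a requirements.txt next to the handler, and the exact request payload format and the return type of transcribe() depend on your setup and NeMo version, so treat it as an outline rather than a tested handler:

# handler.py (placed at the root of the endpoint repository)
import tempfile
from typing import Any, Dict

import nemo.collections.asr as nemo_asr


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Download and restore the NeMo checkpoint from the Hub
        self.model = nemo_asr.models.ASRModel.from_pretrained(
            model_name="nvidia/stt_en_conformer_transducer_xlarge"
        )

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # Inference Endpoints put the request payload under "inputs";
        # for audio requests this is assumed to be the raw bytes of the uploaded file
        audio_bytes = data["inputs"]
        with tempfile.NamedTemporaryFile(suffix=".wav") as f:
            f.write(audio_bytes)
            f.flush()
            # For transducer models, transcribe() may return (best, all) hypotheses
            # depending on the NeMo version, so the result may need unpacking
            transcriptions = self.model.transcribe([f.name])
        return {"text": transcriptions}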


Thanks for the info @nielsr! I somehow missed that connection.

Hello, I'm facing the same problem when running Docker with a DPO model. My model is not hosted on the Hugging Face Hub. How did you solve the issue? @taus-developer

@nielsr Can I create a custom handler if my model is not hosted on the Hugging Face Hub?
I have a DPO model (based on a fine-tuned Mistral 7B) and I need to run inference on it.
When I do docker run, it fails to initialize the model with the error message (DPO_output_mistral_32k is the DPO output folder):
OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout 'https://huggingface.co//data/DPO_output_mistral_32k/None' for available files.

I'm running the training on Amazon Linux. Can I create a custom handler on my system to run inference?
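
In case it helps: the error only means the container cannot find a config.json inside /data/DPO_output_mistral_32k, so before reaching for a custom handler it may be enough to turn that folder into a regular Transformers checkpoint. A rough sketch, assuming the DPO output is either a PEFT/LoRA adapter or a full model and that the tokenizer was saved alongside it (the paths and the merge step are assumptions about your setup, not something confirmed in this thread):

from transformers import AutoModelForCausalLM, AutoTokenizer

src = "DPO_output_mistral_32k"         # local training output (folder name from the post above)
dst = "/data/DPO_output_mistral_32k"   # folder the container is pointed at

try:
    # If the DPO run produced a PEFT/LoRA adapter, merge it into the base Mistral weights first
    from peft import AutoPeftModelForCausalLM
    model = AutoPeftModelForCausalLM.from_pretrained(src)
    model = model.merge_and_unload()
except (ImportError, ValueError):
    # Otherwise assume the folder already contains a full Transformers checkpoint
    model = AutoModelForCausalLM.from_pretrained(src)

tokenizer = AutoTokenizer.from_pretrained(src)

# Re-save so that config.json, the weights and the tokenizer files all end up in the mounted folder
model.save_pretrained(dst)
tokenizer.save_pretrained(dst)

If both loading paths fail, the training script most likely never called save_pretrained, and the missing files would need to be written from there.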