AutoModel resolution outside of HF ecosystem

Hey guys,

In the current API, unless I am doing something wrong, I have to specify each remote file (config, PyTorch weights) explicitly in order to load a custom model hosted outside the HF repo ecosystem.

For example, to load a custom model private_model_name from a hypothetical remote repo at example.com, I need to do the following:

from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("https://example.com/models/private_model_name/config.json")
model = AutoModel.from_pretrained("https://example.com/models/private_model_name/pytorch_model.bin", config=config)

Why not extend the name resolution capabilities that already exist for the HF Hub to all remote repos, so we can do:

model = AutoModel.from_pretrained("https://example.com/models/private_model_name/")

Can we add the same name resolution assumption for non-HF model repos?

I think the last command you typed is supposed to work, as long as you pass use_auth_token=True to use your Hugging Face token (you need to be logged in via transformers-cli with an account that has permission to access the private model, though).
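
For instance, something like this should work for a private model on the Hub (the repo id below is hypothetical):

from transformers import AutoModel

# use_auth_token=True picks up the token stored by `transformers-cli login`.
# "my-org/private_model_name" is a hypothetical private repo id.
model = AutoModel.from_pretrained("my-org/private_model_name", use_auth_token=True)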
cc @julien-c

Thanks Sylvain, but in my case I want to load/store the model from an internal S3 bucket (totally unrelated to the HF ecosystem).

What I would do in that case is sync a portion or directory of your bucket locally (assuming different models live in different subfolders) and then just load your model from disk with AutoModel.from_pretrained(path).
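
A minimal sketch of that approach, assuming a hypothetical bucket my-bucket and the AWS CLI available on the machine:

import subprocess
from transformers import AutoModel

# Hypothetical bucket/prefix; the model files (config.json, pytorch_model.bin)
# are assumed to live under this prefix.
bucket_uri = "s3://my-bucket/models/private_model_name"
local_dir = "./private_model_name"

# Sync the model directory from S3 to local disk (requires the AWS CLI).
subprocess.run(["aws", "s3", "sync", bucket_uri, local_dir], check=True)

# Load the model from the local directory.
model = AutoModel.from_pretrained(local_dir)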