How can I load models from any remote URL?

Hi,

I want to set up an HTTP file server (the simplest case being HTTP on localhost) that will contain my models, or simply fork a GitHub repository with pretrained models. I see that when loading a pretrained model, the transformers and sentence-transformers libraries try to fetch files from huggingface.co by default. Is there a way to change this and load e.g. from http://localhost:8000 without modifying the internals of the library? For example, I tried to simply pass the URL, as in pt_model = AutoModelForSequenceClassification.from_pretrained('http://localhost:8000'), but it failed. I also tried to use the mirror parameter, adding my address to PRESET_MIRROR_DICT in configuration_utils.py, but that is already the kind of modification I want to avoid, and it didn't work anyway.

Is there any proper way to do so?

You can try to pass your HTTP server as a parameter like this:

from transformers import AutoModelForSequenceClassification

proxies = {"http": "http://localhost:8000"}
# "bert-base-uncased" is just an example model id here
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", proxies=proxies
)

This seems to work just fine, thank you!

Well, not really. Sorry for the false alarm, but this doesn't work. It only appeared to work for a while because the path on the server was the same as the path to the model on my local disk, so it was just reading the model from disk.

Sorry, I misunderstood your question. The transformers library currently does not seem to support a private host; you can look at this issue for reference, and there is a workaround you can try.
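
For example, here is a minimal sketch of that kind of workaround: download the model files from your own server yourself, then point from_pretrained at the local directory (which bypasses huggingface.co entirely). The base URL, file names, and directory name below are illustrative and depend on what your server actually hosts.

import os
import requests
from transformers import AutoModelForSequenceClassification

# Hypothetical base URL of your own file server; adjust to your setup.
BASE_URL = "http://localhost:8000/my-model"
# Minimal set of files for a PyTorch model; add tokenizer files if you need them.
FILES = ["config.json", "pytorch_model.bin"]

local_dir = "my-model"
os.makedirs(local_dir, exist_ok=True)

# Fetch each model file from the custom host into the local directory.
for name in FILES:
    resp = requests.get(f"{BASE_URL}/{name}")
    resp.raise_for_status()
    with open(os.path.join(local_dir, name), "wb") as f:
        f.write(resp.content)

# from_pretrained accepts a local directory, so no request to huggingface.co is made.
model = AutoModelForSequenceClassification.from_pretrained(local_dir)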