Hi,
I want to set up an HTTP file server (the simplest case being a server on localhost) that will contain my models, or simply fork a GitHub repository with pretrained models. I see that when loading a pretrained model, the transformers and sentence-transformers libraries try to fetch files from huggingface.co by default. Is there a way to change this and load, for example, from http://localhost:8000 without modifying the internals of the library? I tried simply passing the URL, as in pt_model = AutoModelForSequenceClassification.from_pretrained('http://localhost:8000'), but it failed to work. I also tried to use the mirror parameter, adding my address to PRESET_MIRROR_DICT in configuration_utils.py, but that is already the kind of modification I want to avoid, and besides, it didn't work either.
Is there a proper way to do this?
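For reference, the server side itself is the easy part. This is just a minimal sketch of the localhost setup I have in mind, using only Python's standard library (the config.json stand-in, its contents, and the port choice are only for illustration):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

# Stand-in for one of the model files the library would request.
Path("config.json").write_text('{"model_type": "bert"}')

# Serve the current directory; port 0 lets the OS pick a free port
# (in practice this would be a fixed port such as 8000).
server = HTTPServer(("localhost", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The file is now reachable over plain HTTP on localhost.
body = urllib.request.urlopen(f"http://localhost:{port}/config.json").read()
server.shutdown()
print(body.decode())  # → {"model_type": "bert"}
```

So the files are reachable over HTTP; the open question is only how to point from_pretrained at this address.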