Hi,
Is it possible to use a privately hosted model to create a Space? I know one option would be to use git lfs
to add all the necessary files to the repository and then be done with it.
But is there any way around it?
If your model is already hosted somewhere and is accessible via an API, you could have your space make an HTTP request to your model, as is done in the dalle-mini space. In that space they even hide their API URL with a secret.
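For illustration, a minimal sketch of that request pattern might look like the following (the MODEL_API_URL secret name and the request/response shape are assumptions for this example, not the dalle-mini space’s actual code):

import os
import requests

# Hypothetical secret holding the private endpoint of the hosted model
API_URL = os.environ.get("MODEL_API_URL")

def query_model(text):
    # Forward the input to the privately hosted model and return its prediction
    response = requests.post(API_URL, json={"inputs": text})
    response.raise_for_status()
    return response.json()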
Otherwise, you can have your space download the model at startup. In this space it’s done with torch.hub.download_url_to_file, for example, but you could do it with Python’s requests module or something similar. If you need to hide the URL that you’re downloading your model from but still want to make the space public, you could use a secret for that.
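As a rough sketch of that download-at-startup approach (the MODEL_URL secret name and the destination path are made up for illustration):

import os
import torch

# Hypothetical secret holding the private URL the weights are served from
model_url = os.environ.get("MODEL_URL")

# Fetch the weights once when the space boots
torch.hub.download_url_to_file(model_url, "model.bin")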
As I mentioned, it’s a privately hosted model on the Hugging Face Hub. It can be loaded with:
AutoModelForMaskedLM.from_pretrained("my-org/model-name")
It’s compatible with Hugging Face’s Inference API. I am happy to utilize that, but for that I need to pass my token. I know Spaces allows us to host secrets, but I’m not sure how I can access one in my Gradio app.py file.
Ah, sorry about that, I’d misunderstood what privately hosted meant! In that case, you should be able to set your user access token as a secret in your Space and then access it as an environment variable within your Gradio app. For example, in this space I put together, the secret “NAME” gets fetched. Once that’s fetched, you’d pass it to from_pretrained with the param use_auth_token.
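Concretely, reading the secret inside app.py is just an environment-variable lookup, something like this (using the secret name “NAME” from that example space):

import os

# Secrets set in the Space settings are exposed to the app as environment variables
hf_token = os.environ.get("NAME")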
Thanks so much! As far as I know, setting use_auth_token=True will try to fetch the token from a pre-specified location. Would that be a problem here because the secret’s location might not match what from_pretrained() is expecting?
Would that be a problem here because the secret’s location might not match what from_pretrained() is expecting?
Precisely! When you pass True to use_auth_token, it will look under ~/.huggingface for your token, which would be generated when someone logs in with the CLI. That would usually be done when you’re working locally, but with a Space I think the best thing to do is to pass the token as a string to use_auth_token. You could even do something like
import os
from transformers import AutoModelForMaskedLM

# Use the Space secret if it's set, otherwise fall back to the cached CLI login token
auth_token = os.environ.get("TOKEN_FROM_SECRET") or True
model = AutoModelForMaskedLM.from_pretrained("my-org/model-name", use_auth_token=auth_token)
so that you can have it work both locally and on your Space easily.
Thank you so much!
If it helps, the following blog post shows how to use private models with Spaces (although not using transformers).