I was reading through the Hugging Face & SageMaker documentation while evaluating it for our team, and found the following:
Q: Which models can I deploy for Inference?
A: You can deploy
- any Transformers model trained in Amazon SageMaker, or on other compatible platforms, that can accommodate the SageMaker Hosting design,
- any of the 10,000+ publicly available Transformer models from the Hugging Face Model Hub, or
- your private models hosted in your Hugging Face premium account!
Is it possible to fine-tune a model elsewhere, outside of SageMaker Training (for instance, through a regular PyTorch training loop on a pretrained
transformers model), and then deploy it for Inference without hosting it in a Hugging Face account, i.e. without pushing it to the Model Hub? Something roughly along the lines of the sketch below.
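For concreteness, this is the workflow I have in mind: a plain PyTorch fine-tuning loop, then packaging the saved weights into a tarball on S3 and pointing the SageMaker Hugging Face container at it via `model_data`. To be clear, the second half is just my guess at how this might work, not something I've confirmed; the bucket name, IAM role, and container versions are placeholders.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset

# --- 1. A regular PyTorch fine-tuning loop, entirely outside SageMaker Training ---
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])
loader = DataLoader(dataset, batch_size=8, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:
    optimizer.zero_grad()
    outputs = model(
        input_ids=batch["input_ids"].to(device),
        attention_mask=batch["attention_mask"].to(device),
        labels=batch["label"].to(device),
    )
    outputs.loss.backward()
    optimizer.step()

# Save the fine-tuned weights and tokenizer locally
model.save_pretrained("my_model")
tokenizer.save_pretrained("my_model")

# --- 2. What I *hope* is possible: package and deploy without the Hub ---
# Shell steps (run outside Python); "my-bucket" is a placeholder:
#   tar -czvf model.tar.gz -C my_model .
#   aws s3 cp model.tar.gz s3://my-bucket/model.tar.gz

from sagemaker.huggingface import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/model.tar.gz",  # assumption: an S3 tarball instead of a Hub model id
    role="my-sagemaker-execution-role",        # placeholder IAM role
    transformers_version="4.6",                # these version numbers are guesses
    pytorch_version="1.7",
    py_version="py36",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
print(predictor.predict({"inputs": "This movie was great!"}))
```

If pointing `HuggingFaceModel(model_data=...)` at an S3 tarball like this is the intended way to deploy a model that was never trained in SageMaker and never pushed to the Hub, that would answer my question.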
Would appreciate any pointers y’all can give on this.