Hi, I’m new to the Hugging Face Hub. Is there a way to deploy an AutoGluon predictor to the Hub with an Inference API? I’m not sure how to do this, and it would be great if I could get some help. Thank you!