Hugging Face Forums
Deploying custom inference script with llama2 finetuned model
Amazon SageMaker
philschmid
November 23, 2023, 1:22pm
You cannot use the LLM container (the TGI-based Hugging Face LLM DLC) with a custom inference script. Custom inference.py scripts are only supported by the standard Hugging Face Inference DLC.
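For anyone landing here, a minimal sketch of that alternative: deploying a fine-tuned model with the standard Hugging Face Inference DLC, which picks up a custom inference.py from the code/ directory inside model.tar.gz. The S3 URI, instance type, and DLC versions below are placeholders, not from the original post.

```python
# Sketch (assumed setup, not from the original post): deploy a fine-tuned
# llama2 model with a custom inference.py via the Hugging Face Inference DLC.
#
# Expected model.tar.gz layout:
#   ./                      model weights, config, tokenizer files
#   ./code/inference.py     custom model_fn / predict_fn overrides
#   ./code/requirements.txt optional extra dependencies
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/llama2-finetuned/model.tar.gz",  # placeholder URI
    role=role,
    transformers_version="4.28",  # pick a supported DLC version combination
    pytorch_version="2.0",
    py_version="py310",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # placeholder; size to your model
)

print(predictor.predict({"inputs": "Hello"}))
```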
Related topics

Topic | Category | Replies | Views | Activity
Streaming output text When deploying a finetuned (SFT, DPO) model with custom inference script | Amazon SageMaker | 1 | 32 | November 8, 2024
Error loading finetuned llama2 model while running inference | Amazon SageMaker | 27 | 4811 | September 20, 2023
Inference Toolkit - Init and default template for custom inference | Amazon SageMaker | 12 | 2144 | October 4, 2021
Loading inference.py separately from model.tar.gz | Amazon SageMaker | 4 | 1867 | June 5, 2023
Sagemaker deployment fails for local llama2 model | Amazon SageMaker | 3 | 2287 | August 17, 2023