Hugging Face Forums
Deploying custom inference script with llama2 finetuned model
Amazon SageMaker
philschmid
November 23, 2023, 1:22pm
You cannot use the LLM container (the TGI-based Hugging Face LLM DLC) with a custom inference script. Custom `inference.py` scripts are only supported by the standard Hugging Face Inference DLC.
Topic: Streaming output text when deploying a finetuned (SFT, DPO) model with a custom inference script
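To use a custom script with the standard Inference DLC, the fine-tuned model and the script have to be packaged together in `model.tar.gz`, with the weights at the archive root and the handler at `code/inference.py`. A minimal sketch of that packaging step (the helper name `package_model` is hypothetical, not part of the SageMaker SDK):

```python
import os
import tarfile

def package_model(model_dir, inference_script, out_path="model.tar.gz"):
    # The standard Hugging Face Inference DLC looks for a custom handler at
    # code/inference.py inside model.tar.gz; the fine-tuned model weights
    # (config.json, tokenizer files, safetensors, ...) sit at the root.
    with tarfile.open(out_path, "w:gz") as tar:
        for name in os.listdir(model_dir):
            tar.add(os.path.join(model_dir, name), arcname=name)
        tar.add(inference_script, arcname="code/inference.py")
    return out_path
```

The resulting archive can then be uploaded to S3 and passed as `model_data` to `sagemaker.huggingface.HuggingFaceModel` before calling `deploy()`; the LLM DLC would ignore the `code/` directory entirely, which is why streaming with a custom script is not possible there.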