Hugging Face Forums
Deploying custom inference script with llama2 finetuned model
Amazon SageMaker
philschmid
November 23, 2023, 1:22pm
You cannot use the llm container with a custom inference script.
In reply to: Streaming output text when deploying a finetuned (SFT, DPO) model with a custom inference script
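A minimal sketch of the supported alternative: instead of the LLM (TGI) container, deploy with the standard `HuggingFaceModel`, which runs the Hugging Face inference toolkit and picks up a custom `code/inference.py` (with `model_fn`/`predict_fn` overrides) packaged inside `model.tar.gz`. The S3 path, IAM role ARN, and framework versions below are placeholders, not values from this thread.

```python
from sagemaker.huggingface import HuggingFaceModel

# Placeholder values: replace the S3 path and IAM role ARN with your own.
# The model.tar.gz is expected to contain the model weights plus a
# code/inference.py that overrides model_fn/predict_fn for custom logic.
huggingface_model = HuggingFaceModel(
    model_data="s3://my-bucket/llama2-sft/model.tar.gz",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    transformers_version="4.28",  # framework versions are illustrative;
    pytorch_version="2.0",        # pick a supported combination
    py_version="py310",
)

# Deploys a real-time endpoint on the standard Hugging Face inference
# container, which loads code/inference.py automatically.
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

print(predictor.predict({"inputs": "Hello, world!"}))
```

Note that token streaming is a feature of the LLM (TGI) container; with this custom-script path the endpoint returns complete responses rather than a token stream.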
Related topics
- Sagemaker Pipelines with finetuned llama2 (Amazon SageMaker): 0 replies, 851 views, September 12, 2023
- Sagemaker deployment fails for local llama2 model (Amazon SageMaker): 3 replies, 2262 views, August 17, 2023
- Error loading finetuned llama2 model while running inference (Amazon SageMaker): 27 replies, 4796 views, September 20, 2023
- HuggingFaceModel ignores code directory (Amazon SageMaker): 2 replies, 10 views, June 17, 2025
- Deploying TinyLlama Model via SageMaker Inference Endpoint with Custom Setup (Amazon SageMaker): 0 replies, 447 views, April 7, 2024