Streaming output text when deploying on SageMaker

Hey @RemiP, thanks for your response. Could you please elaborate on how you're streaming outputs from the LLM deployed as a Hugging Face inference endpoint? Appreciate your help :)
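Is it something along these lines? Here's a minimal sketch of what I've been assuming: boto3's `invoke_endpoint_with_response_stream` against a TGI-based container on SageMaker, with `"stream": true` in the payload. The endpoint name is just a placeholder, and I'm not sure this is what you're doing:

```python
import json
import boto3

# Placeholder name; replace with your deployed endpoint.
ENDPOINT_NAME = "my-llm-endpoint"

smr = boto3.client("sagemaker-runtime")

# Assumes a TGI-style container that supports token streaming
# when "stream" is set in the request body.
payload = {
    "inputs": "Explain streaming in one sentence.",
    "parameters": {"max_new_tokens": 128},
    "stream": True,
}

response = smr.invoke_endpoint_with_response_stream(
    EndpointName=ENDPOINT_NAME,
    Body=json.dumps(payload),
    ContentType="application/json",
)

# The response Body is an event stream; each event carries raw bytes
# of the container's server-sent-events output.
for event in response["Body"]:
    chunk = event.get("PayloadPart", {}).get("Bytes", b"")
    print(chunk.decode("utf-8"), end="", flush=True)
```

One thing I'm unsure about: the payload parts can split the SSE `data:` lines mid-JSON, so I assume a real client needs to buffer bytes until it has a complete line before parsing. Is that how you handle it?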