Streaming Inference without TGI

I found a tutorial for using TGI (Text Generation Inference) with the Docker image from Text Generation Inference.

However, I’m having trouble using a GPU in a Docker container, so I’m wondering whether there is another way to stream the model’s output. I have tried TextStreamer, but it can only write the result to standard output. In my case, I need to send the streamed output to the frontend, similar to how ChatGPT works.
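For reference, this is roughly what my TextStreamer attempt looks like. It's a minimal sketch; the model name (`gpt2`) and prompt are just stand-ins:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")

# TextStreamer prints each decoded token chunk to stdout as generate() runs,
# but it gives me no handle to forward those chunks to a frontend.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=50)
```

This works in a terminal, but since the tokens go straight to stdout, I can't pipe them into an HTTP/SSE response for the browser.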
