Output gets stuck when running a GGUF model with llama.cpp and LlamaIndex

I loaded a GGUF model with llama.cpp and I'm using LlamaIndex to query it. But when I run the following and print the response:

input = """Describe all the parameters of the material discussed in the text."""
response = query_engine.query(input)
print(response)

the output hangs and all I ever see is this warning:
/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py:1129: RuntimeWarning: Detected duplicate leading "<s>" in prompt, this will likely reduce response quality, consider removing it...
  warnings.warn(
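For context, my understanding (which may be wrong) is that llama.cpp prepends the BOS token (`<s>`) itself during tokenization, so a prompt (or prompt template) that already begins with a literal `<s>` ends up with two of them, which is what the warning complains about. A minimal, self-contained illustration of what I think is happening:

```python
# Illustration only: llama.cpp adds its own BOS ("<s>") token when tokenizing,
# so a prompt string that already begins with "<s>" carries a duplicate.
prompt = "<s> Describe all the parameters of the material discussed in the text."

# Stripping the literal "<s>" before sending the prompt should avoid the duplicate:
cleaned = prompt
if cleaned.lstrip().startswith("<s>"):
    cleaned = cleaned.lstrip()[len("<s>"):].lstrip()

print(cleaned)  # no leading "<s>" anymore
```

My actual prompt (shown above) doesn't contain `<s>`, so I suspect the duplicate is being injected by the prompt template that LlamaIndex builds, not by my input string.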

What could be the issue?