Hugging Face output truncated via LangChain

I have been building a chatbot for property question answering, and each time I ask it a question, the AI assistant's answer is good but gets truncated. I'm not sure why, since the max_length is set really high.
I'm a beginner with LLMs and LangChain, so any guidance would be great.
Here is the code: propex_6.0 | Kaggle
Run the code and try asking the question: tell me about colive 169 alpha
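One thing I came across while debugging (I may be wrong, so please correct me): in the `transformers` generate API, `max_length` counts the prompt tokens as well as the generated ones, while `max_new_tokens` counts only the generated tokens. So a long retrieval-augmented prompt can eat most of the budget even when `max_length` looks high. A tiny sketch with made-up numbers (none of these values are from my actual notebook):

```python
# Illustrative numbers only -- not taken from the real notebook.
# In transformers' generate(), max_length caps prompt + completion,
# while max_new_tokens caps only the completion.
max_length = 512        # total token budget (prompt + answer)
prompt_tokens = 470     # a long prompt with retrieved property context
room_for_answer = max_length - prompt_tokens
print(room_for_answer)  # only 42 tokens left for the answer
```

If that's the cause, would switching to `max_new_tokens` be the right fix here?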