Hugging Face Transformers conversational pipeline taking too long to generate output

I have fine-tuned the LLaMA-2 7B Chat model and am using the Hugging Face Transformers pipeline for the conversational task. The dataset used for fine-tuning is the OpenAssistant user/assistant chat dataset. When I invoke the chatbot to generate a completion for a conversational prompt, it takes forever to finish. Please help me understand what is going wrong. I used 4-bit quantization so that I could fine-tune and run inference on the free Google Colab instance available to me.
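For reference, this is a minimal sketch of the kind of setup described above. The model path, quantization settings, and generation parameters are assumptions on my part, not the exact code from my notebook; in particular, note that leaving `max_new_tokens` unset (or very large) can make generation appear to hang:

```python
# Sketch of the described setup; model path and parameter values are assumptions.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, pipeline)

# 4-bit quantization config (bitsandbytes), as used to fit the model on Colab
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # fp16 compute speeds up 4-bit inference
)

model_id = "path/to/fine-tuned-llama-2-7b-chat"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the model on the Colab GPU; CPU fallback is very slow
)

chatbot = pipeline(
    "conversational",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,  # cap output length; unbounded generation is a common cause of long runs
)
```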