Llama-2-7b-hf repeats the question context directly from the input prompt and cuts off with newlines

I have some questions first: How are you using the model (with `generate`, a `pipeline`, etc.)? Could you share the final formatted prompt that you pass to the model as input?
Apart from that: you are giving the model an instruction, but the base 7b-hf model is not instruction-tuned, so it tends to continue the prompt rather than answer it. For this purpose, meta-llama/Llama-2-7b-chat-hf would likely be the better choice.
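
For illustration, here is a minimal sketch of how the chat variant could be loaded and prompted, assuming you are using `transformers` with `generate` directly; the system and user text are placeholders, and the prompt string follows the Llama-2 chat template (`[INST] ... [/INST]`) that the chat model was trained on:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The chat model expects the Llama-2 chat template; the base
# 7b-hf model does not, which is one reason it may just echo
# the prompt. System/user text here is a placeholder.
prompt = (
    "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\n\n"
    "What is the capital of France? [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Slice off the prompt tokens so only the generated answer is printed,
# instead of the input being repeated back at you.
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```

Note the slicing before decoding: `generate` returns the prompt tokens followed by the new tokens, so decoding the full sequence makes it look like the model is "repeating the context" even when it answered correctly.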