As the title states, I have fine-tuned Llama 2 on the Open Assistant dataset; however, the model keeps generating text until it hits the maximum length. Other fine-tuned Llama 2 models I've downloaded, e.g. Llama 2 Chat, don't generate all the way to the maximum length.
Has anyone seen this issue before? Is it related to my training parameters, or to how my dataset is formatted?
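One thing I'm wondering about: if my training examples never end with the EOS token, the model would never learn to stop. A minimal sketch of what I mean (the prompt template and token string here are just illustrative assumptions, not my actual setup):

```python
# Hypothetical sketch: Llama 2's tokenizer uses "</s>" as its EOS token.
# If formatted training examples omit it, the model never sees a "stop"
# signal during fine-tuning and tends to generate until max length.
EOS_TOKEN = "</s>"  # assumption: tokenizer.eos_token for Llama 2

def format_example(prompt: str, response: str) -> str:
    """Append EOS so the model learns where a reply should end."""
    return f"{prompt}\n{response}{EOS_TOKEN}"

example = format_example("### Human: Hi", "### Assistant: Hello!")
print(example)  # ends with "</s>"
```

Does that sound like a plausible cause, or is it more likely a generation-config issue (e.g. the EOS token not being set as a stopping criterion at inference time)?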
Any help would be appreciated. Thanks!