Beam search error

Well, I’ve fine-tuned a model on top of GPT-2 and added some special tokens to control my generation. It all worked fine while I was using greedy search, but when I switched to beam search it started to mess up my tokens, mixing them with regular words. Does anyone have a clue why this is happening, and how I can solve it?
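For reference, here is a minimal sketch of the setup I mean. The control token names are placeholders for my own, and the point is the contrast between the greedy call and the beam-search call, which is where the mixing shows up:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical control tokens -- stand-ins for the ones I actually trained with.
control_tokens = ["<CTRL_A>", "<CTRL_B>"]
tokenizer.add_special_tokens({"additional_special_tokens": control_tokens})
model.resize_token_embeddings(len(tokenizer))

inputs = tokenizer("<CTRL_A> Once upon a time", return_tensors="pt")

# Greedy search: the control tokens come out where expected.
greedy_out = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Beam search: this is where the tokens get mixed with regular words.
beam_out = model.generate(
    **inputs,
    max_new_tokens=40,
    num_beams=5,
    do_sample=False,
    early_stopping=True,
)

print(tokenizer.decode(greedy_out[0], skip_special_tokens=False))
print(tokenizer.decode(beam_out[0], skip_special_tokens=False))
```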

Related: How to generate multiple text completions per prompt (like vLLM) using HuggingFace Transformers Pipeline without triggering an error?

Did you find an answer?