Special token printed in the output

Hi, I have a question about an LLM printing a special token as part of its generated answer.
Here is an example:

from utils.prompter import Prompter
# from utils.util import postprocessing, e2k_model
from deeppostagger import tagger
from transformers import TextIteratorStreamer, PreTrainedTokenizerFast
from threading import Thread
from auto_gptq import AutoGPTQForCausalLM
import warnings
warnings.filterwarnings('ignore')
new_line_chr = ['.', '?']
rm_chr = ['<|endoftext|>']

class LLM_qa:
    def __init__(self, model_path, max_len):
        self.model = AutoGPTQForCausalLM.from_quantized(
            model_path,
            device_map="balanced",
            max_memory={0: "10GB", 1: "10GB"},
            low_cpu_mem_usage=True,
        )

        self.model.config.use_cache = True
        self.model.eval()

        self.max_len = max_len

        self.tokenizer = PreTrainedTokenizerFast.from_pretrained(model_path)
        
        self.prompter = Prompter("kullm")
        self.prompter_gen = Prompter("kullm")

    def qa(self, question, instruction=''):
        if instruction:
            prompt = self.prompter_gen.generate_prompt(instruction, question)
            max_len = self.max_len * 2  # allow longer answers for instruction prompts
        else:
            prompt = self.prompter.generate_prompt(question)
            max_len = self.max_len

        inputs = self.tokenizer(prompt, return_tensors="pt")
        streamer = TextIteratorStreamer(self.tokenizer, skip_prompt=True)

        generation_kwargs = dict(
            input_ids=inputs.input_ids[..., :-1],  # drop the last token of the encoded prompt
            streamer=streamer,
            max_new_tokens=max_len,
            no_repeat_ngram_size=3,
            eos_token_id=2,  # generation stops once the model emits this token
            pad_token_id=self.tokenizer.eos_token_id,
        )
        
        thread = Thread(target=self.model.generate, kwargs=generation_kwargs)

        return thread, streamer

MODEL_PATH = '/mnt/research/datasets/llm/weights/kullm-polyglot-12.8b-v2/20231115_dupex_quantize'
MAX_LEN = 128

llm = LLM_qa(MODEL_PATH, MAX_LEN)

q = 'Hi?'
instruction = ''

thread, streamer = llm.qa(q, instruction)

thread.start()

generated_text = ''
for new_text in streamer:
    flg = any(c in new_text for c in new_line_chr)  # chunk contains a sentence-ending character
    # for c in rm_chr:
    #     new_text = new_text.replace(c, '')
    if new_text and flg:
        print(new_text)
    elif new_text:
        print(new_text, end='')

    # generated_text += new_text
    # flg = [True for i in new_line_chr if i in new_text]
    # if flg:
    #     print(generated_text)
    #     # print(engsen2korsen(generated_text))
    #     generated_text = ''

print("\n - done.")

OUTPUT = 안녕하세요! 오늘은 무엇을 도와드릴까요?<|endoftext|>
(The Korean reads "Hello! What can I help you with today?"; note the trailing <|endoftext|>.)

I don't know why <|endoftext|> is printed… please help.

rm_chr = ['<|endoftext|>'] is one way I tried to remove the special token, and it does work, but I want to know why this is happening and whether there is a proper way to fix it.
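
Concretely, that workaround is just the commented-out replace loop from the streaming loop above, uncommented so each chunk is scrubbed before printing:

for new_text in streamer:
    for c in rm_chr:
        new_text = new_text.replace(c, '')  # strip '<|endoftext|>' before printing
    if new_text and any(c in new_text for c in new_line_chr):
        print(new_text)
    elif new_text:
        print(new_text, end='')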

Have you tried using this on line 10 of your code?

rm_chr = ['<|endoftext|>']

Well, that's one way I tried to erase the token, and it does work, but I want to know why the special token gets printed out in the first place. Is it because of the tokenizer?
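
A minimal sketch to see the tokenizer's part in this (assuming the same checkpoint as above, whose EOS token decodes to <|endoftext|>):

from transformers import PreTrainedTokenizerFast

tok = PreTrainedTokenizerFast.from_pretrained(MODEL_PATH)

# decode() keeps special tokens in the text by default...
print(tok.decode([tok.eos_token_id]))                            # '<|endoftext|>'
# ...and drops them only when asked explicitly
print(tok.decode([tok.eos_token_id], skip_special_tokens=True))  # ''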

Whatever you put inside the single quotes in rm_chr = ['<|endoftext|>'] will get printed; <|endoftext|> was printed because that's what you put there.

rm_chr is there to erase anything I put into it; look at this:

# for c in rm_chr:
#     new_text = new_text.replace(c, '')

That code is not being run because each line starts with a #, so Python ignores it.

Yeah, I know…
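
For anyone hitting the same thing: this is expected behaviour rather than a tokenizer bug. generate() finishes by emitting the model's EOS token (id 2 in the generation_kwargs above), and TextIteratorStreamer decodes every generated token, special tokens included, unless told otherwise. The streamer forwards extra keyword arguments through to tokenizer.decode(), so (assuming a transformers version with this pass-through) the cleanest fix is to let the streamer skip special tokens itself, with no rm_chr post-processing needed:

streamer = TextIteratorStreamer(
    self.tokenizer,
    skip_prompt=True,
    skip_special_tokens=True,  # forwarded to tokenizer.decode(); drops <|endoftext|>
)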