GPT-2 special tokens

Hello,

I am currently working with GPT-2: I am fine-tuning a model on a next-token-prediction task in order to generate text from an image at inference time.

During training, I manually add special tokens to each sentence: a BOS token at the beginning and an EOS token at the end. At inference, I then start from the BOS token and let the model generate:

```python
input_ids = [gpt2.bos_token_id] + tokens["input_ids"] + [gpt2.eos_token_id]
```
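
For context, here is a minimal, self-contained sketch of that preparation step, assuming the Hugging Face transformers GPT-2 tokenizer (the caption string is just an illustrative placeholder):

```python
from transformers import GPT2Tokenizer

gpt2 = GPT2Tokenizer.from_pretrained("gpt2")

# Illustrative training sentence; in practice this would be a caption
# paired with an image.
tokens = gpt2("a dog playing in the park")

# Wrap the token sequence with BOS at the start and EOS at the end.
input_ids = [gpt2.bos_token_id] + tokens["input_ids"] + [gpt2.eos_token_id]
```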

However, I realized that gpt2.bos_token_id and gpt2.eos_token_id have the same ID (both map to `<|endoftext|>`, which is 50256 in the stock gpt2 tokenizer), so they are the same token. Why is it done this way, unlike BERT, which uses distinct tokens ([CLS] and [SEP]) for the beginning and end of a sequence?
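
You can verify this directly on the tokenizer; a quick check, assuming the stock gpt2 checkpoint:

```python
from transformers import GPT2Tokenizer

gpt2 = GPT2Tokenizer.from_pretrained("gpt2")

print(gpt2.bos_token, gpt2.bos_token_id)  # <|endoftext|> 50256
print(gpt2.eos_token, gpt2.eos_token_id)  # <|endoftext|> 50256
print(gpt2.bos_token_id == gpt2.eos_token_id)  # True
```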

Moreover, is this a problem for generation at inference time?

Thank you!

Update: It is not a problem! Since decoding stops as soon as the model emits `<|endoftext|>` again, using the same token for BOS and EOS works fine.
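
For completeness, a sketch of the inference side under the same assumptions (a stock gpt2 checkpoint stands in here for the fine-tuned, image-conditioned model):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

gpt2 = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Start from the BOS token alone; generation stops when the model
# emits <|endoftext|> again or max_length is reached.
input_ids = torch.tensor([[gpt2.bos_token_id]])
output = model.generate(
    input_ids,
    max_length=30,
    do_sample=True,
    eos_token_id=gpt2.eos_token_id,
    pad_token_id=gpt2.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(gpt2.decode(output[0], skip_special_tokens=True))
```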
