I need some help with the GPT2Model in transformers.
If my input tokens have this form:
condition + BOS1 + tokens of sentence 1 + EOS1 + BOS2 + tokens of sentence 2 + EOS2
Here, condition is an additional embedding fed into GPT-2. Regarding the token_type_ids parameter: what token type should the condition token get, sentence A or sentence B? And if I want to use token_type_ids, should I explicitly set vocab_size = vocab_size_of(tokens of sentence 1 + tokens of sentence 2) + 2, where the 2 accounts for the two possible token_type_id values?
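To make the question concrete, here is a minimal sketch of what I have in mind. As far as I can tell, GPT2Model embeds token_type_ids through the same wte matrix as input_ids, which is why the two type values would need their own rows in a resized embedding table. The special-token names, representing the condition as a plain token span, and giving it the sentence-A type are just my placeholder assumptions, not anything the library prescribes.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

# Add segment markers plus two "type" tokens. Since GPT-2 looks up
# token_type_ids in the same embedding matrix as input_ids, the type
# IDs must be valid indices into the (resized) vocabulary.
special = ["<bos1>", "<eos1>", "<bos2>", "<eos2>", "<type_a>", "<type_b>"]
tokenizer.add_special_tokens({"additional_special_tokens": special})
model.resize_token_embeddings(len(tokenizer))

type_a = tokenizer.convert_tokens_to_ids("<type_a>")
type_b = tokenizer.convert_tokens_to_ids("<type_b>")
bos1, eos1 = tokenizer.convert_tokens_to_ids(["<bos1>", "<eos1>"])
bos2, eos2 = tokenizer.convert_tokens_to_ids(["<bos2>", "<eos2>"])

sent1 = tokenizer.encode("first sentence")
sent2 = tokenizer.encode("second sentence")

# condition + BOS1 + sentence1 + EOS1 + BOS2 + sentence2 + EOS2;
# the condition is represented here by ordinary token IDs for brevity.
cond = tokenizer.encode("condition")
input_ids = cond + [bos1] + sent1 + [eos1] + [bos2] + sent2 + [eos2]

# One type ID per position. The condition span is given the sentence-A
# type here, which is exactly the choice my question is about.
token_type_ids = (
    [type_a] * (len(cond) + 1 + len(sent1) + 1)
    + [type_b] * (1 + len(sent2) + 1)
)

out = model(
    input_ids=torch.tensor([input_ids]),
    token_type_ids=torch.tensor([token_type_ids]),
)
print(out.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```

Is this the intended way to use token_type_ids with GPT-2, or is there a recommended alternative?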