Hey, trying to use GPT-2 to map specific input texts to specific outputs might not work out as you expect. It's mainly designed to generate coherent text by predicting the next word in a sequence based on the previous words. It's awesome for open-ended text generation, but it's not really built for tasks where you need a direct mapping from one piece of text to another, especially when the output isn't just a continuation of the input. A quick sketch below shows what I mean.
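Here's a minimal sketch (assuming the Hugging Face transformers library and the public gpt2 checkpoint) that illustrates the point: `generate()` just keeps extending the prompt, so whatever comes out always flows from the input rather than mapping it to something else.

```python
# A minimal sketch, assuming Hugging Face transformers and the public
# "gpt2" checkpoint: GPT-2 continues its input instead of mapping it.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The weather today is", return_tensors="pt")

# generate() extends the prompt token by token; the output begins with
# the prompt itself and is just a plausible continuation of it.
output_ids = model.generate(
    inputs["input_ids"],
    max_length=20,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0]))
```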
For what you're trying to do, IMO a sequence-to-sequence model would be a better fit. These models are built for tasks where the output is a transformation of the input, like translation, summarization, or any kind of text-to-text mapping.
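For comparison, here's a minimal sketch using T5 (again assuming Hugging Face transformers, with the public t5-small checkpoint as an example): the encoder reads the input and the decoder produces a separate output sequence, so the result isn't a continuation of the prompt.

```python
# A minimal sketch, assuming Hugging Face transformers and the public
# "t5-small" checkpoint: a seq2seq model maps an input sequence to a
# distinct output sequence.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 was trained with task prefixes; translation is one of its
# built-in text-to-text tasks.
inputs = tokenizer(
    "translate English to German: Hello, how are you?",
    return_tensors="pt",
)
output_ids = model.generate(inputs["input_ids"], max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If your mapping is custom rather than a standard task, the usual move is to fine-tune a seq2seq model like this on input/output pairs, since the architecture already separates "what goes in" from "what comes out".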