Looking for an example of a seq2seq model

Hi,
I have a specific task I want to train a model on, which is to be able to predict the value of this schema:

{
    "free speech text or sentence (somewhere up to twenty words)": {
        key1: [value1, value2],
        key2: [value3, value4],
        key3: [value5, value6],
        ..
        ..
        .
    }
}

I basically have a JSON file with many such items, and I want to predict the value of each item's key:value pair, i.e. predict the inner dict, based on the user's input text (the outer key).

So if I give the model the sentence "free speech text or sentence (somewhere up to twenty words)", it will predict:
{ key1: [value1, value2], key2: [value3, value4], key3: [value5, value6], .. .. . }
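To make the task concrete, here is a minimal sketch of how the JSON could be flattened into (input, target) text pairs for a seq2seq model: the target is just the inner dict serialized back to a JSON string, so the model learns to generate it token by token. The example keys and values below are purely illustrative, not from my real data.

```python
import json

def make_pairs(data):
    """Turn {sentence: inner_dict} items into (input, target) text pairs.

    The target string is the inner dict serialized as compact JSON, so a
    seq2seq model can learn to generate it from the input sentence.
    """
    pairs = []
    for sentence, inner in data.items():
        target = json.dumps(inner, separators=(",", ":"))
        pairs.append((sentence, target))
    return pairs

# Toy item with hypothetical keys/values:
data = {
    "turn on the kitchen lights": {
        "action": ["turn_on", "lights"],
        "room": ["kitchen", "main"],
    }
}

pairs = make_pairs(data)
print(pairs[0][0])  # the input sentence
print(pairs[0][1])  # the serialized inner dict, e.g. {"action":["turn_on","lights"],...}
```

At inference time the model's generated string would be parsed back with `json.loads` to recover the inner dict.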

The topic is very complex for a novice such as myself, but I spent quite some time with ChatGPT trying to develop a POC of my own. I did not succeed, but I learned some things and understood that I am probably looking for:

  • seq2seq
  • using transformers
  • tokenizing data

I was advised to look at Hugging Face for some example code I can start playing with, which would serve as a POC for this type of model.

I see there are a lot of seq2seq models on the site, but it's hard to filter them and know which one to start from.

Can anyone point me in the right direction? (Or to the correct forum section for this question if I'm in the wrong place.)

Thanks