[Transformer] how to tokenize nested object dataset?

e.g., I am trying to tokenize the blended_skill_talk dataset on Hugging Face with BlenderbotTokenizer.

example data record:

and the “suggestions” column has this structure:

{
  "convai2": ["i am a professional translator .", "i know about 50 of the language"],
  "wizard_of_wikipedia": ["That means you must be in the Italian national football team.", "Thanks to the internet people are often exposed to multiple languages"]
}

Here is my tokenizing code, but it fails with an error because the “suggestions” column has a nested structure like the example above:

dataset_train = dataset_train.map(lambda examples: tokenizer(examples["personas"], examples["context"], examples["previous_utterance"], examples["free_messages"], examples["guided_messages"], examples["suggestions"], truncation=True), batched=True)

error:

--------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
ValueError: [{'convai2': ["i love acting ! i'll be famous someday . what do you do ?", 'no no kids , might get some though . one day', 'that is great . i am going to a concert later', '15 and 17 , two boys sooo fun', 'they really are . and a handful at times', 'it can be sometimes . i bet being a doctor is a lot of work too .'], 'empathetic_dialogues': ['Any favorite actors?', 'One day.', 'How long must you attend school?', '4 and 5 and I have a teenager', 'They are most of the time!', "Oh. I don't know how medical school works. I am studying srt history."], 'wizard_of_wikipedia': ['I would like to develop my acting skills. What are some tips you have to not get nervous?', 'I will still wimp out. i want to be famous like the rolling stones  though.', 'good', "Close to 30! I just always have to put in a ton of work when mother's day comes around haha", 'They are actually very good with kids!', 'yeah but there are a lot of programs that help!']}, {'convai2': ['yum . i like to make lasagna and it s so good', 'yes ! trying to master lasagna .', 'it beats ramen noodles for sure ! do you have any hobbies ?', 'piercings are cool . i do not have any though .', "i don't know . whatever i want . maybe chicken", 'it would be a fashion statement . my dad would not like it .'], 'empathetic_dialogues': ['Cool. I love italian. Real italian.', "See. I'm not a great cook.", 'I love coffee, actually. I drink a few cups every morning!', 'Thats awesome i used to do the tattoos and ...

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-104-30dbe680750e> in <module>
      4         return tokenizer(examples["personas"], examples["context"], examples["previous_utterance"], examples["free_messages"], examples["guided_messages"], examples["suggestions"], examples["guided_chosen_suggestions"],)
      5 
----> 6 dataset_train = dataset_train.map(lambda examples: tokenizer(examples["personas"], examples["context"], examples["previous_utterance"], examples["free_messages"], examples["guided_messages"], examples["suggestions"], truncation=True), batched=True)
      7 dataset_validation = dataset_validation.map(lambda examples: tokenizer(examples["personas"], examples["context"], examples["previous_utterance"], examples["free_messages"], examples["guided_messages"], examples["suggestions"], truncation=True), batched=True)
      8 

15 frames
/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py in _missing_(cls, value)
    292     def _missing_(cls, value):
    293         raise ValueError(
--> 294             f"{value} is not a valid {cls.__name__}, please select one of {list(cls._value2member_map_.keys())}"
    295         )
    296