ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['0']

I am getting this error while trying to fine-tune Babelscape/wikineural-multilingual-ner on a custom dataset. It is raised when I call trainer.train():
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['0']

I can see the input_ids key if I print(tokenized_dataset["train"][0]), although the whole encoding seems to be stored under a single '0' column, as a string:
{'0': "{'input_ids': [101, 13533, 35636, 10129, 11357, 44320, 10661, 10119, 41430, 10605, 10104, 121, 117, 121, 126, 26715, 10183, 27359, 10125, 122, 10104, 12401, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [-100, 1, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -100]}"}

So why am I getting this error? Is there something wrong with my tokenized dataset format?

Many thanks.