GPT2Tokenizer not working in Kaggle Notebook

Hello, I have been trying to tokenize the WMT14 en-de dataset. I first started in a free Google Colab notebook, and the following code worked there; I then switched to a Kaggle notebook, since it is a better environment, but for some reason it does not work there. The error it throws is shown below the code:

from transformers import GPT2Tokenizer
bpe_tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

def tokenization(examples):
    source, target = [], []
    for example in examples:
        trgt = bpe_tokenizer(example['de'])
        src = bpe_tokenizer(example['en'])
        target.append(trgt)
        source.append(src)
        
    return {'de': target,
            'en': source}

train_dataset = dataset_de_en['train'].map(lambda examples: tokenization(examples['translation']), batched=True)
test_dataset = dataset_de_en['test'].map(lambda examples: tokenization(examples['translation']), batched=True)
val_dataset = dataset_de_en['val'].map(lambda examples: tokenization(examples['translation']), batched=True)

ArrowInvalid: Could not convert {'input_ids': [54, 798, 263, 559, 22184, 993, 1326, 4587, 311, 4224, 2150, 82, 525, 72, 1098], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} with type BatchEncoding: did not recognize Python value type when inferring an Arrow data type
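For what it's worth, the last part of the message ("did not recognize Python value type") hints at where the types diverge: each bpe_tokenizer(...) call returns a BatchEncoding, which, if I read the transformers source correctly, subclasses collections.UserDict rather than dict, so pyarrow's type inference does not recognize it. A minimal stdlib-only sketch of the mismatch (FakeBatchEncoding is a made-up stand-in, not the real class):

```python
from collections import UserDict

class FakeBatchEncoding(UserDict):
    """Made-up stand-in for transformers.BatchEncoding, which is also a UserDict subclass."""
    pass

enc = FakeBatchEncoding({"input_ids": [1, 2], "attention_mask": [1, 1]})

# Not a plain dict, so Arrow's type inference has nothing to match it against.
print(isinstance(enc, dict))       # False
# A plain dict of lists is something Arrow can infer a struct type for.
print(isinstance(dict(enc), dict)) # True
```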

Is there an explanation for this?
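In case it helps, here is a sketch of the same mapping function reworked to return only plain Python lists. fake_tokenizer is a made-up stand-in for bpe_tokenizer (so the snippet runs without downloading gpt2); the idea, assuming the BatchEncoding type is the problem, is just to keep the token id lists and drop the wrapper object:

```python
# Made-up stand-in for bpe_tokenizer: any callable that takes a list of
# strings and returns a mapping with one "input_ids" list per string.
def fake_tokenizer(texts):
    return {"input_ids": [[len(word) for word in text.split()] for text in texts]}

def tokenization(examples):
    # Tokenize whole batches and keep only plain lists of ints,
    # which Arrow's type inference can handle.
    de = fake_tokenizer([example["de"] for example in examples])
    en = fake_tokenizer([example["en"] for example in examples])
    return {"de": de["input_ids"], "en": en["input_ids"]}

batch = [{"de": "Wieder da", "en": "Back again"}]
print(tokenization(batch))  # {'de': [[6, 2]], 'en': [[4, 5]]}
```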