When using BertTokenizer's `encode_plus` (with `return_tensors="pt"`), why does `tokenized['input_ids']` come back as a 2D tensor?
I understand why `batch_encode_plus` does this (the extra dimension holds the batch size).
But what is the purpose of returning a 2D tensor from `encode_plus`, which encodes only a single sequence? The returned tensor has shape [1, N].
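To make the observation concrete, here is a minimal sketch (using plain Python lists rather than real tokenizer output, and hypothetical token IDs) of the shape in question: the single sequence is wrapped in a leading batch dimension of size 1, which is what makes the result 2D.

```python
# Hypothetical token IDs standing in for BertTokenizer output on one sentence,
# e.g. "[CLS] hello world [SEP]" -> 4 IDs.
token_ids = [101, 7592, 2088, 102]

# What encode_plus with return_tensors="pt" effectively returns for
# input_ids: the sequence wrapped in a leading batch dimension.
batched = [token_ids]   # shape [1, N] -- here [1, 4]

print(len(batched), len(batched[0]))   # 1 4
```

In other words, the tensor is shaped as a batch of one sequence rather than a bare 1D sequence.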