encoding['pos_tag_ids'] = torch.tensor([[0, 1]])
should be encoding['pos_tag_ids'] = torch.tensor([0, 1])
for each text; my dimensions do not match with your method.
Even after doing this, the Trainer API still treats it as one-dimensional rather than as labels. Does anything additional need to be done when adding a key-value pair to the tokenizer's output dictionary?
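A minimal sketch of why the per-example shape matters (the key name `pos_tag_ids` and the toy values are assumptions from this thread, not a real tokenizer output): when each extra key is a 1-D tensor of shape `[seq_len]`, the default collation stacks the examples into a `[batch_size, seq_len]` tensor alongside `input_ids`, whereas a 2-D `[1, seq_len]` tensor per example would produce an extra dimension after batching.

```python
import torch
from torch.utils.data.dataloader import default_collate

# Hypothetical per-example encodings: each custom key is 1-D
# (shape [seq_len]), matching input_ids, NOT 2-D ([1, seq_len]).
examples = [
    {"input_ids": torch.tensor([101, 2023, 102]),
     "pos_tag_ids": torch.tensor([0, 1, 0])},
    {"input_ids": torch.tensor([101, 2054, 102]),
     "pos_tag_ids": torch.tensor([0, 2, 0])},
]

# Default collation stacks each key across examples, so both keys
# come out with shape [batch_size, seq_len].
batch = default_collate(examples)
print(batch["pos_tag_ids"].shape)  # torch.Size([2, 3])
```

If the extra key is still dropped or mis-shaped inside the Trainer, it is worth checking that the model's `forward` actually accepts an argument with that name, since unrecognized columns can be filtered out before collation.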