SegformerImageProcessor introducing new labels

I have been using SegFormer for semantic segmentation on a custom dataset whose labels are:

id2label = {0: "No data", 1: "Saturated", 2: "Dark Area Pixels", 3: "Cloud Shadows",
            4: "Vegetation", 5: "Bare Soils", 6: "Water", 7: "Clouds low probability",
            8: "Clouds medium probability", 9: "Clouds high probability",
            10: "Cirrus", 11: "Snow"}
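
For context, the label mappings and the model are set up roughly like this (the checkpoint name and variable names below are illustrative, not my exact script):

from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

label2id = {v: k for k, v in id2label.items()}

feature_extractor = SegformerImageProcessor()      # default settings
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b0",                # placeholder checkpoint
    num_labels=len(id2label),       # 12 classes, ids 0-11
    id2label=id2label,
    label2id=label2id,
)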

The training runs fine at first, but then label values show up that are not in my list. Below is my transform function, followed by the error I got.

def transform(example_batch):
    # Scale the raw pixel values by 10000
    images = [torch.tensor(x) / 10000.0 for x in example_batch['pixel_values']]

    # Reduce each label array to class indices with argmax along dim=1
    labels = [torch.argmax(torch.tensor(x), dim=1) for x in example_batch['labels']]

    # feature_extractor is the SegformerImageProcessor created earlier (not shown here)
    inputs = feature_extractor(images, labels)
    print(inputs['pixel_values'][0].shape)
    return inputs
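
For reference, the transform is applied lazily with the datasets library's set_transform (train_ds here stands for my training split):

train_ds.set_transform(transform)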

The error is:

{'loss': 2.3899, 'learning_rate': 5.999913724237099e-05, 'epoch': 0.0}                                                                                      
  0%|                                                                                                                | 9/625900 [00:10<200:04:05,  1.15s/it](3, 256, 256)
{'loss': 2.3492, 'learning_rate': 5.9999041380412205e-05, 'epoch': 0.0}                                                                                     
  0%|                                                                                                               | 10/625900 [00:11<191:31:02,  1.10s/it](3, 256, 256)
{'loss': 2.3565, 'learning_rate': 5.9998945518453426e-05, 'epoch': 0.0}                                                                                     
  0%|                                                                                                               | 11/625900 [00:12<194:07:27,  1.12s/it](3, 256, 256)
{'loss': 2.3117, 'learning_rate': 5.999884965649465e-05, 'epoch': 0.0}                                                                                      
  0%|                                                                                                               | 12/625900 [00:13<193:16:14,  1.11s/it](3, 256, 256)
Traceback (most recent call last):
  File "/Users/aartibalana/Documents/transfer-learning/zindi/train.py", line 134, in <module>
    trainer.train()
  File "/opt/homebrew/lib/python3.9/site-packages/transformers/trainer.py", line 1662, in train
    return inner_training_loop(
  File "/opt/homebrew/lib/python3.9/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/opt/homebrew/lib/python3.9/site-packages/transformers/trainer.py", line 2699, in training_step
    loss = self.compute_loss(model, inputs)
  File "/opt/homebrew/lib/python3.9/site-packages/transformers/trainer.py", line 2731, in compute_loss
    outputs = model(**inputs)
  File "/opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/homebrew/lib/python3.9/site-packages/transformers/models/segformer/modeling_segformer.py", line 812, in forward
    loss = loss_fct(upsampled_logits, labels)
  File "/opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1174, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/opt/homebrew/lib/python3.9/site-packages/torch/nn/functional.py", line 3026, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
IndexError: Target 14 is out of bounds.

I worked around it by adding 12 and 13 as None to the labels list, but then the error moved on to target 14, and I have no idea how many more unexpected values will show up. Please help me figure out why this is happening.
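
To narrow it down, this is a minimal check I could run, applying the same argmax reduction to a raw batch taken before set_transform is attached (the slice and variable names are just illustrative):

import torch

raw_batch = train_ds[:4]                                  # small raw slice of the training split
for x in raw_batch['labels']:
    collapsed = torch.argmax(torch.tensor(x), dim=1)      # same reduction as in transform()
    print(torch.unique(collapsed))                        # any value above 11 would explain "Target 14"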

Thanks!