Question about labels for multi_nli dataset

I am working on fine-tuning roberta-large-mnli on a custom dataset. I looked at the dataset card for multi_nli and it clearly says that the labels are 0 for entailment, 1 for neutral, and 2 for contradiction.
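As a sanity check, the dataset itself seems to report that same ordering. Here is roughly how I looked (just a sketch, assuming the datasets library is installed; I only inspect the label feature of one split):

from datasets import load_dataset

# load a single split just to inspect the label feature
mnli = load_dataset("multi_nli", split="validation_matched")
print(mnli.features["label"].names)
# for me this prints ['entailment', 'neutral', 'contradiction'],
# i.e. 0 = entailment, 1 = neutral, 2 = contradiction, matching the dataset card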

So I tried a simple experiment:

from transformers import RobertaForSequenceClassification

# save a local copy so I can inspect the config.json that ships with the checkpoint
model = RobertaForSequenceClassification.from_pretrained("roberta-large-mnli")
model.save_pretrained("./models/my-roberta-large-mnli")

If I now look inside ./models/my-roberta-large-mnli, the config.json file contains:

  "id2label": {
    "0": "CONTRADICTION",
    "1": "NEUTRAL",
    "2": "ENTAILMENT"
  },
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "label2id": {
    "CONTRADICTION": 0,
    "ENTAILMENT": 2,
    "NEUTRAL": 1
  },

which seems to associate 0 with contradiction and 2 with entailment, the reverse of what the dataset card says.
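Incidentally, I don't even need to save the model to see this; printing the in-memory config gives the same mapping, so it isn't something my save step introduced:

from transformers import RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained("roberta-large-mnli")
# the in-memory config matches the saved config.json
print(model.config.id2label)   # {0: 'CONTRADICTION', 1: 'NEUTRAL', 2: 'ENTAILMENT'}
print(model.config.label2id)   # {'CONTRADICTION': 0, 'ENTAILMENT': 2, 'NEUTRAL': 1}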

What’s going on? Am I missing something obvious here? I have been annotating my training data with 0 for entailment and 2 for contradiction, following the dataset card, but fine-tuning gives poor results, and I wonder if this label mismatch is the cause.
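If the model's own mapping is what counts during fine-tuning, I suppose I would have to flip my annotations to match it, roughly like this (just a sketch, assuming my labels are plain integers; card_to_model is a name I made up):

# hypothetical remap from the dataset-card convention (0=entailment, 1=neutral, 2=contradiction)
# to the checkpoint's convention (0=contradiction, 1=neutral, 2=entailment)
card_to_model = {0: 2, 1: 1, 2: 0}
my_labels = [0, 1, 2, 0]                     # as I annotated them (dataset-card convention)
remapped = [card_to_model[y] for y in my_labels]
print(remapped)                              # [2, 1, 0, 2]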

Thanks for any help.
Karin