Replication of the performance of RoBERTa on the COPA task

Hi everyone! I tried to fine-tune RoBERTa on the COPA task. I used Hugging Face's multiple-choice model (AutoModelForMultipleChoice) and followed the tutorial, but I only get an accuracy of about 69% on the validation set, much worse than the benchmark shown here (even BERT gets 74%!). The validation loss does not decrease at all.

I wonder if the multiple-choice model is unsuitable for this task or if I got something wrong. Could someone help me? Thank you very much!


Here’s my preprocess function. I prepend the premise and a cause/effect prompt as the first sentence, pair it with each choice as the second sentence, and treat the two choices as the two candidates.

CONTEXT_COL = "premise"
QUESTION_COL = "question"
ANSWER_1_COL = "choice1"
ANSWER_2_COL = "choice2"

def preprocess_function(examples, tokenizer):
    """
    The preprocessing function needs to:
    1. Make two copies of the CONTEXT_COL field and combine each of them with QUESTION_COL to recreate how a sentence starts.
    2. Combine QUESTION_COL with each of the two possible choices.
    3. Flatten these two lists so you can tokenize them, and then unflatten them afterward so each example has a corresponding input_ids, attention_mask, and labels field.
    """

    question_headers = examples[QUESTION_COL]
    first_sentences = [
        [f"{examples[CONTEXT_COL][i]} What was the cause of this? "] * 2
        if header == "cause"
        else [f"{examples[CONTEXT_COL][i]} What was the effect of this? "] * 2
        for i, header in enumerate(question_headers)
    ]
    first_sentences = sum(first_sentences, [])
    
    second_sentences = [
        [examples[end][i] for end in [ANSWER_1_COL, ANSWER_2_COL]]
        for i in range(len(question_headers))
    ]
    second_sentences = sum(second_sentences, [])
    tokenized_examples = tokenizer(first_sentences, second_sentences, truncation=True)
    return {k: [v[i : i + 2] for i in range(0, len(v), 2)] for k, v in tokenized_examples.items()}
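
As a sanity check, this is what the output looks like on a made-up two-example batch in COPA's column format (illustrative only; the exact token ids depend on the tokenizer):

# Illustrative sanity check on a fake batch shaped like COPA's columns
toy_batch = {
    "premise": ["The man turned on the faucet.", "The woman retired."],
    "question": ["effect", "cause"],
    "choice1": ["The toilet filled with water.", "She received her pension."],
    "choice2": ["Water flowed from the spout.", "She paid off her mortgage."],
}
out = preprocess_function(toy_batch, tokenizer)
print(list(out.keys()))          # ['input_ids', 'attention_mask'] for roberta-base
print(len(out["input_ids"]))     # 2 examples
print(len(out["input_ids"][0]))  # 2 candidates per example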

My tokenizer and model:

from transformers import AutoModelForMultipleChoice, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMultipleChoice.from_pretrained("roberta-base")

Data preprocessing:

# Data preprocessing
tokenized_copa = copa.map(lambda f: preprocess_function(f, tokenizer), batched=True)
train_dataset = tokenized_copa["train"]
val_dataset = tokenized_copa["validation"]
test_dataset = tokenized_copa["test"]
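
(In case it matters: copa here is the SuperGLUE COPA dataset from the datasets library, loaded roughly like this.)

from datasets import load_dataset

# COPA is part of SuperGLUE; note the test split ships without gold labels (label = -1)
copa = load_dataset("super_glue", "copa")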

Training arguments and trainer:

from transformers import PrinterCallback, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="result_roberta",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=50,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    save_strategy="epoch",
    save_total_limit=2,
)

# Train the model
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
    compute_metrics=lambda f: compute_metrics(f, accuracy),
)
trainer.remove_callback(PrinterCallback)
trainer.train()
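
(For reference, a final evaluation pass after training looks roughly like this; load_best_model_at_end=True reloads the best checkpoint before it runs.)

metrics = trainer.evaluate(eval_dataset=val_dataset)
print(metrics["eval_accuracy"], metrics["eval_loss"])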

The data collator is copied from the tutorial:

from dataclasses import dataclass
from typing import Optional, Union

import torch
from transformers.tokenization_utils_base import PaddingStrategy, PreTrainedTokenizerBase

@dataclass
class DataCollatorForMultipleChoice:
    """
    Data collator that will dynamically pad the inputs for multiple choice received.
    """

    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None

    def __call__(self, features):
        label_name = "label" if "label" in features[0].keys() else "labels"
        labels = [feature.pop(label_name) for feature in features]
        batch_size = len(features)
        num_choices = len(features[0]["input_ids"])
        flattened_features = [
            [{k: v[i] for k, v in feature.items()} for i in range(num_choices)] for feature in features
        ]
        flattened_features = sum(flattened_features, [])

        batch = self.tokenizer.pad(
            flattened_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors="pt",
        )

        batch = {k: v.view(batch_size, num_choices, -1) for k, v in batch.items()}
        batch["labels"] = torch.tensor(labels, dtype=torch.int64)
        return batch
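
To double-check the collator, inspecting a tiny batch shows the (batch_size, num_choices, seq_len) layout (illustrative sketch):

collator = DataCollatorForMultipleChoice(tokenizer=tokenizer)
# keep only the tensor-able fields; the mapped dataset rows still carry the original string columns
features = [
    {k: train_dataset[i][k] for k in ("input_ids", "attention_mask", "label")} for i in range(2)
]
batch = collator(features)
print(batch["input_ids"].shape)  # torch.Size([2, 2, seq_len])
print(batch["labels"])           # tensor with one 0/1 label per example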

Compute metrics:

import evaluate
import numpy as np

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred, accuracy):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
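
The predictions passed in have shape (num_examples, num_choices), one logit per choice, so the argmax over axis 1 picks the predicted choice. A quick illustrative check with fake logits:

dummy_logits = np.array([[0.1, 0.9], [2.0, -1.0], [0.3, 0.7]])  # three examples, two choices each
dummy_labels = np.array([1, 0, 0])
print(compute_metrics((dummy_logits, dummy_labels), accuracy))  # {'accuracy': 0.666...} (2 of 3 correct)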