The difference between Seq2SeqDataset.collate_fn and Seq2SeqDataCollator._encode


I’m yusukemori, who asked about “Seq2SeqTrainer” yesterday (How to use Seq2seq Trainer with my original "[MASK]").

Thanks to the helpful comment, I’m now trying to implement my customized version of Seq2SeqTrainer.

Now I have a question about Seq2SeqDataset and Seq2SeqDataCollator in examples/seq2seq/ (part of the code is shown below).

It seems Seq2SeqDataset has its collate_fn as a method.
However, Seq2SeqDataCollator doesn’t use Seq2SeqDataset.collate_fn, but has its own method _encode.

Could you please tell me what the difference between these two methods is, and when to use each of them?
Or should I use both of them to run Seq2SeqTrainer?

Thank you in advance.


class Seq2SeqDataset(AbstractSeq2SeqDataset):
    """A dataset that calls prepare_seq2seq_batch."""

    def __getitem__(self, index) -> Dict[str, str]:
        index = index + 1  # linecache starts at 1
        source_line = self.prefix + linecache.getline(str(self.src_file), index).rstrip("\n")
        tgt_line = linecache.getline(str(self.tgt_file), index).rstrip("\n")
        assert source_line, f"empty source line for index {index}"
        assert tgt_line, f"empty tgt line for index {index}"
        return {"tgt_texts": tgt_line, "src_texts": source_line, "id": index - 1}

    def collate_fn(self, batch) -> Dict[str, torch.Tensor]:
        """Call prepare_seq2seq_batch."""
        batch_encoding: Dict[str, torch.Tensor] = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            return_tensors="pt",
        ).data
        batch_encoding["ids"] = torch.tensor([x["id"] for x in batch])
        return batch_encoding

class Seq2SeqDataCollator:
    def __init__(self, tokenizer, data_args, tpu_num_cores=None):
        self.tokenizer = tokenizer
        self.pad_token_id = tokenizer.pad_token_id
        assert (
            self.pad_token_id is not None
        ), f"pad_token_id is not defined for ({self.tokenizer.__class__.__name__}), it must be defined."
        self.data_args = data_args
        self.tpu_num_cores = tpu_num_cores
        self.dataset_kwargs = {"add_prefix_space": isinstance(tokenizer, BartTokenizer)}
        if data_args.src_lang is not None:
            self.dataset_kwargs["src_lang"] = data_args.src_lang
        if data_args.tgt_lang is not None:
            self.dataset_kwargs["tgt_lang"] = data_args.tgt_lang

    def __call__(self, batch) -> Dict[str, torch.Tensor]:
        if hasattr(self.tokenizer, "prepare_seq2seq_batch"):
            batch = self._encode(batch)
            input_ids, attention_mask, labels = (
                batch["input_ids"],
                batch["attention_mask"],
                batch["labels"],
            )
        else:
            input_ids = torch.stack([x["input_ids"] for x in batch])
            attention_mask = torch.stack([x["attention_mask"] for x in batch])
            labels = torch.stack([x["labels"] for x in batch])

            labels = trim_batch(labels, self.pad_token_id)
            input_ids, attention_mask = trim_batch(input_ids, self.pad_token_id, attention_mask=attention_mask)

        if isinstance(self.tokenizer, T5Tokenizer):
            decoder_input_ids = self._shift_right_t5(labels)
        else:
            decoder_input_ids = shift_tokens_right(labels, self.pad_token_id)

        batch = {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "decoder_input_ids": decoder_input_ids,
            "labels": labels,
        }
        return batch

    def _shift_right_t5(self, input_ids):
        # shift inputs to the right
        shifted_input_ids = input_ids.new_zeros(input_ids.shape)
        shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
        shifted_input_ids[..., 0] = self.pad_token_id
        return shifted_input_ids

    def _encode(self, batch) -> Dict[str, torch.Tensor]:
        batch_encoding = self.tokenizer.prepare_seq2seq_batch(
            [x["src_texts"] for x in batch],
            tgt_texts=[x["tgt_texts"] for x in batch],
            padding="max_length" if self.tpu_num_cores is not None else "longest",  # TPU hack
            return_tensors="pt",
            **self.dataset_kwargs,
        )
        return batch_encoding.data

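To check my understanding of what __call__ does, here is a minimal pure-Python sketch of the trimming and right-shifting steps (plain lists instead of tensors, and pad_token_id assumed to be 0 for illustration; the real code works on torch tensors):

```python
PAD = 0  # assumed pad_token_id, for illustration only


def trim_batch_sketch(rows, pad_token_id=PAD):
    # Drop trailing all-pad columns, mirroring what utils.trim_batch does
    # (the real implementation builds a keep-column mask over tensors).
    keep = max(sum(tok != pad_token_id for tok in row) for row in rows)
    return [row[:keep] for row in rows]


def shift_right_sketch(row, pad_token_id=PAD):
    # Prepend the pad token and drop the last token, like _shift_right_t5.
    return [pad_token_id] + row[:-1]


labels = [[5, 6, 7, 0], [8, 9, 0, 0]]
trimmed = trim_batch_sketch(labels)  # [[5, 6, 7], [8, 9, 0]]
decoder_input_ids = [shift_right_sketch(r) for r in trimmed]  # [[0, 5, 6], [0, 8, 9]]
```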
Hi @yusukemori
The _encode method does the same work as collate_fn.

The difference is that Seq2SeqTrainer also supports TPU, and for that the padding needs to be handled differently. It also prepares the correct labels and decoder_input_ids rather than doing this inside the trainer.
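To illustrate the padding difference, here is a quick sketch with hypothetical token ids and pad_token_id = 0: padding="longest" gives each batch a different shape, while padding="max_length" keeps the shape fixed across batches, which XLA/TPU compilation prefers.

```python
def pad_sketch(seqs, pad_token_id=0, max_length=None):
    # max_length=None mimics padding="longest": pad to the longest sequence
    # in this batch, so the tensor shape varies from batch to batch.
    # A fixed max_length mimics padding="max_length": every batch gets the
    # same shape, avoiding repeated XLA recompilation on TPU.
    target = max_length if max_length is not None else max(len(s) for s in seqs)
    return [s + [pad_token_id] * (target - len(s)) for s in seqs]


batch = [[5, 6], [7, 8, 9]]
pad_sketch(batch)                # "longest"    -> [[5, 6, 0], [7, 8, 9]]
pad_sketch(batch, max_length=5)  # "max_length" -> [[5, 6, 0, 0, 0], [7, 8, 9, 0, 0]]
```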

If you are using Seq2SeqTrainer, use Seq2SeqDataCollator.


Hi @valhalla

Thank you for your detailed explanation!

Now I understand that:
_encode does the same work as collate_fn, but _encode is used with Seq2SeqTrainer, which supports TPU; preparing the labels and decoder_input_ids is also done by this method.
If I want to use Seq2SeqTrainer, I should use Seq2SeqDataCollator (and there may still be cases where I should use collate_fn, when I need to do something without Seq2SeqTrainer).

Thank you again for your help!