T5 Transformer fine-tuning issue

Hi all,

I am fine-tuning the T5 transformer for a paraphrasing task. However, after the data preparation step, the fine-tuning process gets stuck: there is no error message and no crash. Here is the output:

b'Skipping line 102: expected 5 fields, saw 6\nSkipping line 656: expected 5 fields, saw 6\nSkipping line 867: expected 5 fields, saw 6\nSkipping line 880: expected 5 fields, saw 6\nSkipping line 980: expected 5 fields, saw 6\nSkipping line 1439: expected 5 fields, saw 6\nSkipping line 1473: expected 5 fields, saw 6\nSkipping line 1822: expected 5 fields, saw 6\nSkipping line 1952: expected 5 fields, saw 6\nSkipping line 2009: expected 5 fields, saw 6\nSkipping line 2230: expected 5 fields, saw 6\nSkipping line 2506: expected 5 fields, saw 6\nSkipping line 2523: expected 5 fields, saw 6\nSkipping line 2809: expected 5 fields, saw 6\nSkipping line 2887: expected 5 fields, saw 6\nSkipping line 2920: expected 5 fields, saw 6\nSkipping line 2944: expected 5 fields, saw 6\nSkipping line 3241: expected 5 fields, saw 6\nSkipping line 3358: expected 5 fields, saw 6\nSkipping line 3459: expected 5 fields, saw 6\nSkipping line 3491: expected 5 fields, saw 6\nSkipping line 3643: expected 5 fields, saw 6\nSkipping line 3696: expected 5 fields, saw 6\nSkipping line 3955: expected 5 fields, saw 6\n'
b'Skipping line 34: expected 5 fields, saw 6\nSkipping line 121: expected 5 fields, saw 6\nSkipping line 211: expected 5 fields, saw 6\nSkipping line 263: expected 5 fields, saw 6\nSkipping line 345: expected 5 fields, saw 6\nSkipping line 696: expected 5 fields, saw 6\nSkipping line 733: expected 5 fields, saw 6\nSkipping line 847: expected 5 fields, saw 6\nSkipping line 1392: expected 5 fields, saw 6\nSkipping line 1467: expected 5 fields, saw 6\nSkipping line 1551: expected 5 fields, saw 6\n'
INFO:simpletransformers.t5.t5_utils: Creating features from dataset file at cache_dir/
2661
1088
            prefix  ...                                        target_text
1       paraphrase  ...  The 1975 -- 76 season of the National Basketba...
3       paraphrase  ...  The results are high when comparable flow rate...
4       paraphrase  ...  It is the seat of the district of Zerendi in A...
5       paraphrase  ...  William Henry Harman was born in Waynesboro, V...
7       paraphrase  ...  Given a discrete set of probabilities formula ...
...            ...  ...                                                ...
259195  paraphrase  ...     What is the best Jewish deli in New York City?
339531  paraphrase  ...  How will abolishing Rs. 500 and Rs. 1000 notes...
233481  paraphrase  ...  I feel very sleepy in the afternoon. How can I...
49452   paraphrase  ...                 What's it like to work for a Lyft?
377421  paraphrase  ...  How do you compare eukaryotic and prokaryotic ...
 
[136422 rows x 3 columns]
 0%|          | 0/136422 [00:00<?, ?it/s]
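
As a side note, I believe the b'Skipping line ...' warnings are harmless: they look like pandas dropping malformed rows while it reads the MSRP files inside load_data (I assume the b'...' wrapping is just captured stderr bytes). The snippet below is only my guess at what load_data does internally, but it reproduces exactly this kind of warning:

import pandas as pd

# My assumption about how load_data reads the MSRP file: with
# error_bad_lines=False, pandas skips rows that contain stray tabs and
# prints "Skipping line N: expected 5 fields, saw 6" to stderr.
df = pd.read_csv(
    "data/msr_paraphrase_train.txt",
    sep="\t",
    error_bad_lines=False,  # drop malformed rows instead of raising
    warn_bad_lines=True,    # keep the "Skipping line ..." warnings
)

So the skipped lines should not be the problem; the hang happens after them.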

The script gets stuck at this last step (the progress bar stays at 0%) and never advances. Here is the code I use:

import logging

import pandas as pd
from sklearn.model_selection import train_test_split

from simpletransformers.t5 import T5Model, T5Args

from data.utils import load_data, clean_unnecessary_spaces


logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.ERROR)

# Google Data
train_df = pd.read_csv("data/train.tsv", sep="\t").astype(str)
eval_df = pd.read_csv("data/dev.tsv", sep="\t").astype(str)

train_df = train_df.loc[train_df["label"] == "1"]
eval_df = eval_df.loc[eval_df["label"] == "1"]

train_df = train_df.rename(
    columns={"sentence1": "input_text", "sentence2": "target_text"}
)
eval_df = eval_df.rename(
    columns={"sentence1": "input_text", "sentence2": "target_text"}
)

train_df = train_df[["input_text", "target_text"]]
eval_df = eval_df[["input_text", "target_text"]]

train_df["prefix"] = "paraphrase"
eval_df["prefix"] = "paraphrase"

# MSRP Data
msrp_train_df = load_data(
    "data/msr_paraphrase_train.txt", "#1 String", "#2 String", "Quality"
)
msrp_eval_df = load_data(
    "data/msr_paraphrase_test.txt", "#1 String", "#2 String", "Quality"
)
print(len(msrp_train_df))
print(len(msrp_eval_df))

train_df = pd.concat([train_df, msrp_train_df])
eval_df = pd.concat([eval_df, msrp_eval_df])

# Quora Data

# The Quora dataset is not separated into train/test, so we do it manually
# the first time (a fixed random_state would make the split reproducible).
df = load_data(
    "data/quora_duplicate_questions.tsv", "question1", "question2", "is_duplicate"
)

q_train, q_test = train_test_split(df)

q_train.to_csv("data/quora_train.tsv", sep="\t")
q_test.to_csv("data/quora_test.tsv", sep="\t")

train_df = pd.concat([train_df, q_train])
eval_df = pd.concat([eval_df, q_test])

train_df = train_df[["prefix", "input_text", "target_text"]]
eval_df = eval_df[["prefix", "input_text", "target_text"]]


train_df = train_df.dropna()
eval_df = eval_df.dropna()

train_df["input_text"] = train_df["input_text"].apply(clean_unnecessary_spaces)
train_df["target_text"] = train_df["target_text"].apply(clean_unnecessary_spaces)

eval_df["input_text"] = eval_df["input_text"].apply(clean_unnecessary_spaces)
eval_df["target_text"] = eval_df["target_text"].apply(clean_unnecessary_spaces)

print(train_df)


model_args = T5Args()
model_args.num_train_epochs = 200
model_args.no_save = True
model_args.evaluate_generated_text = True
model_args.evaluate_during_training = True
model_args.evaluate_during_training_verbose = True

model = T5Model("t5-large", model_args)

def count_matches(labels, preds):
    # Debugging output: show the reference labels next to the generated predictions.
    print(labels)
    print(preds)
    return sum(1 for label, pred in zip(labels, preds) if label == pred)

model.train_model(train_df, eval_data=eval_df, matches=count_matches)

I have searched online but found no solution. Please help me fix this issue.
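
For what it's worth, the hang happens right after the "Creating features from dataset file" INFO line, before any training step runs, so I suspect the tokenization/feature-creation stage rather than training itself. This is what I am planning to try next; I am assuming T5Args inherits the usual simpletransformers multiprocessing settings, and I have not confirmed that either line actually helps:

# Untested ideas (assuming T5Args exposes the shared simpletransformers
# multiprocessing options; not verified to fix the hang):
model_args.use_multiprocessing = False  # tokenize in the main process instead of a worker pool
model_args.process_count = 1            # or fall back to a single worker

Does this look like the right direction, or is something else wrong in my setup?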

I really appreciate any help you can provide.