Hello All,
I am a beginner with very little experience training transformers. I have gone through the Hugging Face course on training transformers, and now I am trying to use 3 text input features to fine-tune a BERT-cased model for sequence classification. I started with 2 text features, feeding them to the auto tokenizer and then feeding the output to the model for training. For 2 input features, the tokenize function looks like this:
def tokenize_function(examples):
    return tokenizer(examples["feature_1"], examples["feature_2"], padding="max_length", truncation=True)
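(Following the course, I apply this with dataset.map(tokenize_function, batched=True) and then train on the tokenized output, and that works fine with the two features.)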
I then tried adding the third feature like this:
def tokenize_function(examples):
    return tokenizer(examples["feature_1"], examples["feature_2"],
                     examples["feature_3"], padding="max_length", truncation=True)
but it raised an error saying something like "can't convert ["feature_3"]". As far as I can tell, this is because the tokenizer only accepts two text sequences (text and text_pair), so the third positional argument is not treated as another text input.
Could you please help me understand whether there is a simple way to feed 3 text inputs to a BERT model for training?
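In case it helps to show what I am after, this is the kind of workaround I have been considering: joining two of the features into one string (with the tokenizer's sep_token between them) so that the tokenizer still only receives two text sequences. This is only a rough sketch, assuming the three features are plain strings and the function is used with a batched dataset.map, and I am not sure it is the right approach:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_function(examples):
    # Merge feature_1 and feature_2 into a single string, separated by the
    # tokenizer's SEP token, so the tokenizer call still has only two
    # text sequences (text and text_pair).
    merged = [
        f1 + " " + tokenizer.sep_token + " " + f2
        for f1, f2 in zip(examples["feature_1"], examples["feature_2"])
    ]
    return tokenizer(merged, examples["feature_3"],
                     padding="max_length", truncation=True)

I am also not sure how this interacts with token_type_ids (the merged pair would share one segment id and feature_3 would get the other), which is part of why I am asking whether there is a more standard way.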