No, you do not need to override those methods; they work as-is. But taking a closer look at your code: why do you need a separate wrapper model at all? Since you do not add any layers on top, you can use BertForSequenceClassification directly. Something like this (note that with num_labels=1 the classification head acts as a regression head):
import torch
from transformers import AutoConfig, BertForSequenceClassification

num_labels = 1
config = AutoConfig.from_pretrained("bert-base-uncased",
                                    num_labels=num_labels,
                                    output_attentions=False,
                                    output_hidden_states=False)
bert = BertForSequenceClassification.from_pretrained("bert-base-uncased", config=config)
# train model here...
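# A minimal training-step sketch (hypothetical: assumes a `train_dataloader`
# that yields dicts with input_ids, attention_mask and labels):
optimizer = torch.optim.AdamW(bert.parameters(), lr=2e-5)
bert.train()
for batch in train_dataloader:
    optimizer.zero_grad()
    outputs = bert(**batch)  # when labels are passed, the model returns a loss
    outputs.loss.backward()  # for num_labels=1 this is an MSE (regression) loss
    optimizer.step()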
# Saving/loading using built-in functionality
bert.save_pretrained(save_dir)
# Load the correct weights directly
bert = BertForSequenceClassification.from_pretrained(save_dir,
                                                     num_labels=num_labels,
                                                     output_attentions=False,
                                                     output_hidden_states=False)
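# Optional sanity check that the reloaded model runs (the sentence is
# arbitrary; any short input works):
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("This is a test sentence.", return_tensors="pt")
bert.eval()
with torch.no_grad():
    prediction = bert(**inputs).logits  # shape (1, num_labels)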
# ...or using your own save/load method
checkpoint = {"epochs": epochs, "state_dict": bert.state_dict()}
torch.save(checkpoint, save_path)

checkpoint = torch.load(save_path)
# No from_pretrained here, so we don't unnecessarily load the pretrained weights twice
bert = BertForSequenceClassification(config)
bert.load_state_dict(checkpoint["state_dict"])
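One caveat with the manual route: BertForSequenceClassification(config) needs the same config at load time, so in a fresh process you have to rebuild it before restoring the state dict. A minimal sketch, assuming the same settings as above:

# Rebuild the config in a new session, then restore the weights
config = AutoConfig.from_pretrained("bert-base-uncased",
                                    num_labels=1,
                                    output_attentions=False,
                                    output_hidden_states=False)
checkpoint = torch.load(save_path)
bert = BertForSequenceClassification(config)
bert.load_state_dict(checkpoint["state_dict"])
start_epoch = checkpoint["epochs"]  # resume training from here if desired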