I've seen people set `num_hidden_layers` when defining their model for training.
I'm not sure what this does, though. If I set it to 10 after loading a finetuned model, will I only be using the first 10 layers?
More specifically, I loaded a finetuned model for QA like this:
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = 'twmkn9/bert-base-uncased-squad2'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

model.config.num_hidden_layers = 10
```
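For what it's worth, my understanding (which may be wrong) is that the layer stack is built from the config at construction time, so mutating the config afterwards doesn't rebuild the modules. A quick way to check, using a tiny throwaway config (the small sizes here are arbitrary, just to keep it fast):

```python
from transformers import BertConfig, BertModel

# Tiny, untrained config purely for illustration; the sizes are arbitrary
config = BertConfig(
    num_hidden_layers=10,
    hidden_size=32,
    num_attention_heads=2,
    intermediate_size=64,
)
model = BertModel(config)
print(len(model.encoder.layer))  # prints 10: layers are built from the config

# Changing the config after construction does not rebuild the module list
model.config.num_hidden_layers = 5
print(len(model.encoder.layer))  # still prints 10
```

So it looks like setting `model.config.num_hidden_layers` on an already-loaded model has no effect on the layers that actually run, but I'd like confirmation.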
Does that mean that only the first 10 layers are active now?
And if so, do I need to add a layer to process the output so that I can still get the SQuAD output?