How to save models after discarding some layers?


I am trying to fine-tune bert-base-uncased while removing some of its encoder layers. I found this solution for removing layers from the encoder in this issue:

import copy
import torch.nn as nn

def deleteEncodingLayers(model, num_layers_to_keep):  # must pass in the full bert model
    oldModuleList = model.bert.encoder.layer
    newModuleList = nn.ModuleList()

    # Now iterate over all layers, keeping only the first num_layers_to_keep.
    for i in range(0, num_layers_to_keep):
        newModuleList.append(oldModuleList[i])

    # create a copy of the model, modify it with the new list, and return
    copyOfModel = copy.deepcopy(model)
    copyOfModel.bert.encoder.layer = newModuleList

    return copyOfModel
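For context, this is roughly how I call it (a minimal sketch; BertForSequenceClassification is just the head I happen to use, and keeping 6 layers is my example):

from transformers import BertForSequenceClassification

# load the full 12-layer model, then keep only the first 6 encoder layers
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
truncated = deleteEncodingLayers(model, 6)
print(len(truncated.bert.encoder.layer))  # prints 6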

It works, but when I use .save_pretrained() and then AutoModel.from_pretrained() to load it, it tries to load all the layers of the original model. I have looked at the config JSON, and num_hidden_layers is set to 12. I tried setting it to 6, but then I also get this warning:

Some weights of the model checkpoint at x/x/ were not used when initializing BertModel: ['classifier.bias', 'classifier.weight']

So I’m not sure whether the weights are being loaded properly.
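For reference, this is roughly what I tried when saving and reloading, continuing from the snippet above (a minimal sketch; ./bert-6-layers is just a placeholder path):

from transformers import AutoModel

# update the config so it matches the truncated encoder, then save
truncated.config.num_hidden_layers = 6
truncated.save_pretrained("./bert-6-layers")

# reloading with AutoModel is where I see the warning about unused classifier weights
reloaded = AutoModel.from_pretrained("./bert-6-layers")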

Is there any way to take bert-base-uncased, discard some of its encoder layers, train it, and save this truncated version?