Is it possible to freeze certain layers in ALBERT for fine-tuning?

I'm trying to fine-tune the ALBERT model, and my professor advised me to either freeze the early layers of the base model or, at the very least, not freeze its last layer. Is this kind of selective fine-tuning possible with ALBERT in TensorFlow? Since ALBERT shares its parameters across all layers, doesn't that make freezing only certain layers impossible?
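For context, this is roughly what I had in mind with the Hugging Face `TFAlbertModel`. The attribute names below are just my guess from poking at the model's structure, so they may differ in other transformers versions; the point is that with weight sharing there seems to be only one set of transformer weights to freeze, not one per layer:

```python
import tensorflow as tf
from transformers import TFAlbertModel

model = TFAlbertModel.from_pretrained("albert-base-v2")

# ALBERT reuses one shared group of transformer weights for every "layer",
# so (as far as I can tell) there is no way to freeze only the first N layers:
# freezing the shared weights freezes all layers at once.
model.albert.embeddings.trainable = False      # embedding lookup + factorized projection
# model.albert.encoder.trainable = False       # the shared transformer weights (all layers)

# Inspect what would still be updated during fine-tuning.
for v in model.trainable_variables:
    print(v.name, v.shape)
```

Is freezing just the embeddings (and maybe the pooler) the only granularity available here, or is there something finer I'm missing?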
