How to change dropout in a pre-trained model for fine-tuning GPT

Hello everyone,

I hope you are well. I am fine-tuning GPT-Neo, and to reduce overfitting I want to increase the dropout to 0.2. If I change it with the command below, can I use the model for fine-tuning directly, or does it need to be trained from scratch?

from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-Neo is a causal (autoregressive) LM, so AutoModelForCausalLM is the
# right auto class, and the checkpoint needs a full Hub id, e.g.
# "EleutherAI/gpt-neo-125M" (or the 1.3B / 2.7B variants).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

# Dropout defaults to 0.0 in the GPT-Neo config; the values can be
# overridden at load time via keyword arguments.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-125M",
    embed_dropout=0.2,
    resid_dropout=0.2,
    attention_dropout=0.2,
)
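One quick way to check whether the overrides actually take effect is to inspect the config and the dropout modules after building the model. A minimal sketch, assuming the `GPTNeoConfig` field names `embed_dropout`, `resid_dropout` and `attention_dropout`; the tiny layer sizes here are only so the check runs without downloading a checkpoint:

```python
from transformers import GPTNeoConfig, GPTNeoForCausalLM

# Tiny config purely to verify that the dropout overrides are wired through;
# for real fine-tuning the config would come from the pretrained checkpoint.
config = GPTNeoConfig(
    hidden_size=64,
    num_layers=2,
    num_heads=4,
    attention_types=[[["global", "local"], 1]],  # must expand to num_layers entries
    embed_dropout=0.2,
    resid_dropout=0.2,
    attention_dropout=0.2,
)
model = GPTNeoForCausalLM(config)

# The embedding dropout module picks up the configured rate.
print(model.config.embed_dropout)   # 0.2
print(model.transformer.drop.p)     # 0.2
```

Since dropout rates only affect the dropout layers and not the weight shapes, loading the pretrained weights into a config with changed dropout works without retraining from scratch.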