How to continue pre-training GPT-2?

Hello, guys!
I want to further pre-train GPT-2 on a small set of specific texts, and the library provides scripts for this.

However, I found that the model in the scripts is loaded via AutoModelForCausalLM, while in my case GPT-2 is a GPT2LMHeadModel. Can this model be used for continued pre-training as well?
If so, should I use the script as-is, or adapt it to my model?

Thanks a lot!

Hi @xiaoyaoyou :hugs: Of course you can use the script to fine-tune on your specific texts. Just specify --model_type gpt2 (plus whatever other options your setup needs), and you will train the new model for your downstream task. I'd suggest using this script to train the model, because it readily supports DDP (DistributedDataParallel).
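In case the class mismatch is the worry: AutoModelForCausalLM is just a dispatcher that resolves a GPT-2 config or checkpoint to GPT2LMHeadModel, so the script and your model use the same class. A minimal sketch to verify this (using a tiny randomly initialized config so nothing is downloaded; the small n_layer/n_head/n_embd values are arbitrary):

```python
from transformers import AutoModelForCausalLM, GPT2Config, GPT2LMHeadModel

# Tiny GPT-2 config so the model is built from scratch, no checkpoint download
config = GPT2Config(n_layer=2, n_head=2, n_embd=64)

# The Auto class dispatches on the config type and builds a GPT2LMHeadModel
model = AutoModelForCausalLM.from_config(config)
print(type(model).__name__)  # GPT2LMHeadModel
```

The same dispatch happens with AutoModelForCausalLM.from_pretrained("gpt2"), so any checkpoint you produce with the script can be reloaded directly as GPT2LMHeadModel and vice versa.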