Pretrain GPT-2 from scratch in Mongolian

GPT2 for Mongolian

The goal is to create a strong language generation model for Mongolian :mongolia:
Since the initial code and data pipelines were largely written by @patrickvonplaten and other Hugging Face members, it should not be too hard to get first results.


Randomly initialized GPT2 model


We can use OSCAR, which is available through the `datasets` library.


A causal language modeling script for Flax is available here. It can be used with little to no code changes.
If there is time left, I'd love to try some private crawling and integrate it into datasets.

Expected Outcome

An understandable Mongolian text generation model


Lack of data → the Mongolian portion of OSCAR is only 2.2 GB. We may need to research ways to acquire more data.


I am also interested in pretraining a Mongolian RoBERTa/DistilBERT on OSCAR and fine-tuning it on the translated Mongolian SQuAD2.0.

Interested in pretraining GPT-2 from scratch and fine-tuning it on Mongolian SQuAD2.0


Great! Please register here.

Awesome, officially defining it!
