Pretrain RoBERTa from scratch in Thai

RoBERTa/BERT for Thai

Currently, there is only a very limited number of BERT-like models for Thai on the Hugging Face Hub. For this project, the goal is to create a RoBERTa/BERT model for the Thai language only.

Model

A randomly initialized RoBERTa/BERT model
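
For illustration, such a model could be instantiated along these lines. This is a minimal sketch only: the configuration values mirror RoBERTa-base, and the vocabulary size is an assumption that has to match whatever tokenizer is trained on the Thai corpus.

```python
from transformers import RobertaConfig, FlaxRobertaForMaskedLM

# RoBERTa-base-like configuration; vocab_size is an assumption and must match
# the tokenizer trained on the Thai corpus.
config = RobertaConfig(
    vocab_size=50265,
    max_position_embeddings=514,
    num_hidden_layers=12,
    num_attention_heads=12,
    hidden_size=768,
    type_vocab_size=1,
)

# Building the Flax model from a config (rather than from_pretrained)
# yields randomly initialized weights.
model = FlaxRobertaForMaskedLM(config, seed=0)
```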

Datasets

One can make use of OSCAR; the dataset is also available through the `datasets` library here: oscar · Datasets at Hugging Face.
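
As a sketch, the Thai subset could be loaded like this. The config name `unshuffled_deduplicated_th` follows OSCAR's naming scheme but is an assumption that should be verified on the dataset page.

```python
from datasets import load_dataset

# Load the deduplicated Thai subset of OSCAR; the config name is an assumption
# based on OSCAR's naming scheme and should be checked on the dataset page.
oscar_th = load_dataset("oscar", "unshuffled_deduplicated_th", split="train")

print(oscar_th)                    # number of documents
print(oscar_th[0]["text"][:200])   # first 200 characters of the first document
```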

Available training scripts

A masked language modeling script for Flax is available here. It can be used pretty much as-is, without any code changes; see the tokenizer sketch below for the one preparatory step that is still needed.
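
One thing the script does expect is a tokenizer, so a Thai tokenizer has to be trained on the corpus beforehand. A minimal sketch, assuming a byte-level BPE tokenizer and a hypothetical plain-text export of the Thai split (file path, vocabulary size, and output directory are all assumptions):

```python
from tokenizers import ByteLevelBPETokenizer

# Sketch: train a byte-level BPE tokenizer on the Thai corpus before running
# the Flax MLM script. The file path and vocabulary size are assumptions.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["oscar_th.txt"],  # hypothetical plain-text export of the OSCAR Thai split
    vocab_size=50265,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

# Writes vocab.json and merges.txt, which the MLM script can load as its tokenizer.
tokenizer.save_model("thai-roberta")
```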

(Optional) Desired project outcome

The desired project output is a strong RoBERTa/BERT model in Thai.

(Optional) Challenges

The OSCAR dataset might be too small (it has < 20 GB of data for Thai). It might also be important to find datasets on which the BERT-like model can be evaluated after pretraining in Thai. Having found a dataset to fine-tune the pretrained BERT-like model on, one can make use of the text-classification script here.
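
As a rough sketch of what that fine-tuning setup could look like (the dataset name `wisesight_sentiment`, the checkpoint path `thai-roberta`, and the label count are all assumptions for illustration; any labelled Thai dataset would work):

```python
from datasets import load_dataset
from transformers import AutoTokenizer, FlaxAutoModelForSequenceClassification

# Sketch of the evaluation setup. Dataset name, checkpoint path, and number of
# labels are assumptions for illustration only.
dataset = load_dataset("wisesight_sentiment")              # a Thai sentiment corpus on the Hub
tokenizer = AutoTokenizer.from_pretrained("thai-roberta")  # hypothetical pretrained checkpoint
model = FlaxAutoModelForSequenceClassification.from_pretrained(
    "thai-roberta",
    num_labels=4,  # assumed label count; take it from the dataset's features in practice
)
```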

(Optional) Links to read upon

The most important read would be the following Colab notebook:


Hi there. I have followed this blog for PyTorch before and would love to do this in Flax/JAX too 🙂

I will take a look and report back on the status soon.

Let’s define it.

Alright. As far as I know, once the LM is ready, we will start by evaluating it on a downstream task against an existing benchmark like this one.
