Pretrain GPT2 from scratch for Bengali

1. GPT2 for Bengali

Currently, there is no GPT2 model on the Hub that was trained from scratch for Bengali. The goal of this project is to create a strong language generation model for Bengali using the GPT2 architecture.

2. Language

Bengali.

3. Model

A randomly initialized GPT2 model.
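
For concreteness, here is a minimal sketch of instantiating such a model in Flax. The config values below are illustrative placeholders (roughly GPT2-small), not project decisions; in particular, the real vocab_size should match the Bengali tokenizer the team ends up training.

```python
from transformers import GPT2Config, FlaxGPT2LMHeadModel

# Illustrative hyperparameters; all values here are placeholders
# to be decided by the team.
config = GPT2Config(
    vocab_size=50_257,
    n_positions=1024,
    n_embd=768,
    n_layer=12,
    n_head=12,
)

# Building the Flax model from a config (instead of from_pretrained)
# yields randomly initialized weights.
model = FlaxGPT2LMHeadModel(config, seed=0)
```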

4. Datasets

One can make use of OSCAR; the dataset is also available through the datasets library (oscar · Datasets at Hugging Face). The Bengali portion of OSCAR totals 11 GB.

Another source is the mC4 dataset, which is available from AllenAI. The Bengali portion is 29 GB.
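
For reference, a minimal sketch of loading both corpora with the datasets library (the config names are taken from the respective dataset cards and may change across dataset versions):

```python
from datasets import load_dataset

# Bengali portion of OSCAR (~11 GB).
oscar_bn = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train")

# Bengali portion of AllenAI's mC4 (~29 GB); streaming avoids downloading
# the full dump before training starts.
mc4_bn = load_dataset("mc4", "bn", split="train", streaming=True)

print(next(iter(mc4_bn))["text"][:200])
```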

5. Training scripts

A causal language modeling script for Flax is available here. It can be tweaked for training GPT2.
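
The key preprocessing step in that script is to tokenize the raw text, then concatenate and split it into fixed-length blocks for causal LM training. A rough sketch of that step, reusing oscar_bn from the snippet above and assuming an already-trained `tokenizer` (block_size and the column names are assumptions):

```python
from itertools import chain

block_size = 512  # assumed; GPT2 supports up to 1024 positions

def tokenize_function(examples):
    # `tokenizer` is the Bengali tokenizer trained for this project.
    return tokenizer(examples["text"])

def group_texts(examples):
    # Concatenate all token lists, then cut them into block_size chunks
    # so every training example is a full-length sequence.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

lm_dataset = (
    oscar_bn.map(tokenize_function, batched=True, remove_columns=["id", "text"])
    .map(group_texts, batched=True)
)
```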

6. Challenges

  • Build a good tokenizer that covers the Bengali vocabulary properly, and make sure the LM doesn’t degenerate into a character-level LM (see the sketch below).
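
One plausible approach (a sketch, not a settled choice) is to train a byte-level BPE tokenizer on the corpus and then sanity-check that ordinary Bengali text is not being split into one token per character. The file name bn_corpus.txt is hypothetical:

```python
from tokenizers import ByteLevelBPETokenizer

# `bn_corpus.txt` is a hypothetical plain-text dump of the training corpus.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["bn_corpus.txt"],
    vocab_size=50_257,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)

# Sanity check: if this prints (nearly) one token per character, the
# vocabulary does not cover Bengali properly.
encoding = tokenizer.encode("আমি বাংলায় গান গাই")
print(encoding.tokens)
```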

7. Desired project outcome

The desired project outcome is a GPT2 model that is able to generate fluent Bengali text.
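
Once trained, generation would look roughly like the sketch below ("bengali-gpt2" is a placeholder model name, and the sampling parameters are arbitrary):

```python
import jax
from transformers import FlaxGPT2LMHeadModel, GPT2TokenizerFast

# "bengali-gpt2" stands in for wherever the trained model is published.
tokenizer = GPT2TokenizerFast.from_pretrained("bengali-gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("bengali-gpt2")

inputs = tokenizer("আমি বাংলায়", return_tensors="np")
outputs = model.generate(
    inputs["input_ids"],
    max_length=50,
    do_sample=True,
    top_k=50,
    prng_key=jax.random.PRNGKey(0),
)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```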

8. Reads

The most important read would be the following colab:

Other reads that might be interesting include:

I am also a Bengali speaker. I am in!

A Bengali text generation model would be totally great. This would be revolutionary for the Bengali NLP research community! :heart:
I’m sooo in!!! :boom: :fire:

(I really hope this topic gets selected :crossed_fingers:)

Text generation is the dream.
BTW, does it have to be GPT-2? Are there more compute- or sample-efficient models now?

This looks like a great topic! Let’s wait until Monday to see if more people are interested, and then make this an official team :slight_smile:

I’d like to join too. @sbmaruf @khalidsaifullaah

Let’s officially define this project :slight_smile:

Putting everybody in the official sheet here. More people can still join! Leave a comment here or on the sheet if you want to change something.

I would like to join this project @valhalla @patrickvonplaten

I would like to join this project @valhalla