What is the purpose of training the model in https://huggingface.co/blog/how-to-train

Hi,

I tried to learn about Transformers by following this article: How to train a new language model from scratch using Transformers and Tokenizers

But why do we need to train the EsperBERTo model? What problem does this article try to solve? Does it try to solve a classification problem?
I also don’t understand how the article verifies that the model actually works.
Could someone help?

Thanks,

Tom

Hello Tom :hugs:

BERT-like models can take a long time to train, so I think for this blog they wanted an easy-to-learn dataset. To your question: the article isn’t about classification. It pretrains a RoBERTa-like masked language model (EsperBERTo) from scratch, checks that it works with the fill-mask pipeline (there’s a sketch right after the quote below), and then fine-tunes it on part-of-speech tagging as a downstream task. As you can read in the post:

Esperanto is a constructed language with a goal of being easy to learn. We pick it for this demo for several reasons:

  • it is a relatively low-resource language (even though it’s spoken by ~2 million people) so this demo is less boring than training one more English model :grin:
  • its grammar is highly regular (e.g. all common nouns end in -o, all adjectives in -a) so we should get interesting linguistic results even on a small dataset.
  • finally, the overarching goal at the foundation of the language is to bring people closer (fostering world peace and international understanding) which one could argue is aligned with the goal of the NLP community :green_heart:
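
Here is roughly how the post verifies the pretrained model, using the fill-mask pipeline; the model path is a placeholder for wherever you saved your checkpoint:

```python
from transformers import pipeline

# Load the pretrained masked language model and its tokenizer
# from a local checkpoint directory (placeholder path).
fill_mask = pipeline(
    "fill-mask",
    model="./models/EsperBERTo-small",
    tokenizer="./models/EsperBERTo-small",
)

# Ask the model to fill in the masked token; a reasonably trained
# model should suggest plausible Esperanto words such as "brilas" (shines).
print(fill_mask("La suno <mask>."))
```

If the top predictions are sensible Esperanto, the pretraining worked; the post then goes further and fine-tunes the model on POS tagging.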

You can find many different datasets on the Hugging Face Hub.
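
For example, with the `datasets` library you could load the Esperanto portion of the OSCAR corpus, which is the data the post builds its training set from. This is just a sketch; the dataset and config names may have changed on the Hub:

```python
from datasets import load_dataset

# Esperanto subset of OSCAR (config name assumed; check the Hub
# for the current dataset and config naming, and note that some
# dataset scripts may require trust_remote_code=True).
dataset = load_dataset("oscar", "unshuffled_deduplicated_eo", split="train")

print(dataset[0]["text"])  # one raw Esperanto document
```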

I hope I understood your question right :slight_smile:

Got it. Thanks!