Please read the topic category description to understand what this is all about.
Most Transformer models for text summarization only support English documents. At the same time, there are now many pretrained BERT-like models in non-English languages. The goal of this project is to explore whether the EncoderDecoder architecture (see the Encoder Decoder Models page in the transformers documentation) can be used to create summarization models using just the pretrained weights of encoder-only models.
Your task is to pick a pretrained encoder in a non-English language and train it to summarise texts in that language.
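As a rough sketch of what this looks like in code: in practice you would warm-start both the encoder and the decoder from your chosen checkpoint with `EncoderDecoderModel.from_encoder_decoder_pretrained` (e.g. passing a model id like `bert-base-multilingual-cased`, which is just a stand-in here; use a checkpoint for your language). The snippet below wires up the same architecture from tiny randomly initialised configs so it runs without downloading any weights:

```python
# Offline sketch of the EncoderDecoder architecture in transformers.
# Warm-starting from real pretrained weights would instead look like:
#   model = EncoderDecoderModel.from_encoder_decoder_pretrained(
#       "bert-base-multilingual-cased", "bert-base-multilingual-cased"
#   )
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

# Tiny configs so the example is cheap to instantiate.
enc_cfg = BertConfig(
    vocab_size=1000, hidden_size=64, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=128,
)
# The decoder is the same BERT architecture, but flagged as a decoder
# with cross-attention so it can attend to the encoder's outputs.
dec_cfg = BertConfig(
    vocab_size=1000, hidden_size=64, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=128,
    is_decoder=True, add_cross_attention=True,
)

config = EncoderDecoderConfig.from_encoder_decoder_configs(enc_cfg, dec_cfg)
model = EncoderDecoderModel(config=config)
```

Note that before fine-tuning or generating you also need to set the special token ids on the config (e.g. `model.config.decoder_start_token_id` and `model.config.pad_token_id`) from your tokenizer.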
See the resources below for example models that people have fine-tuned using this architecture. Your task is to create your very own model with this technique!
Browse the summarization datasets on the Hub to get an appropriate corpus for this task.
Text summarization is a tricky NLP task, so the performance obtained with these models may not match what is observed for their English counterparts (where much more data is available).
- Create a Streamlit or Gradio app on Spaces that can summarize a document in your chosen language
- Don’t forget to push all your models and datasets to the Hub so others can build on them!
- Leveraging Pre-trained Checkpoints for Sequence Generation Tasks [PAPER]
- Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models [BLOG POST]
- Examples of these models on the Hub by @mrm8488: https://twitter.com/mrm8488/status/1458475725565141001?s=20
To chat and organise with other people interested in this project, head over to our Discord and:
Follow the instructions in the #join-course channel. Then join one of the following channels:
Just make sure you comment here to indicate that you’ll be contributing to this project!