How to fine-tune a BERT model into a summarizer

Hello,
I am very new to the NLP field. I have been given the task of making a summarizer by fine-tuning (is that the right word?) a pretrained BERT model. I will be researching some approaches myself, but I figured I might as well ask here in case I can't work it out on my own.
My question is: how do I do it? The only part I can picture is:

  1. Find a base pretrained model
  2. Train it on a text summarization dataset
  3. Evaluate it

Are there any steps I'm missing? Could I also get pointers for each step: how to find a good pretrained model, how to go about training it, and so on?
I will be using Python. Thanks for your attention!


Hi there! For this task you will want a sequence-to-sequence (encoder-decoder) model such as BART or T5 rather than BERT, which is encoder-only. We have a free course with a whole section focused on summarization; it should cover most of your needs, including data processing, metrics, and how to fine-tune: Main NLP tasks - Hugging Face Course
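
To give a concrete idea of what the course walks through, here is a minimal fine-tuning sketch using the `transformers` Seq2SeqTrainer. The checkpoint (`t5-small`), the dataset (`xsum`), and all hyperparameters below are placeholder choices to illustrate the workflow, not a definitive recipe; adapt them to your own data.

```python
# Minimal sketch: fine-tuning a seq2seq checkpoint for summarization.
# Assumes transformers + datasets are installed; model/dataset names are examples.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

checkpoint = "t5-small"  # any seq2seq checkpoint (BART, T5, ...) would work here
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Example dataset with "document"/"summary" columns; swap in your own data.
raw_datasets = load_dataset("xsum")

def preprocess(examples):
    # T5 expects a task prefix; other models simply ignore it.
    inputs = ["summarize: " + doc for doc in examples["document"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    # Tokenize the reference summaries as labels.
    labels = tokenizer(text_target=examples["summary"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw_datasets.map(
    preprocess, batched=True, remove_columns=raw_datasets["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="summarizer",
    evaluation_strategy="epoch",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    predict_with_generate=True,  # evaluate with model.generate() instead of teacher forcing
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```

For step 3 (evaluation), the standard summarization metric is ROUGE; the course chapter shows how to compute it with the `rouge` metric and plug it into the trainer's `compute_metrics`.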


Thank you very much for the answer, it helps a lot!
