Fine-tuning a Spanish BERT model

Hi,
how can I fine-tune the Spanish BERT model?

Do you want to fine-tune it for a specific task or with more text data?

Hi,
I want to fine-tune it with my own text data.

Which task in particular (classification, question answering, etc.)? There are quite a few examples here that you could adapt to your corpus / task: Examples — transformers 4.2.0 documentation

That may be the confusing part; "with my own text data" can mean two things:
are you trying to fine-tune for a certain task, as @lewtun mentioned? Then there are examples in the link he provided.
Or are you trying to fine-tune BERT itself (as language modeling) on domain-specific vocab/text to adjust it to a certain domain? For that you can use the first example in his link (language modeling): transformers/examples/language-modeling at master · huggingface/transformers · GitHub
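
To make the language-modeling route concrete, here is a minimal sketch of masked-LM fine-tuning with the Trainer API. The checkpoint name (dccuchile/bert-base-spanish-wwm-cased, i.e. BETO) and the file domain_corpus.txt are just assumptions standing in for whichever Spanish BERT and text file you actually use:

```python
# Minimal sketch: fine-tune a Spanish BERT as a masked language model
# on your own plain-text corpus (one document/sentence per line).
# Model name and file path below are placeholders, not prescriptions.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # assumed Spanish BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Load the raw text and tokenize it.
raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# The collator masks 15% of tokens at random, which is what MLM training expects.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="beto-domain-mlm",
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
trainer.save_model("beto-domain-mlm")
```

The run_mlm.py script in the linked examples folder wraps essentially these same steps behind a command line, so either route should work.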

Hi @VP1 and @lewtun ,
thank you very much for your kind reply. Yes, as you mentioned, @VP1, I am trying to fine-tune BERT (as language modeling) with domain-specific vocab/text. Thank you very much; I will try it.


@Armando, it should be quite easy to fine-tune with the language modeling scripts.
Let me know whether it works for this model in particular.
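
For what it's worth, a quick sanity check once training finishes is to load the checkpoint in a fill-mask pipeline and see whether domain vocabulary surfaces in the predictions. The output path and the Spanish test sentence below are hypothetical:

```python
# Sanity check for a fine-tuned masked-LM checkpoint (path is a placeholder).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="beto-domain-mlm")
# Any in-domain sentence with a [MASK] token works here.
for prediction in fill_mask("El paciente presenta un cuadro de [MASK] aguda."):
    print(prediction["token_str"], round(prediction["score"], 3))
```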

I am not sure about that; I tried different models, but not all of them worked for fine-tuning with domain-specific vocab.
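
One way to narrow down which checkpoints can be fine-tuned this way (the candidate list below is hypothetical; fill in the models you tried): AutoModelForMaskedLM raises an error for architectures with no masked-LM support, and logs a warning when the head weights have to be freshly initialized, which usually means the checkpoint wasn't saved for masked language modeling:

```python
# Probe whether candidate checkpoints load with a masked-LM head.
from transformers import AutoModelForMaskedLM

candidates = ["dccuchile/bert-base-spanish-wwm-cased"]  # placeholder; add your models
for name in candidates:
    try:
        AutoModelForMaskedLM.from_pretrained(name)
        print(f"{name}: loads for masked language modeling")
    except Exception as err:  # e.g. ValueError for unsupported architectures
        print(f"{name}: failed ({err})")
```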
