How to fine-tune LUKE for NER?

Hello. I am wondering if I can fine-tune LUKE on my own NER dataset. I am aware that LUKE has a unique architecture, so the code in the standard example notebook is off the table. I know Studio Ousia provides fine-tuning code at GitHub - studio-ousia/luke: LUKE -- Language Understanding with Knowledge-based Embeddings, but if I go that route, can I convert the resulting fine-tuned model to be transformers-compatible?
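Ideally, what I'd end up with is a checkpoint I can load through the standard transformers API, something like the sketch below (the local directory name is just a placeholder):

```python
from transformers import LukeTokenizer, LukeForEntitySpanClassification

# Placeholder path to a fine-tuned, transformers-compatible checkpoint
checkpoint_dir = "./my-finetuned-luke"

tokenizer = LukeTokenizer.from_pretrained(checkpoint_dir)
model = LukeForEntitySpanClassification.from_pretrained(checkpoint_dir)
```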

Hi Kerenza,

Were you able to make any progress on fine-tuning LukeForEntitySpanClassification for custom labels? I am also looking to fine-tune LUKE for an NER task with multi-token entities. Any help is much appreciated.
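For context, my understanding of the inference side is that the LUKE tokenizer takes explicit character-level entity_spans, so multi-token entities are covered by enumerating candidate word n-grams. Here is a sketch adapted from the model docs (the sentence and offsets are just an example):

```python
from transformers import LukeTokenizer, LukeForEntitySpanClassification

tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")

text = "Beyoncé lives in Los Angeles"

# Character offsets of each word; candidate entities are all word n-grams,
# which is how multi-token entities like "Los Angeles" get covered.
word_starts = [0, 8, 14, 17, 21]
word_ends = [7, 13, 16, 20, 28]
entity_spans = [(s, e) for i, s in enumerate(word_starts) for e in word_ends[i:]]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
logits = model(**inputs).logits
predictions = logits.argmax(-1).squeeze(0).tolist()

for span, label_id in zip(entity_spans, predictions):
    if label_id != 0:  # 0 is the "not an entity" class in this checkpoint
        print(text[span[0]:span[1]], model.config.id2label[label_id])
```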

Thanks

Hi, read through the README: GitHub - studio-ousia/luke: LUKE -- Language Understanding with Knowledge-based Embeddings

Hi,

We now have an example script that illustrates how to fine-tune LUKE for NER (and other token classification tasks): transformers/examples/research_projects/luke at master · huggingface/transformers · GitHub
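If you want to adapt it to your own label set, the gist is to initialize the entity span classification head with your labels and train as usual. A rough sketch (the labels, example text, and spans below are placeholders, not the script's actual setup):

```python
import torch
from transformers import LukeTokenizer, LukeForEntitySpanClassification

# Placeholder label set; index 0 is conventionally the non-entity class.
labels = ["O", "PER", "ORG", "LOC", "MISC"]
model = LukeForEntitySpanClassification.from_pretrained(
    "studio-ousia/luke-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)
tokenizer = LukeTokenizer.from_pretrained(
    "studio-ousia/luke-base", task="entity_span_classification"
)

text = "Dante was born in Florence."
entity_spans = [(0, 5), (18, 26)]     # candidate spans (character offsets)
span_labels = torch.tensor([[1, 3]])  # gold labels: PER, LOC (batch of 1)

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs, labels=span_labels)
outputs.loss.backward()               # plug into your usual training loop
```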

Hi @nielsr, I have been trying to run the script, but it fails on the conll2003 dataset during tokenization. Am I doing something wrong? Are there specific versions I should use?
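For reference, here is the minimal check I've been using to isolate it (the text reconstruction and span enumeration are my own approximation of what the script does, so they may differ):

```python
import datasets
import transformers
print(transformers.__version__, datasets.__version__)

from datasets import load_dataset
from transformers import LukeTokenizer

example = load_dataset("conll2003", split="train[:1]")[0]
tokens = example["tokens"]

# Rebuild the raw text plus character-level word boundaries, which the LUKE
# tokenizer needs for its entity_spans argument.
text, starts, ends, pos = "", [], [], 0
for tok in tokens:
    starts.append(pos)
    text += tok + " "
    ends.append(pos + len(tok))
    pos += len(tok) + 1
text = text.rstrip()

tokenizer = LukeTokenizer.from_pretrained(
    "studio-ousia/luke-base", task="entity_span_classification"
)
# Candidate spans of up to three words, to keep the entity sequence short
spans = [(s, e) for i, s in enumerate(starts) for e in ends[i : i + 3]]
encoding = tokenizer(text, entity_spans=spans)
print(len(encoding["input_ids"]), len(spans))
```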