Token Classification Evaluator: IndexError when prediction is missing words

Hi,
I’m new to Hugging Face, and as a first step I tried to evaluate a few NER models on a dataset that I’ve uploaded to the Hub.
I first followed this How-To Guide on CoNLL-2003 without any issues, but when I applied the same code to my own dataset I got this error:

  File "/people/mmasson/.cache/JetBrains/RemoteDev/dist/d3daa55389d56_pycharm-professional-231.9011.9/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/people/mmasson/.cache/JetBrains/RemoteDev/dist/d3daa55389d56_pycharm-professional-231.9011.9/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/people/mmasson/workspace/ner-pico/task_evaluation/pico_evaluation.py", line 20, in <module>
    task_evaluator.compute(
  File "/people/mmasson/miniconda3/envs/ner_pico/lib/python3.9/site-packages/evaluate/evaluator/token_classification.py", line 253, in compute
    predictions = self.predictions_processor(predictions, data[input_column], join_by)
  File "/people/mmasson/miniconda3/envs/ner_pico/lib/python3.9/site-packages/evaluate/evaluator/token_classification.py", line 125, in predictions_processor
    while prediction[token_index]["start"] < word_offset[0]:
IndexError: list index out of range

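For reference, here is roughly what my evaluation script looks like; the model and dataset names below are placeholders rather than the ones I actually use, but the compute() call is the same as in the guide:

    import evaluate
    from datasets import load_dataset

    # Dataset I uploaded to the Hub (name is a placeholder)
    data = load_dataset("my-username/my-ner-dataset", split="test")

    task_evaluator = evaluate.evaluator("token-classification")

    # Same call as in the How-To Guide, only the model and dataset differ
    results = task_evaluator.compute(
        model_or_pipeline="my-username/my-ner-model",  # placeholder model id
        data=data,
        metric="seqeval",
        input_column="tokens",    # column holding the word lists
        label_column="ner_tags",  # column holding the NER label ids
        join_by=" ",              # words are joined with spaces before inference
    )
    print(results)
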
While trying to debug this, it seems that the problematic prediction (the 2nd one) is missing the end of the input text, which causes predictions_processor to go out of range.
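
Here is roughly how I checked that, by running the pipeline directly on the same text the evaluator builds (again, the model and dataset names are placeholders):

    from transformers import pipeline
    from datasets import load_dataset

    data = load_dataset("my-username/my-ner-dataset", split="test")
    ner = pipeline("token-classification", model="my-username/my-ner-model")

    # Rebuild the exact text the evaluator sends to the pipeline (join_by=" ")
    text = " ".join(data[1]["tokens"])
    preds = ner(text)

    # The last prediction stops well before the end of the text, so
    # predictions_processor walks past the end of the prediction list
    print(len(text))                            # length of the full input
    print(preds[-1]["end"], preds[-1]["word"])  # where the predictions actually stop
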
Could someone give me some insight into this error?