Not sure where to report this…
For this model: google/byt5-large · Hugging Face
The paper link is not correct. It should instead point to: [2105.13626] ByT5: Towards a token-free future with pre-trained byte-to-byte models
Thanks, I’ve fixed the paper link for all ByT5 models on the Hub.