How to ensure that tokenizers never truncate partial words?

Thanks for the quick response, @marshmellow77!
I am working on a paper that aims to extend all Transformer models and architectures beyond the 512-token limit. A principal part of my approach is splitting the original document/text into overlapping chunks.

For last words that span more than 3 tokens, I should recursively remove tokens from the end of the chunk as long as they carry the ## prefix, and then remove one more token, which is the start of the word.
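
In case it helps anyone else, here is a minimal sketch of that trimming step for a WordPiece-style vocabulary (the 3-token threshold is left out for brevity, and the chunk contents and function name are just illustrative):

```python
def trim_partial_last_word(tokens):
    """Drop a partial word at the end of a WordPiece token chunk.

    Repeatedly removes trailing tokens that carry the "##" continuation
    prefix, then removes one more token (the start of that word).
    """
    trimmed = list(tokens)
    if not trimmed or not trimmed[-1].startswith("##"):
        return trimmed  # chunk already ends on a whole word
    while trimmed and trimmed[-1].startswith("##"):
        trimmed.pop()
    if trimmed:
        trimmed.pop()  # drop the word-start piece as well
    return trimmed


# Hypothetical chunk whose boundary fell inside "hospitalized":
chunk = ["the", "patient", "was", "hospital", "##ized"]
print(trim_partial_last_word(chunk))  # -> ['the', 'patient', 'was']
```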

I am curious whether the approach you have described would also work with SentencePiece tokenizers. I will post an update here after experimenting.
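
My current understanding (unverified, hence the planned experiment) is that SentencePiece marks word *starts* with "▁" rather than marking continuations with "##", so a chunk-final word can only be identified as partial by peeking at the first token of the next (overlapping) chunk. A hypothetical adaptation might look like this:

```python
def trim_partial_last_word_sp(tokens, next_token=None):
    """Sketch of the same trimming idea for a SentencePiece vocabulary.

    A chunk-final word is only partial if the next chunk's first token
    does NOT start with the "▁" word-start marker.
    """
    trimmed = list(tokens)
    if next_token is not None and not next_token.startswith("▁"):
        # The word was cut: pop continuation pieces, then the word-start piece.
        while trimmed and not trimmed[-1].startswith("▁"):
            trimmed.pop()
        if trimmed:
            trimmed.pop()
    return trimmed


# Hypothetical example: the split falls inside "hospitalized"
chunk = ["▁the", "▁patient", "▁was", "▁hospital"]
print(trim_partial_last_word_sp(chunk, next_token="ized"))  # -> ['▁the', '▁patient', '▁was']
```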

Also, I have a follow-up question about controlling the stride/overlap behavior of tokenizers, along the lines of the original post. I will post a link to that discussion here as well.

Edit 1: This is the follow-up question.