NER Label tokenization with overflowing tokens

I am trying to train BERT for token classification, and I want to use the full text, splitting it into two samples if necessary. For that I am using `return_overflowing_tokens` with a specific `stride`. I also need to tokenize the labels so they stay aligned across the overflow chunks. I have seen the `tokenize_and_align_labels` function in the token-classification tutorial, but it doesn't handle this overflow. Is there something already available for this, or should I tokenize without truncation and then split into chunks of a specific length afterwards?
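For reference, here is a minimal sketch of what I am attempting: extending the tutorial's `tokenize_and_align_labels` to use `return_overflowing_tokens` together with `overflow_to_sample_mapping`, so each overflow chunk looks up labels in its source example. The column names `tokens`/`ner_tags`, the checkpoint, and the `max_length`/`stride` values are just placeholders for my setup, and I am not sure this is the intended approach:

```python
from transformers import AutoTokenizer

# Placeholder checkpoint; any fast tokenizer should work, since
# word_ids() is only available on fast tokenizers.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align_labels_with_overflow(examples, max_length=128, stride=32):
    # examples["tokens"]: list of word lists; examples["ner_tags"]: list of label lists
    tokenized = tokenizer(
        examples["tokens"],
        is_split_into_words=True,
        truncation=True,
        max_length=max_length,
        stride=stride,
        return_overflowing_tokens=True,
    )
    labels = []
    # overflow_to_sample_mapping maps each produced chunk back to the
    # index of the example it came from.
    for i, sample_idx in enumerate(tokenized["overflow_to_sample_mapping"]):
        word_ids = tokenized.word_ids(batch_index=i)
        label_ids = []
        previous_word_id = None
        for word_id in word_ids:
            if word_id is None:
                label_ids.append(-100)  # special tokens ([CLS], [SEP], padding)
            elif word_id != previous_word_id:
                label_ids.append(examples["ner_tags"][sample_idx][word_id])
            else:
                label_ids.append(-100)  # continuation sub-word of the same word
            previous_word_id = word_id
        labels.append(label_ids)
    tokenized["labels"] = labels
    return tokenized
```

With this, a long example produces several chunks, each with its own aligned label sequence, but I am unsure whether the stride-overlapped tokens should keep their labels in both chunks or be masked with -100 in one of them.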