Hi,
Is there any way to chunk a large document with both left and right context? The default `stride` parameter in the tokenizer provides only left context. Is there a way to also provide right context and predict only for the central part of each chunk, not for the context?
Similar to the approach mentioned in this paper: https://arxiv.org/pdf/2011.06993.pdf
`return_overflowing_tokens` (together with the `stride` argument) might help, though `stride` only adds overlap from the preceding chunk.
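If the built-in overlap isn't enough, the two-sided chunking idea can be sketched in plain Python over a token-ID list: each chunk carries `context` tokens on the left and right, and you keep predictions only for the central window. The function name and parameters here are illustrative, not from any library.

```python
def chunk_with_context(tokens, window, context):
    """Split `tokens` into chunks of left context + central window +
    right context. Returns (chunk, (lo, hi)) pairs where chunk[lo:hi]
    is the central part whose predictions should be kept."""
    chunks = []
    for start in range(0, len(tokens), window):
        left = max(0, start - context)            # left context boundary
        right = min(len(tokens), start + window + context)  # right context boundary
        chunk = tokens[left:right]
        lo = start - left                         # central window start inside chunk
        hi = lo + min(window, len(tokens) - start)  # central window end inside chunk
        chunks.append((chunk, (lo, hi)))
    return chunks

for chunk, (lo, hi) in chunk_with_context(list(range(10)), window=4, context=2):
    print(chunk, "-> predict on", chunk[lo:hi])
```

After running the model on each padded chunk, you would discard logits outside `[lo, hi)` and concatenate the central predictions to cover the whole document exactly once.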