Maximum number of tokens with DistilBERT

Hi! While fine-tuning DistilBERT for a binary classification task, I found that some of the texts I want to run inference on contain more tokens than the model's default maximum sequence length (512).
My first question, which I think I know the answer to but want to double-check, is:
Can I just modify the `DistilBertConfig` object and increase the maximum sequence length to 1024 without re-training the model?
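To make sure I understand why this probably doesn't work, here's a toy sketch of my mental model (this is NOT the actual `transformers` code, just an illustration; 512 and 768 are DistilBERT's default position count and hidden size):

```python
# Toy illustration: the learned position-embedding table has a fixed
# number of rows (512 in DistilBERT), one trained vector per position.
# A position index beyond the table has no trained vector to look up.
position_embeddings = [[0.0] * 768 for _ in range(512)]  # shape (512, 768)

def lookup(pos):
    # Raises IndexError for pos >= 512, mimicking the missing weights.
    return position_embeddings[pos]

lookup(511)  # last valid position, works fine

try:
    lookup(600)        # position 600 has no trained embedding
    out_of_range = False
except IndexError:
    out_of_range = True
```

So my understanding is that just bumping the number in the config wouldn't give the extra positions any trained weights. Please correct me if I'm wrong!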

If not (I'm not planning to re-train the model), would it be possible to run the text through the model segment by segment and get partial classification scores that take the preceding text into account?
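To make the idea concrete, here's a sketch of the sliding-window chunking I have in mind. The helper names are my own (not from `transformers`), each window keeps `stride` overlapping tokens from the previous one so some preceding context carries over, and I'd average the per-chunk scores as a first approximation:

```python
def chunk_with_overlap(token_ids, max_len=512, stride=128):
    """Split a long token-id sequence into overlapping windows of at
    most max_len tokens; each window repeats the last `stride` tokens
    of the previous one so preceding context is partially preserved."""
    chunks = []
    step = max_len - stride
    for start in range(0, len(token_ids), step):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
    return chunks

def aggregate(chunk_scores):
    """Naive aggregation: average the per-chunk classification scores."""
    return sum(chunk_scores) / len(chunk_scores)
```

Would something like this (running each chunk through the fine-tuned model, then aggregating) be a reasonable approach, or is there a better-established way to handle long inputs with DistilBERT?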

Thanks for reading this, and have a nice day whenever you read it :slight_smile: