Chunk tokens into a desired chunk length without simply discarding the rest of the tokens

Is there a way to break up tokens based on a maximum token length? For example, if I tokenize the sentence
toked_sent = tokenizer(["I have two rottweilers and a black lab"])
I’ll get
{'input_ids': [[27, 43, 192, 3, 14369, 15337, 277, 11, 3, 9, 1001, 7690, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}

If I could set some parameter such as max_length to 8, the output would look something like
{'input_ids': [[27, 43, 192, 3, 14369, 15337, 277, 11], [3, 9, 1001, 7690, 1] ], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1], [1,1,1,1,1] ]}

Is this possible? I know I could split the encoded lists manually, roughly as in the sketch below, but I'd prefer a built-in option if one exists.
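
For reference, this is the manual chunking I have in mind (a rough sketch; chunk_input_ids is just a helper name I made up, and it assumes a T5-style tokenizer producing the output format shown above):

```python
from transformers import AutoTokenizer

def chunk_input_ids(encoding, max_length):
    """Split each encoded sequence into pieces of at most max_length tokens."""
    chunked = {"input_ids": [], "attention_mask": []}
    for ids, mask in zip(encoding["input_ids"], encoding["attention_mask"]):
        for start in range(0, len(ids), max_length):
            chunked["input_ids"].append(ids[start:start + max_length])
            chunked["attention_mask"].append(mask[start:start + max_length])
    return chunked

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # assumption: T5-style tokenizer
toked_sent = tokenizer(["I have two rottweilers and a black lab"])
print(chunk_input_ids(toked_sent, max_length=8))
# -> one chunk of 8 ids and one chunk of 5 ids, as in the desired output above
```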