Extracting and segmenting handwritten and printed text in images

The aim of the project is to first segment (or draw bounding boxes around) and classify the handwritten and printed text in an image, and then extract both kinds of text. Printed text can be extracted easily; the problem is that extracting handwritten text with good accuracy is difficult. The image above is an example that will be used for inference. I have used the pytesseract library to extract text, but it fails on handwritten text.
The example output is given here, with a bounding box drawn on each word. Is it possible to segment, or draw a bounding box around, a whole line of handwritten or printed text? If not, word-by-word segmentation or bounding boxes will also be fine.
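For what it's worth, line-level boxes can be assembled from pytesseract's own word-level output: `image_to_data` reports `block_num`, `par_num`, and `line_num` for every word, so words sharing those three values belong to the same line and their boxes can be merged. Below is a sketch of that idea; the image path `example.jpg` and the confidence threshold are illustrative assumptions, and the end-to-end helper needs the Tesseract binary installed.

```python
from collections import defaultdict


def group_words_into_lines(data, min_conf=0):
    """Group image_to_data DICT output into per-line text and bounding boxes.

    Returns a list of (line_text, (left, top, width, height)) tuples,
    one per text line, with the line box being the union of its word boxes.
    """
    lines = defaultdict(list)
    for i, word in enumerate(data["text"]):
        # Skip empty tokens and low-confidence words.
        if not word.strip() or int(data["conf"][i]) < min_conf:
            continue
        key = (data["block_num"][i], data["par_num"][i], data["line_num"][i])
        box = (data["left"][i], data["top"][i], data["width"][i], data["height"][i])
        lines[key].append((word, box))

    result = []
    for key in sorted(lines):
        words = lines[key]
        text = " ".join(w for w, _ in words)
        left = min(b[0] for _, b in words)
        top = min(b[1] for _, b in words)
        right = max(b[0] + b[2] for _, b in words)
        bottom = max(b[1] + b[3] for _, b in words)
        result.append((text, (left, top, right - left, bottom - top)))
    return result


def line_boxes_from_image(image_path):
    """Hypothetical end-to-end helper (needs pytesseract + Tesseract binary)."""
    import pytesseract
    from PIL import Image

    data = pytesseract.image_to_data(
        Image.open(image_path), output_type=pytesseract.Output.DICT
    )
    return group_words_into_lines(data, min_conf=60)
```

This gives line-level boxes for printed text without any extra model; whether the line grouping is sensible for handwriting depends on how well Tesseract's layout analysis copes with the handwritten regions.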


So, the main task is to use a model to classify regions as handwritten or printed, and then use an OCR model that handles handwritten text with good accuracy. These are just my assumptions based on my knowledge; if there are better approaches, please guide me.