How to extract tables from images using Hugging Face models?

  • Hi everyone, I’m trying to extract tables from images using Hugging Face Transformers. I’ve tried the Table Transformer (TATR) model, but I’m not getting the desired results. Can anyone share some tips or examples on how to improve my approach?
  • I’ve attached an example image of my use case.

I’m not familiar with OCR-related AI, so hopefully others can provide additional explanations.
There are quite a few OCR-related Spaces on HF, so it would be quicker to find one similar to your use case, take a peek at its source code, and adapt it. The model it uses should also be visible in the code.
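In the meantime, here is a minimal sketch of the usual TATR detection step, assuming the `microsoft/table-transformer-detection` checkpoint and a local file like your `page_1.png` (the file name, threshold, and padding values are placeholders to adjust for your data). A common reason for poor results is cropping tables too tightly before structure recognition, so the sketch pads each detected box before cropping:

```python
# Sketch: detect table regions with TATR, then crop each region with padding.
# Assumes: microsoft/table-transformer-detection checkpoint, a local image file,
# and hand-picked threshold/padding values -- tune these for your documents.

def pad_box(box, padding, width, height):
    """Expand an (x0, y0, x1, y1) box by `padding` pixels, clipped to the image."""
    x0, y0, x1, y1 = box
    return (max(0, x0 - padding), max(0, y0 - padding),
            min(width, x1 + padding), min(height, y1 + padding))

def detect_tables(image_path, threshold=0.7, padding=10):
    # Heavy imports kept inside the function so the helper above stays light.
    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, TableTransformerForObjectDetection

    image = Image.open(image_path).convert("RGB")
    processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
    model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Convert raw model outputs to pixel-space boxes above the confidence threshold.
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, threshold=threshold, target_sizes=target_sizes)[0]

    crops = []
    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        padded = pad_box(box.tolist(), padding, image.width, image.height)
        crops.append((model.config.id2label[label.item()], score.item(), image.crop(padded)))
    return crops
```

Note that this only *finds* tables. To get cell-level structure you would then run each crop through the companion `microsoft/table-transformer-structure-recognition` model, and pair the predicted cells with an OCR engine to read the text — TATR itself does not do OCR.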