I am trying to test the TrOCR model on Hugging Face (microsoft/trocr-base-printed).
I am using their serverless Inference API. When I run the example with my own PNG image, it yields only two letters, "SR"… not sure what's going on here, since the image is a whole page of text.
>>> print(query("debut.png"))
[{'generated_text': 'SR:'}]
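For context, the helper I'm calling mirrors the standard Inference API snippet from the model page. A stdlib-only sketch of it, assuming the usual endpoint URL pattern and with `hf_xxx` as a placeholder for a real access token:

```python
import json
import urllib.request

# Assumptions: the endpoint follows the standard Inference API URL
# pattern, and hf_xxx is a placeholder for a real access token.
API_URL = "https://api-inference.huggingface.co/models/microsoft/trocr-base-printed"
TOKEN = "hf_xxx"  # hypothetical placeholder token

def query(filename):
    """POST the raw image bytes; the API answers with a JSON
    list like [{'generated_text': '...'}]."""
    with open(filename, "rb") as f:
        data = f.read()
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```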
Any advice on how to get the full OCR'd text from my PNG would be appreciated!