When running inference with the Donut model on my local system, it takes at least 10 seconds, but for my use case it needs to complete in under one second. If there is any other approach besides OpenVINO inference, please let me know that as well.