Model performs worse on Hugging Face Spaces than on my local machine

I made a YOLO PyTorch model to identify enemies in Valorant on screen. When I run inference (tests) on the model locally, I get many satisfying results. However, when I run the same model on Hugging Face Spaces on the same images, it clearly performs worse: the confidence scores drop and it fails to identify elements it otherwise should.

The only differences I can think of that could be causing it are:

  1. I converted the model from PyTorch to TorchScript (.pt to .torchscript). Would this change anything?
  2. Are the images that Hugging Face passes to the model of significantly lower quality?

I converted the model from PyTorch to TorchScript (.pt to .torchscript). Would this change anything?

Converting to TorchScript speeds up the model by cutting out unnecessary parts, but it can also change the model's behavior (for example, tracing fixes control flow and bakes in train/eval-dependent layers). Of course, in many cases this is not a problem…
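One way to rule this out is to compare the eager and TorchScript outputs on the identical tensor. This is a minimal sketch with a stand-in module (any `nn.Module`, including a YOLO network, works the same way); the model and shapes here are assumptions, not your actual setup:

```python
import torch

# Stand-in for the real YOLO network; substitute your own model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
)
model.eval()  # important: tracing in train mode bakes in dropout/batchnorm behavior

x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    eager_out = model(x)
    scripted = torch.jit.trace(model, x)  # same API used by most YOLO exporters
    ts_out = scripted(x)

# If these differ by more than float noise, the export itself changed behavior
# and the Space's environment is not the culprit.
print(torch.allclose(eager_out, ts_out, atol=1e-5))
```

If this prints `True` locally, the conversion preserved the outputs and you should look at the image pipeline instead.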

Are the images that Hugging Face passes to the model of significantly lower quality?

This depends on the GUI framework the Space uses. Unless you explicitly resize it, the image is usually processed at the size it was uploaded. However, an unintended conversion may be occurring here (color depth, format, etc.).
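A quick way to catch such a conversion is to log what actually arrives in the Space's handler before it reaches the model, and normalize it to the color space your local pipeline expects. A minimal sketch with PIL; `inspect_image` is a hypothetical helper, and RGB is assumed to be what your local pipeline uses:

```python
from io import BytesIO
from PIL import Image

def inspect_image(img: Image.Image) -> Image.Image:
    # Log mode/size so you can diff against your local run's preprocessing.
    print(f"mode={img.mode}, size={img.size}")  # e.g. RGBA vs RGB, unexpected downscale
    # Force the color space the model was trained on (assumed RGB here);
    # an RGBA or palette image fed to an RGB model silently skews results.
    return img.convert("RGB")

# Simulate an upload that arrived as RGBA (common for PNG screenshots).
buf = BytesIO()
Image.new("RGBA", (1920, 1080)).save(buf, format="PNG")
uploaded = Image.open(BytesIO(buf.getvalue()))
fixed = inspect_image(uploaded)
print(fixed.mode, fixed.size)
```

Dropping a print like this into the Space's predict function makes it easy to compare the server-side input byte-for-byte with what you test locally.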