Let's say I have a 4K image on my backend. I want to create a model that can re-create that same 4K image in my browser, where the model itself will run. The model is only used to reproduce that one particular image, so in simple terms it is deliberately overfit on that single image. What approach, architecture, or pre-trained model should I choose to get the best result at low inference time?
Why am I doing this?
Let's say my 4K image is 20 MB. Instead of downloading a 20 MB image from the backend, I could download a ~1 MB model that generates the image in the browser. In other words, I'm trying to compress the data.
P.S.: I want to keep inference time under 5 seconds.
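For reference, here is a rough sketch of the kind of setup I have in mind: a small coordinate MLP (SIREN-style) that maps normalized pixel coordinates (x, y) to RGB and is overfit on the single image with PyTorch. The file names, layer sizes, and training settings are just placeholders to illustrate the idea, not something I've actually built or benchmarked.

```python
# Sketch only: overfit a small coordinate-MLP to one image, so the browser
# only needs the weights plus a forward pass to rebuild the pixels.
# Architecture and hyperparameters are placeholders, not a recommendation.
import numpy as np
import torch
import torch.nn as nn
from PIL import Image

# Load the single target image and normalize pixel values to [0, 1]
img = Image.open("target_4k.png").convert("RGB")
img = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
H, W, _ = img.shape

# Pixel coordinates normalized to [-1, 1] are the model input
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = img.reshape(-1, 3)

# Tiny MLP: (x, y) -> (r, g, b); sine activations (SIREN-style) fit images well
class CoordMLP(nn.Module):
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        self.layers = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))

    def forward(self, x):
        for layer in self.layers[:-1]:
            x = torch.sin(30.0 * layer(x))
        return torch.sigmoid(self.layers[-1](x))

model = CoordMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Overfit: train on random batches of pixels from this one image
for step in range(10_000):
    idx = torch.randint(0, coords.shape[0], (65_536,))
    loss = nn.functional.mse_loss(model(coords[idx]), targets[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Export the weights; the browser would run the same forward pass
# (e.g. via ONNX Runtime Web or TensorFlow.js) over all H*W coordinates.
torch.save(model.state_dict(), "image_model.pt")
```

The open question for me is whether this kind of per-image overfitting can hit the ~1 MB weight budget and still reconstruct all 4K pixels in under 5 seconds in the browser, or whether a different architecture would do better.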