So cropping it, even though it doesn’t capture the whole ROI from the original unrotated image, is better than leaving a big black area around it after rotation?
I don’t understand: does that count as “making the model learn more”? If so, how do I apply that to my images? And how does it help the system (say, CCTV) end up with a more robust model? Doesn’t “making images more variable” run into the physical constraints of what the application can actually see? But yeah, as you said, it takes so much time to fit every possibility just to end up overfitting to one type of river view.
So cropping it, even though it doesn’t capture the whole ROI from the original unrotated image, is better than leaving a big black area around it after rotation?
I think so, in a real use case, maybe.
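For what it’s worth, here is a minimal sketch of rotate-then-crop with OpenCV, assuming you want the largest axis-aligned rectangle that still fits inside the rotated frame, so no black fill survives. The helper names are just illustrative; the rectangle formula is the standard largest-inscribed-rectangle derivation.

```python
import math

import cv2

def largest_inscribed_size(w, h, angle_rad):
    """Largest axis-aligned rectangle inside a w x h image rotated by
    angle_rad, so the crop contains no black fill (standard derivation)."""
    if w <= 0 or h <= 0:
        return 0, 0
    width_is_longer = w >= h
    side_long, side_short = (w, h) if width_is_longer else (h, w)
    sin_a, cos_a = abs(math.sin(angle_rad)), abs(math.cos(angle_rad))
    if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        # Thin case: two opposite corners of the crop touch the long side.
        x = 0.5 * side_short
        wr, hr = (x / sin_a, x / cos_a) if width_is_longer else (x / cos_a, x / sin_a)
    else:
        # Fat case: each side of the crop touches a rotated side.
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return int(wr), int(hr)

def rotate_and_crop(img, angle_deg):
    """Rotate around the center, then center-crop away the black corners."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))
    cw, ch = largest_inscribed_size(w, h, math.radians(angle_deg))
    x0, y0 = (w - cw) // 2, (h - ch) // 2
    return rotated[y0:y0 + ch, x0:x0 + cw]
```

You do lose the edges of the ROI this way, but every pixel the model sees is real scene content instead of synthetic black fill.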
The second half is essentially a difficult topic… Think of Google’s reCAPTCHA, for example: one direction for strengthening the model would be to make it accurately identify images that are not exemplary. To do that, you need to create training images that are not exemplary but are also not wrong.
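As a rough illustration of “not exemplary but not wrong”: one option is photometric degradation within physically plausible bounds, i.e. the noise, low light, blur, and compression a real CCTV feed would show. A minimal sketch with OpenCV/NumPy; `degrade_like_cctv` is a hypothetical helper, and all the ranges are guessed starting points, not tuned values.

```python
import cv2
import numpy as np

rng = np.random.default_rng()

def degrade_like_cctv(img):
    """Hypothetical helper: mild, plausible degradations that keep the
    scene valid but make it less exemplary. All ranges are guesses."""
    out = img.astype(np.float32)
    # Exposure drift / low light: darken or brighten within a modest range.
    out *= rng.uniform(0.6, 1.1)
    # Sensor noise: additive Gaussian, stronger on cheap cameras at night.
    out += rng.normal(0.0, rng.uniform(2.0, 10.0), size=out.shape)
    out = np.clip(out, 0, 255).astype(np.uint8)
    # Slight defocus from a cheap or dirty lens.
    k = int(rng.choice([1, 3, 5]))
    if k > 1:
        out = cv2.GaussianBlur(out, (k, k), 0)
    # Compression artifacts: round-trip through JPEG at a random quality.
    q = int(rng.integers(40, 90))
    ok, buf = cv2.imencode(".jpg", out, [cv2.IMWRITE_JPEG_QUALITY, q])
    if ok:
        out = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # assumes a BGR color input
    return out
```

The point is that each transform stays inside what the deployed camera could actually produce, rather than inventing conditions the system will never encounter.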
Ah, it seems difficult to create CCTV-style images…