Hey HF
Long time listener, first time caller. I'm tracing the Falconsai/nsfw_image_detection model with TorchScript so I can convert it to Core ML.
The problem is that the traced model classifies NSFW images as normal. For example, here's the result for an NSFW image I tested with the original model:
[{'score': 0.9932476282119751, 'label': 'nsfw'},
{'score': 0.006752398330718279, 'label': 'normal'}]
and here's the result the traced model produced for the same image:
class name: normal, raw score value: 1.6485612392425537
class name: nsfw, raw score value: -1.6674175262451172
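For an apples-to-apples comparison, running those raw logits through a softmax shows the traced model is genuinely confident in "normal" (quick sketch; the [normal, nsfw] ordering follows the printout above):

import torch
import torch.nn.functional as F

# Raw logits from the traced model, in [normal, nsfw] order as printed above
logits = torch.tensor([[1.6485612392425537, -1.6674175262451172]])
probs = F.softmax(logits, dim=-1)
print(probs)  # ~[0.965, 0.035]: a real prediction flip, not just a scaling difference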
I'm 95% sure the preprocessing is fine. I even retraced the model with a random tensor instead of the processed image, and the results didn't change. I'm not sure what to try next.
Any suggestions welcome! My code is below:
import torch
import torch.nn as nn
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("Falconsai/nsfw_image_detection")
# input_image is loaded elsewhere (already rescaled, hence do_rescale=False)
processed_image = processor(images=input_image, do_rescale=False, do_normalize=False, size={"height": 224, "width": 224}, return_tensors="pt")['pixel_values']
# Wrap the model so tracing returns a plain tensor instead of a ModelOutput
class WrappedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = AutoModelForImageClassification.from_pretrained("Falconsai/nsfw_image_detection").eval()

    def forward(self, images):
        output = self.model(images)
        logits = output["logits"]
        return logits
# Trace the wrapped model with sample image or random data
traceable_model = WrappedModel().eval()
random_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(traceable_model, random_input) # Or replace with processed_image
# Save the TorchScript model
traced_model.save("traced_model.pt")
print("Model traced and saved")