Hello.
I’m an iOS/macOS developer who’s very new to AI.
I’d like to test some radiology-related AI models in a Swift environment, but I’m having trouble converting them to Core ML.
Specifically, I’m encountering this error: AssertionError: tensor value not consistent between torch ir and state_dict, and I haven’t been able to find any information about it anywhere.
The scale and bias need to be set to match what the model you are using expects. For densenet121-res224-rsna, the scale is 1/1024 and the bias is 0.
Lastly, we have to add color_layout=ct.colorlayout.GRAYSCALE to the input_image so that Core ML expects a single color channel rather than three.
Here is the code that works for me and generates a Core ML model, though I haven’t yet put this model in an app to check that it performs as expected.