### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### YOLOv8 Component
Export
### Bug
When exporting a custom-trained RT-DETR model with:

```python
from ultralytics import RTDETR

model = RTDETR('RTDETR.pt')
model.export(format="coreml")
```

I get the following error:
```
Ultralytics YOLOv8.1.19 🚀 Python-3.10.13 torch-1.12.0+cu102 CPU (AMD EPYC 7282 16-Core Processor)
rt-detr-l summary: 498 layers, 31985795 parameters, 0 gradients, 103.4 GFLOPs

PyTorch: starting from 'RTDETR.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 300, 5) (63.1 MB)

CoreML: starting export with coremltools 6.2...
Converting PyTorch Frontend ==> MIL Ops:  18%|▏| 417/2352 [00:00<00:00, 4165.75
Saving value type of int64 into a builtin type of int32, might lose precision!
Saving value type of int64 into a builtin type of int32, might lose precision!
Saving value type of float64 into a builtin type of fp32, might lose precision!
Converting PyTorch Frontend ==> MIL Ops:  25%|▏| 586/2352 [00:00<00:00, 3093.35
CoreML: export failure ❌ 21.4s: In op, of type linear, named out_w, the named input `weight` must have the same data type as the named input `x`. However, weight has dtype fp32 whereas x has dtype int32.
Traceback (most recent call last):
  File "/opt/conda/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/usr/src/ultralytics/ultralytics/cfg/__init__.py", line 568, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/usr/src/ultralytics/ultralytics/engine/model.py", line 577, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/opt/conda/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/src/ultralytics/ultralytics/engine/exporter.py", line 281, in __call__
    f[4], _ = self.export_coreml()
  File "/usr/src/ultralytics/ultralytics/engine/exporter.py", line 136, in outer_func
    raise e
  File "/usr/src/ultralytics/ultralytics/engine/exporter.py", line 131, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "/usr/src/ultralytics/ultralytics/engine/exporter.py", line 594, in export_coreml
    ct_model = ct.convert(
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 444, in convert
    mlmodel = mil_convert(
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 187, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 211, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 281, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 109, in __call__
    return load(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 57, in load
    return _perform_torch_convert(converter, debug)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 96, in _perform_torch_convert
    prog = converter.convert()
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 281, in convert
    convert_nodes(self.context, self.graph)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 89, in convert_nodes
    add_op(context, node)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 749, in matmul
    res = mb.linear(x=inputs[0], weight=_np.transpose(inputs[1].val), name=node.name)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 176, in add_op
    return cls._add_op(op_cls_to_add, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/mil/builder.py", line 166, in _add_op
    new_op = op_cls(**kwargs)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/mil/operation.py", line 187, in __init__
    self._validate_and_set_inputs(input_kv)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/mil/operation.py", line 503, in _validate_and_set_inputs
    self.input_spec.validate_inputs(self.name, self.op_type, input_kvs)
  File "/opt/conda/lib/python3.10/site-packages/coremltools/converters/mil/mil/input_type.py", line 137, in validate_inputs
    raise ValueError(msg)
ValueError: In op, of type linear, named out_w, the named input `weight` must have the same data type as the named input `x`. However, weight has dtype fp32 whereas x has dtype int32.
```
The error is identical with both the `coreml` and `mlmodel` export formats, and across different PyTorch versions.
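For clarity, the failure happens inside coremltools' MIL input validation: while translating a traced `matmul`, the builder constructs a `linear` op whose activation input `x` has been inferred as int32 (presumably from index/shape arithmetic in the RT-DETR decoder) while the `weight` constant is fp32, and the op's input spec requires both dtypes to match. The sketch below is **not** the coremltools implementation, just a stdlib-only illustration of the kind of check that raises; the function name and its string-dtype parameters are hypothetical:

```python
# Illustrative sketch only (hypothetical helper, not coremltools source):
# mimics the dtype-consistency rule that the MIL `linear` op enforces on
# its `x` and `weight` inputs, reproducing the error text from the log.

def validate_linear_inputs(x_dtype: str, weight_dtype: str, op_name: str = "out_w") -> None:
    """Raise if the activation and weight dtypes disagree, as MIL does."""
    if x_dtype != weight_dtype:
        raise ValueError(
            f"In op, of type linear, named {op_name}, the named input `weight` "
            f"must have the same data type as the named input `x`. "
            f"However, weight has dtype {weight_dtype} whereas x has dtype {x_dtype}."
        )

# Matching dtypes pass silently:
validate_linear_inputs("fp32", "fp32")

# The RT-DETR trace hits the mismatched case and the export aborts:
try:
    validate_linear_inputs("int32", "fp32")
except ValueError as e:
    print(e)
```

This suggests the fix likely belongs on the tracing side (casting the int32 operand to float before the offending matmul), since the dtype check itself is behaving as documented.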
### Environment
```
Ultralytics YOLOv8.1.19 🚀 Python-3.10.13 torch-1.12.0+cu102 CPU (AMD EPYC 7282 16-Core Processor)
Setup complete ✅ (32 CPUs, 251.6 GB RAM, 304.2/313.0 GB disk)

OS                  Linux-5.4.0-170-generic-x86_64-with-glibc2.35
Environment         Docker
Python              3.10.13
Install             git
RAM                 251.57 GB
CPU                 AMD EPYC 7282 16-Core Processor
CUDA                None

matplotlib ✅ 3.8.3>=3.3.0
opencv-python ✅ 4.9.0.80>=4.6.0
pillow ✅ 10.0.1>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.31.0>=2.23.0
scipy ✅ 1.12.0>=1.4.1
torch ✅ 1.12.0>=1.8.0
torchvision ✅ 0.13.0>=0.9.0
tqdm ✅ 4.65.0>=4.64.0
psutil ✅ 5.9.0
py-cpuinfo ✅ 9.0.0
thop ✅ 0.1.1-2209072238>=0.1.1
pandas ✅ 2.2.1>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
```
The error was also present with a fresh `ultralytics` install in Google Colab.
### Minimal Reproducible Example
```python
from ultralytics import RTDETR

model = RTDETR('RTDETR.pt')
model.export(format="coreml")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes, I'd like to help by submitting a PR!