Hello everyone,
I am trying to export the “Qwen/Qwen3-Embedding-0.6B” model to ONNX using the “optimum” library. According to the Optimum documentation, the “Qwen3” architecture is supported for ONNX export.
However, the export process fails with an error: “invalid unordered_map<K, T> key”
```
from optimum.exporters.onnx import main_export
import os

model_id = "Qwen/Qwen3-Embedding-0.6B"
output_dir = "qwen3_embedding_onnx_from_script"
os.makedirs(output_dir, exist_ok=True)

print(f"start export '{model_id}' ")
try:
    main_export(
        model_id,
        output=output_dir,
        task="feature-extraction",
        trust_remote_code=True,
        opset=20,
    )
    print(f"Model '{model_id}' finish '{output_dir}'")
except Exception as e:
    print(f"error: {e}")
```
I have tried using both `task='feature-extraction'` and `task='default'` (by letting optimum infer it automatically). Both attempts result in the same `invalid unordered_map<K, T> key` error.
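For reference, the tasks Optimum registers for qwen3 can be checked directly (a minimal sketch using `TasksManager`, the same registry the exporter consults; the `library_name` argument matches how the exporter itself calls it):

```
# Sketch: query Optimum's task registry for the qwen3 architecture to confirm
# that "feature-extraction" is actually listed by the installed version.
from optimum.exporters.tasks import TasksManager

tasks = TasksManager.get_supported_tasks_for_model_type(
    "qwen3", exporter="onnx", library_name="transformers"
)
print(list(tasks.keys()))
```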
This seems pretty difficult to get working. I failed too. I don’t want to reinstall PyTorch…
```
# pip install -U optimum[onnxruntime]
# pip install -U accelerate transformers sentence-transformers
from optimum.exporters.onnx import main_export
import os

model_id = "Qwen/Qwen3-Embedding-0.6B"
output_dir = "qwen3_embedding_onnx_from_script"
os.makedirs(output_dir, exist_ok=True)

print(f"start export '{model_id}' ")
try:
    main_export(
        model_id,
        output=output_dir,
        task="feature-extraction",
        trust_remote_code=True,
        opset=20,  # opset=17 with PyTorch 1.x may work? https://huggingface.co/zhiqing/Qwen3-Embedding-0.6B-ONNX/discussions/1 https://github.com/pytorch/pytorch/issues/120559
        # With 2.x, "error: Exporting the operator 'aten::__ior_' to ONNX opset version 20 is not supported."
    )
    print(f"Model '{model_id}' finish '{output_dir}'")
except Exception as e:
    print(f"error: {e}")
```
This still fails with the same `invalid unordered_map<K, T> key` error.
Seems like a PyTorch 2.x issue, too…
From the linked GitHub issue (opened 01:10 PM, 18 Jan 2024 UTC; closed the same day; label: bug):
# Bug Report
**Error description:**
```
torch/onnx/utils.py:188     @_beartype.beartype
torch/onnx/utils.py:189     def export(
torch/onnx/utils.py:190         model: Union[torch.nn.Module, torch.jit.ScriptModule, torch.jit.ScriptFunction],
    (...)
torch/onnx/utils.py:206         export_modules_as_functions: Union[bool, Collection[Type[torch.nn.Module]]] = False,
...
torch/autograd/function.py:511      '(vmap, grad, jvp, jacrev, ...), it must override the setup_context '
torch/autograd/function.py:512      'staticmethod. For more details, please see '
torch/autograd/function.py:513      'https://pytorch.org/docs/master/notes/extending.func.html')

RuntimeError: invalid unordered_map<K, T> key
```
**System information**
- OS Platform and Distribution: Windows 64-bit
- ONNX version: 1.15
- Python version: 3.10
- Torch version: 2.0.1+cpu

**Code**
```
import torch  # assumed import; torch_model must already be defined

batch_size = 1
channels = 3  # Adjust this based on your model's expected number of input channels
depth = 16  # This is an example value; adjust based on your model's requirements
height = 224
width = 224
x = torch.randn(batch_size, channels, depth, height, width, requires_grad=True).to('cpu')
torch.onnx.export(
    torch_model, x, "super_resolution.onnx",
    export_params=True, do_constant_folding=False, keep_initializers_as_inputs=True,
    input_names=['input'], output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
)
```
**Expected behavior**
The model should be converted to ONNX without any errors.
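For what it's worth, the `aten::__ior_` operator from the comment in my script above can be isolated to a tiny module (a sketch; whether the export actually fails depends on your torch/opset combination):

```
# Minimal sketch of the operator behind the comment above: an in-place
# bitwise-or (|=) on tensors traces to aten::__ior_, which some torch 2.x
# releases cannot export at higher opsets.
import torch

class InplaceOr(torch.nn.Module):
    def forward(self, a, b):
        a |= b  # traces to aten::__ior_
        return a

a = torch.zeros(4, dtype=torch.bool)
b = torch.ones(4, dtype=torch.bool)
torch.onnx.export(InplaceOr(), (a, b), "ior.onnx", opset_version=20)
```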
Probably, if a parameter that forces `attn_implementation="eager"` in the `model.from_pretrained()` call were implemented in the exporter, it would work with PyTorch 2.x as well…
```
autodetected_message = ""
model_tasks = TasksManager.get_supported_tasks_for_model_type(
    model_type, exporter="onnx", library_name=library_name
)
raise ValueError(
    f"Asked to export a {model_type} model for the task {task}{autodetected_message}, but the Optimum ONNX exporter only supports the tasks {', '.join(model_tasks.keys())} for {model_type}. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task {task} to be supported in the ONNX export for {model_type}."
)

# TODO: Fix in Transformers so that SdpaAttention class can be exported to ONNX.
# This was fixed in transformers 4.42.0, we can remove it when minimum transformers version is updated to 4.42
if model_type in SDPA_ARCHS_ONNX_EXPORT_NOT_SUPPORTED and is_transformers_version("<", "4.42"):
    loading_kwargs["attn_implementation"] = "eager"

with DisableCompileContextManager():
    model = TasksManager.get_model_from_task(
        task,
        model_name_or_path,
        subfolder=subfolder,
        revision=revision,
        cache_dir=cache_dir,
        token=token,
```
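Until something like that is implemented, the idea can maybe be tried from user code by loading the model with eager attention yourself and handing the already-loaded model to the exporter (a sketch, assuming your Optimum version exposes `onnx_export_from_model`; untested, not a confirmed fix):

```
# Sketch of the workaround idea: load with eager attention ourselves, then
# export the already-loaded model instead of letting Optimum load it.
from optimum.exporters.onnx import onnx_export_from_model
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Qwen/Qwen3-Embedding-0.6B",
    attn_implementation="eager",  # bypass the SDPA path the exporter patches around
)
onnx_export_from_model(
    model,
    output="qwen3_embedding_onnx_eager",
    task="feature-extraction",
    opset=17,
)
```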
Thank you for your help! Unfortunately, your suggestions didn't work:

- Tried `attn_implementation="eager"` - same “invalid unordered_map<K, T> key” error
- Tested opsets from 16 to 20 - identical results
- Tried different export approaches (`ORTModelForFeatureExtraction`, `torch.onnx.export`; see the sketch below) - same failure everywhere

It seems the issue lies deeper, at the compatibility level between the Qwen3 architecture and current PyTorch/ONNX versions. (((((
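For reference, the direct `torch.onnx.export` attempt looked roughly like this (a sketch of that approach, not my exact script; it failed with the same error):

```
# Sketch: export the embedding model directly with torch.onnx.export.
# return_dict/use_cache are disabled so the traced graph has plain tensor outputs.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Qwen/Qwen3-Embedding-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, attn_implementation="eager")
model.config.return_dict = False
model.config.use_cache = False
model.eval()

enc = tokenizer("hello world", return_tensors="pt")
torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"]),
    "qwen3_embedding.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "last_hidden_state": {0: "batch", 1: "seq"},
    },
    opset_version=17,
)
```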
Yeah, the error was indeed tied to torch 2.6.0. I installed this combo: `pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1`, and the issue is gone. Thanks for the heads-up! Man, I’m so fed up with these constant PyTorch “rollercoasters” (((
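For anyone landing here later, a small guard like this can flag the bad combination before a long export run (a sketch; the 2.6.0 cutoff is just what this thread observed):

```
# Sketch: warn before exporting if the installed torch is a version this
# thread found to break the export (>= 2.6.0). The cutoff is empirical.
import torch
from packaging import version

installed = version.parse(torch.__version__.split("+")[0])
if installed >= version.parse("2.6.0"):
    print(
        f"torch {torch.__version__} may fail with 'invalid unordered_map<K, T> key'; "
        "the working combo reported here was torch==2.5.1"
    )
```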
system closed this topic on June 28, 2025, 10:40am:

This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.