Hi, I'm trying to export a fine-tuned LayoutLMv3 model to ONNX format following this guide.
The actual model I'm trying to export is this one from @nielsr.
My current configuration is the following:
- transformers version: 4.21.3
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.10.4
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: -
When I try to run the command from the tutorial (`python -m transformers.onnx --model=nielsr/layoutlmv3-finetuned-cord onnx/`) I get this error:
```
Some weights of the model checkpoint at nielsr/layoutlmv3-finetuned-cord were not used when initializing LayoutLMv3Model: ['classifier.out_proj.bias', 'classifier.dense.bias', 'classifier.dense.weight', 'classifier.out_proj.weight']
- This IS expected if you are initializing LayoutLMv3Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LayoutLMv3Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Using framework PyTorch: 1.12.0+cpu
Traceback (most recent call last):
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\site-packages\transformers\onnx\__main__.py", line 107, in <module>
    main()
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\site-packages\transformers\onnx\__main__.py", line 89, in main
    onnx_inputs, onnx_outputs = export(
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\site-packages\transformers\onnx\convert.py", line 336, in export
    return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\site-packages\transformers\onnx\convert.py", line 143, in export_pytorch
    model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH)
  File "C:\Users\nk-alex\miniconda3\envs\sparrow\lib\site-packages\transformers\models\layoutlmv3\configuration_layoutlmv3.py", line 264, in generate_dummy_inputs
    setattr(processor.feature_extractor, "apply_ocr", False)
AttributeError: 'RobertaTokenizerFast' object has no attribute 'feature_extractor'
```
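Looking at the failing line, it seems `generate_dummy_inputs` assumes the preprocessor is a processor object that wraps a feature extractor (like `LayoutLMv3Processor`), while the CLI appears to have loaded a bare tokenizer (`RobertaTokenizerFast`). Here is a minimal sketch of that mismatch, using dummy stand-in classes I made up for illustration (not the real transformers classes):

```python
# Dummy stand-ins (hypothetical, for illustration only) for the two kinds
# of preprocessor the export code might receive.
class DummyFeatureExtractor:
    apply_ocr = True


class DummyProcessor:
    """Processor-style object: wraps a feature extractor, as LayoutLMv3Processor does."""
    def __init__(self):
        self.feature_extractor = DummyFeatureExtractor()


class DummyTokenizer:
    """Tokenizer-style object: has no feature_extractor attribute, like RobertaTokenizerFast."""
    pass


def generate_dummy_inputs(preprocessor):
    # Mirrors the failing line in configuration_layoutlmv3.py: it reaches
    # into `preprocessor.feature_extractor` without checking it exists.
    setattr(preprocessor.feature_extractor, "apply_ocr", False)
    return "ok"


print(generate_dummy_inputs(DummyProcessor()))  # works: processor wraps a feature extractor

try:
    generate_dummy_inputs(DummyTokenizer())
except AttributeError as e:
    print(f"AttributeError: {e}")  # same failure mode as in the traceback
```

So my guess is the CLI resolved the checkpoint's preprocessor to a tokenizer instead of the full processor, but I'm not sure how to make it load the processor.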
Am I doing something wrong?