Apologies in advance; I am just getting started with ONNX and don't yet understand many of the errors and problems that come up when converting finetuned local models to ONNX format.
I am currently trying to export a finetuned LayoutLMv3-base model; however, I encounter an error similar to the one mentioned in this post: #23166
x----------------------------------Configuration Details----------------------------------x
transformers = 4.28.1
Python = 3.8.10
torch = 1.12.0
onnx = 1.13.1
CPU Only
x----------------------------------Error Trace----------------------------------x
Local PyTorch model found.
Framework not requested. Using torch to export to ONNX.
Using framework PyTorch: 1.12.0+cu102
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 240, in <module>
    main()
  File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 232, in main
    export_with_transformers(args)
  File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/__main__.py", line 165, in export_with_transformers
    onnx_inputs, onnx_outputs = export(
  File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/convert.py", line 346, in export
    return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device)
  File "/usr/local/lib/python3.8/dist-packages/transformers/onnx/convert.py", line 143, in export_pytorch
    model_inputs = config.generate_dummy_inputs(preprocessor, framework=TensorType.PYTORCH)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/layoutlmv3/configuration_layoutlmv3.py", line 263, in generate_dummy_inputs
    setattr(processor.feature_extractor, "apply_ocr", False)
AttributeError: 'LayoutLMv3TokenizerFast' object has no attribute 'feature_extractor'
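From the traceback, my reading is that `generate_dummy_inputs` assumes the preprocessor is a `LayoutLMv3Processor` (which bundles a feature extractor together with the tokenizer), while the export path handed it a bare `LayoutLMv3TokenizerFast`. Here is a toy reproduction of the attribute access that fails; the class names below are stand-ins I made up for illustration, not the real transformers classes:

```python
class TokenizerOnly:
    """Stand-in for LayoutLMv3TokenizerFast: has no feature_extractor attribute."""

class Processor:
    """Stand-in for LayoutLMv3Processor: bundles both components."""
    def __init__(self, feature_extractor, tokenizer):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

def set_apply_ocr(processor):
    # Mirrors the failing line in configuration_layoutlmv3.py
    setattr(processor.feature_extractor, "apply_ocr", False)

proc = Processor(feature_extractor=type("FE", (), {})(), tokenizer=TokenizerOnly())
set_apply_ocr(proc)  # fine: the processor exposes a feature_extractor

try:
    set_apply_ocr(TokenizerOnly())  # what the export effectively does here
except AttributeError as err:
    print(err)  # 'TokenizerOnly' object has no attribute 'feature_extractor'
```

So presumably the exporter needs to pick up the full processor rather than just the fast tokenizer, but I do not know how to force that from the CLI.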
Am I missing something very obvious?