Convert CLIP model to ONNX

I want to convert the model openai/clip-vit-base-patch16 to ONNX. Since I'm a newbie, I tried reading Optimum's documentation, and it does seem to cover ONNX export. But when converting, I get this error (a sketch of my export call is at the end of this post):
```
clip model type is not supported yet in NormalizedConfig. Only albert, bart, bert, blenderbot, blenderbot-small, bloom, falcon, camembert, codegen, cvt, deberta, deberta-v2, deit, distilbert, donut-swin, electra, encoder-decoder, gemma, gpt2, gpt-bigcode, gpt-neo, gpt-neox, gptj, imagegpt, llama, longt5, marian, markuplm, mbart, mistral, mixtral, mpnet, mpt, mt5, m2m-100, nystromformer, opt, pegasus, pix2struct, phi, phi3, phi3small, poolformer, regnet, resnet, roberta, segformer, speech-to-text, splinter, t5, trocr, vision-encoder-decoder, vit, whisper, xlm-roberta, yolos, qwen2 are supported. If you want to support clip please propose a PR or open up an issue.
```
Is there any way to fix this error?
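
For reference, this is roughly what I'm running. It's a minimal sketch based on the Optimum export docs; the task name and output directory are my own choices, not from an official example for this exact model:

```python
# Minimal sketch of my export attempt, following the Optimum ONNX export docs.
# The task and output directory below are my own guesses.
from optimum.exporters.onnx import main_export

main_export(
    model_name_or_path="openai/clip-vit-base-patch16",
    output="clip_onnx",                     # directory for the exported files
    task="zero-shot-image-classification",  # the task I picked for CLIP
)
```

If it matters, I believe the equivalent CLI call would be `optimum-cli export onnx --model openai/clip-vit-base-patch16 clip_onnx/`.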