Hello, I wanted to use the saved model ncoop57/multilingual-codesearch · Hugging Face, but first of all this code:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ncoop57/multilingual-codesearch")
model = AutoModel.from_pretrained("ncoop57/multilingual-codesearch")
```
doesn’t work. It outputs the following, which suggests that the model cannot be loaded from the weights alone:
```
ValueError: Unrecognized model in ncoop57/multilingual-codesearch. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas
```
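For what it's worth, the error says the checkpoint's `config.json` is missing a `model_type` key, so one possible workaround is to patch that key into the downloaded `config.json` before loading. This is only a sketch: the config contents below are simulated, a temporary directory stands in for the checkpoint folder, and `"bert"` is an assumption you would have to replace with whatever architecture the checkpoint actually uses.

```python
import json
import os
import tempfile

# Simulated contents of a config.json that lacks a "model_type" key,
# which is what the ValueError complains about.
config = {"hidden_size": 768, "num_hidden_layers": 12, "num_attention_heads": 12}

# Patch in the architecture family so the Auto classes can dispatch.
# "bert" is an ASSUMPTION here -- use the checkpoint's real architecture.
config["model_type"] = "bert"

# Write the patched config back next to the downloaded weights
# (a temporary directory stands in for the checkpoint folder here).
checkpoint_dir = tempfile.mkdtemp()
config_path = os.path.join(checkpoint_dir, "config.json")
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

with open(config_path) as f:
    print(json.load(f)["model_type"])  # -> bert
```

With the patched folder in place, `AutoModel.from_pretrained(checkpoint_dir)` should at least get past the "Unrecognized model" error, provided the guessed architecture matches the weights.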
I then downloaded the pytorch bin file, but it only contains the weight dictionary (a state dictionary, as described here: What is a state_dict in PyTorch — PyTorch Tutorials 1.8.1+cu102 documentation). That means that to use the model I have to instantiate the right architecture first and then load the weights into it. But how am I supposed to find the architecture that fits the weights of a model this complex? I saw that some methods can recover the model from the state dictionary (I was thinking of this: Auto Classes — transformers 4.5.0.dev0 documentation), but I didn't manage to make them work, so I'm not sure. Can someone help me? How can one recover the architecture from a state dictionary in order to make the model work? Is it even possible?
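As a rough starting point, the key names inside the state dictionary usually reveal the architecture: Hugging Face checkpoints typically prefix parameter names with the model family (e.g. `bert.embeddings...` or `roberta.encoder.layer.0...`). Here is a minimal sketch of that heuristic; `guess_architecture` is a made-up helper, the family list is a small subset of the names in the error message, and the keys are simulated rather than loaded with `torch.load`.

```python
# Hypothetical helper: guess the architecture family from state_dict key names.
KNOWN_FAMILIES = ("roberta", "distilbert", "albert", "electra", "bert", "gpt2", "t5")

def guess_architecture(state_dict_keys):
    """Return the first known family that prefixes a parameter name, else None."""
    for key in state_dict_keys:
        prefix = key.split(".", 1)[0]
        if prefix in KNOWN_FAMILIES:
            return prefix
    return None

# With a real checkpoint you would obtain the keys with something like:
#   keys = torch.load("pytorch_model.bin", map_location="cpu").keys()
# Here we simulate a few BERT-style parameter names instead.
keys = [
    "bert.embeddings.word_embeddings.weight",
    "bert.encoder.layer.0.attention.self.query.weight",
]
print(guess_architecture(keys))  # -> bert
```

Once you have a guess, you can instantiate that family's class (e.g. `BertModel`) with a matching config and call `load_state_dict` on it; mismatched or missing keys in the resulting error message then tell you whether the guess was wrong.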