EleutherAI / lm-evaluation-harness on a custom model

I have a model that was fine-tuned with quantization and LoRA via AutoTrain. The model is hosted on Hugging Face at https://huggingface.co/theoracle/gemma_italian_camoscio
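
For context, because the fine-tune used quantization + LoRA, I believe the repo only contains the adapter weights (adapter_config.json plus the adapter tensors) rather than a full merged model with its own config.json. A rough sketch of how such an adapter would normally be loaded with the peft library, just to show what I mean (the base model is whatever is recorded in adapter_config.json):

# Sketch only: load the LoRA adapter repo with PEFT.
# AutoPeftModelForCausalLM reads the base model name from adapter_config.json
# and pulls the base model itself, then applies the adapter on top.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_repo = "theoracle/gemma_italian_camoscio"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_repo, device_map="auto")
base_model_name = model.peft_config["default"].base_model_name_or_path
tokenizer = AutoTokenizer.from_pretrained(base_model_name)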

I want to evaluate it with lm-evaluation-harness using a command like:

!python -m lm_eval \
    --model hf \
    --model_args pretrained=theoracle/gemma_italian_camoscio \
    --tasks xcopa_it,hellaswag_it,lambada_openai_mt_it,belebele_ita_Latn,arc_it \
    --device cuda:0 \
    --batch_size 8

but I always get an error saying the repo does not appear to have a file named config.json.

That message seems misleading, because I have often seen the same error when no GPU is available or when the required libraries are not installed. How can I fix this?
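
In case it matters: is the expected approach to point the harness at the base model and pass the adapter through peft= in --model_args, along the lines of the sketch below? (Here google/gemma-7b is only a placeholder for whatever base model AutoTrain actually used; the real name should come from the adapter's adapter_config.json.)

!python -m lm_eval \
    --model hf \
    --model_args pretrained=google/gemma-7b,peft=theoracle/gemma_italian_camoscio \
    --tasks xcopa_it,hellaswag_it,lambada_openai_mt_it,belebele_ita_Latn,arc_it \
    --device cuda:0 \
    --batch_size 8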