How to use Hugging Face Transformers models to evaluate a dataset

I’m struggling to replicate the results from this repository using other LLMs such as Llama. I’m working in Google Colab; I’ve already cloned the repository and installed the required packages.

The README says you can use any model from Hugging Face Transformers, but I can’t figure out what to pass for the “model” and “model_args” parameters:

# running 3-shot with CoT for GPT-4V on ENEM 2022
python main.py \
    --model chatgpt \
    --model_args engine=gpt-4-vision-preview \
    --tasks enem_cot_2022_blind,enem_cot_2022_images,enem_cot_2022_captions \
    --description_dict_path description.json \
    --num_fewshot 3 \
    --conversation_template chatgpt

If you go to the Llama model page on Hugging Face and click on “Use in Transformers”, you get this:

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
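Note that the string passed to from_pretrained() is a single Hugging Face Hub repository ID in “owner/name” form, not two separate values. A minimal sketch of what the two parts mean (the split itself is just illustrative):

```python
# The argument to from_pretrained() is one Hub repo ID, "owner/name",
# analogous to a GitHub repository path; it is not meant to be split
# across separate --model and --model_args flags.
repo_id = "meta-llama/Meta-Llama-3-8B"
owner, name = repo_id.split("/", 1)
print(owner)  # meta-llama       (the organization)
print(name)   # Meta-Llama-3-8B  (the model repository)
```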

So I tried “--model meta-llama” and “--model_args Meta-Llama-3-8B”, but that doesn’t work. Like so:

!python main.py \
  --model meta-llama \
  --model_args Meta-Llama-3-8B \
  --tasks enem_cot_2022_blind,enem_cot_2022_captions \
  --description_dict_path description.json \
  --num_fewshot 3

I get:

Selected Tasks: ['enem_cot_2022_blind', 'enem_cot_2022_captions']
Traceback (most recent call last):
  File "/content/gpt-4-enem/main.py", line 112, in <module>
    main()
  File "/content/gpt-4-enem/main.py", line 81, in main
    results = evaluator.simple_evaluate(
  File "/content/gpt-4-enem/lm_eval/utils.py", line 164, in _wrapper
    return fn(*args, **kwargs)
  File "/content/gpt-4-enem/lm_eval/evaluator.py", line 66, in simple_evaluate
    lm = lm_eval.models.get_model(model).create_from_arg_string(
  File "/content/gpt-4-enem/lm_eval/models/__init__.py", line 16, in get_model
    return MODEL_REGISTRY[model_name]
KeyError: 'meta-llama'
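For context, the traceback comes from a registry lookup: in lm-eval-style harnesses, --model is a key into a fixed dictionary of supported backends, not a Hugging Face organization. A minimal sketch (illustrative names, not the repo’s actual code):

```python
# Sketch of how an lm-eval-style harness resolves the --model flag.
# The registry maps backend names to adapter classes; the values here
# are placeholders, not the repo's real classes.
MODEL_REGISTRY = {
    "gpt2": "HuggingFaceCausalLM",
    "chatgpt": "OpenAIChatLM",
}

def get_model(model_name: str):
    # Any unregistered name ("meta-llama", "llama", ...) raises KeyError,
    # which is exactly the failure shown in the traceback above.
    return MODEL_REGISTRY[model_name]
```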

You can easily see the reference to transformers in the linked file.
Hugging Face model IDs follow the same “owner/repository” format as GitHub, so for your issue you should run with

--model gpt2
--model_args pretrained=meta-llama/Meta-Llama-3-8B

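The reason the bare repo ID fails is that the harness parses --model_args as a comma-separated list of key=value pairs, so the Hub ID has to be supplied under a key such as pretrained. A sketch of that parsing (assumed behavior, modeled on lm-eval’s argument handling; the function name here is hypothetical):

```python
# Sketch of --model_args parsing: "key1=val1,key2=val2" -> dict.
# A bare value with no "=" (e.g. "Meta-Llama-3-8B") cannot be parsed
# this way, which is why the repo ID must be passed as pretrained=...
def parse_model_args(args_string: str) -> dict:
    if not args_string:
        return {}
    return dict(item.split("=", 1) for item in args_string.split(","))

args = parse_model_args("pretrained=meta-llama/Meta-Llama-3-8B")
print(args["pretrained"])  # meta-llama/Meta-Llama-3-8B
```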
Strictly speaking, this does seem off-topic for this forum and would likely be better addressed by opening a GitHub issue on that repository.