Currently I’m trying to get the LM evaluation harness (lm-evaluation-harness) running, without success. I was curious if there is an easy way to benchmark or evaluate pre-trained generative text models inside the Hugging Face library. I’m sorry if this is really obvious.
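For reference, my understanding is that recent releases of the harness expose a Python entry point roughly like the sketch below; the model id (`gpt2`) and task (`hellaswag`) are only placeholders I picked for illustration, and I may well be holding the API wrong, which is part of the question:

```python
# Rough sketch of driving lm-evaluation-harness from Python.
# Assumes a recent release that exposes lm_eval.simple_evaluate;
# the model and task names here are placeholders, not the ones I actually need.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # Hugging Face transformers backend
    model_args="pretrained=gpt2",  # any causal LM hub id or local path
    tasks=["hellaswag"],           # one or more benchmark task names
    num_fewshot=0,
    batch_size=8,
)

# Aggregate metrics per task should land under results["results"]
print(results["results"])
```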