ECHOscore: Evaluator Cockpit for Harmony Outcomes
How do we know if a prompt was really heard?
ECHOscore is an open, modular cockpit for evaluating prompt-output quality. It measures not only accuracy, but also asks:
“Did the model respond in the spirit of the prompt?”
What’s inside:
- Prompt hygiene checks
- Dual-side (Prompt ↔ Output) evaluation
- Scoring modules for tone, engagement, and intent clarity
- Designed for integration with LoRA fine-tuning, evaluator pipelines, and safety research
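As a sketch of how a dual-side (Prompt ↔ Output) scoring module might look: the class name, score dimensions, and the term-overlap heuristic below are illustrative assumptions for this document, not the actual ECHOscore API.

```python
# Hypothetical sketch of a dual-side scoring module.
# All names and the overlap heuristic are illustrative, not ECHOscore's real code.
from dataclasses import dataclass

@dataclass
class EchoScore:
    tone: float            # 0-1: does the output match the prompt's register?
    engagement: float      # 0-1: does the output address the prompt directly?
    intent_clarity: float  # 0-1: is the prompt's intent reflected back?

    @property
    def overall(self) -> float:
        # Simple unweighted mean; a real module could weight each dimension.
        return (self.tone + self.engagement + self.intent_clarity) / 3

def evaluate(prompt: str, output: str) -> EchoScore:
    """Toy dual-side check: the score rises when the output reuses the
    prompt's key terms (a crude proxy for 'responding in its spirit')."""
    prompt_terms = {w.lower().strip(".,!?") for w in prompt.split() if len(w) > 3}
    output_terms = {w.lower().strip(".,!?") for w in output.split()}
    overlap = len(prompt_terms & output_terms) / max(len(prompt_terms), 1)
    return EchoScore(tone=overlap, engagement=overlap, intent_clarity=overlap)

score = evaluate("Summarize the safety report briefly",
                 "Here is a brief summary of the safety report.")
print(round(score.overall, 2))
```

A real scoring module would replace the overlap heuristic with learned or rubric-based scorers per dimension, but the interface (two sides in, a structured score out) is the point of the sketch.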
We are currently looking for:
- Contributors for scoring module development
- Research partners for benchmarking and testing
- Early adopters who care about behavioral alignment and prompt safety
If this resonates with your work, we’d love to echo with you.
→ eugene.p.xiang@gmail.com / Repo Link Here: ECHOscore - a Hugging Face Space by EugeneXiang