Safe_Mode = [True, False]

I’m looking for alignment and safety pros to collaborate on a framework for evaluating open-source LLMs. I have a hunch the open-source community can deliver responsible AI faster than the proprietary shops. Is anyone else working on this?

I’m starting the process with an evaluation of Mistral. I welcome your feedback!