How are people building QA and guard rails into automated systems, especially when using AI?
I’m working on a project where an AI supervisor monitors my automations and notifies me of potential errors or high-risk tasks that require human approval or revision. I’ve also built in a feedback loop: when an output turns out to be wrong, feedback is gathered so I can improve the automation and the prompt that fed into it.
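To make the pattern concrete, here’s a minimal sketch of that supervisor idea in Python. Everything here is illustrative: `Supervisor`, `risk_fn`, and the keyword-based classifier are stand-ins (a real version would call an AI model to score risk), not any existing package.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Review:
    task: str
    risk: str       # "low" or "high" (illustrative scale)
    approved: bool

@dataclass
class Supervisor:
    # risk_fn stands in for an AI model call that scores a task's risk.
    risk_fn: Callable[[str], str]
    feedback_log: list = field(default_factory=list)

    def process(self, task: str, human_approve: Callable[[str], bool]) -> Review:
        risk = self.risk_fn(task)
        if risk == "high":
            # High-risk tasks are held for explicit human approval.
            approved = human_approve(task)
        else:
            approved = True
        return Review(task, risk, approved)

    def record_feedback(self, review: Review, was_wrong: bool, note: str = "") -> None:
        # Store wrong outputs so the upstream automation/prompt can be improved.
        if was_wrong:
            self.feedback_log.append({"task": review.task, "note": note})

# Usage: a crude keyword check plays the role of the AI risk classifier.
sup = Supervisor(risk_fn=lambda t: "high" if "delete" in t else "low")
r1 = sup.process("send weekly report", human_approve=lambda t: True)
r2 = sup.process("delete customer records", human_approve=lambda t: False)
sup.record_feedback(r1, was_wrong=True, note="report had stale data")
```

The design choice worth noting is that the gate is structural: high-risk work can’t proceed without the `human_approve` callback returning `True`, rather than relying on the model to police itself.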
If anyone knows of software or a package that does this, I’d be curious to take a look.