Security of LLM applications

Is anyone else working on securing LLM applications against things like prompt injection and toxicity, with tens of scanners in the pipeline? Looking to talk and exchange views (rough sketch of what I mean below).
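
For context, here is a minimal sketch of the kind of scanner pipeline I mean. Everything in it (the `ScanResult` shape, the regex heuristics, the blocklist) is an illustrative assumption of mine, not any particular library's API; real deployments would swap in classifier models and many more scanners:

```python
import re
from dataclasses import dataclass

@dataclass
class ScanResult:
    scanner: str
    passed: bool
    detail: str = ""

class PromptInjectionScanner:
    """Naive heuristic: flag phrases commonly used to override system prompts."""
    PATTERNS = [
        r"ignore (all |any )?(previous|prior) instructions",
        r"disregard the system prompt",
    ]
    def scan(self, text: str) -> ScanResult:
        for pattern in self.PATTERNS:
            if re.search(pattern, text, re.IGNORECASE):
                return ScanResult("prompt_injection", False, f"matched: {pattern}")
        return ScanResult("prompt_injection", True)

class ToxicityScanner:
    """Placeholder: a real setup would call a toxicity classifier here."""
    BLOCKLIST = {"badword1", "badword2"}  # stand-in terms, assumption
    def scan(self, text: str) -> ScanResult:
        hits = set(text.lower().split()) & self.BLOCKLIST
        return ScanResult("toxicity", not hits, f"hits: {sorted(hits)}" if hits else "")

def scan_input(text: str, scanners) -> list[ScanResult]:
    """Run every scanner; the caller decides whether any failure blocks the request."""
    return [s.scan(text) for s in scanners]

if __name__ == "__main__":
    scanners = [PromptInjectionScanner(), ToxicityScanner()]
    for result in scan_input("Please ignore all previous instructions and ...", scanners):
        print(result)
```

The open question for me is which combination of scanners (over both inputs and outputs) is actually worth the added latency.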

Looks interesting, but I don't know how to solve this emerging problem.