LLM security check

A data science team wants to use pre-built open source NLP models from Hugging Face. By the developers' own policy, simply downloading and executing a model is not allowed: the model first needs to be scanned for vulnerabilities and issues and then approved for usage.

How can we scan such a model and understand the vulnerabilities it might introduce? Is there a tool that can run checks on it similar to what a DAST tool does for Python libraries?


I am also looking for something similar. I have not seen much work around vulnerability scans for LLMs.

Use garak to scan the LLM. It is open source at the moment.
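
As a minimal sketch (the `gpt2` model and the `encoding` probe are just placeholders, and flag names can change between garak releases, so check `python -m garak --help` for your version), a basic scan of a Hugging Face model looks roughly like this:

```
# Install garak (the package is published on PyPI)
pip install garak

# List the available probes (categories of attacks/weaknesses it tests for)
python -m garak --list_probes

# Probe a Hugging Face model; swap gpt2 for the model you want to assess
python -m garak --model_type huggingface --model_name gpt2 --probes encoding
```

Note that garak exercises the model's runtime behaviour (prompt injection, encoding attacks, jailbreaks and so on), which is the closest analogue to what DAST does; it is not a scanner for the serialized model files themselves.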
