AI Recruiting Agent — Bias-Aware Candidate Evaluation with RAG + Audits

Hi Hugging Face community :waving_hand:

I wanted to share a prototype Space I’ve been building called AI Recruiting Agent — a Gradio-based demo that explores how LLMs can support recruiting workflows without becoming a black box.

:backhand_index_pointing_right: Live Space:
https://huggingface.co/spaces/19arjun89/AI_Recruiting_Agent

:backhand_index_pointing_right: Code:
https://huggingface.co/spaces/19arjun89/AI_Recruiting_Agent/blob/main/app.py


:rocket: What it does

This Space has two main modes:

1) Candidate Assessment (Recruiter View)

  • Upload company culture documents

  • Upload resumes in bulk

  • Paste a job description

  • The system evaluates each candidate across:

    • Technical skills match

    • Culture fit

    • A final hiring recommendation

    • Claim verification against source inputs

    • A structured bias audit

2) Cold Email Generator (Candidate View)

  • Upload a single resume

  • Paste a job description

  • Generates a tailored outreach email grounded in the role requirements and the candidate's experience


:shield: Why I built it this way (Responsible AI focus)

Beyond hallucinations, one of the biggest risks in AI recruiting is algorithmic bias.
So I built multiple safeguards into both the input and output stages:

:small_blue_diamond: Resume Anonymization
Resumes are sanitized before embedding and analysis (emails, phone numbers, addresses, likely name headers, and explicit demographic fields are redacted). This forces the system to focus on professional qualifications rather than demographic signals.
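The real redaction logic lives in app.py; as a minimal stdlib sketch of the idea (my own regex patterns and name-header heuristic, not necessarily the ones the Space uses):

```python
import re

# Hypothetical, simplified PII patterns — illustration only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(\+?\d[\d\s().-]{7,}\d)")

def anonymize_resume(text: str) -> str:
    """Redact emails, phone numbers, and a likely name header before embedding."""
    lines = text.splitlines()
    # Heuristic: a short first line with no digits is probably the candidate's name.
    if lines and len(lines[0].split()) <= 4 and not any(c.isdigit() for c in lines[0]):
        lines[0] = "[CANDIDATE]"
    redacted = "\n".join(lines)
    redacted = EMAIL_RE.sub("[EMAIL]", redacted)
    redacted = PHONE_RE.sub("[PHONE]", redacted)
    return redacted
```

Everything downstream (embedding, retrieval, scoring) only ever sees the redacted text, so demographic signals never reach the model.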

:small_blue_diamond: Fact Verification
Every skills and culture analysis is checked against:

  • Resume content

  • Job description

  • Culture documents

Unsupported claims are flagged and can trigger a self-correction routine.
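In the Space this check runs through an LLM chain; as a toy illustration of the grounding idea (a lexical-overlap stand-in I wrote for this post, not the app.py implementation):

```python
def support_score(claim: str, sources: list[str]) -> float:
    """Fraction of the claim's content words that appear in any source document."""
    stop = {"the", "a", "an", "and", "of", "in", "with", "has", "is"}
    words = {w.strip(".,").lower() for w in claim.split()} - stop
    if not words:
        return 1.0
    corpus = " ".join(sources).lower()
    return sum(w in corpus for w in words) / len(words)

def flag_unsupported(claims: list[str], sources: list[str], threshold: float = 0.6):
    # Claims scoring below the threshold get flagged for self-correction.
    return [c for c in claims if support_score(c, sources) < threshold]
```

The production version asks the model to cite which source chunk supports each claim, which catches paraphrases this word-overlap toy would miss.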

:small_blue_diamond: Bias Audit Chain
For each candidate, a secondary “bias audit” prompt reviews:

  • Over-reliance on education pedigree or past employers

  • Penalizing nontraditional career paths

  • Subjective or exclusionary cultural language

  • Reasoning not grounded in source documents

The audit outputs structured Bias Indicators and a Transparency Note for recruiter review.
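The audit prompt itself runs through the LLM; to show what "structured output" means here, a sketch of parsing the audit response into its two sections (the exact section headers are my assumption, not necessarily the ones app.py expects):

```python
def parse_bias_audit(response: str) -> dict:
    """Split an audit response into bias indicators and a transparency note.

    Assumes the prompt instructs the model to emit a 'Bias Indicators:'
    bullet list followed by a 'Transparency Note:' paragraph.
    """
    indicators, note_lines, section = [], [], None
    for line in response.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("bias indicators"):
            section = "indicators"
        elif stripped.lower().startswith("transparency note"):
            section = "note"
            # Keep any text that follows the header on the same line.
            _, _, rest = stripped.partition(":")
            if rest.strip():
                note_lines.append(rest.strip())
        elif section == "indicators" and stripped.startswith("-"):
            indicators.append(stripped.lstrip("- ").strip())
        elif section == "note" and stripped:
            note_lines.append(stripped)
    return {"bias_indicators": indicators, "transparency_note": " ".join(note_lines)}
```

Parsing into a dict (rather than showing raw model text) is what lets the UI render the indicators as discrete, reviewable flags.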

These checks don’t disqualify candidates automatically — they flag where human judgment is critical.


:brick: Tech stack

  • Gradio (UI)

  • LangChain (LLM orchestration)

  • Chroma (vector search for resumes & culture docs)

  • ChatGroq (LLM inference)

  • Hugging Face embeddings (semantic search)


:open_file_folder: Sample inputs

If you don’t have test files handy:

  • I’ve added sample resumes and culture documents in the Files section of the Space

  • The UI also includes step-by-step instructions for first-time users


:folded_hands: What I’d love feedback on

I’m especially interested in:

  1. Bias mitigation ideas I should add

  2. Better ways to structure or score the bias audit output

  3. Failure modes you can think of (edge cases, weird resumes, etc.)

  4. UX improvements for first-time users

  5. Responsible AI patterns you’ve used in similar demos


:warning: Disclaimer

This is a research prototype intended for demonstration only.
It does not replace recruiter judgment, legal review, or organizational hiring policies.

Final hiring decisions must always be made by humans.


Thanks in advance for checking it out!
Happy to iterate based on community feedback :raising_hands:
Arjun
