Biases in AI Hallucinations Based on Context

Hi Everyone!

I’m starting a small research project on how LLMs can generate biased answers based on things like race, sex, or age of the person asking the questions. It’s based on this paper that talks about how even the name of the user can affect the likelihood of getting certain answers over another, even if asking the same question.

Some questions I’m interested in are things like:

  1. Average Salaries for jobs that are male-dominated, or race-dominated
  2. Estimation on how much a person’s bail/fine would be
  3. Rating how well an essay is written
  4. Credit Scores
  5. Numbers of households affected by redlining or gentrification

I would be interested to hear if anyone else has done this, or if anyone would like to contribute to the discussion!

1 Like