Ethical AI x Narrative Intervention

Hey folks! Complete beginner here (at least in this space).

I’m working on a small, experimental project that brings together LLMs, open-source tools, and online harm reduction. We’re looking for a developer to help build a quick proof-of-concept bot that can (rough sketch just after this list):

  • Flag risky/bias-prone content
  • Offer reworded alternatives or counter-messages
  • (Eventually) learn from the dialogue it creates
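
To make that concrete, here’s the kind of minimal loop I have in mind. Treat everything in it as a placeholder: I’m assuming unitary/toxic-bert from Hugging Face for the flagging step and OpenAI’s chat API (gpt-4o-mini here) for the rewording step, but the actual model choices are wide open.

    # pip install transformers torch openai
    from transformers import pipeline
    from openai import OpenAI

    # Placeholder models; swap in whatever flagger/rewriter we settle on.
    flagger = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)
    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def review_content(text: str, threshold: float = 0.5) -> str:
        """Flag risky content and, if anything trips the threshold, suggest a rewording."""
        scores = flagger([text])[0]  # every label's score for this one input
        flagged = [s["label"] for s in scores if s["score"] >= threshold]
        if not flagged:
            return "Looks OK: " + text
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Reword the user's text to remove toxicity and bias "
                            "while keeping the underlying point."},
                {"role": "user", "content": text},
            ],
        )
        reworded = reply.choices[0].message.content
        return f"Flagged ({', '.join(flagged)}). Suggested rewording: {reworded}"

    if __name__ == "__main__":
        print(review_content("Example post to check goes here."))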

It’s part of a bigger vision around narrative shaping and ethical tech — details to come once we connect. For now, just looking to test a core idea.

Ideal if you’ve played with:

  • LLMs (OpenAI, Hugging Face, Mistral)
  • Toxicity detection / content moderation tools
  • Streamlit, Flask, or simple chat interfaces (see the second sketch below)
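
On the interface side, a Streamlit chat UI for this can stay tiny. Another rough sketch, assuming the review_content helper from the sketch above is saved in a bot.py module (the module name is just made up):

    # pip install streamlit   (run with: streamlit run app.py)
    import streamlit as st

    from bot import review_content  # hypothetical module holding the sketch above

    st.title("Narrative-intervention bot (proof of concept)")

    # Keep the conversation across Streamlit reruns.
    if "history" not in st.session_state:
        st.session_state.history = []

    for role, text in st.session_state.history:
        st.chat_message(role).write(text)

    if prompt := st.chat_input("Paste some content to check"):
        st.session_state.history.append(("user", prompt))
        st.chat_message("user").write(prompt)
        answer = review_content(prompt)
        st.session_state.history.append(("assistant", answer))
        st.chat_message("assistant").write(answer)

Flask would work just as well; Streamlit is only here because it gets a chat UI on screen fastest.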

Short-term, async-friendly, with potential for long-term collab + grant-funded work later.

Ping me here if you’re curious. Thanks so much!
x