What I learned shipping my first OSS AI project: ~500 ⭐ in ~60 days (no funding, no ads, no team)

TL;DR

I open-sourced a tiny, MIT-licensed “reasoning layer” called WFGY. I spent two months mostly helping people debug real issues, and that alone grew the project to ~500 GitHub stars and 4,000+ paper downloads, with zero budget, no marketing, and no team. I’m sharing what worked (and what didn’t) for anyone starting their first OSS project.

(attaching my star-growth image here)


What worked (surprisingly well)

  1. Solve actual pain, in public
    Most stars came after I helped folks fix real bugs (RAG drift, OCR citations, long-context entropy). I didn’t “pitch”—I posted concise replies, named the failure mode, and gave a minimal fix path.

  2. Make it reproducible in ~60 seconds
    Trying something should be easier than debating it. My default instruction became:

Upload the WFGY PDF to your model (Claude/GPT/etc).
Then ask: “Use WFGY to answer: <your question>”.
Compare before vs after.

No fine-tuning, no settings. If people can’t reproduce fast, they won’t even argue; they’ll just bounce. (A minimal scripted version of this before/after check appears after this list.)

  3. One file > one framework
    Beginners don’t want a stack. A single PDF (MIT) that any LLM can “consult” lowered the barrier a lot.

  4. Ask for negative results
    I explicitly invited “this didn’t work for me” traces. Those led to better docs and a clearer Problem Map (16 common failure modes) so newcomers can self-diagnose.

  5. Rewrite for each community
    Same core idea, different angles: data engineering (pipeline failures), NLP (semantic stability), Claude/GPT subs (how to run it in chat), RAG builders (retrieval sanity checks). Copy-pasting the same post gets ignored.
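
A side note for anyone who’d rather script that 60-second before/after check than run it in a chat UI: here is a minimal sketch, assuming the Anthropic Python SDK and its base64 PDF document input. The file name, model id, and sample question are placeholders; the same pattern should work with any API that accepts file attachments.

import base64

import anthropic

# Minimal before/after check for the "upload the PDF, then ask" workflow.
# Assumes `pip install anthropic`, ANTHROPIC_API_KEY in the environment,
# and a local copy of the WFGY PDF next to this script.
client = anthropic.Anthropic()
QUESTION = "Why does my RAG pipeline cite the wrong chunk?"  # placeholder

def ask(content) -> str:
    # Send a single user message and return the text of the reply.
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # any PDF-capable model works
        max_tokens=1024,
        messages=[{"role": "user", "content": content}],
    )
    return resp.content[0].text

# "Before": the bare question, nothing attached.
before = ask(QUESTION)

# "After": attach the PDF as a document block and ask the model to use it.
with open("WFGY.pdf", "rb") as f:
    pdf_b64 = base64.standard_b64encode(f.read()).decode()

after = ask([
    {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": "application/pdf",
            "data": pdf_b64,
        },
    },
    {"type": "text", "text": f"Use WFGY to answer: {QUESTION}"},
])

print("--- before ---\n" + before)
print("--- after ----\n" + after)

Reading the two answers next to each other is the whole demo; if they look the same, that’s a useful negative result too.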


What didn’t work

  • Generic “here’s my project” posts
    Threads with no runnable snippet got low engagement.

  • Too many links
    Even good content gets auto-filtered when a post is stuffed with links. I now default to one link per post and describe the rest inline.


What WFGY is (one paragraph, beginner-friendly)

WFGY is a tiny symbolic math layer a model can reference while it reasons: think constraints, guardrails, and recovery. You don’t change weights or settings; you just attach the PDF and ask the model to “use WFGY” on your question. In side-by-side comparisons, people typically see fewer over-expanded reasoning chains and tighter constraint-keeping on reasoning tasks. It won’t add missing domain facts; it just helps the model stay logically sane.


Try it (no pressure)

If you’re curious, you can start with the README and the PDF in the repo (MIT licensed).

If you do try it, I’d love to hear one of two things:

  • a tiny win (“this fixed X in my pipeline”), or

  • a tiny failure (“this didn’t help on Y; here’s the trace”).

Both help the project evolve—and help the next beginner ship faster.


Takeaways for first-time OSS authors

  • Pick one real pain and solve it in public.

  • Make the core demo copy-pasteable and model-agnostic.

  • Keep your scope tiny (one PDF, one prompt).

  • Invite negative feedback; publish the fixes back.

  • Celebrate the small wins—open source is a marathon.

Thanks for reading! If anyone here wants help debugging a stubborn reasoning/RAG issue, drop a short example—I’ll try to map it to the right fix path so others can learn from it too.
