A few weeks ago I was lurking in a Discord for a tool I use occasionally. Someone asked a fairly specific question about a config edge case. Two minutes later, a confident-sounding answer appeared from a regular member. The answer had the right shape. Right code style. Plausible API references. Half of it was wrong.
Nobody caught it for almost a day. The person who asked actually thanked the responder, went off, presumably wasted a couple of hours, and came back annoyed.
I do not know for sure that the answer was AI-generated. I am pretty sure though. And the reason I am pretty sure is that this is the new failure mode in developer communities, and once you start looking for it, it is everywhere.
The Stack Overflow story is the obvious one
In December 2022, Stack Overflow issued a temporary ban on ChatGPT-generated answers. The reasoning was direct: the answers had a high error rate, but they looked correct, which made them harder to moderate than just bad answers from humans. Plausible-but-wrong is more dangerous than obviously-wrong.
That ban became permanent policy and then, in May 2024, Stack Overflow announced a partnership with OpenAI to feed Stack Overflow content into ChatGPT. So the platform that banned AI-generated answers started licensing its human-generated answers to train the AI generating those answers. A lot of users were not thrilled. Some deleted their highest-voted answers in protest, and a number got suspended for it.
Meanwhile, the actual question volume on Stack Overflow has continued to decline, with various analyses pointing at the obvious cause. People ask the AI first.
That sequence of events is messy in the telling, but the underlying tension is simple. Communities exist because humans contribute knowledge to other humans. The moment the contribution might not be human, and might not even be correct, the trust contract breaks. And once trust breaks in a community, the contributors leave first. Then the answer quality drops further. Then the readers leave too. The doom loop is well documented.
This is not a Stack Overflow problem, it is everyone's problem
Run a Discord. Run a Discourse. Run a subreddit. Run a Slack workspace for your open source project. The same dynamic is showing up everywhere, just less visibly.
I have seen it in:
- A Discord where a couple of "helpful" members were clearly piping every question through ChatGPT and pasting the response, complete with hallucinated method names
- A Discourse forum where a moderator noticed answers in a niche subforum suddenly getting more polished but less correct
- A subreddit where the moderators had to add a rule about AI-generated submissions because low-effort posts were drowning out the genuine ones
The pattern is always the same. AI lowers the cost of producing a plausible answer to roughly zero. Plausibility used to be a decent proxy for competence, which is why it gated who got upvoted, replied to, or believed. Now the gate does not work.
Common Room's community research and the Orbit model's writing on community health both point at the same underlying signal: communities live or die on the trust relationships between members. AI-generated content does not just add noise. It poisons the signal that lets a community function in the first place.
The honest unresolved tension
Here is the part nobody has a clean answer to, and the reason I think this is the conversation worth having:
Faster answers are good. Reliable answers are good. Right now you can have one or the other, not both.
If your Discord allows AI-generated responses, your average response time drops and your accuracy gets noisier. If you ban them, you slow the community down and probably cannot enforce the ban anyway, because plausibility is exactly what makes detection hard.
There are some directions that look promising, but none of them are settled.
Disclosure norms. Some communities have adopted a culture of marking AI-assisted answers explicitly. "Claude says this, I have not verified it." This works in small, healthy communities. It scales poorly because the bad actors are exactly the ones who will not disclose.
Reputation gating. Make the cost of contributing high enough that low-effort AI dumps are filtered out by the contribution friction itself. This is basically what Stack Overflow's reputation system was designed to do, before AI shifted the cost equation.
Human-only spaces. A few projects I have seen are explicitly carving out invitation-only or paid spaces for "verified human" conversation. There is a real argument for this, and it is also depressing as hell, because what we are saying is that open developer community is becoming a thing you have to pay to access.
AI as triage, not as answer. The most workable pattern I have seen is using AI to summarize, route, and tag, while keeping the actual answer human. The model is doing the boring work, the human is providing the credibility. This is roughly what the better-run community teams I know are doing now.
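To make the division of labor concrete, here is a minimal sketch of the triage pattern, not any particular team's tooling. The channel names and the tag_question() stand-in are assumptions for illustration; in practice that stand-in would be a call to whatever model provider you already use. The structural point is that the bot routes and summarizes, and never produces an answer at all.

```python
# Sketch of "AI as triage, not as answer". Channel names and tag_question()
# are illustrative assumptions, not any real community's setup.

CHANNELS = {
    "config": "#config-help",
    "build": "#build-issues",
    "other": "#general-help",
}

def tag_question(question: str) -> str:
    """Stand-in classifier. In a real bot this would be an LLM call that
    returns one of the known tags; here it is a crude keyword match so the
    sketch runs on its own."""
    text = question.lower()
    if "config" in text or "settings" in text:
        return "config"
    if "build" in text or "compile" in text:
        return "build"
    return "other"

def triage(question: str) -> dict:
    """Route and summarize a question. Note what is missing: there is no
    answer field. The model does the boring work; a human writes the reply."""
    tag = tag_question(question)
    return {
        "route_to": CHANNELS.get(tag, CHANNELS["other"]),
        "tag": tag,
        "preview": question[:120],  # short snippet for the routing channel
    }

if __name__ == "__main__":
    q = "My config file ignores the override flag on restart, is that expected?"
    print(triage(q))
```

The design choice worth copying is the missing answer field: the tooling cannot quietly drift from triage into answering, because there is nowhere for an answer to go.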
The trust contract that makes a community function was always implicit. AI made it the most important thing to be explicit about. Communities that name the contract directly, and enforce it, will probably make it through. The ones that pretend nothing has changed will not.
I do not have this figured out for the communities I am part of. Honestly, I think most community managers are flying blind right now, and the tooling has not caught up.
The thing I am sure of is that "we will deal with it later" is not a strategy. The trust loss is happening now, quietly, in the gap between when a wrong answer gets posted and when somebody finally calls it out. Every gap like that is a small withdrawal from the community's credibility account.
If you run a developer community of any kind, this is the conversation worth having with your members in the next month or two. Not in a year.