Stack Overflow’s moderators are on strike over a new policy that limits their ability to remove AI-generated content. The policy reportedly lowers quality-checking standards and was implemented without consulting the moderators.
The moderators’ main complaint is that they may now remove AI-generated answers only in rare situations. They believe this will lead to an influx of low-quality AI-generated responses on the platform, contrary to the community’s established consensus.
As part of the strike, the moderators have deactivated SmokeDetector, a key tool used to fight spam. Fifteen of the 24 moderators, including Zoe from Norway, are actively participating in the strike, which puts more pressure on the remaining moderators, who now have to handle a larger volume of flagged posts. Zoe highlighted that the policy lacks guidelines for suspending users suspected of posting AI-generated content. She also expressed doubts about the effectiveness of automated AI detectors and their limited integration with the moderators’ tooling.
Philippe Beaudette, VP of Community at Stack Overflow, says he is concerned about the difficulty of correctly identifying AI-generated content and the risk of mistakenly suspending users. Some moderators, however, dispute Beaudette’s characterization of the policy.
He acknowledges that earlier ChatGPT detection tools produced many false positives, leading to unnecessary suspensions that disproportionately affected new contributors, so the company wants to stop using these tools and is actively looking for alternatives.
Sources for this piece include an article from DevClass.