People aren't good at detecting AI-generated or AI-edited comments, so I'm unsure how effective this policy will be. Though I guess there are still some obvious signs of AI speak, like em-dashes and sycophantic ("it's not X, it's Y!") phrasing.
Bit of a shameless plug, but I wrote an HN AI comment detector game[0] with AI, and most of my friends and fellow HN users who tried it out couldn't detect them.
Something I've noticed through moderation is that people are much more easily duped by generated comments if they like the content and/or agree with the point. We've seen several cases where a bot-generated comment has been heavily upvoted and sits at the top of the thread for hours, and any comments calling it out for being generated languish at the bottom of the subthread below other enthusiastic, heavily upvoted replies. This shouldn't be surprising, given what we've seen of LLM chatbots being tuned to be sycophantic, but it's interesting to see it in effect on HN.
This is another reason why it's good to email us (hn@ycombinator.com) rather than commenting when you see generated comments.
Having been reading generated comments almost daily for over three years now, I have a pretty good sense of it. There are a bunch of signals: how new the account is; how the comments look visually (the capitalization and layout of the paragraphs, particularly when all of one user's comments are displayed in a list). Em-dashes and short, emphatic sentences make it more obvious, of course.
There are cases that are more borderline, usually when someone has used a translation service or has used an LLM to polish up a comment they wrote themselves. For these there's less certainty, and whilst we discourage them, we're not as rigid in our aversion to them or as eager to ban the accounts that do it.
But ones that are entirely generated are still pretty easy to spot, even just from visual appearance.
It’s certainly hard to detect in isolation, but the thing that gives it away is the comment history.
All the AI accounts I’ve seen repeatedly post the exact same cookie-cutter top-level comments over and over again. Typically some vapid observation followed by an obviously forced question serving as engagement bait. The paragraphs and sentence structure even look visually similar across comments when you scroll down the history page.
Just look at a few of these accounts and you’ll easily be able to recognize AI posts on your own.
Some of us were trained or self-taught to write that way. Even "it's not X, it's Y" is a legitimate and subjectively effective communication tool, and there are those of us who, by training or by modeling others, have picked it up as a habit. It's not AI that started this; AI learned it from us.
Crap - I just did it, didn't I? Awww double crap! Did it again...
Forums and comments are not written as formal prose. Corporate-speak is also not typically used in these environments unless you are representing a corporation.
So I think it's fine to scrutinize commenters who write that way.
Besides, the biggest offense of AI speak is making everything sound like a grand epiphany or revolutionary discovery. A.k.a. engagement bait.
That's just French for "masterful", or a way to describe lectures. There's a sense of greatness in that word that contrasts with the "Mini" in Ministral, which in turn might be a pun on "ménestrel" (minstrel), "ministre" (minister), or made to sound like Minitel (or all of the above).
psychosis.hn is a daily game. Every day we fetch three stories from a previous front page of HN, each with 5-7 AI comments threaded into the discussion. They have personas, reply to real people, and sometimes have real comments reparented underneath them.
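A minimal sketch of what "threading AI comments into the discussion" and "reparenting real comments underneath them" could look like. This is purely illustrative, not the game's actual implementation: the dict shape and function names are assumptions, loosely modeled on HN's item format (`id`, `by`, `kids`).

```python
# Hypothetical sketch: insert an AI persona's comment into a comment tree
# and optionally "reparent" some existing real replies underneath it.
# A thread is a plain dict: {"id", "by", "text", "kids": [...]}.

def find(node, node_id):
    """Depth-first search for a comment by id; returns None if absent."""
    if node["id"] == node_id:
        return node
    for kid in node.get("kids", []):
        hit = find(kid, node_id)
        if hit is not None:
            return hit
    return None

def insert_ai_comment(thread, parent_id, ai_comment, reparent_ids=()):
    """Attach ai_comment under parent_id. Replies listed in reparent_ids
    are moved from the parent to sit beneath the AI comment instead."""
    parent = find(thread, parent_id)
    if parent is None:
        raise KeyError(f"no comment with id {parent_id}")
    moving = set(reparent_ids)
    moved = [k for k in parent.get("kids", []) if k["id"] in moving]
    parent["kids"] = [k for k in parent.get("kids", []) if k["id"] not in moving]
    ai_comment.setdefault("kids", []).extend(moved)
    parent["kids"].append(ai_comment)
    return thread
```

With this shape, a real reply reparented under the persona visually reads as if the human had answered the bot, which is presumably what makes the game hard.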
I think this comes off a bit too strong (as do the replies to it, to be fair).
The example isn't quite accurate. If a friend bought you lunch, the social norm of reciprocity would incline you to buy them lunch in the future (i.e., with part of your paycheck).
Free open source software is a public good. While there is no obligation to give back, giving back helps that public good become more useful to other people (including your future self). I'm against making contribution an obligation, but I'm not against light social pressure upon philanthropists who have the means (which is what the parent comment was doing).
In the lunch example, reciprocation would be releasing additional software under free software licenses, not payments.
There should be zero social pressure, as gifts do not convey obligation. It was the software author’s explicit choice when licensing and publishing the software to make clear that payment is not expected.
Do you routinely struggle in social situations? Do you frequently have people tell you that you misinterpreted social cues?
You are correct that no legal obligation was created, but generally people feel that if you got something from a community that helped you succeed greatly, you do have an obligation to give something back to the organization to help it help others.
If you don't, that's generally classified by people as being a jackass.
I wonder if anyone has done an analysis of HN user sentiment on the various AI models over time. I'd be curious to see what that looks like. Increasingly, I'm seeing more and more people talk positively about Gemini and Google (and having used Gemini recently, I align with that sentiment).
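As a toy sketch of what such an analysis could start from: tally a crude keyword-based sentiment score per model per month over a set of comments. Everything here is an assumption for illustration (the word lists, the `(month, text)` input shape, the model keywords); a real analysis would need proper data collection and a real sentiment model.

```python
# Toy sentiment-over-time tally, not a real analysis.
from collections import defaultdict

# Tiny illustrative word lists; a serious version would use a sentiment model.
POSITIVE = {"great", "impressive", "good", "love", "better"}
NEGATIVE = {"bad", "worse", "disappointing", "lazy", "hate"}

def sentiment_by_model(comments, models=("gemini", "gpt", "claude")):
    """comments: iterable of (month, text) pairs.
    Returns {model: {month: net_score}} where net_score is the count of
    positive words minus negative words in comments mentioning the model."""
    scores = defaultdict(lambda: defaultdict(int))
    for month, text in comments:
        words = set(text.lower().split())
        net = len(words & POSITIVE) - len(words & NEGATIVE)
        for model in models:
            if model in words:
                scores[model][month] += net
    return {m: dict(v) for m, v in scores.items()}
```

Plotting the per-month net scores would give the sentiment trend lines the parent comment is asking about.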
I think Bard (lol) and Gemini got a late start, so lots of folks dismissed them, but I feel like they've fully caught up. Definitely excited to see what Gemini 3 vs GPT-5 vs Claude 4 looks like!
I'm using Windsurf IDE so have all the main models available. Mainly doing Python, JS, HTML, CSS, some Go. I have found Claude 3.7 outperforms Gemini 2.5 and ChatGPT 4.1, 4o, Deepseek, etc, for my work in most cases.
I suspect that I experience some performance throttling with Gemini 2.5 in my Windsurf setup because it's just not as good as anecdotal reports by others, and benchmarks.
I also seem to run up against a kind of LLM laziness sometimes when they seemingly can't be bothered to answer a challenging prompt ... a consequence of load balancing in action perhaps.
https://blog.rice.is/post/doom-over-dns/