
We should be so far past the "grand debate about its usefulness" at this point.

If you think that's still a debate, you might be listening to the small pool of very loud people who insist nothing has improved since the release of GPT-4.



Have you considered the opposite? Reflected on your own biases?

I’m listening to my own experience. Just today I gave it another fair shot. GitHub Copilot agent mode with GPT-4.1. Still unimpressed.

This is a really insightful look at why people perceive the usefulness of these models differently. It is fair to both sides, without dismissing one side as just not "getting it" or insisting we should be "so far" past the debate:

https://ferd.ca/the-gap-through-which-we-praise-the-machine....


Do either of these impress you?

https://alexgaynor.net/2025/jun/20/serialize-some-der/ - using Claude Code to compose and have a PR accepted into llvm that implements a compiler optimization (more of my notes here: https://simonwillison.net/2025/Jun/30/llvm/ )

https://lucumr.pocoo.org/2025/6/21/my-first-ai-library/ - Claude Code for writing and shipping a full open source library that handles sloppy (hah) invalid XML

Examples from the past two weeks, both from expert software engineers.


Not really, no. Both of those projects are tinkertoy greenfield projects, done by people who know exactly what they're doing.

And both of them heavily caveat that experience:

> This only works if you have the capacity to review what it produces, of course. (And by “of course”, I mean probably many people will ignore this, even though it’s essential to get meaningful, consistent, long-term value out of these systems.)

> To be clear: this isn't an endorsement of using models for serious Open Source libraries... Treat it as a curious side project which says more about what's possible today than what's necessarily advisable.

It does nobody any good to oversell this shit.


A compiler optimization for LLVM is absolutely not a "tinkertoy greenfield project".

I linked to those precisely because they aren't over-selling things. They're extremely competent engineers using LLMs to produce work that they would not have produced otherwise.


Should be, but the bar for "scientifically proven" is high. Absent actual studies (with a large N) showing this, people will refuse to believe things they don't want to be true.


I think this is definitely true for novel writing and the like, based on my experiments with AI so far. I'm still on the fence about coding and building software with it, but that may just reflect the unlearning and re-learning I've yet to do.



