
I have my own side project that I vibe coded. I probably did what would take one team 6 months and produced it myself in one month.

I'm not afraid of breaking stuff because it is only a small set of users. However, for the code I write at my professional job, no way would I go that fast, because I would impact millions of users.

It is insane that companies think they can replace teams wholesale while maintaining quality.




> However, for the code I write at my professional job, no way would I go that fast, because I would impact millions of users

Tech-savvy people might understand this feeling, but those responsible for hiring will happily proceed with another candidate who goes fast.

When push comes to shove, programmers will opt for having food to eat over worrying about the technical debt they generate.


The trick is to keep a layer of management or engineering below you that can be blamed if things go wrong.

Yeah, I vibe coded an addition game for my 4 year old that lets him do addition problems where the answer is always 10 or less. It's very "juicy". There's a lot of screen shake and spinning and flashy rainbow insanity going on. If I had done all that stuff myself it would have taken a week, because I would have been picky about each little animation. The thing that saved me the most time was just being OK with the good-enough animations the AI spit out.

It’s amazing for him and it works on his iPad.

However when I tried it on my iPhone it was a broken mess. Completely unusable (not because of screen size differences).

I tried getting Claude to fix it, but it couldn't do it without changing too much of the look and feel, so I dug into the code, and it was thousands of lines of absolute madness. I know from using this at work that there are things I could have done: write tests to lock in the things I like, etc…

But so much of the speedup was about not caring about the specifics that once I started caring about making an actual product, I was not much faster, maybe not any faster at all. The bottleneck in writing a game was never in banging out code.


> I dug into the code and it was thousands of lines of absolute madness

Ask the AI to assess the code itself and to propose ways to gradually refactor it for better cleanliness. It can be good at that stuff, but you need to make it an explicit goal.


Yeah I tried that but without tests it couldn’t keep the look and feel the same. And spending time thinking deeply enough about it to understand and specify what exactly I don’t want it to change just goes back to my point that coding isn’t the hard part.

Prompting all the way down? Have the AI create tests that document existing, known-good behaviours, then refactor while ensuring those tests pass.

That doesn't work because tests for look and feel are difficult at best, and nearly impossible when the code wasn't designed for them. It's a chicken-and-egg problem: you need to refactor to be able to test things reasonably.

It's not an impossible problem to solve. I could probably set up a test harness that uses the existing game as an oracle, checking that the same sequence of inputs produces the same outputs. But by the time I'd done all that, gotten it to clean up the code, and then diagnosed and fixed the issue, I doubt I would have saved much time at all, if any.
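For what it's worth, that oracle harness can be pretty small. A minimal sketch in TypeScript (the project's language, per below); `legacyStep`, `refactoredStep`, and the `GameState` shape are all made-up stand-ins for the real game logic:

```typescript
// Hypothetical oracle-based regression check: replay the same input
// script through the old (trusted) logic and the refactored logic,
// and fail the moment their states diverge.

type GameState = { score: number; streak: number };

// Stand-in for the original vibe-coded logic, treated as ground truth.
function legacyStep(state: GameState, answer: number, expected: number): GameState {
  if (answer === expected) {
    return { score: state.score + 1, streak: state.streak + 1 };
  }
  return { score: state.score, streak: 0 };
}

// The cleaned-up version must agree with the oracle on every input.
const refactoredStep = legacyStep; // swap in the real refactor here

// Each input is a [answer, expected] pair; returns false on first divergence.
function agreesWithOracle(inputs: Array<[number, number]>): boolean {
  let a: GameState = { score: 0, streak: 0 };
  let b: GameState = { score: 0, streak: 0 };
  for (const [answer, expected] of inputs) {
    a = legacyStep(a, answer, expected);
    b = refactoredStep(b, answer, expected);
    if (a.score !== b.score || a.streak !== b.streak) return false;
  }
  return true;
}
```

The idea is just record/replay: none of this locks in the look and feel (the actual hard part), only the observable state transitions.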


"feed me even more coins and I'll make my code not suck the second time around, pinky promise" vibe.

What language? JavaScript, Objective C, or Swift?

TypeScript.

> I probably did what would take one team 6 months and produced it myself in one month.

I find it… amusing? That's not quite the word. That programmers, a group notorious for making wrong estimates of how long something will take to build, continuously and confidently spew a version of this.

And it’s not even estimating how long we ourselves would take to build something, now we’re onto estimating what an undetermined team of completely made up strangers could do. It’s bonkers. It has no basis in reality.


It's not an estimate. The point was that the AI produced code multiples faster than the prompter, and the prompter is in a pretty good position to make that claim. I can confirm it and make the same claim, so I believe it's true that for some tasks, Claude makes me 10x faster than on my own without AI, where 10x absolutely is a completely made-up number that's still true in spirit.

> It’s not an estimate

Yes, it is. “It would take a team 6 months” is an estimate, and I don’t see how you can argue it’s not. Even if it just said it would take them longer, that would still be an estimate.

> Claude makes me 10x faster than on my own without AI

Also an estimate.

> where 10x absolutely is a completely made up number

And by your own admission, an estimate pulled from your ass that you thus cannot be certain is true. A made-up perception does not equal reality.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


> I don’t see how you can argue it’s not.

Yes you do; you already made the argument yourself when you pointed out that the "team" size and makeup was completely unspecified. Therefore the number is not an estimate, it's just a number.

When you call it an “estimate” you are adding additional unsupported specificity to something that was explicitly stated as being hand-wavy to make an obviously rhetorical point. Are you saying you can’t understand the point being made?

My 10x is based on my experience doing projects with Claude. I also said “some” tasks, not all tasks, and I didn’t specify which tasks, and I clarified that my number is made up, which is why my number is also not an estimate of anything. There are some tasks that Claude can do 10x faster than me, and there are some tasks that it can do 100x faster than me, and there are some tasks I can do faster than Claude... for now... More importantly for me personally, Claude makes starting projects and using tech I don’t already know easier; it’s lower effort, regardless of speed.

The paper is interesting and a valid data point, but I don’t think it proves your point. I’ll respond with a few thoughts.

First, the devs' self-estimate of the AI productivity speedup was +20%, even though their measured productivity was -20%. This may relate to effort rather than speed, and it's important to note that this is a gray zone the paper didn't explore, and something that can be true on both sides: I can be "faster" at developing and still take the same or longer wall-clock time. Measuring the time doesn't capture how the time was spent, nor the qualities of that time.

Second, this study was done a year ago. That's an eternity in AI land: everyone noticed Claude and other models getting substantially better at code writing last fall, and workflows and tooling are improving even faster than the models. There's every reason to believe the outcome of the exact same study might be different this year than it was last year.

Third, this study is explicitly biased toward large projects, and large projects are, even today, where productivity boosts are harder to find. I find Claude absolutely amazing at starting new projects, and absolutely terrible at working in large codebases that don't fit in context. When I say Claude makes me 10x faster at some projects, I'm referring to something like setting up a new CRUD app when I don't know much about setting up a database and web server backend, or writing a graphics app in Vulkan when I've only used OpenGL. Doing stuff like that, having Claude help me with tech stacks I don't know, absolutely is many multiples faster than doing it on my own, and the paper you've linked doesn't address that use of AI at all.

Note specifically that the paper says it is not demonstrating or claiming that "AI systems do not currently speed up many or most software developers", nor that "AI systems in the near future will not speed up developers in our exact setting". It might be a mistake on your part to try to use this as some kind of evidence that AI isn't speeding devs up.


A missing link right now is automated high-quality code review. I would love an adversarial code-review agent with a persona oriented around treating all incoming code as slop, and that leverages a wealth of knowledge (both manually written by the team and/or aggregated from previous/historical code reviews). And that agent should pull no punches when reviewing code.

This would augment actual engineer code reviews and help deal with volume.
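To make the idea concrete, here's a hedged TypeScript sketch; the persona text and the `buildReviewPrompt` helper are hypothetical (not any real product's API), and the actual model call is deliberately left out:

```typescript
// Hypothetical adversarial-reviewer persona. The prompt text and helper
// are illustrative only; wiring this to a real LLM API is out of scope.

const ADVERSARIAL_REVIEWER_PERSONA = `
You are a ruthless senior reviewer. Assume every incoming diff is slop
until it proves otherwise. Cite a specific guideline for every complaint.
Pull no punches, but every objection must be actionable.
`;

// Compose a review request from the diff and the team's knowledge base
// (e.g. guidelines aggregated from historical code reviews).
function buildReviewPrompt(diff: string, guidelines: string[]): string {
  return [
    ADVERSARIAL_REVIEWER_PERSONA.trim(),
    "Team guidelines:",
    ...guidelines.map((g, i) => `${i + 1}. ${g}`),
    "Diff under review:",
    diff,
  ].join("\n");
}
```

The interesting part is the guidelines list: seeding it from past review comments is what would make the agent team-specific rather than generic.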


Cursor Bugbot is a game changer: it runs on PRs and finds the most subtle of bugs in enormous diffs.

I've been asking for security audits as I go. It's not perfect but it's something. And it picks up the most obvious stuff.

> It is insane that companies think they can replace teams wholesale while maintaining quality.

The assumption is that AI will continue to improve. If we get another one or two quality jumps over the next 1-3 years, which is not totally unreasonable, AI quality might be good enough.




