I think another part of it is that AI tools demo really well, easily hiding how imperfect and limited they are when people see a contrived or cherry-picked example. Not a lot of people have a good intuition for this yet. Many people understand "a functional prototype is not a production app," but far fewer understand "an AI that can be demonstrated to write functional code is not a software engineer," because this reality is rapidly evolving. In that rapidly evolving reality, people are seeing a lot of conflicting information, especially if you consider that a lot of that information is motivated (e.g., "AI is bad because it's bad to fire engineers," which, frankly, will not be compelling to some executives out there). Whatever the new reality is going to be, we're not going to find out one careful step at a time. A lot of lessons are going to be learned the hard way.
Yes, and they work really well for small side projects that an exec probably used to try out the LLM.
But writing code in one clean discrete repo is (esp. at a large org) only a part of shipping something.
Over time, though, I think tooling will get better at the pieces surrounding writing the code. But the human coordination / dependency pieces are still tricky to automate.
Yes, all the time. I understand that if you have a setup where you do everything in your IDE you could reasonably leave it full screen all the time and I get why that works for some people. I'm not one of those folks and I use separate IDE, terminal, browsers, and other windows and use window management to allow myself to see multiple of them at the same time and switch between them by clicking on what I want.
Also just want to be 100% clear: Tahoe is bad and I hate the changes and I don't think the OS should prefer one way of working over the other. I just hope it's helpful to explain my perspective.
Everyone is different, and even though I don't share your experience, I don't view yours as either good or bad, it just is what it is. My experience is different but I'm not planning on ever telling anyone "Oh don't worry about it just have kids it'll be the best experience of your life" in blind faith.
Honestly, I think this is the primary explanation for why there is so much disagreement on whether LLMs are useful or not, especially if you leave out the more motivated arguments.
In my experience, trying to push the onus of filtering out slop onto reviewers is both ineffective and unfair to the reviewer. When you submit code for review, you are saying "I believe to the best of my ability that this code is high quality and adequate, but it's best to have another person verify that." If the AI has slipped things in without you noticing, you haven't reviewed its output carefully enough and shouldn't be submitting it to another person yet.
Code review should be a transmission of ideas and a way to spot errors that slip in due to excessive familiarity with the changes (errors that are often glaring to anyone other than the author).
If you're not familiar enough with the patch to answer any question about it, you shouldn't submit it for review.