When people say they don't write the code, they mean they don't type it. If they're not churning out vibe-coded garbage, they are still watching what the agent outputs and redirecting it when it goes wrong. Instead of fixing the code manually, they prompt the LLM.
Yes. They are trying to make AI-assisted development more structured. With Primer I am focusing more on the learning-path side: breaking things into small, verifiable milestones that one completes step by step, rather than defining a full spec upfront.
IMHO no one knows what the heck is going to happen. We'll probably have to adapt to the situation as needed... In other words, it's too hard to predict to come up with a plan ahead of time.
At a time like this, don't put all your eggs in one basket. So maybe come up with a plan to get more baskets.
What would that look like? Well, I've seen the statement that you can become a licensed phlebotomist (one who draws blood) for $500. That gives you an option that is not "write code until I get laid off". (Of course, in a true AI takeover, we're going to have blood-drawing robots eventually, so it's not a permanent fix.)
More generally, even if software stays around forever, you may not be doing the same kind of software for your whole career. (I have done both internet security software and embedded systems.) You almost certainly won't be at the same company for your whole career. Keep learning new things, and keep your eyes open for new opportunities. Right now is scarier than usual, but we have always needed to keep our eyes open for what we'll do next.
There is no scientific basis for expecting that the current approach to AI, built around LLMs, could ever scale up to superintelligent AGI. Another major breakthrough will be needed first, possibly an entirely new hardware architecture. No one can predict when that will come or what it will look like.