I tested Gemini Pro 2.5 yesterday on a function I'd had trouble with. It wasn't something I couldn't do, just one of those things that's easy to get wrong and that I'd postponed because I lacked focus that day due to a heat wave. The AI spat out a perfect function with working tests after the first prompt.
Now I don't want to sound like a doomsayer, but it appears to me that application programming, and the software companies built around it, is likely to disappear within the next 10 years or so. We're now in a transitional phase where companies that can afford enough AI compute time have an advantage. However, this phase won't last long.
Unless there is a fundamental obstacle to further improving AI programming, it won't stop at simple functions: whole apps will be created from a prompt. And it won't stop there either. Soon there will be no need for apps in the traditional sense. End users will use AI to manipulate and visualize data, and operating systems will integrate the AI services needed for this. "Apps" can be created on the fly and constantly adjusted to the users' needs.
Creating apps will not remain a profitable business. If there is an app X someone likes, they can prompt their AI to build one with the same features, perhaps with a few small changes, and the AI will create it for them, including thorough tests and quality assurance.
Right now, in the transitional phase, senior engineers might feel they are safe because someone has to monitor and check the AI output. But there is no reason why humans would be needed for that step in the long run. It's cheaper to have three AIs quality-test and improve the outputs of one generating AI. I'm sure many companies are already experimenting with this, and at some point the output of such iterative design procedures will have far fewer bugs than any code produced by humans. Only safety-critical essentials such as operating systems and banking will continue to be supervised by humans, though perhaps mostly for legal reasons.
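The generator-plus-critics loop described above can be sketched in a few lines. This is a minimal illustration, not a real system: the `generate`, critic, and `revise` functions here are hypothetical stand-ins (in practice each would be a call to a separate model), but the control flow is the point — one AI drafts, several critique, and the draft is revised until every critic is satisfied.

```python
# Sketch of "one generating AI, several critic AIs" as an iteration loop.
# All functions are stand-ins for model calls; only the loop structure matters.

def generate(spec):
    """Stand-in generator: produce a first draft for the spec."""
    return f"draft solution for: {spec}"

def make_critic(name):
    """Stand-in critic: complains until the draft contains its marker."""
    def critic(draft):
        marker = f"[{name} ok]"
        return None if marker in draft else f"apply {marker}"
    return critic

def revise(draft, feedback):
    """Stand-in reviser: fold each piece of feedback into the draft."""
    for fb in feedback:
        draft += " " + fb.removeprefix("apply ")
    return draft

def iterate(spec, critics, max_rounds=5):
    """Generate, then loop: collect critiques, revise, stop when all pass."""
    draft = generate(spec)
    for _ in range(max_rounds):
        feedback = [fb for c in critics if (fb := c(draft)) is not None]
        if not feedback:  # every critic accepted the draft
            return draft
        draft = revise(draft, feedback)
    return draft

critics = [make_critic(n) for n in ("tests", "style", "security")]
result = iterate("parse a CSV row", critics)
```

Here the loop converges in two rounds because the stand-in critics are trivially satisfiable; with real models, convergence and cost per round are exactly the open questions.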
Although I hope I'm wrong, the end of software development seems to me a logical long-term consequence of current AI development. Perhaps I've missed something; I'd be interested in hearing from people who disagree.
It's ironic, because in my great wisdom I chose to quit my day job in academia recently to fulfill my lifelong dream of bootstrapping a software company. I'll see if I can find a niche; maybe some people will appreciate hand-crafted software in the future for its quirks and originality...
Ephemeral data processing and visualization programs: nothing that needs to be installed, downloaded, or run on some web server. The concept is pretty simple. The operating system plus AI will create those on the fly.