Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it. AI helps them program anyway, and allows them to sometimes produce useful programs. That's it.
It's not a talking point; it's just the reality of what the technology enables. It's a simple enough observation that millions of people can arrive at it independently, and some of them might even call it "democratization".
> Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it.
This is a good thing. It's a filter for the careless, lazy, and incompetent. LLMs are to programming what a microwave is to food. I'm not a chef because I can nuke a hot pocket. "Vibe coders" (not AI-assisted coding) are the programming equivalent of the people on Kitchen Nightmares. Go figure, it's a community rife with narcissism, too.
I'm surprised that as a principal engineer, you view your greatest skill set as your expertise in programming. While that is certainly an enormous asset, I have never met a principal engineer who hadn't also mastered how to work within the organization to align the right resources to achieve big goals. Working directly with execs, line managers, and engineers to bring people together to chase something complex and difficult: that skill is not going to be replaced by LLMs and remains extremely valuable.
I'm not a principal, but I would wonder: if AI increases every "coder's" productivity, say, 5x, doesn't that replace some teams with one person, meaning less "alignment" is necessary? Whole org layers may disappear. Soft skills become less relevant when there are fewer people to interface with.
Even regarding "chase something complex and difficult": there is currently only so much need for that work, so I think any given person is justified in fearing they won't be picked. Several years may pass between AI eating all the CRUD work, from principal on down, and AI expanding the next generation of complex work in robotics or whatever.
Also, to speak on something I'm even less qualified to judge: the economy feels weak, so I don't have a lot of hope for either businesses or entrepreneurs to say "Let's just start new lines of business now that one person can do what used to take a whole team." Businesses are going to pocket the safe extra profits, and too many entrepreneurs are not going to find a foothold regardless of how fast they can code.
This mirrors my personal list very closely. I don't mind "it turns out" or "to be honest", but I feel "just" and "should" are often used to manipulate someone into agreeing.
> Shock! Shock! I learned yesterday that an open problem I'd been working on for several weeks had just been solved by Claude Opus 4.6, Anthropic's hybrid reasoning model that had been released three weeks earlier! It seems that I'll have to revise my opinions about "generative AI" one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving.
I think we're going to have several years of people claiming genAI "didn't really do something novel here," despite experts saying otherwise, because people are scared by the idea that complex problem solving isn't exclusive to humans, regardless of whether these models are approaching general intelligence.
There's no way to deploy a system like the one you're describing without it being abused for authoritarian overreach. It's simply a matter of time, and once it is deployed for authoritarian overreach, the only way back will be paid for in blood.
> Because direct licensing isn’t available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results (SERP meaning search engine results page). These providers serve major enterprises (according to their websites) including Nvidia, Adobe, Samsung, Stanford, DeepMind, Uber, and the United Nations.
> This is not our preferred solution. We plan to exit it as soon as direct, contractual access becomes available. There is no legitimate, paid path to comprehensive Google or Bing results for a company like Kagi. Our position is clear: open the search index, make it available on FRAND terms, and enable rapid innovation in the marketplace.
Weird, I switched to DDG years ago. I used !g for a couple of years, but I don't really ever use Google anymore. I don't seem to struggle to find things, so I wonder what's up. Maybe I'm just training myself to be OK with something that isn't optimal. That said, Google seems pretty spammy these days (e.g. actual search results pushed below the fold in many cases).
Wow, TIL. Thanks for mentioning this. I ran across this as I was researching the background:
> The "beige box" era was largely the result of strict German workplace ergonomics standards (specifically the TUV and DIN standards) that became the de facto rules for the entire global industry. The law didn't explicitly say "thou shalt use beige," but the regulations were so specific about light reflectivity and eye strain that beige (or "computer gray") was essentially the only compliant option.
IBM prepared some light-gray ThinkPad prototypes but was really committed to the black design. It negotiated with the German workplace ergonomics agency, which allowed it to sell black ThinkPads with a "not for office use" label. I wonder if something similar could be done for California's restrictions?
If I recall correctly, it took two years for them to add cut and paste to iOS.