The thing I am struggling with is where is the impact of LLM tools, especially given the massive increase in token consumption from 2025 to now and the saturated presence of LLMs everywhere.
Naively speaking, I have so many expectations for the impact of this tech.
I'd expect a noticeable uptick in applications published on the Google, Apple and Microsoft app stores. I'd also expect an uptick in games published to Steam. I'd expect an uptick in GitHub repos and libraries on PyPI.
I'd also expect some impact on the GDP ⸻ a non-negligible part of running a business is communication, planning, ads. Naively, I'd expect that LLMs should be able to both speed some of these things up and lubricate others.
I'd also expect that large corpos like Microsoft and Apple would have more resources to spare on the essential details of their OS like having a functioning taskbar or a predictable, consistent GUI.
I'd expect increased SAT scores or improved PISA results. Maybe even improved mental health, let's go wild.
It strikes me as a reasonably useful tool, personally.
Programming is a necessary but not sufficient condition for software products to exist. So while the programming has to be good, so too do many other things: product vision, product management, project management, and of course the feedback loops between all of the above, so that engineering isn't implementing a misunderstood version of the product and product isn't asking for 5 years and a PhD research team. And on and on. Typing the code is maybe 2-10% of actually ending up with a software product, and it's closer to 2% for a software business.
So while AI has made coding maybe 110% faster, it has also made literally every other person in the process lose their gd minds, and now they want to break or skip everything else in the process just to shit out code faster.
Going faster only works WHEN you know EXACTLY (or close to it) what you want.
Going faster when experimenting? Nah you actually need a mix of slow and fast, and mostly slow stuff up-front.
There's a fundamental misunderstanding of how people actually do stuff, imo - it's akin to force-fitting a square peg into a round hole. I'm sure many are hoping it's just a 'your organisation is designed wrong' problem. I doubt it, though.
The tech is still young and projects take time. And there are many slow parts of building that have not been accelerated (the mythical man-month).
I have started making an indie game, as one does, and it’s easily going 2-4x speed, but even still I’d predict a year of free time development with focus to ‘finish’ this thing. But the latest agentic tech is 3 months old.
Oh yeah, a month ago I was reading a comment section about LLM writing tendencies and someone humorously suggested using the loooooooong-em-dash to distinguish yourself from LLMs. I found it so charming that I made my keyboard output it when I double tap "-".
On Linux you press Ctrl+Shift+U, then type 2E3B, then press Enter.
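For reference, the character in question is U+2E3B (THREE-EM DASH). A quick sanity check in Python, in case you want to verify what your keyboard is actually emitting:

```python
import unicodedata

# The loooooooong em-dash: U+2E3B, "THREE-EM DASH"
long_dash = "\u2E3B"

print(long_dash)                    # ⸻
print(unicodedata.name(long_dash))  # THREE-EM DASH
print(f"U+{ord(long_dash):04X}")    # U+2E3B
```

As one possible way to bind the double-tap (depending on your input method), an `~/.XCompose` rule like `<minus> <minus> : "⸻" U2E3B` will do it in XCompose-aware setups.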
Sure, it covers 99.9% of cases, but top elite athletes are the genetic exceptions - the genetic freaks, the top 0.0001%. You don't get to compete at the most elite levels without a body that is exceptionally gifted and almost purpose-shaped for the relevant sport, which inevitably means funky genetic traits and disorders, higher testosterone levels, etc.
I mean the word freak in the most loving and caring way possible, mind you.
I am not sure what point you are trying to make. When it comes to the Olympics, it was decided a long time ago that it was beneficial for societal progress to have both sexes represented, with separate men's and women's events. This was at a time when sex=gender. Now, we recognize the difference between sex and gender, but one side thinks the split of events was always based on gender, whereas it was almost surely based on sex. This ruling confirms that viewpoint.
It certainly is. For people who have not heard the statements, here are some quotes. I bring them up, because I think it's worthwhile to remember the bold predictions that are made now and how they will pan out in the future.
Council on Foreign Relations, 11 months ago: "In 12 months, we may be in a world where AI is essentially writing all of the code."
Axios interview, 8 months ago: "[...] AI could soon eliminate 50% of entry-level office jobs."
The Adolescence of Technology (essay), 1 month ago: "If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything."
Yet, where are the goods in the aggregate?