From everything I’ve seen, LLMs aren’t exactly known for writing extremely optimized code.
Also, what happens to the stability and security of my phone after they let an LLM loose on the entire code base for a weekend?
There are 1.5 billion iPhones out there. It’s not a place to play fast and loose with bleeding edge tech known for hallucinations and poor architecture.
The average developer sucks. The distribution is also unbalanced: it's heavier on the low-skill side.
Great UIs are written by above-average or even exceptional developers. That skill is tied to real-life reasoning and to combining years of unique human experience interacting with the world. You need true general intelligence for that.
Is that really how it works - everything is just weighted equally? I would hope there would be at least some kind of tuning, so <well-regarded-codebase> gets more weight than <random-persons-first-coding-project>? If not, that seems like an opportunity. But no idea how these things are actually configured.
You can also tell it the optimization to implement.
I asked Claude to find all the valid words on a Boggle board given a dictionary, and it wrote a simple implementation that basically tried to search for every single word on the board. Telling it to prune the dictionary first, by building a bit mask of the letters in each word and on the board and then checking whether each word is even possible on the board, gave something like a 600x speedup from just a simple prompt describing what to do.
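The pruning idea described above can be sketched roughly like this (a minimal illustration, assuming lowercase a–z words; the bitmask is only a cheap pre-filter based on which letters appear, so the real board search still runs on the surviving words afterward):

```python
def letter_mask(s: str) -> int:
    """Build a 26-bit mask with a bit set for each distinct letter in s."""
    m = 0
    for ch in s:
        m |= 1 << (ord(ch) - ord("a"))
    return m

def prune_dictionary(words: list[str], board_letters: str) -> list[str]:
    """Keep only words whose letters all appear somewhere on the board.

    This ignores letter counts and adjacency, so it can't produce false
    negatives -- it just cheaply discards words that are impossible.
    """
    board_mask = letter_mask(board_letters)
    return [w for w in words if letter_mask(w) & ~board_mask == 0]

# Example: a board containing the letters c, a, t, k
print(prune_dictionary(["cat", "dog", "tack"], "catk"))  # → ['cat', 'tack']
```

Since each word's mask is a single integer, the check per word is one AND and one comparison, which is why a prompt-sized change like this can pay off so dramatically on a large dictionary.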
That does assume one has an idea of how to optimize, though, and where the bottlenecks are.
Can we assume at this point if the problems are well known, the low hanging fruit has already been addressed? The Boggle example seems like a pretty basic optimization that anyone writing a Boggle-solver would do.
iOS is 19 years old, built on top of macOS, which is 24 years old, built on top of NeXTSTEP, which is 36 years old, built on top of BSD, which is 47 years old. We’re very far from greenfield.
They kind of do if you prompt them. I had mine reimplement the Windows calc (almost fully feature-complete) in Rust, running in 2 MB of RAM instead of the 40 MB or whatever the Win 11 version uses, as a POC.
A handwritten C implementation would most likely be better, but there is so much to gain just from slaughtering the abstraction bloat that it doesn't really matter.