The 'Handmade Network' is essentially this (in a good way though), and it existed long before LLMs got good enough for code generation - as a counter-philosophy to the soulless "enterprise software development" where a feature that could be implemented in 10 lines of code is wrapped in 1000 lines of "industry best practices" boilerplate.
Programming via LLMs is just the logical conclusion to this niche of industrialized software development which favours quantity over quality. It's basically replacing human bots which translate specs written by architecture astronauts into code without having to think on their own.
And good riddance to that 'spec-in, code-out' style of programming, it should never have existed in the first place. Let the architecture astronauts go wild with LLMs implementing their ideas without having to bother human programmers who actually value their craft ;)
I'm kinda leaning towards the analogy that LLMs are to programming as textile machines were to the loom.
People still pay for hand-knit fabrics (there's one place in Italy that makes silk by hand and it costs 5 figures per foot), but the vast majority is machine made.
Same thing will happen to code, unless the bubble bursts really badly. Most bulk API-glue CRUD stuff and basic web UI work will be churned out by automated agentic production lines.
But there will still be a market for that special human touch in code, most likely when you need safety/security or efficiency/speed.
> macOS 26.3 updated clang and broke my emscripten workflow.
This is actually strange since Emscripten comes with its own Clang toolchain and shouldn't use anything from the system's toolchain (I'm also on 26.3 and haven't seen any behaviour changes in my dev setup).
FWIW, I'm also not a fan of the UI changes in Tahoe, but I mostly just move between the terminal (via Ghostty), VSCode and Chrome, so for the most part I'm blissfully unaware of the UI wreckage ;)
IMHO the one great feature of Objective-C (compared to C++) is that it doesn't interfere with any C language features. In C++ the C 'subset' is stuck in the mid-1990s, while Objective-C "just works" with any recent C standard.
The one really funny feature of Objective-C++ is that it lets you write C++ using modern C features that haven't been pulled into C++, and you don't have to actually use the Objective part. Designated initializers, before C++ got them, were the main genuinely useful application of this.
Interestingly, I recently auto-translated wget from C to a memory-safe subset of C++ [1], which involves the intermediate step of auto-converting from C to the subset of C that will also compile under clang++. You end up with a bunch of clang++ warnings about various things being C11 extensions and not ISO C++ compliant, but it does compile.
No, not really. For instance, the designated init feature in C++20 is a distinct C++ feature which is not compatible with C99's designated init. AFAIK the C subset hasn't changed since C++98.
C11 atomics, C11 threads, variable length arrays, safely reading from an inactive union member, designated array initializers, compound struct literals, implicitly converting a void pointer to a typed one, and the list goes on.
In GCC and Clang it's quite relaxed since their C++ frontends pull in a couple of modern C features as non-standard C++ extensions, but MSVC has a strict separation between the C++ and C frontend and doesn't allow this sort of 'cross-pollution'.
As a quick example, this C99 code block has at least 4 features that don't work in C++:
In the end though, most of those 'sending a message' actions are just fancy virtual method calls (i.e. an indirect jump), everything else would be much too slow.
IMHO the whole 'message' and 'sending' lingo should be abandoned; the job of objc_msgSend is to look up a function pointer by certain rules. There are no 'messages' involved, and nothing is 'sent'.
> There are no 'messages' involved, and nothing is 'sent'.
The conceptual difference is significant as an object can respond to messages that it doesn't have a method for. You are, conceptually, just sending a message and leaving it up to the object what it wants to do with it (e.g. forwardInvocation:). That is, after all, what sets "object-oriented" apart from merely having objects. Optimizations that can be made under the hood don't really affect the language itself.
> can respond to messages that it doesn't have a method for.
Clang produces a warning in that case though (something along the lines of "object might not respond to ..."), I don't think that feature is particularly useful in practice (also because it kills any sort of type safety) :)
And the reason it’s a warning and not an error (like in C++) is that it’s actually possible that the object can respond to such a message but the compiler doesn’t know about it.
It was incredibly useful in the olden days. The NeXT/Apple ecosystem leaned on it heavily.
We have new ways to approach problems nowadays, so it may be fair to say that object-oriented programming is a relic of the past. I mean, it is telling that Smalltalk, Objective-C, and Ruby are the only languages to ever go down that road. Still, if you are using an OO language, then it makes sense to lean into OO features. Otherwise, why not use a language better suited to your problem?
> That is, after all, what sets "object-oriented" apart from having objects alone.
I wouldn't say so, most object-oriented languages don't work like Objective-C/Smalltalk. Today, I think most programmers would agree that inheritance is the defining feature of object-orientation.
Okay, that's what sets what was classically known as "object-oriented" apart.
Understandably, language evolves. If OO means something different today, what do most programmers call what used to be known as OO? I honestly have never heard anyone use anything else. But I am always up for refreshing my lexicon. What did most programmers settle on for this in order to free up OO for other uses?
The big difference is that the big game engines have to cover all sorts of genres and scenarios, which often results in bloated "jack of all trades master of none" code compared to engine-layer code that's highly specialized for exactly one, or few very similar games.
If building a custom commercial game engine these days... a team is 100% focused on the wrong problem, as the gameplay content is what sells. Customers only care about game engines when they break or enable cheating.
Godot, Unreal, CryEngine, and even Unity... all solve edge-cases most don't even know they will encounter. Trying something custom usually means teams simply run out of resources before a game ships, and the result is unlikely to be stable on most platforms/ports. =3
Many of those "edge cases" lurk in the platform abstraction layer (driver or OS bugs which need to be worked around), and many of those problems are also taken care of in cross-platform wrappers like SDL (and for 2D games this is completely sufficient, you don't need UE5, Unity or Godot to render a couple thousand sprites and play music and audio effects).
But even more complex custom/inhouse engines are usually not written from scratch, those are often mostly glued together from specialized middleware libraries which solve the tricky problems (e.g. physics engines like Jolt, vegetation rendering like SpeedTree, audio engines and authoring tools like FMOD or Wwise, LOD solutions like Simplygon, etc etc...)
>Customers only care about game-engines when broken or cheating
Most game engines are broken by default. Modern customers just aren't very discerning ("It's for the pigs. Pigs eat slop."). You can feel holes and rough edges in the vast majority of new releases, including AAA titles.
Unreal is the worst for this and Unreal-based games almost always have two things in common: a very particular, soft, sticky and unresponsive look & feel (often alleviated but never fully corrected by turning off some combination of motion blur, AA and VSync), as well as a UI that mishandles mouse pointers.
Unity devs seem to rely on a (more diverse but still quite) small pool of subsystems and renderers; possibly some mix of baseline and Asset Store components. This gives each Unity game a specific subset of flaws from a wider common pool. That is, you can tell that game A uses the same movement subsystem as games B and C (but not D), that game B uses the same UI subsystem as games C and D (but not A), and that game D uses the same rendering subsystem as games A and B (but not C).
In my humble opinion, the difference between good and great was often whether the shaders and pre-baked work were done well enough to go unnoticed.
Forcing devs to use a mid-grade GPU also tends to reduce chasing performance issues later. For example, heavy frame-generation artifacts that users often perceive as "floaty" or "wobbly". =3
> "redraw everything the whole frame" and "don't do any diffing" sound insane in this regard.
You need to consider that a web browser with its millions of lines of code in the DOM and rendering engine is pretty much the worst case for "redrawing a complex UI each frame", especially since the DOM had been designed for mostly static 'documents' and not highly dynamic graphical UIs.
Add React on top and the whole contraption might still be busy with figuring out what has changed and needs to be redrawn at the time an immediate mode UI sitting directly on top of a 3D API is already done rendering the entire UI from scratch.
A native immediate mode UI will easily be several hundred times less code (for instance Dear ImGui is currently just under 50kloc 'orthodox C++').
When the UI is highly dynamic/animated it needs to be redrawn each frame in a 'retained mode' UI framework too.
When the UI is static and only needs to change on user input, an immediate mode UI can 'stop' too until there's new input to process.
For further low-power optimizations, immediate mode UI frameworks could skip describing parts of the UI when the application knows that this part doesn't need to change (contrary to popular belief, immediate mode UI frameworks do track and retain state between frames, just usually less than retained mode UIs - but how much state is retained is an internal implementation detail).
The problem is that widgets still need to store state somewhere, and that storage space needs to be reclaimed at some point. How does the system know when that can be done? I suppose the popular approach is to just reclaim space that wasn't referenced during a draw.
However ...
When you have a listbox of 10,000 rows and you only draw the visible rows, then the others will lose their state because of this.
Of course there are ways around that but it becomes messy. Maybe so messy that retained mode becomes attractive.
State can be reclaimed at the earliest in the first frame where the application's UI description code doesn't mention a UI item (that means UI items need a persistent id; in Dear ImGui this is a string hash, usually created from the item's label, which can have a hidden `##` suffix to make it unique, plus a push/pop-id stack for hierarchical namespacing).
> then the others will lose their state because of this
While an item is visible, its state must have been provided by the application's UI description code; when the item is invisible, that state becomes irrelevant.
Once the item becomes visible, the application's UI code provides the item's state again.
E.g. pseudocode:
    for (firstVisibleItemIndex .. lastVisibleItemIndex) |itemIndex| {
        ui_list_item(itemIndex, listItemValues[itemIndex]);
    }
For instance Dear ImGui has the concept of a 'list clipper' which tells the application the currently visible range of a list or table-column and the application only provides the state of the currently visible items to the UI system.
The same as for regular UI items: if the application's UI code no longer "mentions" those items, their state can be deleted (assuming the immediate mode UI tracks hidden items for some reason).
The job of the immediate UI is to just draw the things. Where and how you manage your state is completely up to you.
It seems you assume some sort of OO model.
> When you have a listbox of 10,000 rows and you only draw the visible rows, then the others will lose their state because of this.
Well keep the state then.
Immediate mode really just means you have your data as an array of things or whatever and the UI library creates the draw calls for you. Drawing and data are separate.
> The job of the immediate UI is to just draw the things. Where and how you manage your state is completely up to you.
This is a bit oversimplified. For instance Dear ImGui needs to store at least the window positions between frames since the application code doesn't need to track window positions.
But then you have state in two places, user code and the retained-mode GUI framework, which need to be synced - that's where complexity creeps in. Immediate mode removes that redundancy and makes things simpler in many situations. It depends on your preference and what you're doing too, which approach suits better.
The more dynamic/animated a UI is, the less difference there is between a retained- and an immediate-mode API, since the UI needs to be redrawn each frame anyway. Immediate mode UIs might even be more efficient for highly dynamic UIs because they skip a lot of internal state update code - like creating/destroying/showing/hiding/moving widget objects.
Immediate-mode UIs can also be implemented to track changes and retain the unchanged parts of the UI in baked textures, it's just usually not worth the hassle.
The key feature of immediate mode UIs is that the application describes the entire currently visible state of the UI for each frame which allows the UI code to be 'interleaved' with application state changes (e.g. no callbacks required), how this per-frame UI description is translated into pixels on screen is more or less an implementation detail.
> The more dynamic/animated a UI is, the less difference there is between a retained- and an immediate-mode API, since the UI needs to be redrawn each frame anyway. Immediate mode UIs might even be more efficient for highly dynamic UIs because they skip a lot of internal state update code - like creating/destroying/showing/hiding/moving widget objects.
That depends on the kind of animations - typically for user interfaces it's just moving, scaling, playing with opacity etc., and that's just updating the matrices once.
So you describe the scene graph once (this rectangle here, upload that texture there, this border there) using the DOM, QML, etc., and then just update the item properties on it.
As far as the end user/application developer is concerned, this is retained mode. As far as the GPU is concerned, it can be redrawing the whole UI every frame.
> it's just moving, scaling, playing with opacity etc., that's just updating the matrices once.
...any tiny change like this will trigger a redraw (i.e. the GPU doing work) that's not much different from a redraw in an immediate mode system.
At most the redraw can be restricted to a part of the visible UI, but here the question is whether such a 'local' redraw is actually any cheaper than just redrawing everything (since figuring out what needs to be redrawn might be more expensive than just rendering everything from scratch - YMMV of course).
It's not only about what gets redrawn but also about how much of the UI state is still retained (by the GPU). Imagine having to re-upload all the textures and meshes to the GPU every frame.
Something like a lot of text? Probably easier to redraw everything in immediate mode.
Something like a lot of images just moving and scaling around? Easier to retain that state on the GPU and just update a few values here and there...
> Easier to retain that state in GPU and just update a few values here and there
It's really not that trivial to estimate, especially on high-dpi displays.
Rendering a texture with a 'baked UI' to the framebuffer might be "just about as expensive" as rendering the detailed UI elements directly to the framebuffer.
Processing a pixel isn't inherently cheaper than processing a vertex, but there are a lot more pixels than vertices in typical UIs (a baked texture might still win when there's a ton of alpha-blended layers though).
Also, of course, you'd need to aggressively batch draw calls (e.g. Dear ImGui only issues a new render command when the texture or clipping rectangle changes, so a whole window will typically be rendered in one or two draw calls).
> who has to regularly turn my VPN on and off to have full internet access,
Is this because the EU or your country has blocked access, or because some US news site blocks access from the EU since it doesn't want to deal with GDPR?