Most software uses 10x more memory than is necessary to solve the problem. In an ideal world, developers would stop building bloatware if their customers couldn't afford the DRAM.
I agree. OTOH, there are many very cool things we can build if we're able to assume a user can spare 2GB of RAM that we'd otherwise have to avoid entirely, like 3D scenes with Three.js or in-browser video/photo editing. We should be making sure that extra memory is enabling genuinely richer functionality, not just compensating for developer laziness (there are fewer excuses for that now than ever).
While Blackstone and other PE firms are involved in buying those assets directly (part of my old job), Blackrock is also indirectly involved by buying up massive portions of the REITs listed by these firms, which validated the business in the first place. The extremely insane amounts of money pumped by Blackrock, Vanguard and State Street into these structures, all for some measly 4-5% return (laughable for most sophisticated investors but apparently good enough for these guys), is what put the accelerant on the fire. Neither BX nor any other PE firm would be doing this model if a market didn't exist for it.
While I'm obviously biased here, imo Blackstone is still much better, because you don't see Steve Schwarzman going around pontificating while using the voting rights of passive investors to force certain behaviors upon the boards of nearly every company.
At the end of the day, I'm being paid to ensure that the code deployed to production meets a particular bar of quality. Regardless of whether I'm reviewing code or writing it, if I let a commit be merged, I have to be convinced that it is a net positive to the codebase.
People having easy access to LLMs makes this job much harder. LLMs can create what looks on the surface like expert-written code, but suffers from below-the-surface problems that reveal themselves as intermittent failures or subtle bugs after deployment.
Inexperienced devs create huge commits full of such code, and then expect me to waste an entire day searching for such issues, which is miserable.
If the models don't improve significantly in the future, I expect that most high-stakes software teams will fire all the inexperienced devs and have super-experienced engineers work with the bots directly.
Rigid ABIs aren't necessary for statically linked programs. Ideally, the compiler would look at the usage of the function in context and figure out an ABI specifically for that function that minimizes unnecessary copies and register churn.
IMHO this is the next logical step in LTO; today we leave a lot of code size and performance on the floor in order to meet some arbitrary ABI.
I would argue that is largely true because we got the ABIs, and the hardware that supports them, to be highly optimized. Things slow down very quickly if one gets off that hard-won autobahn of ABI efficiency.
Partly it's due to lack of better ideas for effective inter-procedural analysis and specialization, but it could also be a symptom of working around the cost of ABIs.
The point of interfaces is to decouple caller implementation details from callee implementation details, which almost by definition prevents optimization opportunities that rely on the respective details. There is no free lunch, so to speak. Whole-program optimization affords more optimizations, but also reduces the traceability of the generated code and its relation to the source code, including the modularization present in the source code.
In the current software landscape, I don’t see these additional optimizations as a priority.
When looking at the rv32imc code emitted by the Rust compiler, it's clear that there would be a lot less code if the compiler could choose different registers than those defined in the ABI for the arguments of leaf functions.
Not to mention issues like the one the OP mentions, where the default ABI makes it impossible to properly take advantage of RVO with types like Result&lt;T&gt;.
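A minimal sketch of the distinction being discussed (function names are made up for illustration): a function exposed with external linkage must follow the platform calling convention, because unseen callers depend on it, while a private function whose call sites the compiler can all see carries no ABI contract, so the optimizer is free to inline it, specialize it, or give it a custom calling convention.

```rust
// Exported with the C calling convention: the compiler must place
// arguments and the return value exactly where the platform ABI says,
// because external callers it can't see rely on that contract.
pub extern "C" fn exported_sum(a: u32, b: u32) -> u32 {
    internal_sum(a, b)
}

// Private, with every call site visible: no ABI contract exists, so the
// backend may inline it or pick whatever registers it likes. The
// attribute keeps it out-of-line here only so the point is visible.
#[inline(never)]
fn internal_sum(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}

fn main() {
    assert_eq!(exported_sum(2, 3), 5);
}
```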
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there's any significant difference here between Zig, C and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust, it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, implicit memory allocation, etc. that happen under the hood. Even Rust's standard library is quite hefty.
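A sketch of the monomorphization knob mentioned above (trait and function names are invented for illustration): a generic function is stamped out as one copy of machine code per concrete type it's used with, while a `dyn` version keeps a single copy at the cost of an indirect call through a vtable.

```rust
trait Describe {
    fn describe(&self) -> usize;
}

impl Describe for u32 { fn describe(&self) -> usize { 4 } }
impl Describe for u64 { fn describe(&self) -> usize { 8 } }

// Monomorphized: the compiler emits one specialized copy of this
// function per concrete T it is instantiated with (two copies here),
// which is fast but multiplies code size.
fn size_generic<T: Describe>(x: &T) -> usize {
    x.describe()
}

// Dynamic dispatch: a single copy of machine code; each call goes
// through the vtable pointer carried by &dyn Describe.
fn size_dyn(x: &dyn Describe) -> usize {
    x.describe()
}

fn main() {
    assert_eq!(size_generic(&1u32), 4);
    assert_eq!(size_generic(&1u64), 8);
    assert_eq!(size_dyn(&1u32), 4);
}
```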
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
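For a feel of what that constraint looks like in practice: code that sticks to `core` APIs (slices, iterators, integer ops; no allocation, no `fmt`) compiles unchanged under `#![no_std]`. A hypothetical checksum helper, with the attribute omitted here so the snippet runs hosted:

```rust
// Uses only APIs available in `core`, so it would compile under
// #![no_std]: no allocation, no formatting, no OS dependencies.
fn checksum(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
}

fn main() {
    assert_eq!(checksum(b""), 0);
    assert_eq!(checksum(b"a"), 97);
}
```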
I understand this, but that is a pain that you don't get with Zig. The no_std constraint is painful to deal with as a dev even with no dependencies, and it also means that if you're working on a target that needs small binaries, the crates.io ecosystem is largely unavailable to you (necessitating filtering by https://crates.io/categories/no-std and typically further testing for compiled size beyond that).
Zig, on the other hand, does lazy evaluation and tree shaking, so you can include a few features of the std library without much concern.
Rustc does a good job of removing unused code, especially with LTO. The trick is to make sure the std library main/panic/backtrace logic doesn't call code you don't want to pay for.
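For reference, the commonly recommended release-profile settings for trimming those paths (values are the usual size-oriented starting point, adjust to taste):

```toml
# Cargo.toml — size-oriented release profile
[profile.release]
opt-level = "z"     # optimize for size
lto = true          # whole-program LTO, better dead-code elimination
codegen-units = 1   # single codegen unit: slower builds, smaller output
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```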
IIRC there's also a mutex somewhere in there used to work around some threading issues in libc, which brings in a bespoke mutex implementation; I can't remember whether that mutex can be easily disabled, but I think there's a way to use the slower libc mutex implementation instead.
Also, std::fmt is notoriously bad for code size, due to all the dyn vtable shenanigans it does. Avoid using it if you can.
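As an illustration of working around it: integer-to-decimal conversion written against a plain byte buffer, with no `core::fmt` involvement. The helper is hypothetical, not a library API.

```rust
// Write `n` as decimal ASCII into `buf` (right-aligned) and return the
// used slice. No core::fmt, no allocation — the kind of routine used to
// keep Display/Debug vtable machinery out of a size-sensitive binary.
fn u32_to_decimal(mut n: u32, buf: &mut [u8; 10]) -> &[u8] {
    let mut i = buf.len();
    loop {
        i -= 1;
        buf[i] = b'0' + (n % 10) as u8;
        n /= 10;
        if n == 0 {
            break;
        }
    }
    &buf[i..]
}

fn main() {
    let mut buf = [0u8; 10];
    assert_eq!(u32_to_decimal(0, &mut buf), b"0");
    let mut buf = [0u8; 10];
    assert_eq!(u32_to_decimal(4294967295, &mut buf), b"4294967295");
}
```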
Regardless, the only way to fix many of the problems with std is rebuilding it with the annoying features compiled out. Cargo's build-std feature should make this easy to do in stable Rust soon (and it's available in nightly today).
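The nightly setup, for reference (flag and feature names as of current nightly; they may change before stabilization):

```toml
# .cargo/config.toml — rebuild std from source with size-friendly features
[unstable]
build-std = ["core", "alloc", "std", "panic_abort"]
build-std-features = ["panic_immediate_abort"]
```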
I'm curious about a system where the capital gains tax rate is 100% for the first... I don't know, let's say a month. Then it ramps down over the course of the next year until it matches the regular income tax rate. I'm less concerned about the specific time periods than I am about the idea that it would be beneficial to society to have our financial systems encourage long-term thinking.
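The proposed schedule could be sketched as a piecewise-linear rate. Both the 37% ordinary-income rate and the exact day boundaries below are assumptions for illustration, not part of the proposal.

```rust
// Hypothetical capital-gains schedule from the comment above: a 100%
// rate for the first 30 days, then a linear ramp down to an assumed
// ordinary-income rate of 37% by day 395 (one year after the first month).
fn cap_gains_rate(holding_days: u32) -> f64 {
    const FULL: f64 = 1.00;     // assumed: 100% for the first month
    const ORDINARY: f64 = 0.37; // assumed ordinary income tax rate
    const RAMP_START: u32 = 30;
    const RAMP_END: u32 = 395;
    if holding_days <= RAMP_START {
        FULL
    } else if holding_days >= RAMP_END {
        ORDINARY
    } else {
        let t = (holding_days - RAMP_START) as f64
            / (RAMP_END - RAMP_START) as f64;
        FULL - t * (FULL - ORDINARY)
    }
}

fn main() {
    assert_eq!(cap_gains_rate(0), 1.00);    // flipped within a month
    assert_eq!(cap_gains_rate(500), 0.37);  // held past the ramp
}
```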
Exactly, the headline sort of paradoxically reflects the desire for news, and not news itself.
But still, the actual stock market behavior right now is PROBABLY (!!) more reflective of random motion than it is of a fundamental shift in investor behavior.