I use DC on my Mac; it works just as well as on Linux. One of the touchpad gestures sometimes changes the font size, and I have to reconfigure it once in a while when I trigger it accidentally. I haven’t figured out a simple way to prevent this from happening.
Not sure I follow. Math.random() is in testSort, so the time to generate the random numbers is part of the measurement (even though it almost certainly shouldn’t be).
Edit: my main point is that there are many flaws in this comparison, so I wouldn’t draw any conclusions from the measurements in the article. They’re pretty much meaningless.
I agree, but what I'm saying is: it only generates the numbers once and then sorts them 500 times. Yes, it's a flawed measurement because it measures the generation time plus 500 sorts, but the time to generate the numbers is probably minuscule compared to the sorting.
There are many more flaws, as you say; the biggest is the stable vs. unstable sort comparison, but it looks like the article author (not OP) fixed it half an hour ago and updated the article.
Your Go uses `pdqsort` to sort 4-byte ints from 0 to 100, while the Rust version uses a stable sort (`sort_unstable` would be the `pdqsort` equivalent) on single-byte integers from 0 to 255. Hardly a fair comparison.
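A minimal sketch of the apples-to-apples fix (the data below is illustrative, not the article's benchmark; `sorted_unstable` is a made-up helper):

```rust
// Rust's `sort_unstable` is the pdqsort equivalent; plain `sort` is a
// stable, allocating merge sort, which is what the article benchmarked.
fn sorted_unstable(mut v: Vec<u8>) -> Vec<u8> {
    v.sort_unstable(); // pdqsort: in-place, typically faster on plain ints
    v
}

fn main() {
    let data: Vec<u8> = (0..1000u32).map(|i| (i * 7 % 256) as u8).collect();
    let mut stable = data.clone();
    stable.sort(); // the stable sort used on the Rust side of the comparison
    // For plain integers both produce the same ordering; only speed differs.
    assert_eq!(sorted_unstable(data), stable);
}
```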
You typically use build-std, possibly with abort-on-panic or immediate-abort to get rid of the string-formatting code in the stdlib, and you do fat LTO. After stripping the binary, it’s pretty damn small.
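Concretely, the knobs I mean look something like this (build-std is nightly-only; the target triple is just an example):

```shell
# Cargo.toml (release profile):
#   [profile.release]
#   opt-level = "z"     # optimize for size
#   lto = "fat"         # fat LTO across all crates
#   codegen-units = 1
#   panic = "abort"     # drop the unwinding machinery
#   strip = true        # strip symbols from the final binary

# Rebuild std itself so unused panic/formatting code can be eliminated:
cargo +nightly build --release \
    -Z build-std=std,panic_abort \
    -Z build-std-features=panic_immediate_abort \
    --target x86_64-unknown-linux-gnu
```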
If you want even further savings on top of that, you can use a utility that LZMA-compresses the different ELF sections of your binary and unpacks them to memory at runtime. That’s language-agnostic though…
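`upx` is one such tool, if I remember right (the binary name below is a placeholder):

```shell
# UPX compresses the executable's sections and prepends a small stub
# that decompresses them into memory at startup:
upx --best --lzma mybinary
```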
Regarding multiple Rust binaries each bundling a copy of the stdlib and inflating the space: this approach only links the bits of the stdlib each binary uses, but it’s still not ideal. A few approaches I can think of are:
1. Use a file system that compresses its contents, like btrfs or zfs, or their embedded variants. That should reduce the redundancy.
2. Go the busybox way, as you said. This requires work but it’s definitely doable.
3. Link stdlib dynamically. There is a way, I believe. Rust maintains a “stable” ABI as long as you use the same rustc version if I’m not mistaken.
4. This is ridiculous, so I don’t even count this as a way, but what if you just stored the .a static libs and did the linking on demand into a temporary file that would then be executed?
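For option 3, the mechanism is `-C prefer-dynamic` (a sketch; the result only runs against the libstd `.so` shipped with the exact rustc version that built it):

```shell
# Link the Rust standard library dynamically instead of statically:
rustc -C prefer-dynamic -O main.rs
# The binary now depends on the toolchain's libstd-<hash>.so being
# on the loader path at runtime.
```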
I personally have never encountered 50MiB optimized, stripped, LTO’d Rust binaries; the only one close was ‘materialized’, which was around 138MiB, but it contained debug symbols, had no LTO, and is quite a large database application.
Both Java and Go also have unsafe, are often used for FFI/performance, and some popular/foundational libraries (like Google's Protobuf library in Java) use unsafe purely for added performance. Heck, Go has an assembler. So that's off the table as far as safety comparisons go. "But they don't use it as much" is also not an argument. `#![forbid(unsafe_code)]` and rely on the RustBelt-formally-verified standard library, problem solved.
Memory leaks are not a memory-safety error, especially when deliberate (calling malloc without ever calling free is perfectly safe, especially if you intend it as a static-duration allocation). Unintentional memory leaks like Rc/Arc cycles are not a problem that occurs in garbage-collected languages, true, but they're also not a memory-safety issue, unless unsafe code relies on drop() being called or something.
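A minimal sketch of such a safe leak (the `Node` type and `leak_a_cycle` helper are illustrative):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two nodes pointing at each other form a reference cycle that `Rc`'s
// reference counting can never collect: a leak, but perfectly safe Rust.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn leak_a_cycle() -> usize {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // cycle: a -> b -> a
    drop(b); // a.next still keeps b alive, and b.next keeps a alive
    Rc::strong_count(&a) // 2: the local `a` plus b's back-reference
}

fn main() {
    // The cycle outlives every external handle once this returns:
    // memory is leaked, but no memory-safety rule is violated.
    assert_eq!(leak_a_cycle(), 2);
}
```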
So if we count data races, which you mentioned, Rust is in fact safer than Java/Go.
“But they don’t use it as much” is not an argument?
Isn’t the whole case for Rust over modern C++ based on exactly that?
Rust can have memory issues too; it just “doesn’t use unsafe as much” as modern C++. Forbidding unsafe code doesn’t guarantee the vulnerabilities are gone.
Why is it that when talking about C++ memory safety, even a bit more safety is worth everything, but when talking about Rust vs. Python, suddenly it doesn’t matter that it’s less?
No one is writing kernels, raw register level volatile DMA bit-banging embedded code, and other "impossible without unsafe" code in Java/Go (Ok, almost no one, don't @ me, pjmlp, there are kernels built in memory-safe languages and there's tinygo for embedded etc. But they obviously use unsafe all the same). So they don't need to use it as much (standard library, core primitives and runtime implementation notwithstanding, and boy oh boy is it unsafe!)
Shifting the discussion to vulnerabilities in modern C++ is moving the goalposts a bit, don't you feel?
> Forbidding unsafe code doesn’t guarantee vulnerabilities are gone
It does, because if it doesn't, or GC'd languages offer more protection, then it's a bug in rustc/spec/core libs, for all intents and purposes.
Might as well mention /proc/self/mem and other filesystem/IO related exploits, because Rust can't protect you from them, and therefore it's completely and utterly unsafe and unfit for use.
I am not saying Rust is unfit, and I think your first sentence sorta gets to my point.
I agree no one writes actual kernels in GC’d languages. Rust is 100% the best choice where GCs can’t be.
My argument is that if you can use a GC, it’s considered best practice to use one. If you need thread safety, use a GC’d language like Elixir that handles concurrency well.
Like there’s no reason to act like the Java community and try to force Rust into every area. It’s very good at what it does; let it stay there.
I don't even disagree with these points! But if you go down to language lawyering and semantics, I think one could technically make a case for Rust being safer (in the memory safety sense) than Java/Go/(insert any GC'd memory safe language that features synchronization primitives as an opt-in feature, think volatile in Java), if only on the merit of protecting you from data races.
Elixir is great, use elixir. I certainly am not stopping you, it's completely safe and fault tolerant. Or maybe I'm saying this because I don't know as much about Elixir/BEAM internals compared to the aforementioned languages, who knows.
That’s not true - if anything, Java is much more memory safe. Even data races are well-defined and are not prone to “out-of-thin-air” values.
If you ever stray off the safe road in Rust (be it via a buggy library n layers down that you use from entirely safe code), you can no longer be sure of anything; a data race is entirely undefined behavior.
How is it different from "if you ever go down the JNI road in Java"? Be it a library that bundles RocksDB, or Android stuff; it doesn't matter that your average Spring Boot developer probably won't have any native dependencies (unless they use Kafka Streams or something else that bundles a native dependency that has had CVEs needing patches). Just as a high-level Rust back-end developer working with axum, sqlx, tokio etc. (vs. Netty/NIO in Java, for example, which also use unsafe/native code) hopefully won't be using buggy unsafe libraries.
Does the JVM protect you from partial reads? On HotSpot, or on, say, GraalVM's LLVM runtime too? Does Go? I assume they at least protect you from stale-read UAFs by virtue of the memory still being traceable. (This is a genuine question.)
Java is probably the most self-reliant platform out there; it is almost completely pure, as in being written in Java or another JVM language. Besides places where it is absolutely necessary (e.g. OpenGL), there is simply no JNI used, or only very rarely.
> For regular application code there's no real benefit
Can't disagree, naive garbage collected code would outperform naive Rust code that allocates Strings and Vecs up the wazoo.
And performance isn't a real issue with simple CRUD/ORM-type services.
Wrapping business logic in newtypes, enforcing specific invariants through them, and serde-style "parse, don't validate" can do wonders for business logic. But then again, you could probably just use a garbage-collected functional language with dependent typing and a bunch of other tricks in its hat and get more mileage out of it, if that's the code you write.
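A tiny sketch of the newtype half of that (the `Email` type and its validation rule are made up for illustration):

```rust
// "Parse, don't validate": once an `Email` exists, the invariant is
// guaranteed by construction, so downstream code never re-checks it.
#[derive(Debug, Clone, PartialEq)]
struct Email(String);

impl TryFrom<String> for Email {
    type Error = String;
    fn try_from(s: String) -> Result<Self, Self::Error> {
        // Deliberately naive rule, just to show the boundary check.
        if s.contains('@') {
            Ok(Email(s))
        } else {
            Err(format!("not an email: {s}"))
        }
    }
}

fn main() {
    assert!(Email::try_from("user@example.com".to_string()).is_ok());
    assert!(Email::try_from("not-an-email".to_string()).is_err());
}
```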
> It's not really a systems language because of all the hidden memory allocations
C and C++ (especially) can hide memory allocations just fine themselves.
Allocations are pretty explicit in Rust, but layers of libraries can hide those (if you don't use no_std and the like), just like in C or C++.
> lack of a stable ABI
There is the C ABI; a stable Rust ABI at any stage would just be an optimization killer and a massive PITA (ask C++). A stable ABI, an international standard, and multiple compiler implementations are not requirements for a systems language.
For a language that relies on compile-time monomorphization, a stable ABI beyond what C already provides gives precious little. Swift's ABI forgoes monomorphization across module boundaries, but that's not really a trade-off a systems language should embrace. The C++ template-header issue I won't even touch.
There are also crates in the ecosystem that can generate two-way FFI glue to provide a stable ABI for Rust on top of extern "C".
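At its simplest, the extern "C" foundation those crates build on looks like this (the `add` function is illustrative):

```rust
// `#[no_mangle]` keeps the symbol name predictable, and `extern "C"`
// pins the calling convention, so any C-ABI consumer can link to it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // From C this would be `int add(int, int);` behind the symbol `add`;
    // here we just call it directly.
    assert_eq!(add(2, 3), 5);
}
```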
> It's garbage for embedded because of all the gratuitous copies you have to make to appease the borrow checker. It blows up your memory budget.
We use embedded Rust on a memory constrained MCU and besides slightly increasing the maximum stack size (because less is allocated on the heap), memory budget is not an issue we have. And in LLVM 16, Rust's stack usage will decrease even more, due to recent optimizations.
You don't need "gratuitous copies": you can statically allocate just fine, and if a value is ephemeral and read/mutated by multiple threads, heap allocation with reference counting works fine too.
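For instance, borrowing a statically allocated buffer instead of copying it keeps the borrow checker happy at zero memory cost (names are illustrative):

```rust
// A shared borrow of a `static` buffer: no copy, no heap, no fight
// with the borrow checker.
fn checksum(data: &[u8]) -> u8 {
    data.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
}

static BUFFER: [u8; 4] = [1, 2, 3, 4];

fn main() {
    assert_eq!(checksum(&BUFFER), 10);
}
```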
Do you want to use AVX-512? Zen 4 is your only real option out of the three. They disabled AVX-512 on Golden Cove because Gracemont doesn't have it, and they never implemented any sort of CPU pinning for AVX-512 workloads (SIGILL trapping, anyone?). That's still the situation with Raptor Lake as well, from what I gather.
I personally am waiting for the 3D V-Cache variants which will hopefully be announced at CES.
I think you meant Espressif, Steve. They have an LLVM fork with Xtensa support that they’re looking to upstream (still a few things missing, like the DSP/AI instructions in the ESP32-S3, and I think GCC’s codegen is better for now). The folks at esp-rs (Espressif employees plus outside contributors) maintain a Rust toolchain and standard library (based on ESP-IDF) which they also want to upstream. There’s also a bare-metal target with a dedicated developer at Espressif; it’s pretty amazing. The ESP32-S3 is going to be their last Xtensa chip, though: they’re planning to move wholesale to RISC-V, and all their products in the last year have been based on it. Ferrous Systems even designed a Rust-specific devkit based on their RISC-V ESP32-C3, to teach embedded Rust on.
I bet Cadence was ripping them off for the IP, which is a shame…
Any talks at Oxide about porting Hubris to RISC-V? I hear getting your hands on Cortex-M*s in bulk is still pretty challenging these days.