> What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing
They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls. Most of those mistakes are exceedingly amateur from the perspective of long-time GNU coreutils (or BSD or Solaris base) developers, issues that were identified and largely hashed out decades ago, notwithstanding the continued long tail of fixes--mostly just a trickle these days--to the old codebases.
Reading that Canonical thread was jaw-dropping. Paraphrased: "Rust is more secure, security is our priority, therefore deploying this full-rewrite of core utils is an emergency. If things break that's fine, we'll fix it :)".
I would not want to run any code on my machines made by people who think like this. And I'm pro-Rust. Rust is only "more secure" all else being equal. But all else is not equal.
A rewrite necessarily has orders of magnitude more bugs and vulnerabilities than a decades-old well-maintained codebase, so the security argument was only valid for a long-term transition, not a rushed one. And the people downplaying user impact post-rollout, arguing that "this is how we'll surface bugs", and "the old coreutils didn't have proper test cases anyway" are so irresponsible. Users are not lab rats. Maintainers have a moral responsibility to not harm users' systems' reliability (I know that's a minority opinion these days). Their reasoning was flawed, and their values were wrong.
This leaves such a bad taste in my mouth. If you fucking found 44 CVEs with some relatively amateurish ones (I'm no security engineer but even I've done that exact TOCTOU mitigation before) in such a core component of your system a month before 26.04 LTS release (or a couple months if you count from their round 1), surely the response should be "we need to delay this to 28.04 LTS to give it time to mature", not "we'll ship this thing in LTS anyway but leave out the most obviously problematic parts"?
The snap BS wasn't enough to move me, since I was largely unaffected once I stripped it out, but this might finally convince me to ditch Ubuntu.
It's insane that this is going into an LTS. It's the kind of experiment I'd expect them to play with in a non-LTS and revert in LTSes until it's fully usable, like they did with Wayland being the default, which started in 2017
If you don't want Canonical's packages, you should probably just be using Debian rather than Ubuntu. It's not 2008 anymore, stock Debian is quite user-friendly.
Worth noting is that in Debian experimental, coreutils defaults to coreutils-from-uutils [0]. This came as a big surprise and as far as I can tell there's been no discussion. A Canonical developer seems to have unilaterally overwritten the coreutils package without discussing it with the maintainer. All the package renames that are in Ubuntu aren't in Debian, so you can't switch to GNU utils either without deep trickery in a separate recovery environment.
I'm used to running experimental software but I wasn't ready for my computer to not boot one day because of uutils. The `-Z` flag for `cp` wasn't implemented in the 9-month-old version shipped in Debian at that time, so initramfs creation failed...
It's in experimental only, not unstable or testing. That said, I'm surprised it hasn't even caused discussion on debian-devel (sans [0]). I would've thought that at least enough Debian developers run experimental to have noticed and raised the issue, but no. I thought about starting a thread myself but couldn't be bothered.
I’ve gotta agree. Some horror stories were going around about their interview process. It seemed highly optimized to select people willing to put up with insane top-down BS.
More than that: it seems that Rust stdlib nudges the developer towards using neat APIs at an incorrect level of abstraction, like path-based instead of handle-based file operations. I hope I'm wrong.
Nearly every available filesystem API in Rust's stdlib maps one-to-one with a Unix syscall (see Rust's std::fs module [0] for reference -- for example, the `File` struct is just a wrapper around a file descriptor, and its associated methods are essentially just the syscalls you can perform on file descriptors). The only exceptions are a few helper functions like `read_to_string` or `create_dir_all` that perform slightly higher-level operations.
And, yeah, the Unix syscalls are very prone to mistakes like this. For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle; and so Rust has a `rename` function that takes two paths rather than an associated function on a `File`. Rust exposes path-based APIs where Unix exposes path-based APIs, and file-handle-based APIs where Unix exposes file-handle-based APIs.
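To make the mapping concrete, a minimal sketch (filenames made up):

```rust
use std::fs::File;

fn main() -> std::io::Result<()> {
    // Path-based, because rename(2) is path-based: two paths, no handles.
    std::fs::rename("old_name.txt", "new_name.txt")?;

    // Handle-based, because fstat(2) operates on a descriptor: metadata()
    // is a method on File, which is a thin wrapper around the fd.
    let f = File::open("new_name.txt")?;
    println!("{} bytes", f.metadata()?.len());
    Ok(())
}
```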
So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.
> So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.
`openat()` and the other `*at()` syscalls are also raw syscalls, which Rust's stdlib chose not to expose. While I can understand that this may not be straightforward for a cross-platform API, I have to disagree with your statement that Rust's stdlib is mistake-prone because it's so low-level. It's more mistake-prone than POSIX (in some aspects) because it is missing a whole family of low-level syscalls.
They're not missing, Rust just ships them (including openat) as part of the first-party libc crate rather than exposing them directly from libstd. You'll find all the other libc syscalls there as well: https://docs.rs/libc/0.2.186/libc/ . I agree that Rust's stdlib could use some higher-level helper functions to help head off TOCTOU, but it's not as simple as just exposing `openat`, which, in addition to being platform-specific as you say, is also error-prone in its own right.
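Concretely, calling the raw binding looks something like this (a Unix-only sketch; the helper name and flag choices are mine):

```rust
use std::ffi::CString;
use std::io;
use std::os::fd::{AsRawFd, FromRawFd, OwnedFd};

// Open `name` relative to an already-open directory fd, refusing to
// follow a symlink in the final component.
fn open_under(dir: &OwnedFd, name: &str) -> io::Result<OwnedFd> {
    let name = CString::new(name)?;
    let fd = unsafe {
        libc::openat(
            dir.as_raw_fd(),
            name.as_ptr(),
            libc::O_RDONLY | libc::O_NOFOLLOW | libc::O_CLOEXEC,
        )
    };
    if fd < 0 {
        Err(io::Error::last_os_error())
    } else {
        // SAFETY: on success, openat returns a fresh fd that we now own.
        Ok(unsafe { OwnedFd::from_raw_fd(fd) })
    }
}
```

Note the `unsafe` blocks: the raw syscall is available, just not wrapped in a safe std API.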
The parent was asking for access to the C syscall, and C syscalls are unsafe, including in C. You can wrap that syscall in a safe interface if you like, and many have. And to reiterate, I'm all for supporting this pattern in Rust's stdlib itself. But openat itself is a questionable API (I have not yet seen anyone mention that openat2 exists), and if Rust wanted to provide this, it would want to design something distinct.
> Why can I easily use "*at" functions from Python's stdlib, but not Rust's?
I'm not sure you can. The supported pattern appears to involve passing the optional `opener` parameter to the built-in `open` (with an opener that calls `os.open` with `dir_fd`), but while the example of this shown in the official documentation works on Linux, I just tried it on Windows and it throws a PermissionError exception because AFAIK you can't open directories on Windows.
You can but you have to go through the lower level API: NtCreateFile can open a directory, and you can pass in a RootDirectory handle to following calls to make them handle-relative.
The correct comparison is to rustix, not libc, and rustix is not first-party. And even then the rustix API does not encapsulate the operations into structs the same way std::fs and std::io do.
The correct comparison to someone asking for first-party access to a C syscall is to the first-party crate that provides direct bindings to C syscalls. If you're willing to go further afield to third-party crates, you might as well skip rustix's "POSIX-ish" APIs (to quote their documentation) and go directly to the openat crate, which provides a Rust-style API.
If I have to use unsafe just to open a file, I might as well use C. While rustix is a happy middle ground that is usually enough, and more popular than the openat crate, libc is in the same family as the "*-sys" crates and, generally speaking, is not intended for direct use outside other FFI crates.
> For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle
And then there’s renameat(2) which takes two dirfd… and two paths from there, which mostly has all the same issues rename(2) does (and does not even take flags so even O_NOFOLLOW is not available).
I’m not sure what you’d need to make a safe renameat(), maybe a triplet of (dirfd, filefd, name[1]) from the source, (dirfd, name) from the target, and some sort of flag to indicate whether it is allowed to create, overwrite, or both.
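For reference, here is roughly what the existing renameat(2) looks like when called through the libc crate (a sketch; the wrapper name is mine). The dirfds anchor resolution, but the final components are still plain paths, which is why the same races remain:

```rust
use std::ffi::CString;
use std::io;
use std::os::fd::AsRawFd;

// `dir` is a File opened on a directory; rename `from` to `to` within it.
fn rename_in(dir: &std::fs::File, from: &str, to: &str) -> io::Result<()> {
    let (from, to) = (CString::new(from)?, CString::new(to)?);
    let rc = unsafe {
        libc::renameat(dir.as_raw_fd(), from.as_ptr(),
                       dir.as_raw_fd(), to.as_ptr())
    };
    if rc < 0 { Err(io::Error::last_os_error()) } else { Ok(()) }
}
```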
How about fd of the file you wanna rename, dirfd of the directory you want to open it in, and name of the new file? You could then represent a "rename within the same directory" as: dfd = opendir(...); fd = openat(dfd, "a"); rename2(fd, dfd, "b");
I can't think of a case this API doesn't cover, but maybe there is one.
The file may have been renamed or deleted since the fd was opened, and it might have been legitimate and on purpose, but there’s no way to tell what trying to resolve the fd back to a path will give you.
And you need to do that because nothing precludes having multiple entries to the same inode in the same directory, so you need to know specifically what the source direntry is, and a direntry is just a name in the directory file.
After reading this article, I'm inclined to think that the right thing for this project to do is write their own library that wraps the Rust stdlib with a file-handle-based API along with one method to get a file handle from a Path; rewrite the code to use that library rather than rust stdlib methods, and then add a lint check that guards against any use of the Rust standard library file methods anywhere outside of that wrapper.
If that's the right approach, then it would be useful to make that library public as a crate, because writing such hardened code is generally useful. Possibly as a step before inclusion in the rust stdlib itself.
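For what it's worth, a third-party crate in very much this spirit already exists: cap-std. A minimal sketch of the pattern (assuming cap-std's documented API; the path and file name are made up):

```rust
use std::io::Read;
use cap_std::{ambient_authority, fs::Dir};

fn read_config() -> std::io::Result<String> {
    // The one place a path touches ambient filesystem authority:
    let etc = Dir::open_ambient_dir("/etc", ambient_authority())?;
    // Everything else resolves relative to the open directory handle,
    // and cap-std refuses attempts to escape it via symlinks or "..".
    let mut buf = String::new();
    etc.open("myapp.conf")?.read_to_string(&mut buf)?;
    Ok(buf)
}
```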
If anything, I find the Rust standard library defaults to Unix too much for a generic programming language. You need to think very Unixy if you want to program Rust on Windows, unless you're directly importing the windows crate and foregoing the Rust standard library. If you're writing COBOL-style mainframe programs, things become even more forced, though I suspect the overlap between Rust programmers and mainframe programmers who don't use a Unix-like is vanishingly small.
This can also be a pain on microcontrollers sometimes, but there you're free to pretend you're on Unix if you want to.
If you want to support file I/O in the standard library, you have to choose _some_ API, and that API either is limited to the features common to all platforms, or it covers all features but calls that cannot be supported return errors, or you pick a preferred platform and require all other platforms to mimic it as closely as they can.
Almost all languages/standard libraries pick the last option, and many choose UNIX or Linux as the preferred platform, even though its file system API has flaws we’ve known about for decades (example: using file paths too often) or made decisions back in 1970 we probably wouldn’t make today (examples: making file names sequences of bytes; not having a way to encode file types and, because of that, using heuristics to figure out file types. See https://man7.org/linux/man-pages/man1/file.1.html).
You have to choose something, and I'm glad they didn't go with the idiotic Go approach ("every path is a valid UTF-8 string, or we just garble the path at the standard library level"). You can usually abstract away platform weirdness at the implementation level, but programming in non-Unix environments is more like programming against Cygwin.
A standard library for files and paths that lacks things like ACLs and locks is weirdly Unixy for a supposedly modern language. Most systems support ACLs now, though Windows uses them a lot more. On the other hand, the lack of file descriptors/handles is weird from all points of view.
Had Windows been an uncommon target, I would've understood this design, but Windows is still the most common PC operating system in the world by a great margin. Not even considering things like "multiple filesystem roots" (drive letters) "that happen to not exist on Linux", or "case-insensitive paths (Windows/macOS/some Linux systems)" is a mistake for a supposedly generic language, in my opinion.
As far as I can tell from Microsoft's documentation, WinAPI access for ACLs was added in Windows 10, which Rust 1.0 predates. And std::fs attempts to provide both minimalist and cross-platform APIs, which in practice means (for better or worse) it's the lowest common denominator between Windows and Unix, with the objective being that higher-level libraries can leverage it as a building block. From the documentation for std::fs:
"This module contains basic methods to manipulate the contents of the local filesystem. All methods in this module represent cross-platform filesystem operations. Extra platform-specific functionality can be found in the extension traits of std::os::$platform."
Following its recommendation, if we look at std::os::windows::fs we see an extension trait for setting WinAPI-specific flags, like dwDesiredAccess, dwShareMode, dwFlagsAndAttributes. I'm not a Windows dev but AFAICT we want an API to set lpSecurityAttributes. I don't see an option for that in std::os::windows::fs, likely complicated by the fact that it's a pointer, so acquiring a valid value for that parameter is more involved than just constructing a bitfield like for the aforementioned parameters. But if you think this should be simple, then please propose adding it to std::os::windows::fs; the Rust stdlib adds new APIs all the time in response to demand. (In the meantime, comprehensive Windows support is generally provided by the de-facto standard winapi crate, which provides access to the raw syscall.)
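For reference, the extension trait in question looks like this in use; it covers the bitfield-shaped parameters, but nothing pointer-shaped like lpSecurityAttributes (Windows-only sketch; the flag constant is from the Win32 headers and the path is made up):

```rust
#[cfg(windows)]
fn open_exclusive() -> std::io::Result<std::fs::File> {
    use std::fs::OpenOptions;
    use std::os::windows::fs::OpenOptionsExt;

    OpenOptions::new()
        .read(true)
        .share_mode(0) // dwShareMode = 0: no concurrent open handles
        .custom_flags(0x0800_0000) // dwFlagsAndAttributes: FILE_FLAG_SEQUENTIAL_SCAN
        .open(r"C:\data\log.txt")
}
```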
I'm not sure which docs you mean, but that's not true. The NT kernel has used ACLs since long before Rust was invented. But it's indeed true that Rust adds platform-specific methods based on demand. The trouble with ACLs is it means either creating a large API surface in the standard library to handle them, or else presenting a simple interface but having to manage raw pointers (likely using a wrapper type, but even then it can't be made totally safe).
> the de-facto standard winapi crate, which provides access to the raw syscall
Since the official Microsoft `windows-sys` crate was released many years ago, the winapi crate has been effectively unmaintained (it accepts security patches but that's it).
You misunderstand the documentation. Microsoft doesn't provide online documentation for versions of Windows that are no longer supported. Functions like SetFileSecurity have existed since Windows NT 3.1 back in 1993.
And sure, Rust could add the entire windows crate to the standard library, but my point is that this isn't just Windows functionality: getfacl/setfacl have been with us for decades, but I don't know any standard library that tries to include any kind of ACLs.
> I'm glad they didn't go with the idiotic Go approach ("every path is a valid UTF-8 string, or we just garble the path at the standard library level")
Can you expound a bit on this? I haven't been able to find any articles related to this kind of problem. It's also a bit surprising, given that Go specifically did not make the same choice as Rust to make strings be Unicode / UTF-8 (Go strings are just arrays of bytes, with one minor exception related to iteration using the range syntax).
Go's docs put it like this: Path names are UTF-8-encoded, unrooted, slash-separated sequences of path elements, like “x/y/z”. If you operate on a path that's a non-UTF-8 string, then Go will do... something to make the string work with UTF-8 when passed back to standard file methods, but it likely won't end up operating on the same file.
Rust has OsStr to represent strings like paths, with a lossy/fallible conversion step instead.
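A small sketch of the Rust side (illustrative only):

```rust
use std::path::Path;

fn show(p: &Path) {
    // Paths are OsStr under the hood: arbitrary platform bytes, not
    // guaranteed UTF-8. Conversion to &str is explicit and fallible.
    match p.to_str() {
        Some(s) => println!("valid UTF-8 path: {s}"),
        // to_string_lossy() substitutes U+FFFD for invalid sequences,
        // but only for display; the original Path is untouched.
        None => println!("non-UTF-8 path: {}", p.to_string_lossy()),
    }
}
```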
Go's approach is fine for 99% of cases, and you're pretty screwed if your application falls for the 1% issue. Go has a lot of those decisions, often to simplify the standard library for most use cases most people usually run into (like their awful, lossy, incomplete conversion between Unix and Windows when it comes to permissions/read-only flags/etc.).
> Path names are UTF-8-encoded, unrooted, slash-separated sequences of path elements, like “x/y/z”
This is only for the "io/fs" package and its generic filesystem abstractions. The "os" package, which always operates on the real filesystem, doesn't actually specify how paths are encoded, nor does its associated helper package "path/filepath".
In practice, non-UTF-8 already wasn't an issue on Unix-like systems, where file paths are natively just byte sequences. You do need to be aware of this possibility to avoid mangling the paths yourself, though. The real problem was Windows, where paths are actually WTF-16, i.e. UTF-16 plus unpaired surrogates. Go has addressed this issue by accepting WTF-8 paths since Go 1.21: https://github.com/golang/go/issues/32334#issuecomment-15500...
That's the same for the C or Python standard libraries. The difference is that in C you tend to use the Win32 functions more because they're easily reached for; but Python and Rust are both just as Unixy.
Someone once coined a related term, "disassembler rage". It's the idea that every mistake looks amateur when examined closely enough. It comes from people sitting in a disassembler and raging at the high-level programmers who had the gall to, e.g., use conditionals instead of a switch statement inside a function call a hundred frames deep.
We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.
Thing is, these tools are so critical that even one error may cause systems to be compromised; rewriting them should never be taken lightly.
(Actually, ideally there would be formal verification tools that can accurately test for all of the issues found in this review / audit, like the very timing-specific path changes, but that's a codebase on its own.)
Is formal verification able to find most of these issues? I'm no expert on formal analysis, but I suspect most systems are not able to handle many of these errors. It seems more likely that the system will assume the file doesn't change between two syscalls - which seems to be the majority of issues. Modeling that possibility at least makes the formal model much harder to build.
When I read the article I came away with the impression that shipping bugs this severe in a rewrite of utils used by hundreds of millions of people daily (hourly?) isn’t ok. I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.
Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.
Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
I think that legitimate real-world issues in Rust code should be talked about more often. Right now the language enjoys a reputation that is essentially misleading marketing. It isn't possible to create a programming language that doesn't allow bugs to happen (even with formal verification you can still prove correctness based on a wrong set of assumptions). This weird, kind of religious belief that Rust leads to magically, completely bug-free programs needs to be countered and brought in touch with reality IMO.
Nobody believes Rust programs are bug free, though. Rust never promised that. It doesn't even promise memory safety; it only promises memory safety if you restrict yourself to safe APIs, which simply isn't always possible.
Is it possible you’ve misunderstood what Rust promises?
> It isn't possible to create a programing language that doesn't allow bugs to happen
Yes, that’s true. No one doubts this. Except you seem to think that Rust promises no bugs at all? I don’t know where you got this impression from, but it is incorrect.
Rust promises that certain kinds of bugs like use-after-free are much, much less likely. It eliminates some kinds of bugs, not all bugs altogether. It’s possible that you’ve read the claim on kinds of bugs, and misinterpreted it as all bugs.
On the other hand, there are too many less-experienced Rust fans who do claim that "Rust" promises this, that any project that does not use Rust is doomed, and that any of the existing decades-old software projects should be rewritten in Rust to decrease the chances that they may have bugs.
What is described in TFA is not surprising at all, because it is exactly what has been predicted about this and other similar projects.
Anyone who desires to rewrite in Rust any old project, should certainly do it. It will be at least a good learning experience and whenever an ancient project is rewritten from scratch, the current knowledge should enable the creation of something better than the original.
Nonetheless, the rewriters should never claim that what they have just produced currently has fewer bugs than the original, because neither they nor Rust can guarantee this; only long experience with using the rewritten application can.
Such rewritten software packages should remain for years as optional alternatives to the originals. Any aggressive push to substitute the originals immediately is just stupid (and yes, I have seen people trying to promote this).
Moreover, someone who proposes the substitution of something as basic as coreutils, must first present to the world the results of a huge set of correctness tests and performance benchmarks comparing the old package with the new package, before the substitution idea is even put forward.
The only language I've ever seen users make that claim for is Haskell. Rust users have never made the claim, but I've seen it a lot from advocates who appear to find "hello world" a complex, hard-to-write program.
Where are these rust fans? Are they in the room with us right now?
You’ve constructed a strawman with no basis in reality.
You know what actual Rust fans sound like? They sound like Matthias Endler, who wrote the article we’re discussing. Matthias hosts the popular podcast Rust in Production, where he talks with people about sharp edges and difficulties they experienced using Rust.
A true Rust advocate like him writes articles titled “Bugs Rust Won’t Catch”.
> Such rewritten software packages should remain for years as optional alternatives to the originals.
> must first present to the world the results of a huge set of correctness tests and performance benchmarks
Yeah, you can see those in https://github.com/uutils/coreutils. This project has also worked with GNU coreutils maintainers to add more tests over time. Check out the graph where the total number of tests increases over time.
> before the substitution idea is even put forward
I partly agree. But notice that these CVEs come from a thorough security audit paid for by Canonical. Canonical is paying for it because they have a plan to substitute in the immediate future.
Without a plan to substitute it’s hard to advocate for funding. Without funding it’s hard to find and fix these issues. With these issues unfixed it’s hard to plan to substitute.
Those Rust fans exist on almost all Internet forums that I have seen, including on HN.
I do not care about what they say, so I have not made a list with links to what they have posted. But even only on HN, I have certainly seen much more than one hundred such postings, more likely several hundred, even on threads that did not have any close relationship with Rust, so there was no reason to discuss Rust.
Since Sun's shameless promotion of Java, with its false claims, during the last years of the previous century, no other programming language has been affected by such a hype campaign.
I think that this is sad. Rust has introduced a few valid innovations and it is a decent programming language. Despite this, whenever someone starts mentioning Rust, my first reaction is to distrust whatever is said, until proven otherwise, because I have seen far too many ridiculous claims about Rust.
Could you find one such person on this thread? Someone making ridiculous claims about what Rust offers.
I’ll tell you what I think you’ve seen - there are hundreds of threads where you’ve seen people claim they’ve seen this everywhere. That gives you the impression that it is universal.
The "elimination of bugs" is not synonymous with "the elimination of all bugs". The way you're presenting it, any single bug in a rewrite would be grounds to consider the the entire endeavor a failure, which is a ridiculous standard.
There are plenty of strong arguments to be made against rewriting something in Rust, but this is a pretty weak one.
I didn't downvote, but I feel the last two points show a lack of nuance. It's saying "Rust doesn't prevent 100% of the bugs, like all other programming languages", while failing to acknowledge that if a programming language prevents entire classes of bugs, it's a very significant improvement.
Nobody disputes that Rust is one of the programming languages that prevent several classes of frequent bugs, which is a valuable feature when compared with C/C++, even if that is a very low bar.
What many do not accept among the claims of the Rust fans is that rewriting a mature and very big codebase from another language into Rust is likely to reduce the number of bugs of that codebase.
For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many other people that in most cases a rewrite from scratch is much more likely to have bugs, regardless in what programming language it is written.
If someone has the time to do it, a rewrite is useful in most cases, but it should be expected that it will take a lot of time after the completion of the project until it will have as few bugs as mature projects.
As other people have mentioned, the goal of uutils was not "let's reduce bugs in coreutils by rewriting it in Rust", it was "it's 2013 and here's a pre-1.0 language that looks neat and claims to be a credible replacement for C, let's test that hypothesis by porting coreutils, giving us an excuse to learn and play with a new language in the process". It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.
Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.
You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.
But are the current uutils developers the same as the 2013 developers? At least based on GitHub's graphs, that's not the case (it looks fairly bimodal to me), and so it wouldn't be unreasonable to treat the 2013-era project differently to the 2020-era project. So judging the 2020-era project for its current and ongoing failures does not seem unreasonable.
Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. There are multiple privilege-escalation tools that exist (doas being the first that comes to mind), and doing something better without claiming "sudo" (rather providing a compat mode, a la podman for docker) would seem to me a better long-term path than causing more breakage (and as shown by uutils, breakage in "core" utils can very easily lead to security issues).
I personally find uutils' lack of care concerning because I've been writing (as a very low-priority side project) a network utility in Rust, and while it isn't aiming to be a drop-in rewrite of anything, I would much rather not attract the same drama.
doas and sudo-rs occupy different niches, specifically doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.
No, once you have an MIT-licensed codebase without a copyright assignment scheme, you no longer have the freedom to relicense it at will. You could attempt to have a mixed-license codebase, which is supported by the GPL, and specify that all new contributions must accept the GPL, but this is tantamount to an incompatible fork of the project from the perspective of any downstream users, and anyone who insists on contributing code under the GPL has the freedom to perform this fork themselves.
This is simply false. You can accept GPL contributions and clearly indicate the names of the contributors as required by MIT. There is no "incompatible fork".
No, GPL and MIT have significantly different compliance requirements. You cannot suddenly begin shipping code with stricter compliance requirements to downstream users without potentially exposing them to legal liability.
Because the bugs were caused by programmer error, not anything inherent to Rust. It was more notable due to Cloudflare being a critical dependency for half the internet, but that particular issue could've happened in any language.
This kind of melodramatic reaction to rust code is fatiguing, honestly. Rust does not bill itself as some programming panacea or as a bug free language, and neither do any of the people I know using it. That's a strawman that just won't go away.
Rust applies constraints regarding memory use and that nearly eliminates a class of bugs, provided safe usage. And that's compelling to enough people that it warrants migration from other languages that don't focus on memory safety. Bugs introduced during a rewrite aren't notable. It happens, they get fixed, life moves on.
> caused by programmer error, not anything inherent to Rust
Your argument does not work as a praise for Rust because the bugs in any program are caused by programmer errors, except the very rare cases when there are bugs in the compiler tool chain, which are caused by errors of other programmers.
The bugs in a C or C++ program are also caused by programmer errors; they are not inherent to C/C++. It is rather trivial to write C/C++ carefully, so as to make out-of-bounds accesses, numeric overflows, use-after-free, etc. impossible.
The problem is that many programmers are careless, especially when they might be pressed by tight time schedules, so they make some of these mistakes. For the mass production of software, it is good to use more strict programming languages, including Rust, where the compiler catches as many errors as possible, instead of relying on better programmers.
The cloudflare bug was the equivalent of an uncaught exception caused by a malformed config file. There's no recovery from a malformed config file - the software couldn't possibly have done its job. What's salient is that they were using an alternative to exceptions, because people were told exceptions were error-prone, and using this thing instead would make it easier to write bug-free code. But don't do the equivalent of not catching them!
And then, it turned out to not really be any better than exceptions.
Most Rust evangelism is like this. "In Rust you do X and this makes your code have fewer bugs!" Well, no, it doesn't. Manually propagating errors still makes the program crash, requires more typing, and doesn't emit a stack trace.
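A toy illustration of the point (the config parsing here is made up; the real incident involved far more machinery, but the failure shape is the same):

```rust
use std::num::ParseIntError;

// Stand-in for "parse a value out of a config file".
fn parse_limit(raw: &str) -> Result<u32, ParseIntError> {
    raw.trim().parse()
}

fn main() {
    // Handled: inspect the Err arm and degrade gracefully.
    let limit = match parse_limit("not-a-number") {
        Ok(n) => n,
        Err(e) => {
            eprintln!("bad config value ({e}); falling back to default");
            100
        }
    };
    println!("limit = {limit}");

    // Unhandled: the moral equivalent of an uncaught exception. This
    // panics and, by default, without even a stack trace:
    // let limit = parse_limit("not-a-number").unwrap();
}
```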
That was why I brought it up. I wasn't trying to be snarky or haughty. Thank you for filling in the gaps, I should have done that instead of the 1-liner.
Seems pretty impressive they rewrote the coreutils in a new language, with so little Unix experience, and managed to do such a good job with so few bugs or vulns. I would have expected an order of magnitude more at least.
Shows how good Rust is, that even inexperienced Unix devs can write stuff like this and make almost no mistakes.
Yes, it's the lack of Unix experience that's terrifying. So many of the mistakes listed are rookie mistakes, like not propagating the most severe errors, or the `kill -1` thing. Why were people who apparently did not have much experience using coreutils assigned to rewrite coreutils?
> Why were people who apparently did not have much experience using coreutils assigned to rewrite coreutils?
From what I understand, "assigned" probably isn't the best way to put it. uutils started off back in 2013 as a way to learn Rust [0] way before the present kerfuffle.
Yeah, perhaps learning UNIX APIs and Rust at the same time doesn't lead to a drop-in replacement ready to be shipped in major distributions. Who would have thunk it.
Strictly speaking it doesn't preclude eventually producing a production-ready drop-in replacement either, though evidently that needs a fresh set of eyes.
Why is it even possible to represent a negative PID, let alone treat the integer -1 as a PID meaning "all effective processes"? This seems like a mistake (if not a rookie mistake) in the Linux kernel API itself.
-1 is a special case, a way to represent a PID with all bits set in a platform-independent way. It's not very clean, and it comes from ancient times when writing some extra code and storing an extra few bytes was way more expensive.
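To make the hazard concrete, here's a sketch of the guard a kill-like tool needs around its parsed PID argument (the function name and error handling are illustrative, not uutils' actual code):

```rust
use std::io;

fn send_signal(pid_arg: &str, sig: i32) -> io::Result<()> {
    let pid: i32 = pid_arg
        .parse()
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "bad pid"))?;

    // Per kill(2): pid 0 signals the caller's own process group, a negative
    // pid signals that process group, and -1 signals everything the caller
    // may signal. Broadcast must be an explicit decision, never the
    // accident of a parsed "-1".
    if pid <= 0 {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "group/broadcast pids need explicit handling",
        ));
    }
    let rc = unsafe { libc::kill(pid, sig) };
    if rc < 0 { Err(io::Error::last_os_error()) } else { Ok(()) }
}
```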
The problem is that -DIGIT doubles as both "signal number" and process group. The right way to invoke kill for a process group however would be "kill [OPTS]... -- -PGID".
Pretty much all the rough edges being discussed here are design mistakes in Linux or Unix, and/or a consequence of using an unsafe language with limited abstractions and a weak type system. But because of ubiquity, this is everyone’s problem now.
You are right, but those who set for themselves the goal to substitute a Linux/UNIX package must implement programs that handle correctly all the quirks of the existing Linux/POSIX specifications.
If they do not like the design mistakes, great, they should set for themselves the goal to write a new operating system together with all base applications, where all these mistakes are corrected.
As long as they have not chosen the second goal, but the first, they are constrained by the existing interfaces and they must use them correctly, no matter how inconvenient that may be.
Anyone who learns English may be frustrated by many design mistakes of English, but they must still use English as it is spoken by the natives, otherwise they will not be understood.
Not necessarily, but was the reasoning sound, and have the tradeoffs been weighed? The website (https://uutils.github.io/) shows some reasonable "why"s (although I disagree with making "Rust is more appealing" a compelling reason, but that's just me; disclaimer: I don't like C and don't know Rust, so take this comment as you will). I think what's missing is how they will ensure both compatibility and security / edge-case handling, which requires deep knowledge and experience in the original code and "tribal knowledge" of deep *nix internals.
Yes, perfectly good code can have bugs. It's ridiculous thinking to scrap a codebase because it's not bug-free, only to replace it with one riddled with differences in behavior that break everything that uses it.
Understandable as GNU was founded on software freedom. I guess one could argue that the Rust rewrite is to establish some kind of higher standard for correctness.
That depends on what tests you are running. In any significant project you need a test suite so large that you wouldn't run all the tests before pushing to CI - instead you run the targeted tests that cover the area of code you changed, but there are broader "integration tests" that exercise your code and thus could break, which you don't actually run.
You can also run some static analysis that takes too long to run locally every time, but once in a while it will point out "this code pattern is legal but is almost always a bug".
It is also possible to do some formal analysis of code on CI that you wouldn't always run locally - I'm not an expert on these.
The article might disagree. See the subsection, "The importance of tacit knowledge". OTOH, if that tacit knowledge is indeed so critical then there's less risk (e.g. regarding future investment incentives) to narrowing patent protections. OTOOH, ASML's supply chain is deep and complex, and the patent portfolio is presumably similarly diffuse, which makes it difficult to analyze or even, short of a complete patent regime overhaul, identify which patents to open up to accelerate adoption.
ASML's supply chain is deep and complex - and secret. But if it were F/OSS (just imagine it) from sand to chip, that complexity would have a wider scope of human attention applied to it.
What is happening with ASML now, once happened with the wheel.
Patents are supposed to be the antidote to industrial secrets. Of course, it doesn't really work out that way because in addition to patent writers hiding the ball or strategically layering patents and secrecy, things like tacit knowledge and organization play a huge role in exploring, building, and applying solutions. FOSS doesn't really help with the tacit stuff. It's partly why it's so difficult for projects to survive after the original authors move on. With software that's not necessarily immediately fatal as long as the software works well and is easy enough to tweak around the edges to keep it compiling and interfacing well, qualities which FOSS is meant to foster and preserve. But outside software, and especially in the industrial sphere, the loss of that tacit knowledge and organization is often immediately fatal. You can't just copy stuff, you have to rebuild all that tacit knowledge and process. Often times, like in software, the resulting product that nominally achieves the same results is built around an entirely different technical approach.
RFC 1855, Netiquette Guidelines[1], specifies underscore for underlining. However, it says asterisks are for emphasis, not bold, per se. They just happened to (often?) display as bold because italics in terminals weren't a common thing. For the same reason, using /'s for italics didn't make much sense except maybe in word processors. I also suspect underscore became conflated with asterisk because some people preferred using the former for emphasis--people weren't usually trying to adhere to professional styling guides, and some may have preferred underlining to impart emphasis, or just got into the habit without thinking about it.
I don't know how well RFC 1855 reflected common practice, though. It might be worthwhile to check the rendering code in clients like tin and mutt.
In California cops, family members of cops, and related personnel (e.g. police union officials) can get a special insignia on their license. So when they're pulled over and are asked to present their license....
The FOP (Fraternal Order of Police). Also a thing in NY and NJ.
Fun facts... the insignia you put on your license or on your car also has a thing like a registration tab... The FOP says it's "to show your ongoing support"; everyone else with a room-temperature IQ knows it's "to show you're 'paid up' on your protection money for the year".
Oh, and some enterprising souls have created "counterfeit" FOP insignia and stickers and other regalia (or for those tabs), and sold them on eBay... only to have the weight of the police union's attorneys come down on them with cease and desists, etc.
It's probably for the better they're taken down. In California, and perhaps NY and NJ, too, the status shows up on your DMV records, so when a cop runs your license or your plate (and I presume plates are scanned and run automatically), they'll see the discrepancy immediately. So someone is just asking for trouble by using fake stickers, just like if they went around flashing a gang sign when they're not actually a member.
That in itself blows my mind, why on earth should someone see your membership in this order? It's not a LE agency, and in many states the FOP allows membership for retired cops.
I do agree with what you're saying, though, but the issue to me is why that's even something that should show up when your plates are run, "Oh, you're a cop somewhere, or used to be".
I don't remember if the DMV status is actually FOP, or something else, but I knew a lawyer who worked with a police union who had this status. But that's just icing on the cake compared to stuff like https://en.wikipedia.org/wiki/Law_Enforcement_Officers%27_Bi...
I have a friend who's a union leader (as in actually runs a sizeable union) and, in the eyes of most people, a straight-up socialist. He convinced me public sector unions are a horrible idea precisely because of the above. I had known about the above, but I always had trouble squaring my support for the right to unionize with the problems with public sector unions. He basically gave me permission to call a spade a spade.
What about the Federal register of LEOs who have been terminated or resigned to avoid termination? Very useful concept for transparency...
... but the police unions that represent approximately 70% of the nation's police have negotiated it into their CBAs that this register "cannot be used for hiring or promotional purposes".
I think the FOP stickers are quite bad, but it's obviously not a "protection racket"; virtually nobody around here has them, and for a protection scheme to work there has to be some pressure to buy in.
I think you're generalizing too much. Rural communities take gun safety seriously. Farming communities take farming equipment seriously. Kids grow up internalizing the seriousness of these things, which is communicated expressly and tacitly their whole lives by countless people around them, including their friends. Plus they encounter walking examples of what can go wrong, like a missing finger, burn scars (not careful around bonfires or burn pits), or bullet holes (I knew at least 2 or 3 kids growing up with scars from shot). But put those same kids or adults who are careful with those machines in a similarly dangerous but novel situation, and they'll do dumb shit like anyone else. I'm tempted to argue they're more likely to do something dumb because they have a false confidence from their experience with other dangerous situations, whereas suburban and city kids may be more likely to be too scared to play around with any dangerous machine or situation.
I lived on a farm for a year as a young kid (the farmer rented a couple of trailers on his land). I remember one day I was hanging around the hog pen watching the giant hogs mill about, probably contemplating trying to pet one. Mr Austin came by and sternly told me not to reach through the fencing, then knelt down and showed me his ear, which was missing a big chunk.
On the flip side, plenty of rural and suburban people are terrified by the city, which kids growing up in the city shrug off.
Rural folks might learn to respect a PTO or the varmint rifle by age 10, but city kids learn how to navigate the bus routes and subway. They learn how to walk on crowded streets, how to live among a lot of different people, including dangerous people (and how to avoid conflict).
It's all quite interesting. Different kinds of toughness, different kinds of mental fortitude.
I think that there's a major difference in the resulting mindsets that the two types of experiences form, though.
The first learn that nature is always present and doing its best to kill you / wreck your harvest, and that it is only through man's intelligence and social bonds that we thrive. I would argue a corollary of this is that one cannot tolerate malicious or grossly neglectful people around.
The second group learns that other people are a liability and that bad actors are just a fact of life to be tolerated and worked around.
Both approaches are clearly optimal for their respective environment. The former seems like a stronger foundation for building a civilization on, though.
This is becoming such a weird romanticisation of rural Americana!
Your civilisation is being destroyed because a largely rural constituency is able to clean a rifle in 60 seconds but appears to have no critical thinking skills when it comes to a certain New Yorker.
Yes it’s good to learn how to be resilient in nature, but it’s also important to learn how to get along with and manage relationships with larger groups who are not always to be trusted.
The point missing from this discussion is that because of hysteria over stranger danger (not supported by any real evaluation of, or changes in, risk) and because we allow cars to dominate our urban spaces, city kids are being denied opportunities for independence they previously had.
That’s the real change that’s happened … and we’re replacing real urban experience with corporate attention economies.
City kids can get on the bus or urban rail in actual big cities. Even in places like the urban Philippines or Mexico where there is [often] no public transport, colectivos take up this niche. Kids abound in these places, even in places like Manila where traffic is way worse and way more homicidal, and they take the jeepney to go to the next barangay.
It's really mainly in the suburbs where neighborhoods are choked off by bike unfriendly freeways and no for-hire transit.
> The first learn that nature is always present and doing its best to kill you
> The second group learns that other people are a liability
Sounds like nature is simply survival + entropy, and sometimes that leads to mixed incentives. Rural folks also understand people are dangerous. Per-capita violent crime and murder are higher in rural areas.
That's why I find it interesting, they're different expressions of common survival needs.
San Francisco doesn't have alleys, either, any more than NYC does. In older buildings, including older apartment buildings, trash cans are kept under stairways, in service rooms, in ground-level hallways, or for single-family homes in garages or backyards, then wheeled out to the sidewalk the night before collection day, blocking pedestrians. Then the garbage men have to roll those bins into the street, maneuvering around parked cars, etc. NYC doesn't have trash cans because New Yorkers perennially chose to continue to throw their trash on the ground like they always had. Blame unions, blame habituation, but you can't blame NYC's architecture and layout; nothing about it is unique compared to other cities globally or even nationally.
The fact China had a huge smog problem, with hundreds of millions of people choking on coal emissions like it was 19th century London, also had something to do with it.
And Shenzhen, as an example, would be an absolute hellhole if they hadn't mandated all-electric vehicles, from tuk-tuks and motorbikes to taxis. The pollution would have impaired the city's ability to sustain the rapid economic growth seen there.
In a city like San Francisco, relative to the status quo ante easier development is more likely to result in slower growth in home prices, not a reduction in home prices.
But that's not the reason most San Franciscans oppose development. The primary reasons are 1) they're convinced more development will raise prices, 2) they believe affordability must be mandated through price controls or subsidies (e.g. developers dedicating X% of units for below market prices), 3) they insist on bike shedding every development proposal to death, 4) they're convinced private development is inherently inequitable (only "luxury" housing is built).
Pretty much the only group of people in the city worried about housing-stock increases reducing prices are developers trying to sell off new units. But developers are repeat players, and they're generally not the ones lending support to development hurdles. Though, there is (was?) at least one long-time developer who specializes in building "affordable" housing--mostly at public expense, of course--who did aggressively lobby for development hurdles, carefully crafted so that he and only he could easily get around them.
Alternatively, since we're spitballing: the administrators and/or accounting staff decided to strategically err on the side of a shortfall, because it's politically impossible to get the state to fully fund the pension obligations or to stop effectively raiding it.
The Iran War never looked good on paper. The only people who thought it would succeed were Trump and the cast of characters he surrounded himself with. I doubt if many congressional Republican chickenhawks thought it would succeed.
The only way to oust the regime is with ground troops, ripping out the Revolutionary Guard and its tentacles. For all its corruption, Iran is far from a failed state, and there aren't factions waiting in the wings, ready and willing to take over the government with force. (There are political factions, to be sure, but they're already integrated into the government, though without leverage over the Revolutionary Guard.) The only armed group remotely capable of even trying would be the Kurds, but the US and in particular Trump screwed them over in the past, multiple times. Even if they thought they could go it alone (which they couldn't), there was zero chance they were going to enter the fray without the US committing itself fully with their own invasion force (i.e. success was guaranteed), because failure would mean ethnic Kurds would be extirpated from Iran, and might induce Iraq and Syria to revisit the question of Kurdish loyalty to their own states. And, indeed, Kurdish groups took a wait and see approach, assembling some forces but waiting to see how the US played their cards.
It's just so ridiculous. Nobody is going to be writing books about the mistakes or hubris of US intelligence, military strategists, or political scholars and analysts. Even the most diehard American proponents of regime change in Iran, at least those with any competence, could have predicted (and did predict) this outcome. This was 100% a Trump fiasco, though the whole country shares some culpability for this kind of epic failure by allowing someone like Trump to win the presidency... again.
It's a little ironic that it's due in part[1] to Trump's reluctance to commit ground forces that we've come to this pass. I hesitate to criticize that disposition, but at the same time it's malfeasance to start a war without being willing and able to fully commit to the objective.
[1] Assuming the war had to happen, which of course it didn't.
> The Iran War never looked good on paper. The only people who thought it would succeed were Trump and the cast of characters he surrounded himself with.
Not to nitpick, but "looked good on paper" was a euphemism for "the powers that be think it's doable". And yes, you are right: Trump surrounded himself with "loyalists" this time who won't go against him like in the previous administration, but with the very undesirable effect of amplifying the echo chamber he lives in.
And like someone said in this thread, lots of hubris.
I am no expert on Iran, but all documentaries that I’ve seen about this reach the same conclusion: you don’t invade Iran using ground forces.
An invasion likely would turn into a quagmire, but what keeps regime proponents eternally hopeful is that unlike Afghanistan, Iraq, Vietnam, etc, Iran has a robust political system. The dictatorship notwithstanding, it has a vibrant parliament and, by global standards, a decent electoral system. The Ayatollah rules by following the maxim, keep your friends close and your enemies closer. If you could excise the Revolutionary Guard (a big if), you wouldn't necessarily need to change the government or its institutions. The existing liberal and moderate factions could quickly fill the vacuum, and would be happy to do so. You wouldn't get a pliant Iran, but that's for the better.
So by invasion the idea would be to rapidly, physically excise the apparatus the Ayatollahs use to maintain control. The structure and identity of that group is well known. It's a large group, and you couldn't catch all the leaders, but so long as you can stop their ability to enforce their rule through execution, you give the rest of the country time to shut them out of the institutions. In theory just weeks.
The problem is the very thing that makes regime change a plausibly good idea--a stable polity and modern, liberal-ish institutions--is the very thing that could result in failure. The Ayatollahs understood that a fragile, backwards system would be a weakness to their rule. Their military and bureaucracy are professional; they know how to follow orders, without being micromanaged, and even if everyone wants regime change, there's a huge collective action problem.