> I'd rather use something written in a crappier language that has been battle-tested for decades, personally.
I don't think this is a universal rule. Something can be old but still suck (see: openssl). On the flip side, though, I'd like to see literally any evidence that coreutils has a security problem before we go jumping off onto the shiny new replacement.
I see this accusation and characterization in basically every thread about Rust, but I really don't think it's true. On the contrary, I strongly believe it's less that these people didn't consider that, and more that they willfully chose to ignore it.
If you always keep praying to the same old bit of code to "reliably" chug along (which people clearly cannot actually ascertain, otherwise these reimplementations wouldn't be struggling), you're forever just rolling the dice that some Pandora's box will simply never open (which it absolutely does and keeps opening), while also giving up on modern capabilities. What you see as old reliable, I see as a buried lede. I'd imagine these folks see the same. [0]
It's frustrating to see the software world contend with the same pushback and counter-arguments the infra/ops world (my neck of the woods) already figured out and moved past long ago during the advent of IaC. Cattle > pets, easily, every time.
[0] It's also not a cost-benefit thing, but clearly a principled decision, so arguments that aim to contest the ROI of it all are off-base from the get-go. If ROI is the key thing for you, then all this philosophical nonsense shouldn't even be on the table. Calculate.
That's a bad take, because it implies the crappier language will be used for MORE decades.
Rust is an absolute improvement over C/C++ in major ways. Once there, for ALL THOSE DECADES every developer and every line of code written will be spared the problems of "crappier languages".
Are there adaptation issues in the short term? Fine. But those will be erased (far faster than is possible with C), and then you never again have to worry about these things.
“battle tested for decades” just lost a lot of its value with Mythos and the like, unfortunately. At the same time, rewriting in a different language became much faster with coding agents.
I do agree that the second system effect is real, it’s just that the balance of benefits and drawbacks significantly shifted when it comes to “rewrite in Rust” (not limited to Rust though).
> “battle tested for decades” just lost a lot of its value with Mythos and the like, unfortunately
Isn't it a bit early to make predictions on the future of computer security and how we create good software based on something that's been out for 2 weeks?
Meanwhile the C version of coreutils has been in development for 36 years. There's no rush.
https://www.cve.org/CVERecord?id=CVE-2026-35341 - `mkfifo` accidentally resets the permissions of files that already exist, so if you manage to do `sudo mkfifo /etc/shadow` then it becomes world readable.
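For reference, the expected semantics are that mkfifo fails on an existing path and leaves its permissions untouched; the CVE is about deviating from that. A minimal sketch of the correct behavior, using Python's `os.mkfifo` as a stand-in (not uutils code):

```python
import errno
import os
import tempfile

# Create an ordinary file that already exists, with restrictive
# permissions (standing in for something like /etc/shadow).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)

# A correct mkfifo must refuse to reuse the path...
try:
    os.mkfifo(path)
except OSError as e:
    assert e.errno == errno.EEXIST

# ...and must leave the existing file's mode untouched.
assert os.stat(path).st_mode & 0o777 == 0o600
os.remove(path)
```

The buggy behavior described in the CVE is the opposite: the call "succeeds" on the existing file and clobbers its mode.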
Tbh I doubt if any of these would ever result in a real hack, unless your system is doing really mental things like running shell scripts with untrusted input.
I could only find a couple of CVEs that looked actually serious for GNU Coreutils too, though. IMO if you're using these tools with untrusted input, your system is janky enough that there are going to be serious flaws in it anyway. Probably through quoting mistakes.
Quote from the CVE description: "The dd utility in uutils coreutils suppresses errors during file truncation [...] This can lead to silent data corruption in backup or migration scripts, as the utility may report a successful operation even when the destination file contains old or garbage data."
That's terrifying. There's more to bugs than security bugs. You'd expect coreutils to be as bug-free as possible.
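The failure mode is easy to sketch. In Python (an illustration of the pattern, not uutils' actual code), the robust approach is to let truncation errors propagate instead of swallowing them:

```python
import os
import tempfile

def copy_with_truncate(src_bytes: bytes, dst_path: str) -> None:
    """Overwrite dst_path with src_bytes, failing loudly on errors."""
    fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        # If truncation fails, the OSError must propagate: reporting
        # success here would leave stale bytes after the new data,
        # which is exactly the silent corruption the CVE describes.
        os.ftruncate(fd, 0)
        os.write(fd, src_bytes)
    finally:
        os.close(fd)

# Destination already holds longer, stale content.
fd, dst = tempfile.mkstemp()
os.write(fd, b"old old old old")
os.close(fd)

copy_with_truncate(b"new", dst)
with open(dst, "rb") as f:
    assert f.read() == b"new"  # no stale tail left behind
os.remove(dst)
```

Skip the `ftruncate` (or ignore its error) and the file would read back as `b"new old old old"` while the tool still reports success.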
Well the TOCTOU issues do not require you to run untrusted scripts to be exploited. Another user on your system can use a legitimate command that you may run to make changes to files they shouldn’t be able to, or further escalate privileges.
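The shape of a TOCTOU bug is a separate "check" and "use" step with a window in between where another local user can swap the target (e.g. for a symlink). A hedged Python sketch of the racy pattern versus the atomic one (filenames are made up):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scratch")

# Racy pattern: check, then act. Between the two steps another
# user with write access to the directory can drop in a symlink
# and redirect the write to a file they shouldn't control.
if not os.path.exists(path):      # check
    with open(path, "w") as f:    # use (the race window is here)
        f.write("data")

# Safer pattern: make check-and-create a single atomic syscall.
# O_EXCL fails if anything (including a symlink) already exists,
# and O_NOFOLLOW additionally refuses to traverse a symlink.
fd = os.open(path + ".safe",
             os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_NOFOLLOW,
             0o600)
os.write(fd, b"data")
os.close(fd)
```

The fix for this class of bug is generally to operate on file descriptors (the `*at` syscall family) rather than re-resolving paths.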
Fair point. Though tbh I still think user-isolation security on Linux is only really suited to the university/company threat model, where you generally trust users not to actually use exploits because they would get expelled/fired.
If you allow a completely untrusted user onto your system I think your chances of staying secure are low.
Then why rewrite coreutils in Rust? TOCTOU isn't exactly some new concept. Neither are https://owasp.org/Top10/2025/ (most of which a good web framework will prevent or mitigate), and switching to Rust (as far as I know) won't bring you a safer web framework like Django or Rails.
1. Rust is a much more pleasant language to work with.
2. You can improve the tools, adding new features, fixing UX paper cuts etc.
You're probably thinking "you can improve the GNU versions!" and in theory sure. But in practice these sorts of tools are controlled by naysayers who want everything to stay as it was in the 80s. The sorts of people that only accept patches via git send-email to a mailing list.
Hahaha I just looked up GNU Coreutils and not only do they blame poor UX on the user ("Often these perceived bugs are simply due to wrong program usage.") but they even maintain a list of rejected feature requests:
Another maintainer and I follow issues and pull requests on a GitHub mirror. But email works fine for us and many other projects.
Regarding poor UX, it is difficult to dispute that claim without a specific example. Note that a lot of the features we support are standardized by POSIX. Even if we dislike the behavior, it is better to comply with the standards so the programs don't behave differently than users expect. The sentence you quote isn't meant to put down users. These programs are often much more complex than meets the eye, and there are lots of common gotchas that people have run into (and will continue to run into) [1].
Of course we would love for these programs to be useful for everyone. However, feature requests are often incompatible with existing behavior, incompatible with other feature requests, or have existing functionality elsewhere. For those reasons we cannot accept every feature request.
Have you used busybox? The BSDs? I'm not sure adding more features to coreutils is a major help, and given rust-coreutils/uutils has:
1) more CVEs between the two latest Ubuntu releases than coreutils has had over the last 30+ years,
2) managed to break security updates, and
3) never achieved full compatibility with either POSIX or coreutils,
I'm not sure why I'd ever use it. Sadly, projects like uutils have made me suspicious of Rust projects in general: unless I know a project is well maintained (and there are numerous examples; ripgrep is the obvious one, but newsboat, the various tools from Proxmox, Servo/Firefox, and the pgrx ecosystem are ones I use regularly), the Rust label is a negative marker against it.
Agreed, they also have great documentation. There's something to be said for documentation that is so concise, well laid out, and immediately actionable for those looking to get started quickly.
While I work for Google, I don't speak for Google. I might have some observations that might help explain poor documentation. If I had one word, I would say "politics". The less political the technology, the easier it is for the experts to speak freely. The more political the circumstances, the more difficult it is to speak freely, identify real experts, call out mistakes or errors that might make someone lose face, etc. The more political the technology, the more people might put on the line in terms of their career, so the more dangerous it may be to say anything.
First you clone the API of the winner, because you want to siphon users from its install-base and offer de-risked switch over cost.
Now that you’re winning, others start cloning your API to siphon your users.
Now that you’re losing, you start cloning the current winner, who is probably a clone of your clone.
Highly competitive markets tend to normalize, because lock-in is a cost you can’t charge and remain competitive. The customer holds power here, not the supplier.
That's also why everyone is trying to build into the less competitive spaces, where they could potentially build a moat: tooling, certs, specialized training data, etc.
Our (western) economic model forces competing individual companies to be profitable quickly. China can ignore DeepSeek losing money, because they know developing DeepSeek will help China. Not every institution needs to be profitable.
Ah yes, the Western economic model forcing individual American companies like Amazon, YouTube, and Uber to become profitable after... *checks notes* ...14 years for Uber, 9 years for Amazon, and many years for YouTube.
Yes, they want to win the same way they've won more or less every other economic competition in the last 30 years: scale out, drop prices, and asphyxiate the competition.
Yeah, it’s an interesting one. I think inertia and expectations at this point? I don’t think the big labs anticipated how low the model switching costs would be and how quickly their leads would be eroded (by each other and the upstarts)
They are developing their moats with the platform tooling around it right now though. Look at Anthropic with Routines and OpenAI with Agents. Drop that capability in to a business with loose controls and suddenly you have a very sticky product with high switching costs. Meanwhile if you stick with purely the ‘chat’ use cases, even Cowork and scheduled tasks, you maintain portability.
No, they are not. If they were "racing to AGI" they would be working together. OpenAI would still be focused on being a non-profit. Anthropic wouldn't be blocking distillation on their models.
If by AGI you mean IPO, sure. I genuinely don't believe Dario nor Sam should be trusted at this point. Elon levels of overpromising and underdelivering.
If you want other people to know whether you're being genuine or sarcastic, you'll have to put a bit more effort into your comments. Your comment just adds noise.
For me, DeepSeek has been the best so far, in terms of coding skills, performance and documentation all together. Too bad this is flagged as 'concerning' when it comes to privacy, while on the other hand Gemini, ChatGPT and Claude are way beyond that, especially their mobile apps requiring a lot of permissions.
I spent only two minutes reading their documentation and it’s clear no one did any proofreading and it’s full of mistakes made by non-native speakers.
Example: the second sentence on the first page says “softwares” but “software” is a mass noun that cannot be pluralized.
Example: the third page, about tokens, has some zipped code to “calculate the token usage for your intput/output”, where “intput” is obviously a misspelling of “input”.
As a company that produces LLMs, they could have even used their own LLM to edit their documentation to fix grammar issues, and yet they did not.
Maybe I’m just extra sensitive to grammar and spelling issues but this kind of lack of attention to detail is a huge subconscious turnoff. I had to fight my urge to close the tab.
Yeah, I think those details are the least of most people's concerns. I can't vouch one way or another for DeepSeek's documentation, but for me what matters most when reading documentation is being able to get the information I want efficiently, not whether someone spelled "software" as "softwares", which FYI is a very common spelling in Asia.
I read OpenAI or Anthropic's documentation nowadays and it's just so full of useless junk and self-congratulation that it makes for a miserable experience. It's a real shame, because OpenAI used to write stellar documentation and publish really lucid papers just a few years ago.
I try hard not to care but subconsciously spelling errors and grammar issues scream low-quality work to me. It’s the kind of mistake that’s the easiest to correct, and they didn’t bother.
The phrase “missing comma” is missing an article. You need “a” or “the” before that. As a result when reading your comment, I subconsciously think of it as low quality.
But it’s okay. HN comments aren’t supposed to be high quality anyways. I know mine aren’t. But the official product documentation ought to be.
Between you, me, and the Deepseek team, so far as I'm aware, only one entity has caused the Western frontier model companies to panic by delivering an open model that competes far more cheaply, to the point where people are running versions of it at home.
So they spelled software wrong. So what? Outside of this being the mental equivalent of a too-scratchy-sweater for the kinds of people sensitive to that sort of thing, I don't see why it matters.
Those of us that have spent a lot of time programming with non native English speakers (the majority of software engineers on earth) have learned long ago that English ability has no correlation with engineering ability.
It may be a sign deepseek isn't "only for" Americans. Billions of non-native speakers communicate in "flawed" versions of English. Similar for other languages. Circling back to polish instructions for the picky among the Americans... hmm
If it tickles anyone's subconscious feelings, it would be their internal guiding myth of exceptionalism.
With their recent forays into authoritarianism, it's becoming ever harder to paper over the reality.
There’s no exceptionalism. I’m not even an American. I just happened to have a string of English teachers in high school that rejected grammar mistakes in student essays with the same vigor they rejected bad arguments, logical fallacies, and more. It’s a classical style education: the trivium comprises grammar, logic, and rhetoric, therefore that was how the teachers evaluated the student essays.
I despise American exceptionalism myself. This is entirely an issue about the quality of the language, not the nationality of the person behind it.
> Example: the second sentence on the first page says “softwares” but “software” is a mass noun that cannot be pluralized.
I constantly see and hear this mistake from actual humans too.
It's fairly ironic that your own comment contains run-on sentences, speculative claims and phrasing peculiarities like "could have even" instead of "could even have". Perhaps you are less sensitive to this than you think!
There is a difference between conversational speech and formal writing like documentation. It isn't rational to criticise the former when it is complaining about errors in the latter.
It's strange that you criticise "could have even" when it is a phrasing clearly being used for emphasis. "Could even have" makes no clearer sense in context.
This tells me a real developer wrote the docs, rather than someone with good English writing skills who is less technical.
> they could have even used their own LLM to edit their documentation to fix grammar issues
In my experience companies who do this rarely stop at using LLMs to fix grammar issues. It becomes full on LLM speak quite fast, especially if there isn’t a native English speaker in the room who can discern what’s good and bad writing.
"If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'."
I feel the same way. But look at the ollama vs llama.cpp post on HN from a few days back and you will see most of the enthusiasts in this space are very non-technical people.
a quick search suggests that's just for municipal elections. As I understand the football internet blackouts are national government policy not municipal?
That's quite the leap. Not that it's relevant, but I have no issue with a European boycott of or sanctions on Israel, though war crimes accusations are pretty toothless. Almost no leaders, past or present, charged with war crimes were ever arrested.
According to CPU bench, the Neo CPU is about the same speed as a mid range intel laptop CPU from 4 years ago.
Apple A18 Pro (Q1 2026): Multithread 11977, Single Thread 4043
Intel Core i5-1235U (Q1 2022): Multithread 12605, Single Thread 3084
--
On the high end we've got the i9-13900KS at about 60k; the M5 Max 18 scores about the same. But when you move on to server CPUs like Threadripper and EPYC, things are about 3x faster.
Let's see if the brand new Arm AGI changes this situation in a few months.
It's worth noting that the Neo is running at 3W compared to 15W for the i5. Just putting an $8 thermal pad on the Neo gives you a 20-30% perf improvement by letting it run at 5W continuous.
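Taking the scores and power figures quoted upthread at face value, the efficiency gap is the real story. A quick back-of-the-envelope check:

```python
# Multithread scores and power draws as quoted in this thread.
neo_score, neo_watts = 11977, 3    # Apple A18 Pro / "Neo" at 3W
i5_score, i5_watts = 12605, 15     # Intel Core i5-1235U at 15W

neo_per_watt = neo_score / neo_watts  # ~3992 points per watt
i5_per_watt = i5_score / i5_watts     # ~840 points per watt

# Roughly how many times more efficient the 3W part is.
print(round(neo_per_watt / i5_per_watt, 1))  # → 4.8
```

So "about the same speed" at one fifth the power, if those numbers and TDPs are accurate (sustained power draw under these benchmarks may differ from the nominal figures).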
And just like that, smoked Salmon became popular again :)
BTW, did you know municipalities can easily measure fluctuations in drug usage by testing the sewage water? In fact, sometimes they can see clear differences between different parts of the city.
> BTW, did you know municipalities can easily measure fluctuations in drug usage by testing the sewage water?
Yep. It's not just drugs that are monitored this way, but also the spread of infectious diseases. That can lead to some pretty weird findings. For example, polio virus is supposed to be extinct, but every so often it shows up in sewage monitoring of major German cities [1]. The most likely cause is people (tourists and immigrants) from Africa and Asia who got an attenuated-virus vaccination in their home country shortly before coming here.
Covid is, at least in Bavaria, also part of the regular monitoring schedule [2], Austria monitors for Covid, RSV and influenza [3].
Is data like that sold anywhere? I wonder if there’s an analytics market for profiling neighborhoods based on sewage water content now. If my browser history wasn’t already rock bottom, that’s a new low for the ad market
Fun fact: if you sign up for many online casinos or betting sites they will indeed use Google Streetview to lookup your house to estimate how much money they might extract from you.
I feel like looking up official county records, which show outstanding mortgage terms, purchase price, and permit applications, would be a better resource than an image from Google Street View. You should be able to figure out people's mortgage payments just from the info on homes.com.
Their strategy is more in-depth than that, and they’re more accurately looking for sharps. Somebody working minimum wage in a trailer betting for “their guy” isn’t a problem, even if they’re not going to make the book much money. Somebody working minimum wage in a trailer smurfing for a sharp can be a huge problem. You can read first hand info from professional bettors, books don’t like to reveal their risk management methodology for obvious reasons.
Street View and a visual model seem excessive when there are plenty of data brokers straight up selling your mortgage info and shopping habits (from CC purchases).
You would think so, but you have to remember that customer profitability is exponentially distributed. I.e., one addict gambling away their own and their loved ones' life savings is worth more than hundreds or thousands of regular players. Thus, focusing on acquiring and retaining such addicts makes perfect economic sense: so much so that individual sign-ups are analyzed down to Facebook stalking and Street View googling. Much as the addicts hunt for the big win that will make them rich, the casinos hunt for the whales that will fund the whole office for months.
Goes to show that not all security bugs are memory-related bugs.