
That was a lot of CVEs

Goes to show that not all security bugs are memory-related bugs


Not aimed at you but... no sh*t. The "Rewrite it in Rust" community has never heard of the second-system effect.

I'd rather use something written in a crappier language that has been battle-tested for decades, personally.


> I'd rather use something written in a crappier language that has been battle-tested for decades, personally.

I don't think this is a universal rule. Something can be old but still suck (see: openssl). On the flip side, though, I'd like to see literally any evidence that coreutils has a security problem before we go jumping off onto the shiny new replacement.


I see this accusation and characterization in basically every thread about Rust, but I really don't think it's true. On the contrary, I strongly believe it's less that these people didn't consider that, and more that they willfully chose to ignore it.

If you always keep praying to the same old bit of code to "reliably" chug along (which people clearly cannot actually ascertain, otherwise these reimplementations wouldn't be struggling), you're forever just rolling the dice that some Pandora's box will simply never open (which it absolutely does and keeps opening), while also giving up on modern capabilities. What you see as old reliable, I see as a buried lede. I'd imagine these folks see the same. [0]

It's frustrating to see the software world contend with the same pushback and counter-arguments the infra/ops world (my neck of the woods) already figured out and moved past long ago during the advent of IaC. Cattle > pets, easily, every time.

[0] It's also not a cost-benefit thing, but clearly a principled decision, so arguments that aim to contest the ROI of it all are off-base from the get-go. If ROI is the key thing for you, then all this philosophical nonsense shouldn't even be on the table. Calculate.


That is a bad take, because it implies "crappier languages will be used for MORE decades".

Rust is an absolute improvement over C/C++ in major ways. Once there, for ALL THOSE DECADES all the developers and all the code written will be spared the problems of "crappier languages".

In the short term there are adaptation issues? Fine. But those will be erased (way faster than is possible with C), and suddenly you never again have to worry about these things.


“battle tested for decades” just lost a lot of its value with Mythos and the like, unfortunately. Rewriting in a different language became much faster with coding agents at the same time.

I do agree that the second system effect is real, it’s just that the balance of benefits and drawbacks significantly shifted when it comes to “rewrite in Rust” (not limited to Rust though).


> “battle tested for decades” just lost a lot of its value with Mythos and the likes unfortunately

Isn't it a bit early to make predictions on the future of computer security and how we create good software based on something that's been out for 2 weeks?

Meanwhile the C version of coreutils has been in development for 36 years. There's no rush.


Yay we can create new CVEs faster!

I wish they'd put the severity. There are 4 highs, the rest are medium or low. Here are the high ones:

https://www.cve.org/CVERecord?id=CVE-2026-35338 - `chmod --preserve-root` can be bypassed. That doesn't seem that bad tbh.

https://www.cve.org/CVERecord?id=CVE-2026-35341 - `mkfifo` accidentally resets the permissions of files that already exist, so if you manage to do `sudo mkfifo /etc/shadow` then it becomes world readable.

https://www.cve.org/CVERecord?id=CVE-2026-35352 - TOCTOU in `mkfifo` lets you do the symlink trick to get it to change permissions on an unrelated file.

https://www.cve.org/CVERecord?id=CVE-2026-35368 - You might be able to get chroot to execute arbitrary code.
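For context, the standard mitigation for the TOCTOU symlink race is to act on the file descriptor you just created rather than re-resolving the path. Here's a minimal Python sketch (Unix-only, and purely illustrative; it is not the actual uutils fix, and uses a regular file as a stand-in for the fifo):

```python
import os
import tempfile

def create_with_mode(path, mode):
    # O_CREAT|O_EXCL fails if the path already exists (e.g. an
    # attacker-planted symlink), so we only ever act on a file we created.
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    try:
        # fchmod applies to the open descriptor, not the path, so it is
        # immune to the check-then-use race that path-based chmod has.
        os.fchmod(fd, mode)
    finally:
        os.close(fd)

d = tempfile.mkdtemp()
p = os.path.join(d, "fifo-stand-in")  # regular file standing in for the fifo
create_with_mode(p, 0o644)
print(oct(os.stat(p).st_mode & 0o777))  # 0o644
```

Even if an attacker swaps the path out for a symlink after creation, the fchmod lands on the already-open descriptor, not on whatever the path now points to.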

Tbh I doubt any of these would ever result in a real hack, unless your system is doing really mental things like running shell scripts with untrusted input.

I could only find a couple of CVEs that looked actually serious for GNU Coreutils too, though. IMO if you're using these tools with untrusted input, your system is janky enough that there are going to be serious flaws in it anyway. Probably through quoting mistakes.


I clicked a random one: https://www.cve.org/CVERecord?id=CVE-2026-35344

Quote from the CVE description: "The dd utility in uutils coreutils suppresses errors during file truncation [...] This can lead to silent data corruption in backup or migration scripts, as the utility may report a successful operation even when the destination file contains old or garbage data."

That's terrifying. There's more to bugs than security bugs. You'd expect coreutils to be as bug-free as possible.


Well the TOCTOU issues do not require you to run untrusted scripts to be exploited. Another user on your system can use a legitimate command that you may run to make changes to files they shouldn’t be able to, or further escalate privileges.

Fair point. Though tbh I still think user-isolation security on Linux is only really suited to the university/company threat model, where you generally trust users not to actually use exploits because they would get expelled/fired.

If you allow a completely untrusted user onto your system I think your chances of staying secure are low.


Then why rewrite coreutils in Rust? TOCTOU isn't exactly a new concept. Neither are https://owasp.org/Top10/2025/ (most of which a good web framework will prevent or mitigate), and switching to Rust (as far as I know) won't bring you a safer web framework like Django or Rails.

I don't know their motivations but mine would be:

1. Rust is a much more pleasant language to work with.

2. You can improve the tools, adding new features, fixing UX paper cuts etc.

You're probably thinking "you can improve the GNU versions!" and in theory sure. But in practice these sorts of tools are controlled by naysayers who want everything to stay as it was in the 80s. The sorts of people that only accept patches via git send-email to a mailing list.

Hahaha I just looked up GNU Coreutils and not only do they blame poor UX on the user ("Often these perceived bugs are simply due to wrong program usage.") but they even maintain a list of rejected feature requests:

https://www.gnu.org/software/coreutils/rejected_requests.htm...

And to nobody's surprise, to contribute it is git send-email to a mailing list.


Another maintainer and I follow issues and pull requests on a GitHub mirror. But email works fine for us and many other projects.

Regarding poor UX, it is difficult to dispute that claim without a specific example. Note that a lot of the features we support are standardized by POSIX. Even if we dislike the behavior, it is better to comply with the standards so the programs don't behave differently than users expect. The sentence you quote isn't meant to put down users. These programs are often much more complex than meets the eye, and there are lots of common gotchas that people have run into (and will continue to do so) [1].

Of course we would love for these programs to be useful for everyone. However, feature requests are often incompatible with existing behavior, incompatible with other feature requests, or have existing functionality elsewhere. For those reasons we cannot accept every feature request.

[1] https://www.pixelbeat.org/docs/coreutils-gotchas.html


Have you used busybox? The BSDs? I'm not sure adding more features to coreutils is a major help, and given rust-coreutils/uutils has:

1) more CVEs between the two latest Ubuntu releases than coreutils has had over the last 30+ years

2) managed to break security updates

3) is neither fully compatible with POSIX nor coreutils

I'm not sure why I'd ever use it? Sadly, projects like uutils have made me suspicious of rust projects, so unless I know that the project is well maintained (for which there are numerous examples, ripgrep being the obvious example, but newsboat, the various tools from proxmox, servo/firefox, and the pgrx ecosystem are ones I use regularly), it's a negative marker against that project.


Indeed, many bugs are API usage bugs, something that no language can verify. (The API is implemented in C anyway.)

No, but some languages make designing difficult-to-misuse APIs a lot easier than others.

It's Linux, it's a C API, so there's little hope.

I wonder if Redox has a much better API; at least I hope it does.


I am thinking of testing one of those AMD Ryzen AI laptops for development and local LLM. These come with win11 copilot+.

How well does 26.04 with the 7.0 kernel support these? Can it, say, use their GPU and NPU for compute out of the box?


Kindly keep us updated with your findings. Please also let me know where you publish it. Thanks


Seriously, why can't huge companies like OpenAI and Google produce documentation that is half this good??

https://api-docs.deepseek.com/guides/thinking_mode

No BS, just a concise description of exactly what I need to write my own agent.


I am very partial to Mistral's API docs https://docs.mistral.ai/api

Agreed, they also have great documentation. There's something to be said for documentation that is so concise, well laid out, and immediately actionable for those looking to get started quickly.

While I work for Google, I don't speak for Google. I might have some observations that might help explain poor documentation. If I had one word, I would say "politics". The less political the technology, the easier it is for the experts to speak freely. The more political the circumstances, the more difficult it is to speak freely, identify real experts, call out mistakes or errors that might make someone lose face, etc. The more political the technology, the more people might put on the line in terms of their career, so the more dangerous it may be to say anything.

It's because they're optimizing for a different problem.

Western models are optimized to be used as an interchangeable product. Chinese models are optimized to be built upon.


>Western Models are optimizing to be used as an interchangeable product.

But then why so much investment in their platforms, not just their APIs?


> Western Models are optimizing to be used as an interchangeable product

Why? It sounds like the stupidest idea ever. Interchangeability = no lock-in = no moat.


First you clone the API of the winner, because you want to siphon users from its install-base and offer de-risked switch over cost.

Now that you’re winning, others start cloning your API to siphon your users.

Now that you’re losing, you start cloning the current winner, who is probably a clone of your clone.

Highly competitive markets tend to normalize, because lock-in is a cost you can’t charge and remain competitive. The customer holds power here, not the supplier.

That's also why everyone is trying to build into the less competitive spaces, where they could potentially build a moat. Tooling, certs, specialized training data, etc.


Our (western) economic model forces competing individual companies to be profitable quickly. China can ignore DeepSeek losing money, because they know developing DeepSeek will help China. Not every institution needs to be profitable.

You mean like Intel, Tesla, SpaceX, or OpenAI?

Ah yes, the Western economic model forcing individual American companies like Amazon, YouTube and Uber to become profitable after... checks notes... _14 years_ for Uber, 9 years for Amazon, and many years for YouTube.

Yes, they want to win the same way they've won more or less every other economic competition in the last 30 years: scale out, drop prices, and asphyxiate the competition.

Yeah, it’s an interesting one. I think inertia and expectations at this point? I don’t think the big labs anticipated how low the model switching costs would be and how quickly their leads would be eroded (by each other and the upstarts)

They are developing their moats with the platform tooling around it right now though. Look at Anthropic with Routines and OpenAI with Agents. Drop that capability in to a business with loose controls and suddenly you have a very sticky product with high switching costs. Meanwhile if you stick with purely the ‘chat’ use cases, even Cowork and scheduled tasks, you maintain portability.


They are all racing to AGI. They aren't designing them to be interchangeable; they just happen to be.

No, they are not. If they were "racing to AGI" they would be working together. OpenAI would still be focused on being a non-profit. Anthropic wouldn't be blocking distillation on their models.

If by AGI you mean IPO, sure. I genuinely don't believe Dario nor Sam should be trusted at this point. Elon levels of overpromising and underdelivering.

If by AGI you mean IPO - I automatically read that in Fireship's voice. XD

If you want other people to know whether you're being genuine or sarcastic, you'll have to put a bit more effort into your comments. Your comment just adds noise.

What da?

For me, DeepSeek has been the best so far, in terms of coding skills, performance and documentation all together. Too bad this is flagged as 'concerning' when it comes to privacy, while on the other hand Gemini, ChatGPT and Claude are way beyond that, especially their mobile apps requiring a lot of permissions.

Meanwhile, they don't actually say which model you are running on Deepseek Chat website.

Because they produce revenue from products which abstract this away

You might enjoy Z.ai's API docs as well

Western orgs have been captured by Silicon Valley style patrimonialism, and aren’t based on merit anymore.

I spent only two minutes reading their documentation and it’s clear no one did any proofreading and it’s full of mistakes made by non-native speakers.

Example: the second sentence on the first page says “softwares” but “software” is a mass noun that cannot be pluralized.

Example: the third page about tokens has some zipped code to “calculate the token usage for your intput/output” and obviously “intput” should be “input” but misspelled.

As a company that produces LLMs, they could have even used their own LLM to edit their documentation to fix grammar issues, and yet they did not.

Maybe I’m just extra sensitive to grammar and spelling issues but this kind of lack of attention to detail is a huge subconscious turnoff. I had to fight my urge to close the tab.


Yeah, I think those details are the least of most people's concerns. I can't vouch one way or another for DeepSeek's documentation, but for me what matters most when reading documentation is being able to get the information I want efficiently, not whether someone spelled "software" as "softwares", which is a very common spelling in Asia, as an FYI.

I read OpenAI or Anthropic's documentation nowadays and it's just so full of useless junk and self-congratulation that it makes for a miserable experience. It's a real shame, because OpenAI used to write stellar documentation and publish really lucid papers just a few years ago.


No one cares about this kind of stuff. 99% of devs are not native English speakers; what do you expect? It works and we can all understand it.

I try hard not to care but subconsciously spelling errors and grammar issues scream low-quality work to me. It’s the kind of mistake that’s the easiest to correct, and they didn’t bother.

Missing comma in your first sentence was such an egregious grammar error that I was unable to finish reading the rest.

The phrase “missing comma” is missing an article. You need “a” or “the” before that. As a result, when reading your comment, I subconsciously think of it as low quality.

But it’s okay. HN comments aren’t supposed to be high quality anyways. I know mine aren’t. But the official product documentation ought to be.


Why ought it be?

Between you, me, and the Deepseek team, so far as I'm aware, only one entity has caused the Western frontier model companies to panic by delivering an open model that competes far more cheaply, to the point where people are running versions of it at home.

So they spelled software wrong. So what? Outside of this being the mental equivalent of a too-scratchy-sweater for the kinds of people sensitive to that sort of thing, I don't see why it matters.

Those of us who have spent a lot of time programming with non-native English speakers (the majority of software engineers on earth) learned long ago that English ability has no correlation with engineering ability.


It may be a sign deepseek isn't "only for" Americans. Billions of non-native speakers communicate in "flawed" versions of English. Similar for other languages. Circling back to polish instructions for the picky among the Americans... hmm

If it tickles anyone's subconscious feelings, it would be their internal guiding myth of exceptionalism. With their recent forays into authoritarianism, it's becoming ever harder to paper over the reality.


There’s no exceptionalism. I’m not even an American. I just happened to have a string of English teachers in high school that rejected grammar mistakes in student essays with the same vigor they rejected bad arguments, logical fallacies, and more. It’s a classical style education: the trivium comprises grammar, logic, and rhetoric, therefore that was how the teachers evaluated the student essays.

I despise American exceptionalism myself. This is entirely an issue about the quality of the language, not the nationality of the person behind it.


That seems like a you problem

The tool calling Python example would have benefitted from actually parsing the tool call. As is, it explains almost nothing.
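For what it's worth, parsing a tool call out of an OpenAI-compatible chat completion response (the shape DeepSeek's API also uses, as far as I know) is only a few lines. The response dict below is a hand-written stand-in rather than real API output, and `get_weather` is a made-up function name:

```python
import json

# Hand-written stand-in for an OpenAI-compatible chat completion response.
response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "id": "call_0",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": "{\"city\": \"Paris\"}",
                },
            }],
        },
    }],
}

def extract_tool_calls(resp):
    calls = []
    for tc in resp["choices"][0]["message"].get("tool_calls", []):
        fn = tc["function"]
        # "arguments" arrives as a JSON *string*, not a dict, and must
        # be parsed before dispatching to the actual function.
        calls.append((fn["name"], json.loads(fn["arguments"])))
    return calls

print(extract_tool_calls(response))  # [('get_weather', {'city': 'Paris'})]
```

The JSON-string-inside-JSON detail is exactly the kind of thing a doc example should show by actually parsing the call rather than just printing the raw response.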

> Example: the second sentence on the first page says “softwares” but “software” is a mass noun that cannot be pluralized.

I constantly see and hear this mistake from actual humans too.

It's fairly ironic that your own comment contains run-on sentences, speculative claims and phrasing peculiarities like "could have even" instead of "could even have". Perhaps you are less sensitive to this than you think!


There is a difference between conversational speech and formal speech like documentation. It isn't rational to criticise use of the former when complaining about errors in the latter.

It's strange that you criticise "could have even" when it is a phrasing clearly being used for emphasis. "Could even have" makes no clearer sense in context.

No irony detected.


i dont think deepseek will ever recover from this. huge loss for them. they will stop the pursuit of agi cause of one hn user and a comma.

i prefer it cuz it indicates they didnt use an LLM to write their documentation and that its human generated

Nobody cares, we're talking about quality documentation here, not a couple spelling mistakes

This tells me a real developer wrote the docs, instead of someone with good English writing skills but less technical ability.

> they could have even used their own LLM to edit their documentation to fix grammar issues

In my experience companies who do this rarely stop at using LLMs to fix grammar issues. It becomes full on LLM speak quite fast, especially if there isn’t a native English speaker in the room who can discern what’s good and bad writing.


pedantry

    "If one finds it difficult to set up OpenCode to use whatever providers they want, I won't call them 'dev'."

I feel the same way. But look at the ollama vs llama.cpp post on HN from a few days back and you will see most of the enthusiasts in this space are very non-technical people.

I think you mean ollama vs llama.cpp.

I do!

Damn autocorrect :)


I call it autocorrupt :)

I do like running Linux on the latest hardware right out of the box with zero configuration.

On the other hand, I don't like snap.


It's not ideal that you have to do that, but you can de-snapify Ubuntu. I get Firefox from a Mozilla-hosted repository and Thunderbird from Flatpak.

Then your ideal middle point is called Linux Mint

Yes, I can't get snap to work on unprivileged LXC on arm64.


The Spanish government no longer has to care about the consequences of their actions since they found a new voting bloc.

I'm not familiar with spanish politics, care to explain?

They just gave millions of foreigners the ability to vote for the government.

A quick search suggests that's just for municipal elections. As I understand it, the football internet blackouts are national government policy, not municipal?

Regularization is a national policy. Soon to be followed by a shortened pathway to citizenship.

Do they not want to use the internet when football is on?

This person is annoyed because Spain is speaking out against Israeli war crimes/genocide.

That's quite the leap. Not that it's relevant, but I have no issue with a European boycott or sanctions of Israel, though war crimes accusations are pretty toothless. Almost no leaders past or present charged with war crimes were ever arrested.

Related: "Deere settles US right-to-repair lawsuit with $99 million fund, repair commitments"

https://www.reuters.com/sustainability/boards-policy-regulat...


According to CPU benchmarks, the Neo CPU is about the same speed as a mid-range Intel laptop CPU from 4 years ago.

Apple A18 Pro (Q1 2026): Multithread 11977, Single Thread 4043

Intel Core i5-1235U (Q1 2022): Multithread 12605, Single Thread 3084

--

On the high end we got the i9-13900KS at about 60k; the M5 Max 18 scores about the same. But when you move on to server CPUs like Threadripper and EPYC, things are about 3x faster.

Let's see if the brand new Arm AGI changes this situation in a few months.


I look at those numbers and think that the A18 Pro is 25% faster, as single-thread performance is what matters for UX.

browserbench speedometer 3.0 on A18 pro - 33, Intel Core i5-1235U - 22

i9-13900KS gets about 33

M4 Pro - 44-50


You can't compare raw CPU speed by measuring different browsers on different OSes :)

Try this:

https://www.cpubenchmark.net/compare/6693vs7115vs7229vs7232/...


That is why I mentioned that the litmus test was to put the mobile processor in a real laptop, not the synthetic benchmarks.

The laptop is in the hands of customers and they are happy for the performance they get.


You can pick the same browsers, e.g. Brave or Chrome.

Also, let's not forget the impact of unified memory. Raw CPU benchmarks are only one side of a complex system.


It's worth noting that the Neo is running at 3W compared to 15W for the i5. Just putting an $8 thermal pad on the Neo gives you a 20-30% perf improvement by letting it run at 5W continuous.

I believe he does have a valid point.

You can throw money and hardware at a problem, but then someone may come along with a great idea and leapfrog you.

Just consider that all major AI providers now use DeepSeek's ideas for efficient training from that first paper.


And just like that, smoked salmon became popular again :)

BTW, did you know municipalities can easily measure fluctuations in drug usage by testing the sewage water? In fact, sometimes they can see clear differences between different parts of the city.


> BTW, did you know municipalities can easily measure fluctuations in drug usage by testing the sewage water?

Yep. Not just drugs are monitored this way, but also the spread of infectious diseases. That can lead to some pretty weird findings - for example, polio virus is supposed to be extinct, but every so often it shows up in sewage monitoring of major German cities [1]. The cause is most likely people (tourists and immigrants) from Africa and Asia who got an attenuated-virus-based vaccination in their home country shortly before they came here.

Covid is, at least in Bavaria, also part of the regular monitoring schedule [2], Austria monitors for Covid, RSV and influenza [3].

[1] https://www.aerzteblatt.de/news/erreger-der-kinderlaehmung-i...

[2] https://bay-voc.lgl.bayern.de/abwassermonitoring

[3] https://abwasser.ages.at/de/


Is data like that sold anywhere? I wonder if there’s an analytics market for profiling neighborhoods based on sewage water content now. If my browser history wasn’t already rock bottom, that’s a new low for the ad market

The European Wastewater Surveillance Dashboard:

https://wastewater-observatory.jrc.ec.europa.eu/#/content/th...

Also, Wastewater analysis and drugs — a European multi-city study:

https://www.euda.europa.eu/publications/pods/waste-water-ana...


Fun fact: if you sign up for many online casinos or betting sites they will indeed use Google Streetview to lookup your house to estimate how much money they might extract from you.

I feel like looking up official county records, which show outstanding mortgage terms, purchase price, and permit applications, would be a better resource than an image from Google Street View. You should be able to figure out people's mortgage payments just based on the info on homes.com.

that's wild, do you have a source? curious to know more

Their strategy is more in-depth than that; what they're really looking for is sharps. Somebody working minimum wage in a trailer betting on “their guy” isn't a problem, even if they're not going to make the book much money. Somebody working minimum wage in a trailer smurfing for a sharp can be a huge problem. You can read first-hand info from professional bettors; books don't like to reveal their risk-management methodology for obvious reasons.

I know that people who work for at least one non-profit use Google Street View to see how much money they should ask people for.

A friend working in the business told me. I don't think it's a strategy the casinos would publicly disclose.

Street View and a visual model seem excessive when there are plenty of data brokers straight up selling your mortgage info and shopping habits (from CC purchases).

Seems quite cumbersome to do this manually when you can get purchasing power assessments at street-level granularity from data brokers.

You would think so, but you have to remember that customer profitability is exponentially distributed. I.e., one addict gambling away their own and their loved ones' life savings is worth more than hundreds or thousands of regular players. Thus, focusing on acquiring and retaining such addicts makes perfect economic sense. So much so that individual sign-ups are analyzed down to Facebook stalking and Street View googling. Much in the same way the addicts hunt for the big win that will make them rich, the casinos hunt for the whales that will fund the whole office for months.
