Hacker News | aaronbwebber's comments

It's not just better performance on latency benchmarks, it likely improves throughput as well because the writes will be batched together.

Many applications do not require true durability, and many likely benefit from lazy fsync. Whether it should be the default is a lot more questionable, though.


It’s like using a non-cryptographically secure RNG: if you don’t know enough to check whether the fsync flag is off yourself, it’s unlikely you know enough to evaluate the impact of durability on your application.


> if you don’t know enough to check whether the fsync flag is off yourself,

Yeah, it should use safe defaults.

Then you can always go read the corners of the docs for the "go faster" mode.

Just like Postgres's infamous "non-durable settings" page... https://www.postgresql.org/docs/18/non-durability.html


You can batch writes while at the same time not acknowledging them to clients until they are flushed, it just takes more bookkeeping.
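One sketch of that bookkeeping, in Go (the types and names here are hypothetical, not any particular database's API): writers queue a record and block until the batch it joined has been flushed; one flush then acknowledges the whole batch at once.

```go
package main

import (
	"fmt"
	"sync"
)

// groupCommitter batches writes and acknowledges each one only after a
// single flush has covered the whole batch. flush stands in for an
// fsync-like call (e.g. file.Sync()); everything here is illustrative.
type groupCommitter struct {
	mu      sync.Mutex
	pending []chan struct{} // one ack channel per queued write
	flush   func(batch int) // durable flush of everything queued
}

// Write queues a record and returns a channel that is closed only once
// the record has been made durable.
func (g *groupCommitter) Write(record string) <-chan struct{} {
	done := make(chan struct{})
	g.mu.Lock()
	g.pending = append(g.pending, done)
	g.mu.Unlock()
	return done
}

// Commit flushes everything queued so far with one flush call, then
// acknowledges every waiter in the batch.
func (g *groupCommitter) Commit() {
	g.mu.Lock()
	batch := g.pending
	g.pending = nil
	g.mu.Unlock()
	g.flush(len(batch)) // one fsync-equivalent for the whole batch
	for _, done := range batch {
		close(done) // only now does the client see the ack
	}
}

func main() {
	flushes := 0
	g := &groupCommitter{flush: func(batch int) { flushes++ }}
	a := g.Write("tx1")
	b := g.Write("tx2")
	g.Commit()
	<-a // both writes are acknowledged...
	<-b
	fmt.Println("flushes:", flushes) // ...after a single flush
}
```

A real implementation would trigger Commit on a timer or a batch-size threshold, so latency stays bounded while throughput still benefits from the batching.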


I also think fsync before acking writes is a better default. That aside, if you were to choose async for batching writes, their default value surprises me: 2 minutes seems like an eternity. Would you not get very good batching for throughput even at something like 2 seconds? Still not safe, but safer.


For transactional durability, the writes will definitely be batched ("group commit"), because otherwise throughput would collapse.


> Many applications do not require true durability

Pretty much no application requires true durability.


Maybe what's confusing here is "true durability", but most people want to know that when data is committed, they can reason about the durability of that data using something like a basic MTBF formula; that is, your durability is "X computers of Y total have to fail at the same time, at which point N data loss occurs". They expect that as the number Y goes up, X goes up too.

When your system doesn't do things like fsync, you can't do that at all. X is 1. That is not what people expect.

Most people probably don't require X == Y, but they may have requirements that X > 1.
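To make the X-of-Y reasoning concrete, here is a rough sketch in Go (assuming independent node failures within a common window, which real deployments only approximate; the numbers are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// binomial returns C(n, k) as a float64.
func binomial(n, k int) float64 {
	res := 1.0
	for i := 0; i < k; i++ {
		res = res * float64(n-i) / float64(i+1)
	}
	return res
}

// lossProbability returns the chance that at least x of y nodes fail
// within the same window, each node failing independently with
// probability p during that window.
func lossProbability(y, x int, p float64) float64 {
	total := 0.0
	for k := x; k <= y; k++ {
		total += binomial(y, k) * math.Pow(p, float64(k)) * math.Pow(1-p, float64(y-k))
	}
	return total
}

func main() {
	p := 0.001 // per-window failure probability per node (made up)
	// Durable replication on 5 nodes: a majority (3) must fail at once.
	fmt.Printf("x=3 of y=5: %.2e\n", lossProbability(5, 3, p))
	// Without fsync, one crash can lose unflushed writes, so x = 1.
	fmt.Printf("x=1 of y=5: %.2e\n", lossProbability(5, 1, p))
}
```

With these (made-up) numbers the durable case is several orders of magnitude rarer, and crucially, adding nodes pushes X up along with Y; in the non-durable case X stays pinned at 1 no matter how large Y gets.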


For the vast majority of applications a rare event of data loss is no big deal and even expected.


I think you're still not getting my point. Yes, a rare event of data loss may not be a big deal. What is a big deal is being able to reason about how rare that event is. When you have durable raft you can reason by using straightforward MTBF calculations. When you don't, you can keep adding nodes but you can't use MTBF anymore because a single failure is actually sufficient to cause data loss.


It means that the action we should take in response to this article is "building more dorms with singles" rather than "we need to rethink the way that we are making accommodations for disabilities in educational contexts".

That seems like an important distinction, and makes the rest of the article (which focuses on educational accommodations) look mistaken.


I worked in residential life while in college and can tell you that placing freshmen in singles is a horrible idea. It leads to isolation and lets mental health issues fester. Some need it, but you do not want to place anyone who doesn't into a room alone, especially in their first year.


Yet here in the UK it's perfectly normal. When I went to uni in 2000, our halls had 15 rooms per floor per block, 2 of which were twins and 13 were singles.

The people in the twins were not happy - they hadn't asked for them.

I knew one person who dropped out in the first 3 months (for mental health reasons), and that was someone who shared a room.


Before you went to college, did you have a bedroom to yourself in your parents' home?


Ridiculous comparison. First, neither I nor anyone I know had a room where we could lock our parents out. Second, your parents actually care about you and if you spent 24+ hours in there without coming out they'd check on you (probably much sooner actually). No such luck in a dorm.


I can't say I agree, since I've seen many people struggle with being forced into close quarters with a complete stranger they might have nothing in common with or actively dislike, with nowhere truly private.

Maybe it's fine for many extroverts, but forcing an introvert into a room with others is a great way to drive many people absolutely mental.


I agree in that freshmen should get the "experience" at least once. However, the way Stanford has arranged housing has meant that a good number of students will not live in a single for any of their 4 years.


Lol, what a uniquely USA point of view.


Meh. I think you're overstating it. To meet your anecdata: I had both in my first college year, and single > double by a large margin.


I would not classify it as anecdata. This was research backed policy adopted by most US universities. Residential life and the Dean of Students office are usually doing a lot to cooperate with other universities. This part of US colleges is not competing with each other so they routinely share data, go to conferences together multiple times a year, and res. life directors move from college to college every few years so they all know each other incredibly well.

The point is that everyone who gets a single is super happy about it the same way that a drug addict is always happy when they get their drug of choice for free: of course it’s great. Of course it isn’t the best thing for you in the long run. I say this as someone who hated being in a double my first year and spent the next three in a single.

As far as I am concerned having apartments of 4-8 students where each has their own small room but shares a common space is ideal. But usually this is reserved for sophomore year and later.


It depends on the person. I lived alone in my last year of undergrad and it sent me into a deep depression. I figured out that living alone was too much isolation for me and moved back in with a roommate. That helped to pull me out of my depression and be able to finish my degree.


I don't think people advocating for more single rooms would say that no multi-occupancy rooms should exist for people who do want them.


True, but unfortunately the response from Stanford has been to introduce triple and quad rooms ;)

This is not entirely their fault. Stanford is subject to Santa Clara County building regulations, and those tend not to be friendly to large university developments (or any large developments for that matter).

I vaguely recall the recent Escondido Village Graduate Residences (EVGR) construction taking a while to get through the regulatory pipeline.

The true underlying issue here is just that there is not enough quality housing for the number of students Stanford admits.


Betteridge's law of headlines still undefeated


Bit of a layup for it in this case.


I was _extremely disappointed_ not to see this meme when I clicked on the link. Will not consider using this extension until Xzibit is prominently featured.


if it's not compiled in by default, then you aren't shipping the code! Somebody is downloading it and compiling it themselves!


Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of "staging" and "test" environments that are provided to internal users but used no differently than production in all but name.


Nobody does it like that, though; what the vendor declares unsupported is unsupported.


That... is the definition of shipping the code: the code is being shipped to the people downloading and compiling it for themselves.


If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.


BRB, filing CVEs against literally any project with example code in its documentation...


That's actually supported by the CVE program rules. Have at it if you find examples with security vulns.


I've actually seen CVEs like that before, I agree that's bonkers but I have seen it...


Given how frequently people copy and paste example code… why is that surprising? Folks need to be informed. CVEs are a channel for that.


Pssst: People who copy+paste example code aren't checking CVEs


Yes. It's no different from any optional feature. Actual beta features should only be shipped in beta software.


You and I have very different notions of "shipped". It's open source code, it's being made publicly available. That's shipped, as I see it.


This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.


This. CVE has become garbage because "security researchers" are incentivized to file anything and everything so they can put it on their resume.


Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?


Because it's not actually part of the distribution unless you compile it yourself.

It is not released in any sense of the word. It is not even a complete feature.

I am actually completely shocked this needs to be explained. Legitimate insanity.


It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.

By what definition is that not shipped?

> I am actually completely shocked this needs to be explained. Legitimate insanity.

Right back at you.


I've had an optional experimental feature marked with a CVE. It's not a big deal as it just lets folks know that they should upgrade if they are using that experimental feature in the affected versions.


>just flagged as experimental and not compiled by default

Are UML diagrams considered in scope too?


UML diagrams are not code. You cannot file a CVE for something that is not an actual (software or hardware) implementation.


> to be used en route to being stable

Where did you get this info? It might be that the feature is actively being worked on and the DoS is a known issue that would be fixed before merge. Lots of projects have a contrib folder for random scripts and other things that wouldn't get merged without some review, but users are free to run the scripts if they want to. Experimental compile-time build flags are experimental by definition.


You're all also missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS. Which has a different release model.

Being the same code it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.


"made no sense" from a narrow, CVE announcement perspective, but Maxim disagrees from another perspective:

    > [F5] decided to interfere with security policy nginx
    > uses for years, ignoring both the policy and developers’ position.
    >
    > That’s quite understandable: they own the project, and can do
    > anything with it, including doing marketing-motivated actions,
    > ignoring developers position and community.  Still, this
    > contradicts our agreement.  And, more importantly, I no longer able
    > to control which changes are made in nginx within F5, and no longer
    > see nginx as a free and open source project developed and
    > maintained for the public good.
I'm not sure what "contradicts our agreement" means but the simple interpretation is that he feels that F5 have become too dictatorial to the open source project.

The whole drama seems very short-sighted from F5's perspective. Maxim was working for you for free for years and you couldn't find some middle ground? I imagine there could have been some page on the free nginx project that listed CVEs that are in the enterprise product but that are not considered CVEs for the open source project given its stated policy of not creating CVEs for experimental features, or something like that.

To nuke the main developer, cause this rift in the community, and create a fork seems like a great microcosm of the general tendency of security leads to wield uncompromising power. I get it. Security is important. But security isn't everything and these little fiefdoms that security leads build up are bureaucratic and annoying.

I hope you understand that these uncompromising policies actually reduce security in the end, because 10X developers like Maxim will tend to avoid the security team and, in the worst case, hide stuff from it. I've seen this play out over and over in large corporations. In that sense, the F5 security team is no different.

But there should be a collaborative, two-way process between security and development. I'm sure security leads will say that they have that, but that's not what I find. Ultimately, if there's an escalation, executives will side with the security lead, so it is a de facto dictatorship even if security leads will tend to avoid the nuclear option. But when you take the nuclear option, as you did in this case, don't be surprised by the consequences.


OK - I need to make very clear that I'm speaking for myself and NOT F5, OK? OK.

Ask yourself why this matters? What is the big deal about having a CVE assigned? A CVE is just a unique identifier for a vulnerability so that everyone can refer to the same thing. It helps get word out to users who might be impacted, and we know there are sites using this feature in production - experimental or not. This wasn't dictating what could or could not go into the code - my understanding was the vuln wasn't even in his code, but from another contributor. So, honestly, how does issuing the CVEs impact his work, at all?

That's what I, personally, don't understand. At a functional level, this really has no impact on his work or him personally. This is just documentation of an existing issue and a fix which had to be made, and was being made, CVE or no CVE. And this is worth a fork?

What you're suggesting is that the best thing to do is to allow one developer to dictate what should or should not be disclosed to the user base, based on their personal feelings and not an analysis of the impact of that vulnerability on said user base? And if they're inflexible in their view and no compromise can be reached, then that's OK?

Sometimes there's just no good compromise to be reached and you end up with one person on one side, and a lot of other people on the other, and if that one person just refuses to budge then it is what it is. Rational people can agree to disagree. In my career there have been many times when I have disagreed with a decision, and I could either make peace with it or I could polish my resume. To me it seems a drastic step to take over something as frankly innocuous as assigning a CVE to an acknowledged vulnerability. Clearly he felt differently, and strongly, on the matter. Maybe he is just very strongly anti-CVE in general, or maybe he'd been feeling the itch to control his own destiny and this was just the spur it took to make the move.

His reasons are his own, and maybe he'll share more in time. I'm comfortable with my personal stance in the matter and the recommendations I made; they conform with my personal and professional morals and ethics. I'm sorry it came to this, but I would not change my recommendation in hindsight as I still feel we did the right thing.

Only time will tell what the results of that are. I think the world is big enough that it doesn't have to be a zero sum game.


Docs say it's compiled into the Linux binaries by default:

http://nginx.org/en/docs/quic.html

"Also, since 1.25.0, the QUIC and HTTP/3 support is available in Linux binary packages."


I guess a vulnerability doesn't count unless it's default, lol. Just don't make it default and you never have any responsibility, nor do those who use it or use a vendor version that has added it to their product.


>I guess a vulnerability doesn’t count unless it’s default lol.

It's still being tested. It's not complete. It's not released. It's not in the distribution. The number of people that have this feature in the binary AND enabled is smaller than the number of people that agree this should be a CVE.

CVEs are not for tracking bugs in unfinished features.


It IS in the code that anyone can compile to use or integrate into projects, as is the OSS way. Splitting hairs because it's not in the default binary is absurd. Guess all the extra FFmpeg compilation flags and such shouldn't count either.


You know that random thing you mucked around on Github X years ago then forgot about, and it's amongst 30 other random repos?

Should people file a CVE against that?


Great idea! Now to just find somewhere to put a billion dollars worth of solar PV in California, preferably somewhere where it doesn't ever get dark so the power will stay on at night.


I was on the I-15 recently and drove past those huge solar generating tower things. Seems like an enormous amount of nice sunny space out there close to HV lines.

As for night time, sure, spend half the $1.1 billion on solar panels and the other half on batteries.


Problem is it needs to be like 90% on batteries…



isn't molten salt quite a bit different?


How does this kind of shit get upvoted?

it really takes about 5 seconds of thinking to realize why this "analysis" is stupid: he takes an INCREDIBLY expansive view of what counts as "pollution" from food production and then takes the narrowest possible view of what counts as "pollution" from driving an ICE car.

It's frankly embarrassing that econlib would even host this kind of crap which would get a failing grade in an undergraduate econ class.


Yes it is, it's part of the standard go toolchain as described in the first blog post in the series: https://eng.uber.com/dynamic-data-race-detection-in-go-code/


At least some of these would be caught by running your tests with race detection on? I haven't read the whole article yet but as soon as I read the loop variable one I was pretty sure I have written code with that exact bug and had it caught by tests...

https://go.dev/doc/articles/race_detector

Edit: at the _end_ of the post, they mention that this is the second of two blog posts talking about this, and in the first post they explain that they caught these by deploying the default race detector and why they haven't been running it as part of CI (tl;dr it's slower and more resource-expensive and they had a large backlog).

https://eng.uber.com/dynamic-data-race-detection-in-go-code/
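For reference, the loop-variable bug alluded to above looks roughly like this (a generic sketch, not Uber's actual code). Before Go 1.22, deleting the shadowing line makes every goroutine capture the same `i`, which `go test -race` or `go run -race` flags as a concurrent read/write:

```go
package main

import (
	"fmt"
	"sync"
)

// squares computes n squares concurrently. The `i := i` shadow gives
// each goroutine its own copy of the loop variable; before Go 1.22,
// removing that line made all goroutines share one i, and the race
// detector reports the resulting data race.
func squares(n int) []int {
	var wg sync.WaitGroup
	results := make([]int, n)
	for i := 0; i < n; i++ {
		i := i // per-iteration copy (automatic since Go 1.22)
		wg.Add(1)
		go func() {
			defer wg.Done()
			results[i] = i * i // each goroutine writes a distinct index
		}()
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(squares(3)) // [0 1 4]
}
```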

