It's not just better performance on latency benchmarks; it likely improves throughput as well, because the writes will be batched together.
Many applications do not require true durability and it is likely that many applications benefit from lazy fsync. Whether it should be the default is a lot more questionable though.
It’s like using a non-cryptographically secure RNG: if you don’t know enough to look for the fsync flag off yourself, it’s unlikely you know enough to evaluate the impact of durability on your application.
I also think fsync before acking writes is a better default.
That aside, if you were to choose async for batching writes, their default value surprises me.
2 minutes seems like an eternity. Would you not get very good batching for throughput even at something like 2 seconds too?
Still not safe, but safer.
Maybe what's confusing here is "true durability", but most people want to know that when data is committed, they can reason about its durability using something like a basic MTBF formula - that is, your durability is "X computers of Y total have to fail at the same time, at which point N data loss occurs". They expect that as Y goes up, X goes up too.
When your system doesn't do things like fsync, you can't do that at all. X is 1. That is not what people expect.
Most people probably don't require X == Y, but they may have requirements that X > 1.
I think you're still not getting my point. Yes, a rare event of data loss may not be a big deal. What is a big deal is being able to reason about how rare that event is. When you have durable raft you can reason by using straightforward MTBF calculations. When you don't, you can keep adding nodes but you can't use MTBF anymore because a single failure is actually sufficient to cause data loss.
It means that the action we should take in response to this article is "building more dorms with singles" rather than "we need to rethink the way that we are making accommodations for disabilities in educational contexts".
That seems like an important distinction, and makes the rest of the article (which focuses on educational accommodations) look mistaken.
I worked in residential life while in college and can tell you that placing freshmen in singles is a horrible idea. It leads to isolation and lets mental health issues fester. Some need it, but you do not want to place anyone who doesn't into a room alone, especially in their first year.
Yet here in the UK it's perfectly normal. When I went to uni in 2000, our halls had 15 rooms per floor per block, 2 of which were twins and 13 were singles.
The people in the twins were not happy - they hadn't asked for them.
I knew one person who dropped out in the first 3 months (for mental health reasons), and that was someone who shared a room.
Ridiculous comparison. First, neither I nor anyone I know had a room where we could lock our parents out. Second, your parents actually care about you and if you spent 24+ hours in there without coming out they'd check on you (probably much sooner actually). No such luck in a dorm.
I can't say I agree, since I've seen many people struggle with being forced into close quarters with a complete stranger they might have nothing in common with, or actively dislike, with nowhere truly private.
Maybe it's fine for many extroverts, but forcing an introvert into a room with others is a great way to drive many people absolutely mental.
I agree in that freshmen should get the "experience" at least once. However, the way Stanford has arranged housing has meant that a good number of students will not live in a single for any of their 4 years.
I would not classify it as anecdata. This was research-backed policy adopted by most US universities. Residential life and the Dean of Students office are usually doing a lot to cooperate with other universities. This part of US colleges is not competing with each other, so they routinely share data, go to conferences together multiple times a year, and res. life directors move from college to college every few years, so they all know each other incredibly well.
The point is that everyone who gets a single is super happy about it the same way that a drug addict is always happy when they get their drug of choice for free: of course it’s great. Of course it isn’t the best thing for you in the long run. I say this as someone who hated being in a double my first year and spent the next three in a single.
As far as I am concerned having apartments of 4-8 students where each has their own small room but shares a common space is ideal. But usually this is reserved for sophomore year and later.
It depends on the person. I lived alone in my last year of undergrad and it sent me into a deep depression. I figured out that living alone was too much isolation for me and moved back in with a roommate. That helped to pull me out of my depression and be able to finish my degree.
True, but unfortunately the response from Stanford has been to introduce triple and quad rooms ;)
This is not entirely their fault. Stanford is subject to Santa Clara County building regulations, and those tend not to be friendly to large university developments (or any large developments for that matter).
I vaguely recall the recent Escondido Village Graduate Residences (EVGR) construction taking a while to get through the regulatory pipeline.
The true underlying issue here is just that there is not enough quality housing for the number of students Stanford admits.
I was _extremely disappointed_ not to see this meme when I clicked on the link. Will not consider using this extension until Xzibit is prominently featured.
Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of "staging" and "test" environments provided to internal users that are, in all but name, used no differently than production.
If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.
This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.
Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?
It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.
By what definition is that not shipped?
> I am actually completely shocked this needs to be explained. Legitimate insanity.
I've had an optional experimental feature marked with a CVE. It's not a big deal as it just lets folks know that they should upgrade if they are using that experimental feature in the affected versions.
Where did you get this info? It might be that the feature is actively being worked on and the DoS is a known issue that would be fixed before merge. Lots of projects have a contrib folder for random scripts and other things that wouldn't get merged without some review, but users are free to run those scripts if they want. Experimental compile-time build flags are experimental by definition.
You're all also missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS. Which has a different release model.
Being the same code it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.
"made no sense" from a narrow, CVE announcement perspective, but Maxim disagrees from another perspective:
> [F5] decided to interfere with security policy nginx
> uses for years, ignoring both the policy and developers’ position.
>
> That’s quite understandable: they own the project, and can do
> anything with it, including doing marketing-motivated actions,
> ignoring developers position and community. Still, this
> contradicts our agreement. And, more importantly, I no longer able
> to control which changes are made in nginx within F5, and no longer
> see nginx as a free and open source project developed and
> maintained for the public good.
I'm not sure what "contradicts our agreement" means but the simple interpretation is that he feels that F5 have become too dictatorial to the open source project.
The whole drama seems very short-sighted from F5's perspective. Maxim was working for you for free for years and you couldn't find some middle ground? I imagine there could have been some page on the free nginx project that listed CVEs that are in the enterprise product but that are not considered CVEs for the open source project given its stated policy of not creating CVEs for experimental features, or something like that.
To nuke the main developer, cause this rift in the community, and create a fork seems like a great microcosm of the general tendency of security leads to wield uncompromising power. I get it. Security is important. But security isn't everything and these little fiefdoms that security leads build up are bureaucratic and annoying.
I hope you understand that these uncompromising policies actually reduce security in the end, because 10X developers like Maxim will tend to avoid the security team and, in the worst case, hide stuff from it. I've seen this play out over and over in large corporations. In that sense, the F5 security team is no different.
But there should be a collaborative, two-way process between security and development. I'm sure security leads will say that they have that, but that's not what I find. Ultimately, if there's an escalation, executives will side with the security lead, so it is a de facto dictatorship even if security leads will tend to avoid the nuclear option. But when you take the nuclear option, as you did in this case, don't be surprised by the consequences.
OK - I need to make very clear that I'm speaking for myself and NOT F5, OK? OK.
Ask yourself why this matters? What is the big deal about having a CVE assigned? A CVE is just a unique identifier for a vulnerability so that everyone can refer to the same thing. It helps get word out to users who might be impacted, and we know there are sites using this feature in production - experimental or not. This wasn't dictating what could or could not go into the code - my understanding was the vuln wasn't even in his code, but from another contributor. So, honestly, how does issuing the CVEs impact his work, at all?
That's what I, personally, don't understand. At a functional level, this really has no impact on his work or him personally. This is just documentation of an existing issue and a fix which had to be made, and was being made, CVE or no CVE. And this is worth a fork?
What you're suggesting is the best thing to do is to allow one developer to dictate what should or should not be disclosed to the user base, based on their personal feelings and not an analysis of the impact of that vulnerability on said user base? And if they're inflexible in their view and no compromise can be reached then that's OK?
Sometimes there's just no good compromise to be reached and you end up with one person on one side, and a lot of other people on the other, and if that one person just refuses to budge then it is what it is. Rational people can agree to disagree. In my career there have been many times when I have disagreed with a decision, and I could either make peace with it or I could polish my resume. To me it seems a drastic step to take over something as frankly innocuous as assigning a CVE to an acknowledged vulnerability. Clearly he felt differently, and strongly, on the matter. Maybe he is just very strongly anti-CVE in general, or maybe he'd been feeling the itch to control his own destiny and this was just the spur it took to make the move.
His reasons are his own, and maybe he'll share more in time. I'm comfortable with my personal stance in the matter and the recommendations I made; they conform with my personal and professional morals and ethics. I'm sorry it came to this, but I would not change my recommendation in hindsight as I still feel we did the right thing.
Only time will tell what the results of that are. I think the world is big enough that it doesn't have to be a zero sum game.
I guess a vulnerability doesn’t count unless it’s default lol. Just don’t make it the default and you never have any responsibility, nor do those who use it or use a vendor version that has added it to their product.
>I guess a vulnerability doesn’t count unless it’s default lol.
It's still being tested. It's not complete. It's not released. It's not in the distribution. The amount of people that have this feature in the binary AND enabled is less than the amount of people that agree that this should be a CVE.
CVEs are not for tracking bugs in unfinished features.
It IS in the code that anyone can compile to use or integrate in projects as is the OSS way. Splitting hairs because it’s not in the default binary is absurd. Guess all the extra FFMPEG compilation flags and such shouldn’t count either.
Great idea! Now to just find somewhere to put a billion dollars worth of solar PV in California, preferably somewhere where it doesn't ever get dark so the power will stay on at night.
I was on the I-15 recently and drove past those huge solar generating tower things. Seems like an enormous amount of nice sunny space out there, close to HV lines.
As for night time, sure, spend half the $1.1 billion on solar panels and the other half on batteries.
it really takes about 5 seconds of thinking about this to realize why this "analysis" is stupid - he takes an INCREDIBLY expansive view of what counts as "pollution" from food production and then takes the narrowest possible view of what counts as "pollution" from driving an ICE car.
It's frankly embarrassing that econlib would even host this kind of crap which would get a failing grade in an undergraduate econ class.
At least some of these would be caught by running your tests with race detection on? I haven't read the whole article yet but as soon as I read the loop variable one I was pretty sure I have written code with that exact bug and had it caught by tests...
Edit: at the _end_ of the post, they mention that this is the second of two blog posts talking about this, and in the first post they explain that they caught these by deploying the default race detector and why they haven't been running it as part of CI (tl;dr it's slower and more resource-expensive and they had a large backlog).
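For anyone who hasn't hit it, the loop-variable bug in question (fixed by default in Go 1.22) is goroutines capturing the single per-loop variable rather than a per-iteration copy; `go test -race` reliably flags it. A minimal sketch, with illustrative names:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// launchAll starts one goroutine per item and records what each
// goroutine observed. The `item := item` shadow makes a per-iteration
// copy; pre-Go-1.22, deleting that line makes every goroutine read the
// same variable the loop is writing, which the race detector reports
// as a data race (and goroutines often all see the last element).
func launchAll(items []string) []string {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		seen []string
	)
	for _, item := range items {
		item := item // per-iteration copy; the fix for the classic capture bug
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			seen = append(seen, item)
			mu.Unlock()
		}()
	}
	wg.Wait()
	sort.Strings(seen) // goroutine completion order is nondeterministic
	return seen
}

func main() {
	fmt.Println(launchAll([]string{"a", "b", "c"})) // [a b c]
}
```

Running the buggy variant under `go test -race` or `go run -race` is exactly the kind of catch the comment describes, which is why it's painful that the detector wasn't in their CI.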