I can totally understand why people would want a memory-safe decoder, but a memory-safe encoder is niche. Finding a memory-safety bug in a decoder is a matter of finding a single unchecked integer field somewhere; finding a memory-safety bug in an encoder requires first finding some sort of logic bug in the encoder and then crafting an adversarial input that survives a number of highly lossy transformations.
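To illustrate the "single unchecked integer field" point: the classic decoder bug is trusting a length field from the input. A minimal sketch (in Python for clarity — in C, omitting the marked check turns this into a buffer overrun rather than an exception; `decode_chunk` and the 4-byte big-endian framing are hypothetical):

```python
import struct

def decode_chunk(buf: bytes) -> bytes:
    # Read a 4-byte big-endian length prefix, then that many payload bytes.
    (length,) = struct.unpack_from(">I", buf, 0)
    # This is the single check a vulnerable decoder typically omits:
    if length > len(buf) - 4:
        raise ValueError("declared length exceeds buffer")
    return buf[4:4 + length]
```

An encoder has no equivalent attacker-controlled field: its input is already-validated internal state, which is why triggering a memory-safety bug there requires a logic bug first.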
Compare the number of CVEs filed against x264 (its bundled decoders don't count!) with those against FFmpeg's H.264 decoder.
> And as for Git web interfaces, the correct solution is to require logins to view complete history.
Why logins, exactly? Who would have such logins; developers only, or anyone who signs up? I'm not sure if this is an effective long-term mitigation, or simply a “wall of minimal height” like you point out that Anubis is.
Also, relevant for forges: AI doesn't understand what it's clicking on. Git forges tend to e.g. have a lot of links like “download a tarball at this revision” which are super-expensive as far as resources go, and AI crawlers will click on those because they click on every link that looks shiny. (And there are a lot of revisions in a project like VLC!) Much, much more often than humans do.
I haven't previously thought about this, but I think words over a commutative monoid are equivalent to vectors of non-negative integers, at which point you have vector addition systems, and I believe reachability for those is decidable, though still computationally incredibly hard: https://www.quantamagazine.org/an-easy-sounding-problem-yiel....
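A quick sketch of that equivalence (the Parikh vector: each commutative word is determined by its letter counts, and concatenation becomes componentwise addition), assuming a fixed finite alphabet; `parikh` is a hypothetical helper:

```python
from collections import Counter

def parikh(word: str, alphabet: str) -> tuple:
    # Map a word to its vector of letter counts over the given alphabet.
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)

# Commutatively, "ab" and "ba" are the same word: same vector.
# Concatenation of words corresponds to addition of vectors.
```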
The obvious question, I guess: How much faster are you than whatever is in the Linux kernel's FIB? (Although I assume they need RCU overhead and such. I have no idea what it all looks like internally.)
> But there's now multipath TCP handover? Weird behaviour to want different network interfaces on different networks to share the same IP, and pass it along like a volleyball?
Mobile IP actually wanted to do this, it just never took off (not least because both endpoints need to understand it to get route optimization). I think some Windows versions actually had partial Mobile IPv6 support.
Mostly, people who quote “requests per day” have far lower load than 1100 requests/sec, too… it's a typical red flag for a team that knows a lot less about performance than it thinks.
Frankly, that usually comes from orgs that deal with the real world. 50k generic requests per day is nothing. 50k orders per day for a small e-commerce company can be pretty overwhelming.
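Back-of-the-envelope, to put those two numbers side by side:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# 50k requests/day averages well under one request per second:
avg_rps = 50_000 / SECONDS_PER_DAY  # ~0.58 req/s

# whereas 1100 requests/sec sustained is ~95 million requests/day:
per_day = 1100 * SECONDS_PER_DAY  # 95,040,000
```

Roughly five orders of magnitude apart, which is why per-day figures say little about peak load.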
It’s becoming laughable when people use it to boast about microservices or something :)
> Porting all of that to support ipv6 can easily be a multi-year project.
FWIW, as someone who has done exactly this in a megacorp (sloshing through homebrew technical debt with 32-bit assumptions baked in), the initial wave to get the most important systems working was measured in person-months. The long tail was a slog, of course, but it's not an all-or-nothing proposition.