Hacker News | king_geedorah's comments

The kernel32 -> ntdll changes are the most interesting thing to me here. A lot of the rationale also applies to the Linux userspace APIs, especially errors returned directly at the kernel-userspace boundary vs. GetLastError/errno in kernel32/libc. Of course, on Linux the "problem" is that libc and the kernel API are intimately intertwined, and the former mandates using errno. I wonder how the pattern made its way into Windows as well; that environment has never seemed to me to promote using libc.


The errno/GetLastError() pattern is a remnant from a time before threads were a thing. You could have multiple processes, but they were largely scheduled cooperatively (rather than preemptively).

In that world, things like global variables are perfectly fine. But then we got preemptive scheduling and threads, and later actual multicore CPUs, so global variables became really dangerous. Thread-locals are the escape hatch that carried these patterns into the 21st century, for better or worse.


Indeed, and this change of philosophy shows up in the pthread (POSIX threads) API, which returns error values directly (as a positive error number, with 0 meaning success) instead of setting the errno variable.


In fact Rust was specifically discussed as a possible alternative to the C++ status quo in Jon's initial "A Programming Language For Games" talk, which roughly marks the inception of his current / upcoming language.


io_uring supports submitting openat requests, which sounds like what you want. Open the dirfd, extract all the names via readdir, and then submit openat SQEs all at once. Admittedly I have not used the io_uring API myself, so I can't speak to edge cases, but it's "on the happy path" as it were.

https://man7.org/linux/man-pages/man3/io_uring_prep_open.3.h...

https://man7.org/linux/man-pages/man2/readdir.2.html

Note that the prep-open man page is a section 3 (liburing) page. You could of course construct the SQEs yourself without the library.


You have a default limit of 1024 simultaneously open files per process - not sure what overhead exists in the kernel that made them impose this, but I guess it exists for a reason. You might run into trouble if you open too many files at once (either open() starts failing with EMFILE, or you hit some internal kernel bottleneck that makes the whole endeavor not so worthwhile).


That's mainly for historical reasons (the select syscall can only handle fds < 1024); modern programs can just set their soft limit to their hard limit and not worry about it anymore: https://0pointer.net/blog/file-descriptor-limits.html


As far as I'm aware so long as you limit yourself to APIs that were available in XP you don't actually need an older SDK to develop for it with modern MSVC. The early windows platform layer stuff in the handmade hero series demonstrates doing so without anything like Cygwin or MinGW.


Most new APIs introduced since Vista are COM based, and after Windows 8, WinRT based (basically COM with IInspectable, application identity, and .NET metadata instead of type libraries).

The plain old Win32 C API is basically frozen at a Windows XP view of the world, although there are a couple of new ...ExN()-suffixed functions for stuff like high-DPI support or various I/O improvements. The userspace drivers (UMDF) were initially COM based, but reverted to plain C structs with function pointers in version 2.0.


The only officially (at least partially) supported way from Microsoft is to add into Visual Studio the toolchain named "C++ Windows XP Support for VS 2017 (v141) tools". It is still there in the "individual components" of Visual Studio Installer for the latest VS but it is marked as [Deprecated]. It is a safe bet that MS will never fix any existing bugs in it or update it so at this point your best bet might be with the open source tools.

All other currently supported toolchains rely on runtimes that are explicitly not compatible with Win XP.


Rather interesting solution to the problem. You can't test every possibility, so you pick one and get to rule out a bunch of other ones in the same region provided you can determine some other quality of that (non) solution.

I watched a pretty neat video[0] on the topic of ruperts / noperts a few weeks ago, which is a rather fun coincidence ahead of this advancement.

[0] https://www.youtube.com/watch?v=QH4MviUE0_s


Not that coincidental: tom7 is mentioned in the article itself, and in his video's heartbreaking conclusion he mentions the work presented in the article. tom7 was working on proving the same thing!


And he tried to disprove the general conjecture, that every convex polyhedron has the Rupert property, by proving that the snub cube [1] doesn't have it. The snub cube is an Archimedean solid and a much more "natural" shape than the Noperthedron, which was specifically constructed for the proof. (It might even be the "simplest" convex polyhedron without the property?)

So if he proves that the snub cube doesn't have the Rupert property, he could still be the first to prove that not all Archimedean solids have it.

1: https://en.wikipedia.org/wiki/Snub_cube


Wouldn’t this problem be related to the problem of finding whether two shapes collide in 3d space? That would probably be one of the most studied problems in geometry as simulations and games must compute that as fast as possible for many shapes.


A test for this one is a bit simpler, I think, because you just have to find a 2D projection of the shape from multiple orientations so one fits inside the other. You don't technically have to do any 3D comparisons beyond the projections.

It's pretty easy to brute force most shapes to prove the property true. The challenge is proving that a shape does not have the Rupert property, or that it does when it's a very specific and tight fit. You can't test an infinite number of possibilities.


Wow this is such a well made video. So many great insights, just the right level of simplification and also funny as hell!


re-search :D


Truly a special gem of a channel.


The web is open and is famously very competitive. We have three whole browser engines, and only two of them are implemented by for-profit corporations whose valuations have 13 digits. I mean, other ones exist, but the average modern developer claims it's your fault when something doesn't work because you use Firefox or Safari, and also demands the browser rewrap all the capabilities the operating system already provides because they can't be assed to do the work of meeting users where they are.


In a world with over 3 billion people we have 'three whole browser engines'.

I don't want to be mean, but this isn't a great counterpoint.


I'm not sure what the number of people in the world has to do with whether an open standard does or doesn't promote innovation. The user asked for a case where an open standard didn't do that and I provided one. Whether you think it's a great counterpoint is entirely irrelevant to me.


But browser engines are built entirely on open standards!!!!!

This is the core proposition!

The benefit of open standards here, is to the consumers of these standards .. not the engines.

Open standards allow the consumers (websites / apps) to be able to benefit.


The presumption that started this thread is that open standards are always good for competition. I think browsers are a good counterexample: open standards led to three browser vendors, and we have less competition rather than more.


Without open standards, we would need to pick a browser and provide for it.

If we needed to support another browser we'd need to provide a new solution built to its specification.

Open standards have allowed the possibility of multiple browser vendors, without making the life of browser consumers (i.e. developers and organisations providing apps and sites) a living hell.

Without this, we'd be providing apps and sites for a proprietary system (e.g. Macromedia Flash back in ancient history).

Furthermore, when Flash had cornered a market, it had absolutely no competition at all. A complete monopoly on that segment of the market.

It took Steve Jobs and Apple to destroy it, but that's a different story.

--

The reasoning for only three engines, isn't the fault of open standards.

There are many elements of our economic system that prevent competition. Open standards is not one of them.


Browser engines are extremely difficult to start today because of the extensive, complicated, and ever growing list of specifications.

We had a web before open standards. It wasn't the best user experience and each browser was somewhat of a walled garden, but there was heavy competition in the space.


It was a literal hellscape before open standards.

I imagine there's most likely a subset of the population who believe that open standards are aligned conceptually to regulation, and that any form of regulation in a free market is wrong.

This subset of the population is misguided at best, and delusional at worst.

Open standards are essential.


Did that hellscape include more competition between companies building web browsers?


With Microsoft bundling IE with the OS, no.


Do you expect that browsers relying on closed standards would result in more competition under the same circumstances? You didn't demonstrate that.


My original demonstration wasn't actually the browser question. Auto manufacturers did show much higher levels of competition before standards and shared components.

Though it is worth noting that there was heavy competition in the browser space prior to the specs we have today. Part of the reason we ended up with a heavily spec-driven web is precisely because the high level of competition was leading to claims of corporate espionage, and it was expected that end user experience would be better with standards.

I absolutely agree the end user experience is better. I disagree that has anything to do with competition.


Without open standards, we would likely choose _one browser_, due to the economic cost of development.

One manufacturer would call all the shots for the _one browser_.

There would be zero competition until something calamitous happened to the manufacturer and the pendulum swung to a new monopolist.

We even have an example of how this plays out to fall back on; Macromedia Flash.


I believe they are referring to the icon that appears in the status bar when an application is using location services (including in the background).


I'm curious if anybody has used this for their own systems and if the savings were substantial. Fedora used something seemingly equivalent (deltarpms) by default in dnf until last year[1] and the rationale for removing it seemed to be based at least in part on the idea that the savings were not substantial enough.

[1] https://fedoraproject.org/wiki/Changes/Drop_Delta_RPMs


Well, like everything else: is bandwidth your primary problem, or is CPU? Whenever I run apt now, download time is nearly nil but installation time is forever. Increasing installation time (and complexity, and disk space, and cacheability on the mirrors) to save some download time is unlikely to be a good tradeoff. Of course, if you are stuck with severely metered 2G somewhere in the woods, you may very well think differently. :-)

Similarly, I've turned off pdiffs everywhere; it just isn't worth it.


It plays well with the Debian reproducible builds stuff to weed out as much non-essential variation as possible.

For certain packages, I'm guessing the byte-savings could be near-infinite. Already programs are encouraged to ship `foo` (potentially arch dependent) and `foo-data`, but imagine updating "just one font" in a blob of fonts, and not having to re-download _all_ other fonts in the package.

For some interpreted-language packages, these deltas would be nearly as efficient as `git` diffs: `M somefile.js`, `A newfile.js`, and just modify the build timestamps on the rest...

The answer to your question should be relatively straightforward: just run it on a base/default major version upgrade and see how many MB of files have the same `md5` between releases?


It could have really nice observability properties if the delta is transparent and you can see what is flowing by. In this regard, the space savings would be a nice side effect.


I've used this, I think it depends on what speed the connection to your Debian mirror is. It and the apt meta-data diffs definitely helped when I had slower Internet.

IIRC Google does something similar for Chrome browser updates.


Re: "What about my autocomplete?" which has shown up twice in this thread so far.

> As a small exception, trivial tab-completion doesn't need to be disclosed, so long as it is limited to single keywords or short phrases.

RTFA (RTFPR in this case)


If the ignition and door locks in your vehicle were mistakenly designed in such a way that they are trivially shimmed or could be operated by any key, it seems absurd to suggest the customer should pay you to replace these mechanisms with properly secured ones. This seems roughly analogous to that situation, at least to my understanding.


The story has a bad spin, yes. But it would be just as much of a controversy if they had required people to pay the cost themselves upon finding out the cars were shipped with defective brakes. It's a product defect, not wear and tear or user error; they should eat the costs, but the cybersecurity framing is being used to try to push the cost onto the consumer.


This is precisely the point I intended to make with my comment. Perhaps my phrasing was unclear.


I think the GP is just agreeing.

