Filligree's comments

Why would I need the compressed layers?

The OCI manifest references the hashes of these compressed layers, and re-compressing them does not guarantee obtaining the same hash.

Recompressing should be guaranteed deterministic. It’s the packing/unpacking of tar archives to/from directories on disk that leads to the non-determinism (such as timestamps and ownership metadata). If the tar is left intact, both zstd and gzip should produce byte for byte identical outputs given the same compression parameters.

That is not correct. You would have to use the same compression tool (and likely version) for this to match.
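
To make the "same tooling" point concrete, here's a minimal Go sketch (illustrative only): within one library and version the output is byte-for-byte identical, but a different gzip implementation or version is free to emit different (equally valid) bytes for the same input.

    package main

    import (
        "bytes"
        "compress/gzip"
        "crypto/sha256"
        "fmt"
    )

    // compress gzips data with fixed parameters using one specific
    // library (Go's compress/gzip) -- that's the "same tooling" caveat.
    func compress(data []byte) []byte {
        var buf bytes.Buffer
        zw, _ := gzip.NewWriterLevel(&buf, gzip.BestCompression)
        zw.Write(data)
        zw.Close()
        return buf.Bytes()
    }

    func main() {
        layer := []byte("pretend this is an uncompressed layer tar")
        a := sha256.Sum256(compress(layer))
        b := sha256.Sum256(compress(layer))
        fmt.Println(a == b) // true here; not guaranteed across tools or versions
    }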

Old docker discarded the compressed bits but kept some metadata about the tar so it could at least recreate it.

It also recreated the manifest on push.


Thanks for the correction. I did mean given the same tooling version/parameters, but (as you and others pointed out) preserving and recreating that state is not at all straightforward.

You are correct; I confused archiving with compression. However, even considering only the compression step, the same compression parameters cannot be guaranteed, as there is no way to know which parameters the image publisher used.

That's true. And regardless of compressed vs regular tar, I think the OCI format working with opaque archives is extremely limiting. I hope the industry will eventually redesign it to use content-addressable storage per file, with metadata to describe the layer/disk layout instead. That would allow per-file deduplication, and we could use tar just for bulk transfer over the wire, rather than for the data at rest.

containerd 2.3 has support for erofs, which does a direct import of the layer. It can even convert the tar-based layers to erofs faster than extracting the tar normally.

We're also looking at a block-based content store so that blocks can be deduped across images.


If that's the purpose, couldn't you store the hash and throw away the compressed image?

(As others said, compression is deterministic for the same algorithm, parameters and input data)


Zstd for example only promises determinism on the same version of the library. I've personally seen the hashes mutate between pull and export. Things like tar padding also make a difference. Really, the thing to do is to hash on the _uncompressed_ data and let compression be a transport/registry detail. That's what I've done, at least.
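
A sketch of that approach in Go (hypothetical; the layer.tar / layer.tar.gz file names are made up): digest the uncompressed stream while compressing it for the wire, so the layer's identity never depends on the compressor.

    package main

    import (
        "compress/gzip"
        "crypto/sha256"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        in, err := os.Open("layer.tar") // hypothetical uncompressed layer
        if err != nil {
            log.Fatal(err)
        }
        defer in.Close()

        out, err := os.Create("layer.tar.gz")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        zw := gzip.NewWriter(out)
        defer zw.Close()

        // TeeReader feeds the hash exactly the bytes the compressor reads,
        // so the digest covers the *uncompressed* content.
        h := sha256.New()
        if _, err := io.Copy(zw, io.TeeReader(in, h)); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", h.Sum(nil)) // stable no matter how you compress
    }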

I didn't know that about zstd, that's a bit unfortunate.

Tar isn't relevant here though; we're talking about compression, not archival formats.


Yes, compression being part of the OCI image's digest was (in hindsight) a poor decision. _Technically_ OCI images allow uncompressed layers, and the layers could be included without compression (and transport compression to be used); this would allow layers to be fully reproducible. We explored some options to do this (and made some preparations; https://github.com/containerd/containerd/pull/8166), but also discovered that various implementations of registry clients didn't handle transport-compression correctly (https://github.com/distribution/distribution/pull/3754), which could result in client either pulling the full, uncompressed, content, or image validation failing.
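
For reference, the two layer media types in play (the media type strings are the real ones from the OCI image spec; the descriptor struct below is a simplified stand-in for the spec's full schema):

    package main

    import "fmt"

    // Only the uncompressed media type makes the layer digest
    // independent of the compressor.
    const (
        mediaTypeLayerGzip = "application/vnd.oci.image.layer.v1.tar+gzip" // digest covers compressed bytes
        mediaTypeLayer     = "application/vnd.oci.image.layer.v1.tar"      // digest covers the tar itself
    )

    // descriptor mirrors the shape of an OCI manifest layer entry
    // (simplified; the real descriptor has more fields).
    type descriptor struct {
        MediaType string
        Digest    string
        Size      int64
    }

    func main() {
        d := descriptor{
            MediaType: mediaTypeLayer,          // uncompressed: reproducible digest
            Digest:    "sha256:deadbeef...",    // placeholder
            Size:      1 << 20,
        }
        fmt.Printf("%+v\n", d)
    }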

For my registry fork/custom pull client I hash on the uncompressed content and store as compressed under the uncompressed digest. This lets me have my cake and eat it, too - compression free digests, smaller storage costs, be able to set consistent compression settings, have the ability to spend extra CPU to recompress on the backend without breaking hashes, etc. I control both pull client and registry, so it works.
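
Roughly the shape of that scheme (a hypothetical sketch; the in-memory map and put helper are stand-ins for the registry backend): the key is the digest of the uncompressed bytes, so the backend can recompress however it likes without invalidating a single key.

    package main

    import (
        "bytes"
        "compress/gzip"
        "crypto/sha256"
        "fmt"
    )

    // store maps uncompressed-content digests to compressed blobs.
    // Swapping gzip for zstd, or changing levels, changes the values
    // but never the keys -- which is the whole point.
    var store = map[string][]byte{}

    func put(uncompressed []byte) string {
        key := fmt.Sprintf("sha256:%x", sha256.Sum256(uncompressed))
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf)
        zw.Write(uncompressed)
        zw.Close()
        store[key] = buf.Bytes()
        return key
    }

    func main() {
        key := put([]byte("layer contents"))
        fmt.Printf("%s -> %d compressed bytes\n", key, len(store[key]))
    }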

The whole reason is that compression is not deterministic across tooling.

Pushing

What about pushing? Computers are fast enough to compress stuff as it's being transmitted, you don't need to store the compressed copy anywhere...

To save disk space /s

I would prefer a phone that was robust enough to not need a cover, because covers add a great deal of size and weight.

In the absence of such phones, I compromise on adding a cover.


Such phones exist, for Android. Several companies make highly rugged phones. You can drop a Blackview BV7000 down a concrete staircase, watch it drop into the ocean at the bottom, have lunch, come back, and retrieve your phone from 40" of water, likely completely undamaged.

It's an extreme example, and way too bulky for most people, but the point is: "rugged cellphones" absolutely exist.


I’m aware. Unfortunately I’m an iPhone guy, and the software is more important than the hardware…

Also I don’t need fully ruggedized. Just enough that a cover would be superfluous.


The last one. It would make sense to have a sandbox system, but they don’t.

Presumably, if you use formal verification then that includes memory safety anyway? Would seem strange if it does not.

Formal verification requires a spec and a very large, very expensive amount of tooling to be developed.

My understanding is that both of these things are in the works, and that neither exists yet.


Yes, and AdaCore's tooling is formally verified and produces reports already familiar to aerospace, railway, and automotive auditors for verifying certifications, making it attractive to this high-integrity segment of the industry. Memory safety is taken care of mainly through the features Ada/SPARK 2014 offers for creating safe, high-integrity programs, correct.

Yeah, right now it's usually C, but if I had a choice I'd use Ada. I've never done a graphical interface with Ada, though I have with OpenGL SC using C.

I’m sure at some point there will be an accepted formal verification toolchain for Rust; I hope to never use it.


Yes, but isn’t that working as intended?

As we can see by this thread… it’s heavily debated whether the intentions we should be following are those of long-dead forebears or the will of the people, and if the latter, which people.

Mostly, that's non-compliant devices. Doesn't make it work any better, but I wouldn't assume Apple is doing it wrong here.

USB-C ports aren't allowed to provide power until after configuration, but a lot of USB-C chargers provide 5V regardless. This is wrong, but it does mean you can use a dumb C-to-micro cable which doesn't include the necessary electronics. (A pull-down resistor at least.)

And of course there's no way to tell by the looks of the cable.


Yeah, this is right. I bought a cheap wireless mouse with a USB-C port for charging. None of the USB-C chargers in my house would charge it, so after a while it inevitably went flat and I took it back to the shop, since it was faulty.

The guy in the shop plugged it in to a USB-A port via a cheap A-to-C cable, and the mouse immediately came to life. Of course. I felt like an idiot.

I didn't get a faulty unit. Whoever designed the mouse was treating the USB-C plug like a newer micro-USB port. The mouse just expected 5V over the port. They clearly didn't bother testing it with a proper USB-C charger.

I returned it anyway and got a mouse that wasn't broken.


Something I've also seen some shitty peripherals do is only hook up one side of the USB-C connector. To get it charging, you'd need to orient the cable right.

Absolutely baffling, but it only happened to me for brands where I should've figured.


It annoys me so much when new electronics do this, because the fix is both well known by now and only requires two dirt-cheap components on the circuit board (5.1k resistors to ground on the CC lines; a quick divider sketch follows below).

As a hardware engineer among other things, that was one of the first things I learned about interfacing with USB C. How do so many consumer devices keep getting this wrong in the year of our lord 2026?
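
For the curious, a back-of-the-envelope sketch of why those two resistors are the whole fix (the Rp values are quoted from memory; treat them as illustrative, not authoritative): the charger pulls each CC line up through Rp, the sink's 5.1k Rd forms a divider, and the resulting CC voltage is how the charger knows a sink is attached at all.

    package main

    import "fmt"

    func main() {
        const vPullup = 5.0 // volts; the source's CC pull-up rail
        const rd = 5100.0   // ohms; the sink-side pull-down from the comment above

        // Source-side Rp values advertise current capability.
        rp := []struct {
            label string
            ohms  float64
        }{
            {"default USB power", 56000},
            {"1.5 A capable", 22000},
            {"3.0 A capable", 10000},
        }

        for _, r := range rp {
            v := vPullup * rd / (rd + r.ohms)
            fmt.Printf("%-18s CC ≈ %.2f V\n", r.label, v)
        }
        // Without Rd (the broken gadgets in this thread), CC floats,
        // a compliant charger detects no sink, and VBUS stays off.
    }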


I had a bike light that charged over USB-C. I thought I was going nuts when I couldn’t charge it with any combination of cables and chargers I had. That is until I dug up the cable that came with it, a cheap looking yellow USB-A to USB-C cable. With that cable, I could charge it from anything.

Not necessarily: Apple only implemented the latest and greatest USB charging spec (AVS) in some of their devices. Their chargers speak the new protocols, so their devices and their chargers will work together, but a charger from a few years back can easily deliver 100W following the spec (PPS, other PD standards) yet be unable to deliver high-power charging on some Apple hardware.

Neither side is wrong per se, though it's quite annoying that Apple didn't implement PPS. Then again, if you're buying Apple, you should probably expect these kinds of shenanigans and be prepared to buy dedicated peripherals.


> This is wrong

I understand the technical reasons behind it, but in this case the actual expectation is to be able to use USB-C to charge other gadgets.


I think we should expect gadgets to not be outright broken in the first place.

That's what I'm trying to say about Apple's charging bricks

There’s nothing broken about the Apple brick.

If you had a device that wanted 12V input on a USB-C port without negotiation (these products exist, and are dangerous because they come with chargers that just output 12V without any negotiation at all…), whose fault is it? The vendor who chooses to ignore the clearly defined spec to save a few cents and risks damaging devices, or the vendor who follows spec and prevents damaging random devices?


Yes, and in case of 5V, the vendor isn't even saving "a few cents", but a tiny fraction of a cent. USB-C devices without pull-downs are only poorly pretending to be USB-C.

I can do one worse.

I have aquarium lights. They require 48VDC at 1A, which makes it quite a bright light; they’re nice, really…

But the connector is USB-A, and worse, marked as being USB. The power supply just provides 48V unconditionally.


They're spec compliant, with genuine USB-PD charging capability. Some devices are counterfeit, with fake USB logos and USB-C connectors but no compliance with the specs. I blame the counterfeit sellers and manufacturers.

Solar and wind cannot do that. We'll need oil and gas to tide us over for that decade or more.

Solar and wind are scaling much faster than gas and oil right now. After the recent Iran war I think it would be insane to rely on new oil or gas. Yeah let’s rely on this commodity whose supply and price are controlled by the dumbest egomaniacs on the planet.

>Yeah let’s rely on this commodity whose supply and price are controlled by the dumbest egomaniacs on the planet.

Don't talk about Americans that way!


We aren’t making a very good case for ourselves on the world stage are we…

Some gas, but we can reduce it by an order of magnitude. Either way nuclear is not coming online quickly.

Root access does not typically add anything interesting, for a desktop system. All the valuable stuff is already owned by the single user.

Run “nix flake update”. Commit the lockfile. Build a docker image from that; the software you need is almost certainly there, and there’s a handy docker helper.


Recently I’ve been noticing that Nix software has been falling behind. So “the software you need is almost certainly there” is less true these days. Recently = April 2026.


That's been an issue for years, going by my impression of the state of NixOS. There are other problems too, like a lot of open source packages doing straight binary downloads instead of actually building the software.

Are you referring to how the nixpkgs-unstable branch hasn't been updated in the past five days? Or do you have some specific software in mind? (not arguing, just curious)


It’s a variety of different software that just isn’t updated very often.

I don’t mind being somewhat behind, but it seems like there are a lot of packages that don’t get regular updates. It’s okay to have packages that aren’t updated, but those packages should be clearly distinguishable.


Oh great, adding another dependency, and one that just had a serious security problem.


as if other sandboxing software is perfect


Nothing is perfect. (FreeBSD jails come close but still no.)


I refuse. My code will be formatted according to my own preferences.


No one's stopping you from that, as long as your preferences coincide with go fmt ;-)
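
And "coinciding" is easier than it sounds, because gofmt rewrites every source file into exactly one canonical form. A trivial (purely illustrative) example, already in the layout gofmt produces:

    package main

    import "fmt"

    func main() {
        // gofmt indents with tabs and fixes brace placement, so
        // "my preferences" and "everyone's preferences" converge.
        fmt.Println("one true format")
    }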


If you use node, you can do that... until someone decides to add eslint to the pipeline and you get thousands of formatting "errors" that you have to "fix".


Imagine a world where your editor shows you what you want to see… but saves in a standard format for sharing.


That's what tabs accomplish!


In theory, they do. In practice, I have only seen one codebase — ONE — in all my years of programming that was using tabs and yet did not end up with spaces getting mixed in with those tabs at some point along the way. (In the indentation, I mean: obviously once the non-indentation part of the line starts, you want spaces there). And that codebase had precisely two people committing regularly to it. Occasional PRs from other contributors, but only two primary maintainers.

Every other tab-using codebase I've seen (of non-trivial size and complexity, that is), someone, somewhere, had been lazy, or had a misconfigured editor, or something, and spaces snuck into the tabs.

The worst offender I ever saw was a file that had been edited by multiple people over the years, who must have had different tab settings in their editors. There was one section where they had tried to line up a bunch of variable assignments and values. (Yes, I know, bad idea, but stick with me for a minute, I'm getting to the punchline). None of the pieces of code that were supposed to line up were actually lined up. (This was C# code, so indentation didn't truly matter like it would in F#, or Python, or ... well, I won't list all of them since I'm trying to get to the point).

Here's the really hilarious part. I tried all sorts of tab settings to see if I could get that file to line up. I tried 8. I tried 4. I tried 2. I even tried 3, the setting for the people who can't make their minds up between 4 and 2. Then I tried really oddball settings like 16, 5, or even 7. Nothing worked. There was no tab-size setting I could use that would make the code line up.

That was the day I said "Forget about tabs, just use spaces, you won't have that problem with spaces." Tabs have great promise, but in practice, in my experience at least, you end up having to tell your colleagues "hey, you need to set your tabs to 4" (or 8) "before editing this file". Which almost negates the promise of tabs. They're great in theory, but I've only seen ONE codebase that made them work in practice.


    I am thinking of {
        all the formatting.
    }

So long as my format is the standard one, that all newcomers and unopinionateds see by default, and thus my opinions rule forever... yeah! great idea! Otherwise... oh hayol no.

