thomashabets2's comments | Hacker News

"Country restrictions apply". Which countries?

Each country has different regulations for amateur radio bands. In Germany, for example, maximum power in the bands above 2 GHz is capped at 75 W PEP [1]; the US has vastly different limits [2].

[1] https://www.gesetze-im-internet.de/afuv_2005/anlage_1.html

[2] https://www.ecfr.gov/current/title-47/chapter-I/subchapter-D...


I'm sorry, I thought it was very obvious that I was talking about ITAR export controls, not about destination country domestic regulation.

This is a clue from their webpage: "Not intended for radar applications. Core functionality needed for radar not included due to export control restrictions."


All I guess? If you're licensed you should know what you can and can't do.

No, amateur radio does not cover ITAR.

Which is why I ask. I'm not a lawyer, but there could be a general dual-use ban, with some other regulation that exempts e.g. the UK.


Any that forbid or restrict satellite comms?

Don’t use this in Iran.


I'm pretty sure the "country restrictions" are about ITAR, not the destination country regulation.

When the page says "uh… do not use this to build a phased array radar… even though you could. And if you do, then in no way were we involved. Just don't", this is extremely likely to be about ITAR.


> License: Amateur Radio (Technician+) to operate, country restrictions apply.

This implies it's about operating a radio transmitter.

Iran will absolutely frown on that right now, as they've frowned on Starlink. Their internet shutoff indicates "empowering the public to connect across the world" is not really what they want.


Leaving Iran and DPRK aside, the frequency range, power levels, and everything else depends of course on the operating country.

> Amateur Radio (Technician+) to operate

This is not even true. You can operate within the ISM band without a license, with ISM band limits. So this is what I mean; listing "country restrictions" (not "local restrictions") doesn't make any sense in this context. Everything is always subject to local laws. Obviously. And that's not even mentioning that a large reason for having this device is receive-only use, which definitely doesn't require a license in the US.

~5.8GHz is an ISM band world wide.

Yes, transmitting at "amateur radio power" or within amateur radio bands but outside ISM requires something.

But "country restrictions apply" doesn't make sense if it means that. That'd be like selling condoms and referring to the Vatican banning them (I don't know if they do), or some countries banning gay sex, so country restrictions apply if you use it for gay sex.


To me, the importance of crewed spaceflight like this cannot be overstated. I think my way of thinking was best phrased by Eddie Izzard: "When we landed on the moon, that was the point where god should have come up and said hello. Because if you invent some creatures, put them on the blue one and they make it to the grey one, you fucking turn up and say 'well done'".

Now, it's not the reason I'm an atheist, but "getting from the blue one to the grey one" (and hearing nothing) is so big that to me it disproves at the very least the existence of a personal god.

You may think it ridiculous, but I'm trying to convey why some people would think that it does change their life.

Most world events don't change 99.99% of people's lives, and yet they matter too. The only big world event, maybe in my entire life, that affected my life was covid. Because I lived in a lockdown country.


It's not for the reason that the parent commenter said, and it's not the moon (yet), but you can't take photos like this with probes alone: https://www.theguardian.com/artanddesign/gallery/2026/apr/05...

I've chatted a bit with the author, but not actually tried the language. It looks very interesting, and a clear improvement. I'm not particularly quiet about not liking Go[1].

I do think there may be a limit to how far it can be improved, though. Like typed nil means that a variable of an interface type (say coming from pure Go code) should enter Lisette as Option<Option<http.Handler>>. Sure, one can match on Some(Some(h)) to not require two unwrapping steps, but it becomes a bit awkward anyway. (note: this double-Option is not a thing in Lisette at least as of now)

Lisette also doesn't remove the need to call defer (as opposed to RAII) in the very awkward way Go does. E.g. de facto requiring that you double-close on any file opened for write.

Typescript helps write javascript, but that's because until WASM there was no other language option to actually run in the browser. So even typescript would be a harder sell now that WASM can do it. Basically, why try to make Go more like Rust when Rust is right there? And fair enough, the author may be aiming for somewhere in between. And then there's the issue of existing codebases; not everything is greenfield.

So this seems best suited for existing Go codebases, or when one (for some reason) wants to use the Go runtime (which sure, it's at least nicer than the Java runtime), but with a better language. And it does look like a better language.

So I guess what's not obvious to me (and I mentioned this to the author) is what's the quick start guide to having the next file be in Lisette and not Go. I don't think this is a flaw, but just a matter of filling in some blanks.

[1] https://blog.habets.se/2025/07/Go-is-still-not-good.html


> Basically, why try to make Go more like Rust when Rust is right there?

Go gives you access to a compute- and memory-efficient concurrent GC that has few or no equivalents elsewhere. It's a great platform for problem domains where GC is truly essential (fiddling with spaghetti-like reference graphs), even though you're giving up the enormous C-FFI ecosystem (unless you use Cgo, which is not really Go in a sense) due to the incompatibilities introduced by Go's weird user-mode stackful fibers approach.


> Basically, why try to make Go more like Rust when Rust is right there?

The avg developer moves a lot faster in a GC language. I recently tried making a chatbot in both Rust and Python, and even with some experience in Rust I was much faster in Python.

Go is also great for making quick lil CLI things like this https://github.com/sa-/wordle-tui


No doubt a chatbot would be built faster if using a less strict language. It wasn't until I started working on larger Python codebases (written by good programmers) that I went "oh no, now I see how this is not an appropriate language".

Similar to how even smaller problems are better suited for just writing a bash script.

When you can have the whole program basically in your head, you don't need the guardrails that prevent problems. Similar to how it's easy to keep track of object ownership with pointers in a small and simple C program. There's no fixed size after which you can no longer say "there are no dangling pointers in this C program". (but it's probably smaller than the size where Python becomes a problem)

My experience writing TUI in Go and Rust has been much better in Rust. Though to be fair, the Go TUI libraries may have improved a lot by now, since my Go TUI experience is older than me playing with Rust's ratatui.


I've also found that traversing a third-party codebase in Python is extremely frustrating and requires lots of manual work (with PyCharm) whereas with Rust, it's just 'Go to definition/implementation' every time from the IDE (RustRover). The strong typing is a huge plus when trying to understand code you didn't write (and I'm not talking LLM-generated).

sounds like an IDE-noob theme song

> moves a lot faster in a GC language

Only in the old "move fast and break things" sense. RAII augmented with modern borrow checking is not really any syntactically heavier than GC, and the underlying semantics of memory allocations and lifecycles is something that you need to be aware of for good design. There are some exceptions (problems that must be modeled with general reference graphs, where the "lifecycle" becomes indeterminate and GC is thus essential) but they'll be quite clear anyway.


> Only in the old "move fast and break things" sense

No, definitely not only in that sense. GC is a boon to productivity no matter how you slice it, for projects of all sizes.

I think the idea that this is not the case perhaps stems from the fact that Rust specifically has a better type system than Java specifically, so that becomes the default comparison. But not every GC language is Java. They don't all have lax type systems where you have to tiptoe around nulls. Many are quite strict and are definitely not "move fast and break things" types of languages.


Rust does have GC in external crates, one was used for implementing Lua in Rust.

A Lua interpreter written in Rust+GC makes a lot of sense.

A simplified Rust-like language written in, and compiling to, Rust+GC makes a lot of sense too.

A simplified language written in Rust and compiling to Go is a no-go.


Well if you think Java doesn't have a sufficiently good type system, then surely Go is even further from one?

Not saying those are the only two GC languages, just circling back to the post spawning these comments.


From your blog entry:

> Go was not satisfied with one billion dollar mistake, so they decided to have two flavors of NULL

Thanks for raising this kind of things in such a comprehensible way.

Now what I don't understand is that TypeScript, even if it was something to make JavaScript more bearable, didn't fix this! TS is even worse in this regard. And yet no one seems to care in the NodeJS ecosystem.

<selfPromotion>That's why I created my own Option type package in NPM in case it's useful for anyone: https://www.npmjs.com/package/fp-sdk </selfPromotion>


TypeScript tried to accurately model (and expose to language services) the actual behavior of JS with regards to null/undefined. In its early days, TypeScript got a lot of reflexive grief for attempting to make JS not JS. Had the TS team attempted to pave over null/undefined rather than modeling it with the best fidelity they could at the time, I think these criticisms would have been more on the mark.

ReasonML / Melange / ReScript are a holistic approach to this: the issue with stapling an option or result type onto TypeScript is that your colleagues and LLMs won't use it (ask me how I know).

how do you know?

Your readme would really benefit from code snippets illustrating the library. The context it currently contains is valuable but it’s more what I’d expect at the bottom of the readme as something more like historical context for why you wrote it.

Yup, in my TODO list (I've only recently published this package). For now you can just check the tests, or a SO answer I wrote a while ago (before I published the idea as an npm package): https://stackoverflow.com/a/78937127/544947

I still struggle to see the advantage of Option<T> in a language with null-safe accessors and unions

    function fnO(val?: number) {
        if (val == null) {
            return "NAH";
        } else {
            return (val * val).toString();
        }
    }

    test("testing Options", () => {
        let foo = undefined;
        let bar = 2;
        expect(fnO(foo)).toBe("NAH");
        expect(fnO(bar)).toBe("4");
    });
Your readme states

> FP's languages approach of rather not having null at all

But `None` is just another null / undefined, which brings along a bunch of non-idiomatic code around handling it.


You can enable null safety in TypeScript, seems like a pretty good fix to me.

It's mediocre at best. Like in maths: how would I feel if addition would sometimes actually be division? That's how bad it is.

Well, isn't division just subtractive addition?

Where did we lose you? we're talking about two flavours of null, not one.

How would TS fix null in JS without violating its core principles of adhering to EcmaScript standards and being a superset of JS?

Maybe spit out warnings when undefined is used? In the same way it does when you use TypeScript in a type-loose way.

But yeah it's a fair point. Sometimes I think I should just write my own lang (a subset of typescript), in the same fashion that Lisette dev has done.


You can already do this with strict type checking enabled and the NonNullable type.

You can't enforce it in any normal codebase because null is used extensively in the third party libraries you'll have to use for most projects.


"A typed nil pointer is not a nil pointer."

Golang does have a lot of weird flaws/gotchas, but as a language target for a compiler (transpiler) it's actually pretty great!

Syntax is simple and small without too many weird/confusing features, it's cross platform, has a great runtime and GC out of the box, "errors as values" so you can build whatever kind of error mechanism you want on top, green threading, speedy AOT compiler. Footguns that apply when writing Go don't apply so much when just using it as a compile target.

I've been writing a tiny toy functional language targeting Go and it's been really fun.

Go's defer is generally good, but it interacts weirdly with error handling (a huge wart in Go's language design) and has weird scoping rules (function-scoped instead of block-scoped).


Rust's async story is much less ergonomic than go's -- mostly because of lack of garbage collection. That might be a good reason by itself?

Does Go actually have an async story? I know that question risks starting a semantic debate, so let me be more specific.

Go allows creating lightweight threads to the point where it's a good pattern to just spin off goroutines left and right to your heart's content. That's more of a concurrency primitive than async. Sure, you combine it with a channel, and you've created an async future.

The explicit passing of contexts is interesting. I initially thought it would be awkward, but it works well in practice. Except of course when you need to call a blocking API that doesn't take context.

And in environments where you can run a multitasking runtime, that's pretty cool. Rust's async is more ambitious, but has its drawbacks.

Go's concurrency story (I wouldn't call it an async story) is way more yolo, as is the rest of the Go language. And in my experience that Go yolo tends to blow up in more hilarious ways once the system is complex enough.


For one, I am glad I don't have to color my functions like your typical async.

I agree that this is the big problem with Rust's async story.

But like I said, in my opinion this compares with Go not having an async story at all.


Other languages have ill considered shortcomings. Rust has ambitious shortcomings.

Go's async story is great, as there is no function coloring at all. That being said, I don't like Go's syntax very much. The runtime is great though.

To be fair, Go’s async story only works because there’s a prologue compiled into every single function that says “before I execute this function, should another goroutine run instead?” and you pay that cost on every function call. (Granted, that prologue is also used for other features like GC checks and stack size guards, but the point still stands.) Languages that aspire to having zero-cost abstractions can’t make that kind of decision, and so you get function coloring.

I'm not sure this is 100% correct. I haven't researched it, but why would they perform such a check at runtime if it 1) is material and 2) can be done at compile time? However, even if it is, Go is only trying to be medium fast/efficient, in the same realm as its garbage-collected peers (Java and C#).

If you want to look at Rust peer languages though, I do think the direction the Zig team is heading with 0.16 looks like a good direction to me.


> why would they perform such a check at runtime if it is 1)material and 2) can be done at compile time

It can’t be done at compile time because it’s a scheduler. Goroutines are scheduled in userland, they map M:N to “real” threads, so something has to be able to say “this thread needs to switch to a different goroutine”.

There’s two ways of doing this:

- Signal-based preemption: Set an alarm (which requires a syscall) that will interrupt the thread after a timeout, transferring control to the goroutine scheduler

- Insert a check to see if a re-schedule needs to happen, in certain choice parts of the compiled code (ie. At function call entry points.)

Golang used to only do the second one (and you can go back to that behavior with GODEBUG=asyncpreemptoff=1); it's why there was a well-known issue that if you entered an infinite loop in a goroutine and never called any functions, other goroutines would be starved. They fixed that by implementing signal-based preemption as above too, but it's done on top of the second approach.

Granted, the prologue needs to happen anyway, because go needs to check if the stack needs to grow, on every function call. So there’s basically a “hook” installed into this prologue that is a single branch, saying “if the scheduler needs to switch, jump there now”, and it basically works sort of like an atomic bool the scheduler writes to when it needs to re-schedule a goroutine… Setting it to true causes that function to jump to the scheduler.

Go has done a lot of work to make all of this fast, and you’re right that it only aspires to be a “medium-fast” language, and things like mandatory GC make these sort of prologues round to zero in the scheme of things. But it’s something other languages are fully within their rights to avoid, is my point (and it sounds like you agree.)


It sounds like you know about this / have researched it. Are you saying that any Go function, even `func add(x, y int) int { return x + y }`, is going to have such overhead in all situations? Why wouldn't Go just inline this, for instance, when it can? It seems like such an obvious optimization.

If go chooses to inline a function in general, then it doesn’t need to add the prologue to the inlined code, no. The prologue applies to all functions that remain after the inlining is done.

There’s also functions that can be marked as “nosplit” that skip the prologue as well.

But otherwise, it has to be in every function because you might be 1 byte away from the top of go’s (small) stack size, then you call that simple add function, and if the prologue isn’t run the stack will overflow. Go has tiny stacks by default that grow if they need to, with this prologue functioning as the “do I need to split/grow the stack?” check, so it needs to be every function that does it. The scheduler hook is just a single branch that’s part of the prologue, so it’s not that much more expensive if you’re doing the prologue anyway.


Before TypeScript we had Haxe, and it's still a "better language". But I guess marketing won, and worse is better. Shrug.

Every couple of months someone re-discovers SSH certificates, and blogs about them.

I'm guilty of it too. My blog post from 15 years ago is nowhere near as good as OP's post, but if the me of 15 years ago lived up to my standards of today, I'd be really disappointed: https://blog.habets.se/2011/07/OpenSSH-certificates.html


I think the scary reality is most people conflate "keys" and "certificates". I have worked with security engineers that I need to remind that we do not use SSH certs, but rather key auth, and they have to think it through to make it click.

I'm consistently amazed how many developers and security professionals don't have a clear understanding of how public/private key auth even works conceptually.

Things like deploying dev keys to various production environments, instead of generating/registering them within said environment.

One of the worst recent security examples: you can't get this data over HTTPS from $OtherAgency, it's "not secure"... then their suggestion is a "secure" read-only account on the other agency's SQL server (which uses the same TLS 1.3 as HTTPS). This is from the person in charge of digital security for a government org.


Or when the security team at some other company emails you their private key.

LOL, yeah.. had that happen quite a few times... Also, re-using the ssh server key for the client connecting to the sftp server.

> Things like deploying dev keys to various production environments, instead of generating/registering them within said environment.

I can see this happening when a developer is authorized to generate, but not to register. So, they just reuse an already-registered one.


In the example, it wasn't even that complex... I have used patterns to register allowed signer keys based on environment variables that an application runs under, initializing at startup... so "register" just meant assigning the correct values for 2-4 environment variables per public signer allowed... and removing the dev signer. (JWT based auth)

One key technological cause is that PKCS#12 standardizes a format (you've most likely seen it as .PFX files) in which a certificate and its associated private key are bundled. This is in an effort to simplify the software...

So you get a situation where the lay person is given a "certificate", but it's not really just the certificate; it's a PFX file. So e.g. no, they mustn't show it to you: it has their private key inside it, so you will learn that key, and if you're honest you've just ruined their day, because they need to start over...

I would say in my career I've had at least two occasions where I did that and felt awful for the person, because I had set out to help them but now things were worse. And I've had a good number of later occasions where I spent a lot more of their time and mine, because I knew I needed to be very sure whether their "certificate" was actually a certificate (which they can show me, e.g. Teams message me the file) or a PFX file (thus containing their private key), and I had to caution them to show nobody the file while still trying to assist them.


Another useful feature of SSH certificates is that you can sign a user’s public key to grant them access to a remote machine for a limited time and as a specific remote user.

The capacity to grant access as a specific remote user is present without certs as well right? The typical authorized_keys file lives under a user directory and grants access only to that user.

The main advantage of certificates is that you are able to do that from the CA without touching the target machine.

Certs may still be the right approach, but OpenSSH also supports an AuthorizedKeysCommand which could be a secure HTTPS request to a central server to pull down a dynamically generated authorized_keys file content for the particular user and host.

If your endpoints can securely and reliably reach a central server, this gives you maximum control (your authorized_keys HTTPS server can have any custom business logic you want) without having to deal with certs/CAs.


Exactly. This is really useful in larger organizations where you may want more complex rules on access. For example, you can easily build "break glass" or 2nd party approved access on demand. You can put whatever logic you need in a CA front-end.

You can also make all the certs short-lived (and only store them in ram).


The way I've been doing that is with Shamir Secret Sharing and encrypting keys until glass-breaking is necessary.

generating tons of keys? or just broad keys?

What I've done is generate a cert for the host(s) the user needs, for the time-span they need (subject to authorization logic).


And when your or someone else's infra is down to such a degree that you need SSH access, you do not want to depend on being able to touch that machine first. The same is true with custom AuthorizedKeysCommands that phone home.

I've known SSH certs for a while but never went through the effort of migrating away from keys. I'm very frustrated about manually managing my SSH keys across my different servers and devices though.

I assume you gathered a lot of thoughts over these 15 years.

Should I invest in making the switch?


A big problem I have with SSH certs is that they are not universally supported. For me, there is always some device or daemon (for example tinyssh in the initramfs of my gaming PC, so that I can unlock it remotely) that only works with "plain old SSH keys". And if I have to distribute and sync my keys onto a few hosts anyway, it takes away the benefits.

Adding to this: while certs are indeed well-supported by OpenSSH, it's not always the SSH daemon used on alternate or embedded platforms.

For example, OpenWRT uses Dropbear [1] instead, which does not support certs. Also, Java programs that implement SSH stuff, like Jenkins, may do so using Apache Mina [2], which, though the underlying library supports certs, is buggy [3] and requires the application to add the UX to support it.

[1] https://matt.ucc.asn.au/dropbear/dropbear.html

[2] https://mina.apache.org/sshd-project/

[3] I've been dealing for years with NullPointerExceptions causing the connection to crash when presented with certain ed25519 certificates.


You can just replace dropbear with openssh on OpenWRT. That was one of the first things I did, since DropBear also doesn't support hardware backed (sk) keys. Just move it to 2222 and disable the service.

I reenabled DB on that alt port when I did the recent major update, just in case, but it wasn't necessary. After the upgrade, OpenSSH was alive and ready.


Upgrade to a better one in initramfs?

Might actually be a positive instead of a negative. Gaming use-cases should not have any effect on security policies; these should be as separate as possible. Different auth mechanisms for your gaming stuff and your professional stuff ensure nothing gets mixed.

Hah? It being my gaming machine has nothing to do with the problem. It’s also my FPGA development machine, though it gets used less for that. It only happens to be the only Linux workstation in my home (the others are Macs or OpenBSD).

If you care about security, I recommend investing into a separate computer for developing hardware and software and another for downloading games on.

You can setup your security any way you like, but nothing beats an air gap in terms of security and simplicity.


remote unlock is also useful when you're not gaming so that feels like the wrong aspect to focus on

If your use case is such that you are frustrated about managing keys, host or user keys, then yes it does sound like SSH certs would help you. E.g. when you have many users, servers, or high enough cartesian product of the two.

In environment where they don't cause frustration they're not worth it.

Not really more to it than that, from my point of view.


You will have to manage your SSH CA certificates instead of your keys.

The workflows around SSH CAs are extremely janky and insecure.

With some creative use of `AuthorizedKeysCommand` you can make SSH key rotation painless and secure.

With SSH certificates you have to go back to the "keys to the kingdom" antipattern and just hope for the best.


> With SSH certificates you have to go back to the "keys to the kingdom" antipattern and just hope for the best.

Whut? This is literally the opposite.

With CA certs you can create short-lived certificates, so you can easily grant access to a system for a short time.


And what about the CA?

It's no different compared to regular SSH private keys. You need to protect it from compromise.

However, it provides you an additional layer of protection, because it does not need to be on the critical path for every SSH connection. My CA is a Nitrokey HSM, for example. I issue myself temporary certs that are valid only for 6 hours for ephemeral private keys.


Yes it is different. SSH CA keys are harder to secure and attackers have a much bigger incentive to steal them.

You can also configure multiple CA for client auth, and on the client side multiple ca to verify host keys.

Exactly. We'd had discussions about building https://Userify.com (plug!) around SSH certificates, but elected to go with keys instead, because Userify delivers most of the good things around certificates without the jank and insecurity.

It's not that certificates themselves are insecure, it's that the workflows (as the parent points out) are awful. We might still add some automation around that (and I think I saw some competitor tooling out there if you're committed to that path), but I personally feel like it's an answer to the wrong question.


I am keeping an eye on the new (and alpha) Authentik agent which will allow idp based ssh logins. There's also SSSD already supported but it requires glibc (due to needing NSS) meaning it's not available on Alpine.

If you mean using OIDC, in that space there's at least https://github.com/EOSC-synergy/ssh-oidc, https://dianagudu.github.io/mccli/ and OpenPubkey-ssh discussed in https://news.ycombinator.com/item?id=43470906 (which might mention more).

How does SSSD support help with SSH authN? I know you can now get Kerberos tickets from FreeIPA using OIDC(?), but I forget if SSSD is involved.


Can't really speak to the point of the guy you're replying to, but the FreeIPA implementation via SSSD does more than just Kerberos tickets. Actually, I think the Kerberos based stuff as it relates to SSH is GSSAPI as part of sshd itself and has little to do with sssd, though I could be wrong.

That said...

If I'm remembering things correctly (and it's been a looong while since I've played with this), FreeIPA's client configures sshd with an AuthorizedKeysCommand that executes a program that queries sssd for the list of authorized keys for a given user. Sssd then uses a plugin to query the LDAP server @ FreeIPA to get the list of keys.

There's also SSHFP (I think) records in DNS if you're using FreeIPA's DNS servers. These provide the host keys for servers for your ssh client to check against. Not sure if that's integrated into ssh itself or something else -- I can't remember how it's implemented offhand -- but it's fairly nifty since you never see the TOFU prompt (or it would be if DNS was actually secure, anyway).


Yes, FreeIPA is Kerberos+LDAP+X.509 CA, and GSSAPI is in OpenSSH (normally with the key exchange patch). SSSD is a local mechanism, not network authentication. I mentioned authorized keys distribution mechanisms elsewhere, but I was thinking authentication (c.f. OIDC), not authorization.

I haven't used SSSD due it not being available for Alpine but doesn't it provide authentication via pam_sss ?

Yes, but its authN components only act locally, and PAM is optional for sshd. It can/does call out to network services like Kerberos/LDAP given a password, of course, but I was thinking of network authN connected directly with OIDC somehow, for which I don't know a mechanism in vanilla OpenSSH. (I don't know what Authentik does for this -- I could imagine it's behind the scenes somehow.) I should probably look it up sometime.

My understanding is since it's an agent running on the target, possibilities will be quite extensive. But it is relatively new and there is no stable release of it yet.

https://docs.goauthentik.io/endpoint-devices/authentik-agent...


It depends on what you want to do. CA certs are easy to manage: you just put the CA public key (marked `cert-authority`) instead of the user's SSH public key in authorized_keys.

They also provide a way to get hardware-backed security without messing with SSH agent forwarding and crappy USB security devices. You can use an HSM to issue a temporary certificate for your (possibly temporary) public key and use it as normal. The certificate can be valid for just 1 hour, enough to not worry about it leaking.


Yes. Caveat: It might not really be worth it if all your infrastructure is managed by these newfangled infrastructure-as-code-things that are quick to roll out (OpenShift/OKD, Talos, etc.) and you have only one repo to change SSH keys (single cluster or single repo for all clusters).

There are some serious security benefits for larger organizations but it does not sound as if you are part of one.


oh man, I referred back to your blog post when I wrote the ssh certificate authority for $job ... ~10 years ago.

Thanks for writing it!


Maybe it keeps getting rediscovered because good tooling around it is still missing?

While not transparent to users, I'd just use SSH ProxyCommand like I did in https://github.com/ThomasHabets/huproxy

Not exactly what I built it for, but it'll do the job here too, and it's able to connect to private addresses on the server side.
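For reference, the client side of that looks roughly like this in ~/.ssh/config (hostname, path, and URL are placeholders; check the huproxy README for the exact invocation):

```
Host internal-box
    ProxyCommand /usr/local/bin/huproxyclient wss://proxy.example.com/proxy/%h/%p
```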


A problem with that approach is that libc can after an upgrade decide to start doing syscalls you were not expecting. Like the first time you call `printf()` it calls `newfstatat()`. Only the first time. Maybe in the future it'll call it more often than that, and then your binary breaks.

I'm not sure what glibc's latest policy is on linking statically, but at least it used to be basically unsupported and bugs about it were ignored. But even if supported, you can't know if it under some configurations or runtime circumstances uses dlopen for something.

Or maybe once you juggle more than X file descriptors some code switches from using `poll()` to using `select()` (or `epoll()`).

My thoughts last time I looked at seccomp: https://blog.habets.se/2022/03/seccomp-unsafe-at-any-speed.h...


This is a problem, but fwiw libcs should be falling back to old system calls. You can block clone3 today and see that your libc will fall back to clone.


Yeah. But it still means wandering into de facto unsupported territory in a way that pledge/unveil/landlock does not.

Your example may be true, but I'm guessing it's not a guarantee. Not to mention if one wants to be portable to musl or cosmopolitan libc. The others inherently are more likely to work in a way that any libc would be "unsurprised" by.


Yeah for sure, it's a real issue. In general, seccomp feels hard to use unless you own your stack top to bottom.


> A problem with that approach is that libc can after an upgrade decide to start doing syscalls you were not expecting.

That would break capsicum, too, so I don’t see how that’s a problem when “comparing Capsicum to using seccomp in the same way”.


That's the approach I meant by "that approach": the library the parent commenter was talking about writing for a customer. Compare this to Landlock or OpenBSD's pledge/unveil.


Now that Landlock actually is a thing, have you considered writing another followup? Given what I've seen of landlock, I expect it'll be spicy...


I took the bait.

“The goal of Landlock is to enable restriction of ambient rights (e.g. global filesystem or network access) for a set of processes. Because Landlock is a stackable LSM [(Linux Security Module)], it makes it possible to create safe security sandboxes as new security layers in addition to the existing system-wide access-controls. ... Landlock empowers any process, including unprivileged ones, to securely restrict themselves.”

https://docs.kernel.org/userspace-api/landlock.html


I've actually found it pretty fine. It doesn't have full coverage, but they have a system of adding coverage (ABI versions), and it covers a lot of the important stuff.

The one restriction I'm not sure about is that you can't say "~/ except ~/.gnupg". You have to actually enumerate everything you do want to allow. But maybe that's for the best. Both because it mandates rules not becoming too complex to reason about, and because that's a weird requirement in general. Like did you really mean to give access to ~/.gnupg.backup/? Probably not. Probably best to enumerate the allowlist.

And if you really want to, I guess you can listdir() and compose the exhaustive list manually, after subtracting the "except X".
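A sketch of that workaround in Python (the paths in the example call are placeholders):

```python
import os


def allowlist(root: str, deny: set[str]) -> list[str]:
    """Enumerate root's entries, minus an explicit deny set.

    Landlock has no "everything except X" rule, so the allowlist has to be
    spelled out; this composes it from a directory listing.
    """
    return sorted(
        os.path.join(root, name)
        for name in os.listdir(root)
        if name not in deny
    )


# e.g. allowlist(os.path.expanduser("~"), {".gnupg", ".gnupg.backup"})
```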

I find seccomp unusable and not fit for purpose, but landlock closes many doors.

Maybe you know better? I'd love to hear your take.


I definitely don't know better, and after taking a few more looks at landlock, I'm not even sure what my objections were, probably got it confused with something else entirely. Confusion and ignorance on my part I guess.


Yeah I'm not a fan of seccomp (https://blog.habets.se/2022/03/seccomp-unsafe-at-any-speed.h...).

On Linux I understand that Landlock is the way to go.


Landlock right now doesn't offer a lot for things that aren't file system access. Other than that it's great, you can have different restrictions per-thread if you want to.


Yeah, but the file system is where I put most of my files. :-)

Between file system, bind/connect, and sending signals, that covers most of it. Probably the biggest remaining risk is any unpatched bugs in the kernel itself.

So one would need to first gain execution in the process, and then elevate that access inside the kernel, in a way that doesn't just grant you root while still leaving you Landlocked, and with a much smaller effective syscall attack surface. Like even if there's a kernel bug in ioctl on device nodes, Landlock can turn that off too.


I agree, but it would be nice if it had similarly fine-grained APIs for network calls. That said, I solved it with LD_PRELOAD and SOCKS5. It's not perfect, but good enough.

Landlock is one of my favorite Linux-only APIs; it almost feels like it was FreeBSD's answer to some Linux feature.


> Java on the other hand makes it impossible to get a single distributable.

Heh, I find this very amusing and ironic, seeing how Write Once Run Anywhere was a stated goal with Java, that failed miserably.


It was successful for a while. Java applets were once fairly common. But then Flash largely replaced them, and then HTML5 finished them off.


A very limited "anywhere", but yes.

For that use case, was it active content, was it shipping intermediate representation, was it a sandbox? To all three: yes, very poorly.


I'm not saying we should phase Java out. But it's pretty clear to me that Java was a bad experiment in almost every aspect, and we should at least not add new use cases for it.

So no. No, please god no, no Java in the terminal.

More ranting here: https://blog.habets.se/2022/08/Java-a-fractal-of-bad-experim...


And yet Java is more than Java. There are lots of more modern languages on the JVM. The ecosystem is huge and still has lots of inertia.


Yeah. Some of my critique applies to the language, some to the JVM and thus across languages.

Kotlin sure is less awful, for example. But the JVM, as I describe, was always a failed experiment.

