SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?
You are correct, but I expect they instruct their users to run with host key validation disabled (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null), since they expect these to be ephemeral instances.
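To make the effect of those two options concrete (the hostname here is a placeholder; -G just resolves and prints the effective config without connecting):

```shell
# Dump the resolved client config for a hypothetical ephemeral VM.
# With these overrides, ssh will accept ANY server key and remember none,
# which is exactly what makes silent MitM possible.
ssh -G \
    -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    some-vm.example \
  | grep -iE 'stricthostkeychecking|userknownhostsfile'
```

Convenient for throwaway instances, but it throws away the only authentication the client does of the server.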
I mean, anytime you use the cloud for anything, you are giving MITM capabilities to the hosting provider. It is their hardware, their hypervisors... they can access anything inside the VMs
Not if it's using Confidential Computing. Then you're trusting "only" the CPU vendor (plus probably the government of the country where that vendor is located), but you're trusting the CPU already.
It doesn't for me, not even with busybox sh and no funky escape codes in PS1 at all. It does happen with cat or yes running, so something merely producing output is not the problem… Hm.
The "caveats" section in its docs hints at it, but to be explicit: no_panic is a band-aid that can break when you change optimizer options or the compiler/LLVM version. That makes it a poor fit for library crates, for example.
That being said, I'm not at all happy with all the complexity and ecosystem fragmentation that async brought. I understand what you're saying. But surprise panics are a bit of a pain point for me.
netdata is pretty heavy on resources, especially disk writes. I'd appreciate an improvement over it, but I won't try this thing out without some indication that it improves anything. Especially with such useful features as Space Invaders built in…
It's a bit ironic (in the Alanis Morissette sense) because NetData was built by a small community on Reddit to be small, lightweight, easy to deploy, open source, etc. Now it looks like any other commercial enterprise monitoring product.
Make is incredibly cursed. My favorite example is that it has a built-in rule (oversimplified: extra Makefile code treated as if it were present in every Makefile) that will extract files from a version control system.
https://www.gnu.org/software/make/manual/html_node/Catalogue...
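On a system with GNU make installed, you can see these built-ins without writing any Makefile at all by dumping make's internal rule database:

```shell
# -p prints make's internal database; -f /dev/null supplies an empty
# Makefile so only the built-ins show up. Among them are pattern rules
# like '%:: %,v' and '%:: SCCS/s.%' that check files out of RCS/SCCS
# if a matching version-control file happens to be lying around.
make -p -f /dev/null 2>/dev/null | grep -E '%,v|SCCS'
```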
What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.
You still get bash scripts in the targets, complete with $-escaping hell and weirdness around multiline recipes, ordering and parallelism control headaches, and no support for background services.
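A tiny illustration of the escaping quirk (/tmp/demo.mk is just a throwaway name): shell variables in a recipe must be written $$, because make eats single $ signs first, and each recipe line runs in its own shell unless you chain everything onto one line.

```shell
# Write a one-target Makefile; note the $$msg needed for a plain shell
# variable, and the single chained line (a bare 'msg=...' on its own
# recipe line would vanish with that line's shell).
printf 'greet:\n\t@msg="hello from make"; echo "$$msg"\n' > /tmp/demo.mk
make -f /tmp/demo.mk greet
```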
The only sane use for Makefiles is running a few simple commands in independent targets, but do you really need make then?
(The argument that "everyone has it installed" is moot to me. I don't.)
Disk speed? You can use zram on a diskless system. Are you sure you know what it does?
(There's also the effect where it may be faster to read compressed data from RAM and decompress it in CPU cache than to read it uncompressed, but that obviously depends on the workload.)
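For reference, a typical zram swap setup looks like this; no disk is involved at any point. This is a sketch only (requires root, the zram module, and util-linux's zramctl; the device name, algorithm, and size are illustrative):

```shell
# Create a compressed RAM-backed block device and swap onto it.
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 2G
mkswap /dev/zram0
swapon --priority 100 /dev/zram0   # higher priority than any disk swap
```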
And then you get some minor detail different from the compiled library and boom, UB: some struct is laid out differently, or the calling convention is wrong, or you compiled with a different -std, or …
Which is exactly why you should leave it to the distros to construct a consistent build environment. If your distro regularly gets this wrong then you do have a problem.