Hacker News | krautsauer's comments

SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?

You are correct, but I expect they instruct their users to run with host key validation disabled (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null), since they expect these to be ephemeral instances.
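As a sketch, the usual invocation for such throwaway hosts (hostname and user are placeholders):

```shell
# Deliberately skip host key verification for an ephemeral VM.
# WARNING: this is exactly what gives up MitM protection.
ssh -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    user@ephemeral-vm.example.com
```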

I mean, any time you use the cloud for anything, you are giving MITM capabilities to the hosting provider. It is their hardware and their hypervisors; they can access anything inside the VMs.

Not if it's using Confidential Computing. Then you're trusting "only" the CPU vendor (plus probably the government of the country where that vendor is located), but you're trusting the CPU already.

I think the vulnerability is that not only can the host now MITM, but other co-tenants would also have the capability to bypass that MITM protection.

This approach doesn't give the hypervisor access to your private keys; it gives other tenants access to them.

It does not for me, not even with busybox sh and no funky escape codes in PS1 at all. It does with cat or yes running, so something merely being output is not the problem… Hm.


The "caveats" section in its docs hints at it, but to be explicit: no_panic is a band-aid that can break when changing optimizer options or compiler/llvm version. It's not a good option for library crates, e.g.

That being said, I'm not at all happy with all the complexity and ecosystem fragmentation that async brought. I understand what you're saying. But surprise panics is a bit of a pain point for me.


> It's not a good option for library crates, e.g.

Author here. Yes, it is. It was literally made for libraries. Notably https://github.com/dtolnay/zmij and https://github.com/dtolnay/itoa use it to enforce that the libraries' public API is free of panics.
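For flavor, a stdlib-only sketch of the panic-free style such libraries enforce (illustrative; not taken from itoa's source):

```rust
// Panic-free by construction: checked_div returns None instead of
// panicking on division by zero, so there is no panic path left
// for a no_panic-style check to reject.
fn safe_div(a: u32, b: u32) -> Option<u32> {
    a.checked_div(b)
}

fn main() {
    assert_eq!(safe_div(10, 2), Some(5));
    assert_eq!(safe_div(1, 0), None); // no panic, just None
}
```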


netdata is pretty heavy on resources, especially disk writes. I'd appreciate an improvement over it, but I won't try out this thing without some indication that it actually improves anything. Especially with such useful features as Space Invaders built in…


It's a bit ironic (in the Alanis Morissette sense) because NetData was built by a small community on Reddit to be small, lightweight, easy to deploy, open source, etc. Now it looks like any other commercial enterprise monitoring product.


exactly this


That's fair. I can't resist putting easter eggs in my software, sorry :)


3…2…1… and somebody writes a malloc macro that includes the defer.


Make is incredibly cursed. My favorite example is that it has built-in rules (oversimplified: extra Makefile code that is treated as if it were present in every Makefile) that will extract files from a version control system. https://www.gnu.org/software/make/manual/html_node/Catalogue...

What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.


https://github.com/casey/just is an uncursed make (for task running purposes - it's not a general build system)


How does `just` compare to Task (https://taskfile.dev/)?


Just uses make-like syntax, not yaml, which I view as a huge advantage.
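For comparison, a minimal justfile sketch (recipe names and commands are illustrative):

```just
# just uses make-like recipes, without make's .PHONY/tab pitfalls
build:
    cargo build

test: build
    cargo test
```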


No I'm saying use Makefiles, which work just fine. Mark your targets with PHONY and move on.
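A minimal sketch of that pattern (target names and commands are illustrative):

```make
.PHONY: build test clean   # not real files, so they always run

build:
	go build ./...

test: build
	go test ./...

clean:
	rm -rf bin/
```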


You still get bash scripts in the targets, with $ escape hell and weirdness around multiline scripts, ordering & parallelism control headaches, and no support for background services.

The only sane use for Makefiles is running a few simple commands in independent targets, but do you really need make then?

(The argument that "everyone has it installed" is moot to me. I don't.)


I rely on it heavily. Have you tried zram swap?
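A sketch of setting it up by hand (needs root; the sysfs paths and zstd algorithm are the standard Linux ones, and comp_algorithm must be set before disksize):

```shell
# Create a 2 GiB zram-backed swap device and prefer it over disk swap.
sudo modprobe zram
echo zstd | sudo tee /sys/block/zram0/comp_algorithm
echo 2G   | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0   # high priority: used before disk swap
```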


Wasting RAM speed, disk speed AND CPU compression cycles? No, thank you. ZRAM is the dumbest Linux idea of the past 20 years.


Disk speed? You can use zram on a diskless system. Are you sure you know what it does? (There's also the thing where it may be faster to read data from ram compressed and decompress it in cpu cache than reading it uncompressed, but that obviously depends on the workload.)


ahh sorry. you are right: no disk usage. :)


That may be related, but it's not what happened here. Wildcard-cert and all.


Why is meson's wrapdb never mentioned in these kinds of posts, or even the HN discussion of them?


Probably because Meson doesn't have much traction outside certain ecosystems.

I like wrapdb, but I'd rather have a real package manager.


And then you get some minor detail different from the compiled library and boom: UB, because some struct is laid out differently, or the calling convention is wrong, or you compiled with a different -std, or …


Which is exactly why you should leave it to the distros to construct a consistent build environment. If your distro regularly gets this wrong then you do have a problem.

