
Well, I don't know if you wrote this sarcastically or not, but when you write a new message in Thunderbird, just turn on `Options -> Delivery Status Notification` and your mail server will email you back with a delivery status message (success or failure, although a failure notice can take days to arrive if the receiving server doesn't outright reject your message).


I was not being sarcastic. I just tried this by sending from my Gmail account to one of my other accounts and didn't get any email back, even though the message was delivered immediately.


Ah sorry, I thought you wanted a delivery notification when sending email via your own SMTP server (i.e. when Thunderbird is configured to use your own outbound SMTP gateway).


For more than 10 years my main desktop computers have been Fedora and Windows (for gaming only), virtualised on top of a single Proxmox host with two GPUs passed through. I've upgraded all the way to the latest versions (guests and host) without ever having to reinstall from scratch. I upgraded the hardware a few times (just cloned the disks), and since the desktops are virtualised, Windows always worked fine without complaining about new hardware drivers (the only thing to change was the GPU driver).

Another benefit is block-level backups of the VMs (either with qcow2 disk files or ZFS block storage, both of which support snapshots and easy incremental backups of only the changed blocks).

Proxmox is great for this, although maybe not on a laptop unless you're ready to do a lot of tweaks for sleep, etc.


I think people like using a Makefile as a simple task runner because it's pretty much ubiquitous and also something of a self-documenting standard. Interactive shells usually autocomplete Makefile targets, so it's easy to see what you can run on a project (even more so on old or unfamiliar projects).
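For example, a minimal task-runner Makefile might look like the sketch below (the target names and commands are made up), and listing the targets is roughly what shell completion does under the hood:

```shell
# Create a hypothetical task-runner Makefile.
# Note: real make recipes must be indented with a literal tab character,
# but since we only parse the file here, the indentation doesn't matter.
cat > Makefile <<'EOF'
.PHONY: test lint build
test:
	pytest
lint:
	ruff check .
build:
	docker build -t app .
EOF

# Roughly what shell completion does: extract the target names.
grep -oE '^[a-zA-Z0-9_-]+:' Makefile | tr -d ':'
```

Running the `grep` prints `test`, `lint` and `build`, one per line, which is exactly the list a completion script would offer you.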


I found this pdf presentation with lots of great technical details about data management and a devops infra oriented view of this telescope: https://ci-compass.org/assets/602137/2025jan23_cicompass_rub...

Worth a read for the devops guys around here!

  - about 20 TB per day, around 100 PB expected for the whole survey
  - 0.5 PB Ceph cluster for local data
  - workloads on a 20-node Kubernetes cluster with Argo CD
  - physical infra managed with Puppet/Ansible
  - 100 Gb/s (+40 Gb/s backup) fiber connection to a US-based datacenter for further processing


I wonder if they could reduce the data size at rest with specialized compression techniques. You could probably build an averaged "model" of the sky observed by the telescope (accounting for stellar parallax and bright planets) and store only compressed diffs, not full images.

But I guess, since storage is relatively cheap, it's simply impractical to bother with such complexity.


There's quite a bit of black out there. That should compress easily.


The usual lossless image compression is a given. I am talking about compressing it further, since the telescope observes the same (or largely overlapping) patches of the sky and the most significant signal is stars, which are more or less "constant". At the very least, they could probably use lossless "animation" compression algorithms like APNG or FLIF for consecutive images of the same sky patch.


Look up fpack and funpack (the FITS image compression tools that ship with CFITSIO).


actually the telescope devops guys were hiring a couple years ago on HN: https://news.ycombinator.com/item?id=38101085 :-D


Insanity - love it


If you think this is insanity, I encourage you to look up the expected data volume from the SKA. Even after several processing steps they expect several hundred PB/year (the raw data, which is not being archived, is several orders of magnitude more). That is only SKA-Low; for SKA-Mid, I think we are talking exabytes/year. I recall their chief scientist saying that once they are operational they will process more data than Google and Facebook combined.


Yeppers: https://en.wikipedia.org/wiki/Square_Kilometre_Array

In-page search for "data challenges". Phew, that's a lot of data.


yes, it's my biggest worry too.

At least with KeePassDX on Android no internet access permission is needed by default, but if a compromised update suddenly required it, I don't know whether Android would prompt about it, since all apps are granted internet access without prompting :(

I also wish it were possible to block automatic updates of specific apps on the Play Store... Then at least we could stay in control of updates to critical apps like these without having to micromanage updates for all apps.


On GrapheneOS there is a prompt when installing an app that asks if you would like to grant network access. I'm not sure whether that pop-up appears if network access is added later in an app update, though.


I'm pretty sure I got prompted once or twice before updating an app that added new permissions.


Just use syncthing ...


Was this a layup for the classic Dropbox comment? https://news.ycombinator.com/item?id=9224


I really just want a nicer `borg-backup` that doesn't involve an SSH server. And to finally dust off my home server. And... and...


Restic supports a similar feature set to Borg (open source, locally encrypted backups, content-addressed snapshots instead of separate incremental+full backups, etc.) but also works with dumb file hosts (S3, Backblaze B2, a local drive, etc.).


Indeed, and it's a powerful combination. With "autorestic" as a bit of a front end, I have backups going to a local SSD, home NAS, and Backblaze B2. My B2 storage is under 200 GB so far, but has also cost basically nothing. For that, I have at least some chance of getting data back if the house burns down.
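For reference, an autorestic setup like that is driven by a small YAML file. This is only a sketch under my assumptions about the schema (the backend names, bucket and paths are made up; check the autorestic docs for the exact format):

```yaml
# .autorestic.yml (hypothetical names and paths)
backends:
  local-ssd:
    type: local
    path: /mnt/backup-ssd
  b2:
    type: b2
    path: my-bucket:backups
locations:
  home:
    from: /home/user
    to:
      - local-ssd
      - b2
```

The nice part of this pattern is that one `from` location fans out to several restic repositories, which is how you get the local SSD + NAS + B2 combination from a single config.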


Syncthing on at least one server with ZFS or Btrfs snapshots


Nebula is great! From what I found after testing other solutions (Headscale, NetBird, Netmaker), it's also the only completely open-source mesh VPN that can be configured with a highly available control plane (just run multiple lighthouses; nothing is shared between them) and that supports multiple root CAs for nodes, relays and control planes (and each node can be a relay too).

I just wish there were a Kubernetes operator to easily set up mesh sidecars like with Tailscale, and it would be perfect!
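To illustrate the multiple-lighthouse point: each node's config simply lists every lighthouse, so losing one doesn't take the control plane down. A sketch of the relevant fragment (the overlay IPs, hostnames and port are made up):

```yaml
# Fragment of config.yml on a non-lighthouse Nebula node
static_host_map:
  "192.168.100.1": ["lh1.example.com:4242"]
  "192.168.100.2": ["lh2.example.com:4242"]
lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"
    - "192.168.100.2"
```

The lighthouses themselves run with `am_lighthouse: true` and an empty `hosts` list; since they hold no shared state, adding a third one later is just another entry in `static_host_map`.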


FYI, the Nebula mobile client is source-available but not open-source. The devs from Defined Networking have been cagey about this and don't make this fact obvious, which makes me wary of Nebula.

https://github.com/DefinedNet/mobile_nebula/issues/19


Fair enough about the Android mobile client... My use case only involves meshing Linux appliances across various networks, so we only need the Nebula core binaries, which are under the MIT license:

https://github.com/slackhq/nebula/blob/master/LICENSE


TLDR: They haven't explicitly added a license, on purpose.

https://choosealicense.com/no-permission/

Sadly optimistic of the commenter to offer to get a PR going, if only that were all it needed.


They could have just added a LICENSE file stating that you are not allowed to use the software without a commercial license. Instead they chose to be vague about it. Doesn't really inspire confidence.


There's actually an annual conference dedicated to spreadsheet risks; they have lots of Excel horror stories on their website: https://eusprig.org/research-info/horror-stories/


I miss perl.

I don't think any other scripting language lets you write very concise one-liners that do very complex things. That's also where Perl got its bad reputation, but that's more a discipline issue than a language problem. I always rewrote my exploratory one-liners more cleanly when I committed them to a final program, but nothing forces you to do that the way Python does.


It becomes a language issue when you need to manage code written by others.

Yes, the majority of the bad rep it gets is because people picked it as a first language and "just started coding" without reading up on how to write stuff in a readable way, but that's... most developers.

Not many start their new language adventure with a code style guide, and while, say, Python at least nudges you toward semi-readable code, Perl gives you enough rope to shoot yourself and nuke a nearby city along the way. And any of you who goes "well, ropes don't explode" clearly doesn't know Perl deeply enough.

That births a lot of shit code... which new developers a year or two later now need to deal with (which is bad), but also use as an example of how to write it (which is way worse).


> I don't think there are any other scripting languages that allow you to write very concise one-liners that can do very complex things.

Ruby, maybe? It feels like a saner Perl to me.


It was all magic to me until one day I finally took the time to look at git internals.

You can build a valid git repo with simple unix shell commands, and that really helped me to understand the magic behind the git commands:

https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
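To give a flavour of what that chapter walks through, here is a sketch of building a commit with plumbing commands alone (the file name, content and message are made up):

```shell
# Identity for commit-tree, in case no git config exists yet.
export GIT_AUTHOR_NAME=example GIT_AUTHOR_EMAIL=ex@example.com
export GIT_COMMITTER_NAME=example GIT_COMMITTER_EMAIL=ex@example.com

git init -q demo-repo
cd demo-repo

# 1. Store file content as a blob object; git prints its SHA.
blob=$(printf 'hello\n' | git hash-object -w --stdin)

# 2. Stage the blob under a path, then write the index out as a tree object.
git update-index --add --cacheinfo 100644 "$blob" hello.txt
tree=$(git write-tree)

# 3. Wrap the tree in a commit object (message comes in on stdin).
commit=$(echo 'initial commit' | git commit-tree "$tree")

git cat-file -t "$commit"   # prints "commit"
```

Once you see that a repo is just blobs, trees and commits addressed by hash, commands like `git commit` stop being magic: they are porcelain around exactly these three steps.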

