Hacker News | packetlost's comments

I actively don't want these people infecting spaces they shouldn't be in, though. If that's all they care about, they should be in spaces dedicated to that.

There are pockets of this on the internet, but you really have to go out of your way to find them.

As a part-time Schemer, I also love Clojure and reach for it more often than Scheme these days.

Same here. I'm mildly optimistic tangled will go somewhere and be a viable replacement.

Is HEAD not just a ref to a commit? There are basically only two "things" in git: refs and objects. Git internals are so easy that IMO people should start off by running through this tutorial [0] instead of learning the basics of git porcelain; it makes understanding what's going on so much easier.

[0]: https://git-scm.com/book/en/v2/Git-Internals-Git-Objects
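The "objects" half really is just a content-addressed store: an object's name is the SHA-1 of a small type/size header plus the content, which is why the same bytes always get the same id. A minimal sketch of how a blob gets its id (this mirrors what `git hash-object` computes, but is a standalone illustration, not git's actual code):

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    # Git names a blob by hashing "blob <size>\0" followed by the raw bytes.
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# The well-known id for a file containing just "hello\n":
print(git_blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
```

Trees, commits, and tags are hashed the same way, just with a different type word in the header.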


HEAD in most operations is usually a ref to a branch, which makes it somewhat unique as a ref type (it's a ref to a ref, a double pointer). When it is a ref directly to a commit, that's a detached HEAD state.
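That double indirection is easy to see on disk: .git/HEAD normally contains a line like "ref: refs/heads/main", while a detached HEAD contains a raw commit hash. A rough sketch of the resolution logic (assuming loose ref files only, ignoring packed-refs and all the real edge cases; the repo layout below is faked for illustration):

```python
import os, tempfile

def resolve_ref(gitdir: str, name: str = "HEAD") -> str:
    # Follow symbolic refs ("ref: <target>") until we reach a raw object id.
    data = open(os.path.join(gitdir, name)).read().strip()
    if data.startswith("ref: "):
        return resolve_ref(gitdir, data[5:])  # e.g. refs/heads/main
    return data  # detached HEAD: the file holds a commit hash directly

# Fake a tiny repo layout to show the double pointer:
d = tempfile.mkdtemp()
os.makedirs(os.path.join(d, "refs/heads"))
open(os.path.join(d, "refs/heads/main"), "w").write("abc123\n")
open(os.path.join(d, "HEAD"), "w").write("ref: refs/heads/main\n")
print(resolve_ref(d))  # abc123  (HEAD -> refs/heads/main -> commit)
```

Overwriting HEAD with a bare hash instead of the "ref: " line is exactly what a detached HEAD looks like on disk.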

Plus, to the CLI, HEAD can also mean the family of refs under refs/heads/* that relate to the HEADs of each branch (which, depending on fetch status, may not be the same as the branch ref) and traversal into the reflog.


Objects should be split into trees and blobs to make some operations clearer, especially checkout and rename detection.

There are also commits and tags. Commits are important for understanding how branches and histories work. I was just trying to be brief; the types of objects are important and covered in that tutorial.

Wait, are you serious? This is how it works?

Yes: https://fasterdata.es.net/performance-testing/troubleshootin.... A simplistic TCP server will blast packets on the link as fast as it can, up to the size of the TCP receive window. At that point it’ll stop transmitting and wait for an ACK from the client before sending another window’s worth of packets.

To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.

But in practice the receive window for an Internet-scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window were smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It's impractical to have several MB of buffer in front of every speed transition.
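The "several megabytes" falls straight out of the bandwidth-delay product: the window has to cover everything in flight during one round trip, or the sender stalls waiting for ACKs. A quick back-of-the-envelope using the numbers from the comment (1 Gbit/s, 20 ms):

```python
bandwidth_bps = 1_000_000_000   # 1 gigabit per second
rtt_s = 0.020                   # 20 ms round-trip time

# Bytes that must be in flight to keep the pipe full for one RTT:
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"{bdp_bytes / 1e6:.1f} MB")  # 2.5 MB
```

Higher bandwidth or higher latency both grow that number, which is why long fat pipes need large windows.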

Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and with it the transfer rate, to collapse. The server will then send packets with a small window so they get through. Then the window will slowly grow until there's packet loss again. Rinse and repeat. That's what causes the saw-tooth pattern you see on the linked page.


This is how old-school TCP figures out how fast it can send data, regardless of the underlying transport. It ramps up the speed until it starts seeing packet loss, then backs off. It will try increasing speed again after a bit, in case there's now more capacity, and back off again if there's loss.
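The classic version of this is additive-increase/multiplicative-decrease (AIMD): grow the window a little each round trip, halve it on loss. A toy simulation (the capacity and window numbers are made up, and real TCP has slow start and much more) reproduces the sawtooth:

```python
capacity = 20       # link capacity in segments per RTT (made-up number)
cwnd = 1.0          # congestion window, in segments
history = []

for rtt in range(60):
    if cwnd > capacity:   # buffer overflowed: loss detected
        cwnd = cwnd / 2   # multiplicative decrease
    else:
        cwnd += 1         # additive increase: one segment per RTT
    history.append(cwnd)

# The window repeatedly climbs past capacity and collapses: a sawtooth.
print(" ".join(f"{w:g}" for w in history))
```

Plot `history` against time and you get exactly the pattern on the linked fasterdata page.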

You can gain a bit of performance here by tuning it so it will never exceed the true speed of the link - which is only really useful when you know what that speed is and can guarantee it.

TCP still works this way?

I experienced this with a VDI project when we mistakenly got 25Gb links delivered to the hosts.

We were expecting to get some sort of unbelievably fast internet experience, but it was awful, as the internet gateway was 1 Gb or something similar.


Heh heh. If that shocks you, search engine for "bufferbloat" and prepare to be horrified.

Neutral atom too. You need fairly clean light to pump atoms into Rydberg states.


Note: this readme appears to be from a very old version (5.x).


This seems to be the latest release, 16.0:

https://forum.tinycorelinux.net/index.php/topic,27681.0.html

But I don't see a comparable overview.


17.0 has a preview build, but yeah, the readme is still mostly relevant, I think.


Is the suggested download served over plain HTTP without any signing?


I'm really quite confident I don't want these companies collecting face and ID scans to prove age, so no, I think making this an OS problem is actually a very reasonable solution.


This was the case before Obsidian existed, see Org-mode, vimwiki, etc.


I was using vimwiki with a ton of plugins for many years before Obsidian came along. It was very nice to be able to open all of my notes in a UI made for editing them.

