Indeed; most personal banking customers can fall back on FDIC insurance ($250k should be more than enough to cover your emergency fund). This isn't the 1920s.
Alas, for Silicon Valley Bank they went with 'too big to fail' and also covered uninsured deposits. That's moral hazard and endangers the core purpose of the insurance.
Agreed. That said, FDIC would not have been able to cover all $150 billion or so of uninsured SVB deposits directly from the insurance fund, so had that been the only available option for making depositors whole, FDIC would have had to pass.
Well, insurance should only cover insured deposits.
> [...] so had that been the only available option for making depositors whole, [...]
On paper, FDIC might be independent and have its own balance sheets. But in practice and given politics, FDIC itself can't fail / isn't allowed to fail. It'll always be bailed out, and that's what the market expects.
For the stability of the economy, it would have been better not to make uninsured depositors whole.
It sure isn’t the 1920s, it’s the 2020s so things like digital money are ephemeral and whimsical.
The bigger question is how much food and medicine is there in the supply chain buffers? If all production was to stop immediately — how many calories are on the continent? How many grams of insulin or penicillin?
In a crisis how will those things be distributed? Will it be based on immediate need or social class?
What’s keeping the system going anyways? Why do ships continue to come with consumer goods from China? Why do farmers send their grain to market?
It’s kind of neat to think about what will happen in this sort of scenario. I wonder how long the data centres will keep running, churning out models that don’t have a market and aren’t quite good enough for AGI.
https://en.wikipedia.org/wiki/Happy_Eyeballs is the usual name. It's not quite identical, since you often want to give your preferred transport a nominal headstart so it usually succeeds. But yes, there are some similarities -- you race during connection setup so that you don't have to wait for a connection timeout (on the order of seconds) if the preferred mechanism doesn't work for some reason.
"Request hedging" or "backup requests" are indeed the terms.
I knew about hedged requests, where you give the first request a bit of a head start. I didn’t know the term "Happy Eyeballs" for the case where all requests fire at the same time.
> I didn’t know about the term happy eyeball to signify that all requests fire at the same time.
It's not quite the same. Usually with Happy Eyeballs, you want to try multiple protocols (e.g. QUIC vs TCP, or IPv6 vs IPv4), and you have a preference for one over the other. As such, you try to establish your connection via IPv6, wait something like 30ms, then try to establish via IPv4. Whichever mechanism completes channel setup first wins, and you can cancel the other one.
It's a mechanism used to drive adoption of newer protocols while limiting the impact on end users.
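For illustration, here's a rough asyncio sketch of that race (the function names and the 30 ms head start are placeholders, not any library's real API, and error handling is elided):

```python
import asyncio

async def happy_eyeballs(connect_preferred, connect_fallback, head_start=0.030):
    """Race two connection attempts, giving the preferred one a head start.

    connect_preferred / connect_fallback are hypothetical coroutine factories
    (e.g. IPv6 vs IPv4 connect). A sketch only: real implementations also
    handle attempt failures and multiple addresses per family.
    """
    preferred = asyncio.ensure_future(connect_preferred())
    try:
        # If the preferred transport finishes within the head start, it wins outright.
        return await asyncio.wait_for(asyncio.shield(preferred), head_start)
    except asyncio.TimeoutError:
        pass  # head start expired: start the fallback and race both
    fallback = asyncio.ensure_future(connect_fallback())
    done, pending = await asyncio.wait(
        {preferred, fallback}, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # whichever attempt loses the race is cancelled
    return done.pop().result()
```

The `shield` matters: `wait_for`'s timeout must not cancel the preferred attempt, since it stays in the race after the fallback starts.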
The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.
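To make the load-balancing half concrete, here's a toy hashed bank-mapping function. The XOR-fold is purely illustrative (real controllers use proprietary, per-vendor functions); the point is that a hashed mapping still spreads a large sequential scan evenly across banks while being harder to reverse-engineer than plain modulo interleaving:

```python
from collections import Counter

def bank_of(addr, n_banks=8, line_bytes=64):
    """Toy hashed bank mapping (illustrative, not any vendor's real function):
    XOR-fold upper address bits into the low bank-index bits."""
    line = addr // line_bytes
    h = line
    h ^= line >> 3
    h ^= line >> 7
    return h % n_banks

# Sequential 1 MiB scan: count how many cache lines land on each bank.
hist = Counter(bank_of(a) for a in range(0, 1 << 20, 64))
```

Because the low index bits are XORed with a value that is constant within each 8-line run, the scan above lands exactly 2048 lines on each of the 8 banks, i.e. perfectly balanced, just through a less predictable mapping.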
Eh. It depends what your bottleneck is. If the bottleneck is now, say, CPU cache contention because you've doubled your thread count, it's entirely possible that FL1 running on the new server generation is operating in a different regime than on the previous generation. You can see some hints of that happening, since doubling thread count didn't result in a doubling of throughput.
In fact, I suspect based on the throughput doubling with FL2, we're back in the same regime as the baseline.
It would be useful to see what the latency is of FL2 on Gen12 compared to baseline (FL1 on Gen12), just to confirm.
Yes, fair points. I think it’s also indicative of how important it is that code be optimized for the specific hardware it will run on. Systems need to be considered and optimized as a whole. Still an interesting post.
It depends what dates you're looking at, but energy (gas prices and more) and food (including eggs) are generally recognized as way more volatile than the rest of the CPI.
Eggs were actually quite stable for the 20 years prior to 2001, so maybe don't put your life savings into egg futures...
That is very curious, yes. Eggs seem to start increasing dramatically after 2000, and indeed outdo the CPI, even disregarding the peaks and valleys from shocks to egg production like COVID and avian flu.
I read that the price index includes free-range, organic, etc. varieties, which are more expensive and in higher demand nowadays; that alone probably explains a good chunk of the price increase.
Or maybe they are? I'm not an expert in this and reading through some of the government literature there's no mention of this.
Then at least you would know that a given price marker is a good empirical index of how other prices are changing also, at least for a given dimension/component.
The preference is to use a separate pair of communal chopsticks that is not used directly for eating.
> Kosuribashi
I have heard that this one is because it's considered to be an insult implying that the chopsticks are low-quality. (That said, if your chopsticks are indeed low-quality, then avoiding splinters is probably preferable to then visibly plucking splinters out of your fingers.)
Looks like the repo owner force-pushed a bad commit to replace an existing one. But then, why not forge it to maintain the existing timestamp + author, e.g. via `git commit --amend -C df8c18`?
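A sketch of the forging suggested above, in a throwaway repo (all repo, file, and identity names here are made up for the demo): `-C` reuses the old commit's author, author date, and message verbatim, and `GIT_COMMITTER_DATE` pins the committer timestamp too.

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.name Alice && git config user.email alice@example.com

echo "legit code" > app.js
git add app.js && git commit -q -m "Add app.js"
old=$(git rev-parse HEAD)              # the commit being replaced

echo "malicious payload" >> app.js     # stand-in for the injected code
git add app.js
# Forge the replacement: same author, message, and dates as the original.
GIT_COMMITTER_DATE="$(git log -1 --format=%cI "$old")" \
  git commit -q --amend -C "$old"

git log -1 --format='%s %aI'           # subject and author date unchanged
```

After this, only the new content hash gives the swap away; the visible metadata matches the replaced commit.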
The value of the technique, I suppose, is that it hides a large payload a bit better. The part you can see stinks (a bunch of magic numbers and eval), but I suppose it’s still easier to overlook than a 9,000-character line of hexadecimal (whether still encoded, or decoded but still encrypted) or stuff mentioning Solana and Russian timezones (I just decoded and decrypted the payload out of curiosity).
But really, it still has to be injected after the fact. Even the most superficial code review should catch it.
Agreed on all those fronts. I'm just dismayed by all the comments suggesting that maintainers just merged PRs with this trojan, when the attack vector implies a more mundane form of credential compromise (and not, as the article implies, AI being used to sneak malicious changes past code review at scale).
Yeah, the attack vector seems to be stolen credentials. I would be much more interested in an attack which actually uses Invisible characters as the main vector.
> if indeed he went all-in on AI in 2015, that seems to me like a damn near prophetic vision.
Also note that 7 years later, when ChatGPT came out, built on top of Google Brain research (transformers), Google was caught flat-footed.
Even supposing that Pichai really had the right vision a decade ago, he completely failed in leading its execution until a serious threat to the company's core business model materialized.
Well, it shouldn't be slower than "Read 1,000,000 bytes sequentially from memory" (741ns) which in turn shouldn't be slower than "Read 1,000,000 bytes sequentially from disk" (359 us).
That said, all those numbers feel a bit off by 1.5-2 orders of magnitude -- that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.
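Spelling out that arithmetic (the figures are the ones quoted above; bytes divided by time gives the implied bandwidth):

```python
MB = 1_000_000

mem_read_ns = 741    # "read 1,000,000 bytes sequentially from memory"
disk_read_us = 359   # "read 1,000,000 bytes sequentially from disk"

mem_gbps = MB / (mem_read_ns * 1e-9) / 1e9    # ~1350 GB/s implied memory bandwidth
disk_gbps = MB / (disk_read_us * 1e-6) / 1e9  # ~2.8 GB/s implied disk bandwidth
```

~2.8 GB/s is plausible for an NVMe SSD but not a spinning disk, and ~1350 GB/s is far beyond any single-socket DRAM setup, which is consistent with the "off by 1.5-2 orders of magnitude" hunch.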
// NIC bandwidth doubles every 2 years
// [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
// TODO: should really be a step function
// 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.
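That extrapolation, written out as a quick sketch (the "doubling every 2 years" rate is the code comment's own assumption, not observed reality):

```python
# 1 Gb/s in 2003, doubling every 2 years, per the quoted comment block.
def extrapolated_gbps(year, base_gbps=1.0, base_year=2003, doubling_years=2.0):
    return base_gbps * 2 ** ((year - base_year) / doubling_years)

# (2026 - 2003) / 2 = 11.5 doublings -> roughly 2.9 Tb/s on paper
```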
> that disk read speed translates to about 3 GB/s which is well outside the range of what HDDs can achieve.
That’s PCIe 3.0 x4 or PCIe 4.0 x2, which a decent commodity M.2 NVMe SSD can use and can possibly saturate, at least for reads.
> which means that in 2026 we'll have seen 11 doublings since gigabit speeds in 2003, so we'll all have > terabit speeds available to us.
We’re not that far off. 100GbE hardware is not especially expensive these days. Little “AI” boxes with 400-800 Gbps of connectivity are a thing.
That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.
> That’s PCIe 3.0 x4 or PCIe 4.0 x2, which a decent commodity M.2 NVMe SSD can use and can possibly saturate, at least for reads.
Given that there's a separate item for sequential disk reads vs SSD reads, I think it's pretty clear that particular item meant hard drives specifically. Agreed that modern SSDs should be able to pull that off.
> That being said, all the connections over 100Gbps are currently multi-lane AFAIK, and the heroic efforts and multiplexing needed to exceed 100Gbps at any distance are a bit in excess of the very simple technology that got us to 100Mbps “fast Ethernet”.
Yeah. Terabit networking is not here yet, and it's certainly not "commodity network"-grade. We can LACP a bunch of 100G optics together, but we're probably 5-10 years out for 800G ethernet to become widely adopted and for 1600G to even be developed.
You probably meant to say oversubscribing, not overprovisioning.
Oversubscription is expected to a certain degree (this is fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially to "encourage" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And, it can be hard to differentiate between bufferbloat at the last mile vs within the ISP's backbone.
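For illustration, the shaping idea can be sketched as a token bucket that admits traffic just below the bottleneck rate, so the deep upstream buffers never fill (all rates and names here are made up):

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: admit a packet only if enough tokens have
    accumulated at the configured rate. Real shapers queue or drop the excess,
    which is what nudges congestion control to back off sooner."""

    def __init__(self, rate_bps, burst_bytes, now=time.monotonic):
        self.rate = rate_bps / 8       # token refill rate, in bytes per second
        self.capacity = burst_bytes    # maximum burst size
        self.tokens = burst_bytes
        self.now = now                 # injectable clock, for testing
        self.last = now()

    def allow(self, packet_bytes):
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over rate: a real shaper would queue or drop here
```

Shaping at, say, 95% of the contracted rate keeps the standing queue at the shaper (where it can be managed with AQM) instead of in the ISP's oversized buffers.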