
No, the laws are different- and more consumer-friendly in the US- so US consumer behavior is different.

Back when credit cards were first starting out (which happened in the US), the US Congress passed a law- the Fair Credit Billing Act of 1974- under which consumers are only liable for $50 of losses as long as they report the missing credit card within 60 days of the end of the fraudulent billing cycle. This was back when credit card purchases were all made on paper with the machine that went "kachunk" and transferred a carbon copy of your card- everything was done completely offline. That law has not been changed; in fact, most banks completely waive the $50 and don't hold cardholders liable for anything reported (basically, annoying a customer over $50 isn't worth it to the bank). Thanks to the internet, cards suddenly got a lot easier to steal and a lot easier to exploit- but banks are still on the hook for all losses reported within 60 days of the end of the cycle. The result is that American banks have invested an enormous amount in real-time monitoring of credit card transactions- they care deeply, since ultimately they are on the hook- but the consumer doesn't care. This is why US cards are so much laxer from the consumer's perspective: our banks have invested far more on the back-end because the consumer is held harmless in a way they aren't with European cards.

As a totally separate issue, the EU has regulated the interchange fees that card companies can charge, but the US has not capped them. The result is that US cardholders can get significant kickbacks for using cards (especially true for the top decile of wealth), in a way that is functionally impossible with EU-issued cards and their capped interchange fees. There is a big lawsuit happening now to try to allow merchants to accept only low-fee cards (the standard VISA/MC/AMEX deal requires treating all cards equally, which gives the networks an incentive to push people toward higher-interchange cards). We will see what happens with that suit, but until then American high-spenders can get much higher rewards on their cards, which encourages greater card use- and makes them lower-friction than the EU versions.


> Thanks to the internet, suddenly cards got a lot easier to steal and a lot easier to exploit- but banks are still on the hook for all losses reported within 60 days of the end of the cycle.

For card-not-present transactions (i.e. online ones) the liability is on the merchant. However, merchants also have an incentive NOT to use 3DS, because it adds real friction to purchases. I'm also not sure all US banks even support 3DS.


This theory explains why cardholders in the US are still using cards despite their being less secure than in other countries, but it fails to explain why issuing banks wouldn't take steps to limit their own fraud losses, such as introducing 3DS or PINs.

The actual explanation lies in the game theory of fraud prevention; see my sibling comment for details.


Why would the law being different mean they wouldn't use 3DS, though? Surely it'd cut out a good amount of fraud along with the real-time monitoring. I understand that US consumers don't have a stake in this, but can't all the banks just agree to enforce 3DS? I can't imagine Americans are going to stop using their cards over a small amount of added friction.

Because adding friction will deter many impulse purchases. Americans use credit cards constantly. The equilibrium would be perturbed in a way very much not advantageous for the credit card issuers if consumers became more cautious about using credit cards.

It’s the same reason credit card issuers are willing to pay Apple a few basis points to participate in Apple Pay: reducing friction has a non-linear impact on propensity to pay.


> can't all the banks just agree to enforce 3DS

They could, but it's one of those things that really only works if everybody joins. Because 3DS is rarely used right now, a portion of merchants don't even support it, so if you start enforcing it as a single bank, your customers will start complaining that their card doesn't work. The banking industry in the US is also more decentralized than in the EU, so getting everybody to join in simultaneously is hard.

The window of opportunity for 3DS has also more or less passed; the industry is moving on to the next generation of tech (wallets/tokenization), which should be both easier to use and more secure.


The problem with this is that now you are solely responsible for managing all of the changes, all of the variation of life. Chrome changed the shape of this API? You are responsible for finding that and updating your code. Morocco changed when its daylight saving time takes effect? Now you need to update your date/time handling code. There are a lot of these things that we take for granted because our libraries handle them for us, and with no dependencies you have to do all the work. Not a big deal when making a double-pendulum simulator for your daughter to play with that will stop mattering next week, but it is a concern for a company trying to build something that can run indefinitely into the future.

> you are responsible for finding it and updating it.

vs the dependency broke something and now you're responsible for working around someone else's broken code.

Honestly, I've seen much more of the latter. Especially nowadays, with every single dependency thinking it's a fully fledged OS because an agent can add 1000 features/bugs in no time. Picking the right dependency, maintained by a sane maintainer, is like digging potatoes in a minefield.


As a general principle, I agree with you that large companies and teams benefit from common runtimes (i.e. libraries and frameworks).

I don't buy the notion of things breaking down over time, though. For "first-party" code that sticks to the HTML and CSS standards, and Stage 4 / finished ECMAScript standards, the web is an absurdly stable platform.

It certainly used to be that we had to do all sorts of weird vendor hacks because nobody agreed on anything- supporting IE6 and 7 was a nightmare, and BlackBerry's browser was awful- but those days are largely behind us, unless you're doing some cutting-edge, Chrome-only, early-days proposed stuff, or a browser-specific extension, or something else that isn't a polished standard.

Even with timezone changes, you're better off using the system's information with Intl.DateTimeFormat.
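For example, a minimal sketch of that approach- the runtime ships IANA time zone data, so formatting a single instant in different zones needs no date library (zone names and the sample date here are just illustrative):

```javascript
// Format the same instant in two time zones using the runtime's
// bundled IANA tz data -- no third-party date library needed.
const instant = new Date(Date.UTC(2024, 0, 15, 12, 0, 0));

const fmt = (tz) =>
  new Intl.DateTimeFormat("en-US", {
    timeZone: tz,
    dateStyle: "medium",
    timeStyle: "long",
  }).format(instant);

console.log(fmt("UTC"));
console.log(fmt("Africa/Casablanca")); // Morocco's DST rules come from the tz database
```

When the tz database is updated (say, Morocco changes its rules again), the fix arrives with a runtime update rather than a code change on your end.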


I don’t know where the fear of breaking changes in deps comes from, but most good projects try to keep their API stable. Even fast-evolving platforms like the Android and iOS SDKs do.

It comes from trying to use Python apps you found on GitHub before uv tool install was a thing

In the Python ecosystem, making software with reproducibility in mind was a thing before the advent of uv. Some earlier options include Pipenv and Poetry. I was already using Pipenv for that some six years ago, and later switched to Poetry.

I think devs who didn't care back then also won't care in the future, and will still be running around with a requirements.txt file in 10 years.


In companies, though, you often wind up with three+ massive dependency trees in your software to handle the same problem because people went and added the new hotness without deprecating the old stuff. You also find dependencies that are much heavier than necessary for the actual task at hand because the software developer was also solving the problem of needing that dependency on their resume. And then there's just the relatively tiny dependencies for fairly solved problems, like leftpad, which don't really require deps, and you can accept the maintenance burden, because not everything is an abstraction layer over chrome.

So if you just need to do something simple like fire off a compute heavy background task and then get a result when it is done, you should probably just roll your own implementation on top of the threading API in your language. That'll probably be very stable. You don't need a massive background task orchestration framework.

People might object that the frameworks will handle edge cases that you've never thought of, but I've actually found in enterprise settings that small custom implementations--if you actually keep them small and focused--can cover more of the edge cases. And the big frameworks often engineer their own brittle edge cases due to concerns that you just don't have.

So anyway, it isn't as simple as "dependencies are bad" or "dependencies are good"; every dependency has a cost/benefit analysis that needs to go along with it. And in an enterprise, I'd argue that if you audit the existing dependencies you will find way too many that should be removed or consolidated, because they were added for the speed of initial delivery and greenfielding. Eventually, when you accumulate way too many of those dependencies, the exposure to the supply chains, the need to keep them updated, the need to track CVEs in those deps, and the need to fix code to use updated versions of those dependencies, along with not having the direct ability to bugfix them, all combine to produce an ongoing tax of either continual maintenance or tech debt that will eventually bite you hard.


> The problem with this is now you are solely responsible for managing all of the changes

We seem to greatly overestimate the amount of code needed to do something.

For example, there are billions of lines of code between me pressing a key and you seeing what I wrote. But if we were to make a special program that communicates via IPv6 and ICMP, written for the Hazard3 core in the RP2350 with a W5500 ethernet breakout, the whole thing- including the C compiler to compile your code (which could very well outperform gcc -O3), register allocation and all, barebones SPI drivers, and a small preemptive OS- would be 5-6k lines of code.

So, it is not unreasonable to manage all of those changes.


I think we are stuck with LLMs. They are already at a place where they can find these issues. They can access RSS feeds. You could cron an agent to check whether you are pwned as frequently as you want, at almost zero cost. And when you do ingest libraries, keep a list of them and of which versions you're on- that helps as well.

The US Government made ASML dominant when it allowed it to acquire Cymer, Inc.- the US company that was, at the time, the best in the world at EUV. Merging Cymer's EUV work with ASML's meticulous perfection and delivery of the entire rest of the system is what made them the only vendor that matters for semiconductor manufacturers.

This acquisition is also what gives the US Government the ability to veto customers of ASML even today- this is why Chinese semiconductor manufacturing is so far behind, because the USG controls who can access ASML's EUV work.


TIL! I had assumed that veto was purely diplomatic muscle.

That seems like a potentially very cunning soft takeover, in that case.

Still, I think onshoring is strategically wise in a world where the US is actively antagonizing the EU.


Over the past 17 years I've lived in three houses (in the suburbs of two different cities in two different states- one East Coast, one land-locked) and an apartment in NYC (obviously also East Coast). In all of the East Coast spots (urban and suburban) there was a mosque closer than the nearest McDonald's. For the land-locked state suburb the mosque was 2 miles away and the nearest McDonald's was 0.75 miles away.

I'm not selecting these houses to be convenient to the mosque- I've never been in any of those mosques. It's just an artifact of living in the sort of neighborhoods that I like. I tend to agree that it isn't urban/rural per se, so much as it's Openness from the Big Five personality traits. Which, at least in the US, tends to be correlated with a lot of other things (college education, density of living, etc.).


IRT the "college education" point: Collin County is more highly educated than most of the country, demographically speaking. More than 56% have a bachelor's degree or higher, compared to NYC at ~42%. For reference, Santa Clara County in California is also at 56%, so Collin County is about as educated as the home of Apple, Google, and Facebook, at least as far as that statistic goes.

https://www.census.gov/quickfacts/fact/table/collincountytex...

Many people I've heard say extremely Islamophobic things have master's degrees or higher. I'd be interested in seeing real statistics on it.


It turns out there is extensive research on this, and you are mistaken. Most politicians actually do try to deliver on their promises. They might get stopped, but they try.

https://strathprints.strath.ac.uk/59403/1/Thomson_etal_AJPS_... for one easy-to-find example of the literature on this.

Most of the research on this was done before Trump entered office. Trump is a wildly unusual political leader, who is significantly more corrupt than other politicians, promises random things and then fails to deliver them, and generally breaks all of the rules that politicians follow- this is what his supporters describe as his "authenticity", that he "tells it like it is". The more people believe, incorrectly, that "all politicians are corrupt" and "no politicians deliver on their promises" the more likely they are to accept Trump- who again is an extreme outlier among American politicians.

Your cynicism actually ends up ruining the country and makes it more likely that we have bad government.


the reason we have bad government is solely that we're stuck in two silos

want real change? vote third party. the problem is the same as that red/blue button thought experiment recently posted to HN. one of the hardest things to do is to get 50% of people to agree with you, so everyone keeps hitting the red button (voting D or R) and nothing changes


When the terms were originally coined (circa 1950, around the Korean War), the First World was the US-aligned bloc of countries, the Second World was the USSR-aligned bloc, and the Third World was all of the countries not part of either. Egypt, India, Yugoslavia, Ghana, and Indonesia viewed themselves as leaders of this broader political movement (the Non-Aligned Movement) during the 1960's and 1970's.

Even into the 1960's there were few industrialized nations outside of those two main blocs, so "Third World" quickly lost its explicitly political meaning and became more a description of a country's level of capital investment and worker productivity.


The S&P 500 had a rule from 2017 to 2023 that prevented companies with dual classes of shares (the sort that allow founders to maintain control- like what GOOG and META did) from ever being in the index if they went public after the rule was instituted. To be clear, META and GOOG were both in the index; the rule was meant to prevent new companies from coming along and doing the same thing. (I think it was related to SNAP going public?)

They removed it largely because investors wanted higher returns and the tech companies that had such dual classes (1) were doing really well, so the S&P ended up caving on the rule.

1: Perennial hot button around here Palantir did this in a more extreme fashion than most. The three founders' F class shares will always hold 49.9999% of the votes, and the early investors' B class shares have 10 votes each, as compared to the publicly traded A class shares' 1 vote each.


There is a lot resting on Starlink: 11 gigadollars in direct revenue, fully 60% of SpaceX's total revenue of 18 gigadollars. It's hard to see how that level of revenue can sustain a 1 teradollar valuation.

Like, TSLA had 94 gigadollars in revenue last year, and it's a 1.2 teradollar company, and most outside analysts are frankly skeptical of that multiple. SpaceX is trying to get a similar valuation on a fifth of that revenue.
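To put numbers on the implied multiples- a back-of-the-envelope sketch using only the revenue and valuation figures above, ignoring growth, margins, and everything else that goes into a real valuation:

```javascript
// Implied price-to-revenue multiple: valuation / annual revenue.
const multiple = (valuation, revenue) => valuation / revenue;

// SpaceX: ~1 teradollar valuation on 18 gigadollars of revenue.
console.log(multiple(1_000e9, 18e9).toFixed(1)); // "55.6"

// TSLA: ~1.2 teradollar valuation on 94 gigadollars of revenue.
console.log(multiple(1_200e9, 94e9).toFixed(1)); // "12.8"
```

So the asked-for SpaceX multiple is roughly 4x the Tesla multiple that analysts already consider stretched.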


It will be interesting to see if Claude Code gets a lot better with a complete set of all Jira tickets, along with the integration to see the associated actual PRs and the linking of issues... it would depend on who owns the Atlassian data, of course. But that could be the best remaining set of programming data out there, if you had the complete Atlassian cloud-hosted archives.

Jet fuel in particular is more complicated than that. At the moment, most of the shipping passing through the Straits is coming to and from Iran. I believe only a few ships from other countries have transited, none of them tankers- the GCC countries are not yet willing to acknowledge Iran's control over the Straits, since doing so would be to admit that this war was a giant catastrophe.

Iran, for sanctions-related reasons, is unable to make international-grade jet fuel. Only the GCC countries can (in the Persian Gulf). And so not a single tanker of jet fuel has transited the Straits of Hormuz toward Europe since this incredibly dumb war started. Iran does export raw crude to China, which refines it into international-grade jet fuel, and China is getting some shipments from Iran; but China's raw crude imports have dropped, and they have responded by ending jet fuel exports to the rest of Asia.

My understanding is that Europe can produce jet-fuel from the North Sea deposits, but they rely on imports because it is not sufficient for their consumption (My memory is that 'domestic production' was on the order of 60% of consumption). So as long as the Straits are blocked to GCC traffic there will be problems for European commercial aviation, getting worse over time.


Is there a cite for that explanation? That doesn't sound right to me. My understanding is that almost all Hormuz oil is crude, the refineries are elsewhere.


Which part? That GCC countries export refined Jet-A? Kuwait was responsible for 15% of seaborne jet fuel exports in 2025 (1), something like 10% of the world's total exports. In 2024, Bahrain exported 20 million barrels of Jet-A (2). South Korea, #1 in the world, exported 90 million barrels in 2025, all by sea (3). So Bahrain isn't a dominant player, but it's still an important amount.

Obviously most of ROK's jet fuel came from crude imported into South Korea and refined for re-export elsewhere, but the GCC has spent the last few decades trying to move up the petrochemical value chain and capture more of the value themselves.

1: https://www.vortexa.com/insights/jet-fuel-margins-hit-record...

2: https://www.data.gov.bh/explore/dataset/petroleum-products-e... (note that Bahrain's data explorer doesn't cover 2025, just 2024)

3: https://koreajoongangdaily.joins.com/news/2026-04-07/busines...


Yeah, those numbers seem cherry-picked. The fact that refineries exist in the Gulf doesn't mean that refinery capacity doesn't exist elsewhere to handle the crude that is transiting the strait. It doesn't mean it does, either, but I'd want to see a deeper analysis than any of the stuff you're linking.

Supply chain management is hard, but it's not nearly as fragile as people tend to fool themselves into thinking. How many chip or egg shortages have we lived through which showed up as pretty routine price disruption? And that's especially true in areas like fuel, which everyone recognizes as national security issues worthy of careful study and planning.

My gut says that's bunk, basically. Europe isn't running out of fuel.

