We mostly pay for things online with credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. That is really bad for a porn site's ability to keep accepting credit card payments, and therefore to keep making money.
An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win, win.
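To make the mechanism concrete, here is a purely hypothetical sketch (there is no standard header; the name "Sec-Age-Verified" is invented here for illustration) of a site gating card payments on such a signal:

```python
# Hypothetical sketch only: "Sec-Age-Verified" is an invented
# header name, not a real standard.
def allow_paid_checkout(headers: dict) -> bool:
    """Route to card payment only when an adult age signal is present,
    reducing the odds of a later chargeback."""
    return headers.get("Sec-Age-Verified") == "adult"

print(allow_paid_checkout({"Sec-Age-Verified": "adult"}))  # True
print(allow_paid_checkout({}))  # False: send to an age check, not payment
```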
But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If governments try to regulate them, they'll engage in malicious compliance: doing the minimum needed to avoid being sued, in a way that still lets them reach customers.
For example, Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including the porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn site's position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.
If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.
If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.
But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.
There's an anecdote about an attempt to ban porn in Utah that cited a survey finding that most people were opposed to adult content. The defense argued that most people will oppose porn when asked in public, in order to appear moral, even if privately they are avid consumers.
As proof, they provided records of cable TV pay-per-view purchases in Utah. The defense won.
My perspective is that network effects are far less long-lasting than they feel in the moment. For example, if being decent at everything and having a huge community were all that mattered, Perl would still be a big deal. Many similar examples exist.
In the case of Lean, being the first with a huge library really makes a difference. Just as Perl got a big boost from having CPAN. (Which was an imitation of CTAN, except for a programming language instead of TeX.)
When your library is small, this looks like an insurmountable barrier. But you don't have to match that scale before usability factors become more important. And porting mathematical libraries is a good target for LLMs. The source is verified, the target is verifiable, and the reasoning path generally ports.
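As a toy illustration of what "the reasoning path ports" means (a made-up example of mine, not from the post), here is the kind of small lemma whose statement and proof translate almost mechanically between proof assistants, written in Lean 4:

```lean
-- A minimal sketch: a lemma whose statement and proof carry over
-- almost mechanically between proof assistants.
-- `Nat.add_comm` is part of Lean 4's core library.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The equivalent statement exists in Agda, Coq/Rocq, and Isabelle, which is what makes mechanical translation plausible.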
The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect. Because if their library can be ported to your platform, then your proof can probably be ported to their platform as well!
> The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect
This is a nice thought, but with Agda in particular it's just not true. It's one of the few languages I've seen that is so underrepresented in training data that the models simply give up. Frontier LLMs (Codex, Claude Code) reliably say "I realized I can't do this." after wasting lots of tokens going back and forth.
In fact, I think this positions Agda uniquely poorly.
This. LLMs are no good at stuff they haven't seen a lot of training data for. (Saying this as a SystemVerilog programmer who also does some C coding for interfacing with SystemVerilog; neither language has a lot of open source code to show LLMs.)
> except for a programming language instead of TeX
TeX is a programming language. It's not a very good programming language, but people have implemented floating-point math [0], regular expressions [1], an Arduino emulator [2], and terminal input/output [3] in pure TeX. The last two examples are obscure, but the first two examples are used (internally) by the vast majority of modern LaTeX documents.
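For a sense of what programming in TeX looks like, here is a toy example of my own (not taken from any of the cited projects) that computes 5! using nothing but plain TeX count registers and the \loop macro:

```tex
% Toy example: compute 5! with plain TeX counters and \loop.
\newcount\n \newcount\result
\result=1 \n=1
\loop
  \multiply\result by \n
  \advance\n by 1
\ifnum\n<6 \repeat
The value of $5!$ is \the\result. % typesets "120"
\bye
```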
The problem is that the top researchers in the field have spent their lives devoted to amyloid beta. They may well have helped direct hundreds of millions in grants to this line of research.
The idea that they have blocked the treatment of Alzheimer's, rather than helped it, is very, very painful. This is perfect for creating cognitive dissonance.
And so, no matter what the evidence, they remain committed to the conclusion.
As a result, the latest Alzheimer's drug to enter phase 3 FDA trials, trontinemab, once again targets amyloid.
Meanwhile, a relative drop in the bucket goes to, say, the infection hypothesis. This despite the fact that the only intervention shown to reduce the incidence of Alzheimer's is the shingles vaccine.
Yes, there is overwhelming evidence of climate change. And that we are causing it.
However, research into what we humans can do to ameliorate it is verboten. For example, https://www.scientificamerican.com/article/fertilizing-ocean... describes an actual experiment that found a low-cost way to both remove lots of CO2 and improve a fishery. But that line of research has been shut down.
Likewise, research into the current impact is only allowed if it agrees with what is politically correct. For example, many researchers have found that the current severe California forest fires are mostly due to poor past policies, which have resulted in very dense forests with a large fuel load. But research that stresses the impact of climate change is easier to publish, and this shifts the apparent consensus on the causes of things like the major 2025 fires in the Los Angeles area.
> The project is also unlikely to bury much if any carbon dioxide for one simple reason: metabolism. As other iron fertilization experiments have shown, it is relatively easy to get plankton to bloom, but it is harder for that bloom to sink to the bottom of the ocean, where it takes CO2 with it.
Do you really think that adding iron releases net carbon? That would be hard to explain chemically.
The worst-case scenario laid out in that article is that most of the carbon absorbed was later re-released. That's net zero carbon, not a net carbon emitter.
I've seen other reports of that exact experiment that estimated a significant net carbon sink. The actual experiment failed to make the measurements that would let us know which actually happened.
Regardless, we've certainly demonstrated that, at least sometimes, there is significant net carbon capture, at low cost. Given the certainty of global damage at present, I believe that this justifies continued experimentation. Even if it means accepting possible risk to local ecosystems. The local ecosystem can generally recover. The planet, not so much.
You think the chemical compounds were just magicked into the ocean with no carbon intensity. It literally says carbon will not be absorbed and that geoengineering is not the goal.
I would expect that of any sample of data whose distribution looks roughly the same across many scales. Which will be true of many human-curated corpora. But it will also be true of natural data, such as the lengths of random rivers or the brightness of random stars.
The law was first discovered because books of logarithm tables tended to wear out at the front first. That turned out to be because most numbers being looked up had a small leading digit, and therefore the pages at the front were consulted more often.
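As a quick illustration (a hypothetical simulation of my own, not from the thread), sampling numbers log-uniformly across six orders of magnitude and tallying first digits reproduces Benford's law, P(d) = log10(1 + 1/d):

```python
# Sketch: data spread roughly evenly across many scales produces
# leading-digit frequencies close to Benford's law.
import math
import random

random.seed(0)

# Log-uniform samples spanning six orders of magnitude (1 to 10^6).
samples = [10 ** random.uniform(0, 6) for _ in range(100_000)]

counts = {d: 0 for d in range(1, 10)}
for x in samples:
    leading = int(f"{x:.6e}"[0])  # first significant digit
    counts[leading] += 1

for d in range(1, 10):
    observed = counts[d] / len(samples)
    expected = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed:.3f}, Benford {expected:.3f}")
```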
The exact phrase appears in the title. There is a title length limit. In this case, I don't think that it is wrong to pick the most interesting piece of that title that fits in the limit.
Unfortunately, while mathematics should be about a human element, research has fragmented into little cliques of up to a dozen or so people whose work is essentially unintelligible outside of the clique. That wasn't historically the culture, but the trend toward it accelerated after WWII.
Hardy's curse is one of the underlying causes of this. The resulting extreme social isolation was why I chose to leave mathematics. My inclination was very much to try to explain and popularize. See, for example, the discussion at https://news.ycombinator.com/item?id=44269822. However, such an inclination has been actively punished for decades within academia. The key to tenure is sufficient work within a clique to get solid recommendations from the most prominent members of that clique.
I agree with the article on how AI will transform and destroy what's left of mathematics. But I see it as the logical conclusion to what the fragmentation of mathematics has already done.
I do believe that you are missing a healthy dose of sarcasm. Such as faking downloads to give yourself inflated statistics so that your employer will trust untested and AI-written garbage.
That said, there really are good use cases for readable formatted files. For example, configuration files that are checked into source control are far more trackable than a SQLite database used for the same purpose. For another example, flat files are convenient for a lot of data-transfer purposes.
But for updateable data? You need a really good reason not to simply use a database. I've encountered such edge cases. But I've encountered a lot more people who thought they had an edge case than people who really did.
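To make the split concrete, here is a hypothetical Python sketch (the file names and schema are mine, purely for illustration): a read-mostly config lives in a version-controlled text file, while data that actually gets updated goes through SQLite, which handles locking and atomicity for you:

```python
# Hypothetical sketch: text files for read-mostly config,
# SQLite for updateable data.
import json
import sqlite3

# Read-mostly configuration: a plain text file, diffable and
# trackable in source control. (File names here are made up.)
with open("app_config.json", "w") as f:
    json.dump({"log_level": "info"}, f, indent=2)
with open("app_config.json") as f:
    config = json.load(f)

# Updateable data: let SQLite handle locking, atomicity, and
# partial-write safety instead of reinventing them over flat files.
conn = sqlite3.connect("app_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, value INTEGER)"
)
conn.execute(
    "INSERT INTO counters VALUES ('downloads', 1) "
    "ON CONFLICT(name) DO UPDATE SET value = value + 1"
)
conn.commit()
conn.close()
```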
For something completely different, try learning Mandarin.