Hacker News | kloop's comments

That doesn't seem like the obvious story to me. Normies and even most (non-tech) companies don't really know the difference between chatgpt and claude yet. And they generally don't have opinions or ideas on agentic X.

The obvious story seems to be that OpenAI was reckless and got way ahead of their revenue assuming it would keep hockey-sticking


> No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies.

This doesn't tell me anything. Two devs who cared and didn't have a bunch of pointless meetings could already, and regularly did, scoop the big tech teams.

There were always two ways to complete a ticket: one that did what the stakeholder wanted, and one that did what the ticket said.

But devs that care about the product and what the stakeholders need are rare, and finding one of them was already a significant bottleneck on most projects.

AI might be an accelerator, but we've yet to see whether it optimizes the part that was actually the bottleneck.


To be blunt, those freelancers wouldn't be doing this if they had better options

Every time one of these articles comes up, you can recognize that Silicon Valley is treating these people badly, but you should remember that everyone else is treating them worse.


That shouldn't be viewed in isolation. A major root cause is essentially overproduction of academics downstream from the Cold War, and obviously the private sector is not to blame for that.

But you can't ignore how much modern Big Tech has sucked away from academia compared to the tech companies of the Cold War era. Microsoft Research and Google Research have some impressive folks, but even combined they are a scientific pittance compared to the might of Bell Labs, and there is far more interference from the business side. This despite the fact that the executives of those companies are vastly wealthier than anyone from Bell Labs in the 20th century, even adjusting for inflation.

And of course it's not just the executives: every 7-figure Google software engineer should get a >$100k pay cut, with that money going to a STEM PhD to pursue nonprofit research at Google Labs. Believe it or not, $100k is still pretty competitive for a young PhD mathematician (similar to an assistant professor at a selective state school), even if it's chump change for a guy who fine-tunes AdSense.


Describing it as "overproduction of academics" is kind of begging the question, though: is it not at least as much "deprioritization of basic research and education"?

It's not like the current demand for scientists is somehow a completely natural value, arrived at objectively and with no human biases involved.

And the private sector is heavily to blame for that. In ways that you even describe, as well as others (as another commenter noted, regulatory capture is one).


Peter Turchin’s theory of “elite overproduction” suggests this is a cause of social instability and revolutions.


> To be blunt, those freelancers wouldn't be doing this if they had better options

Correct, this is what the article points out.

Their options were squashed when SV was praising DOGE and the cuts to national research grants based on keywords like “inequalities”.

Nobody had the time to check that mathematicians also use the term.

We wrecked our research and the vultures got cheap labor to put lipstick on their slop machines.


A PhD was always a fool's errand. There are only so many tenured professorships, and the people holding them never seem to retire, because obviously they like being paid good money and being basically able to do what they want.


The problem is much older than that. Academia didn’t start overproducing PhD’s and exploiting grad students and adjuncts in 2025.


Yes, it’s much better to spend “$400,000 for a Research Project on Whether Ducks Enjoy Classical Music”, just to ensure not a single grant went unfulfilled.

We have a $1.78T deficit. The ducks and the mathematicians will need to take a cut at this point.


> We have a $1.78T deficit

The fatal assumptions many people thinking about government spending from the outside make are that

a) money is limited

and

b) money is redistributed (~to a cause of their choice) after funding for something else gets cut


Money isn’t limited, in the sense that we can always print more, but doing so comes at an inflationary cost: see the last 5 years. MMT doesn’t work.

Since money is effectively limited, and we’re spending at a deficit, money shouldn’t be redistributed to another bad cause after something gets cut. Unfortunately, all too often it is.

One side cuts taxes to spend on blowing up the world’s energy market, another side raises taxes to buy votes among the people who pay 0 in taxes (or fund a study on if ducks like Mozart). They’re both wrong, and people are too blinded by the sports-team nature of politics to recognize this.


If the economy still has a pulse today it’s because of math. Literally linear algebra is what drives LLMs.

On the question of music and ducks: music seems like something humans universally get, but it is not clear whether animals do. Why should we not research it? What if music is the secret to human consciousness?


Yes, let’s pay down the deficit by cutting funding to the sciences. While the latest war is running at ~1 billion a day (we’re in day 48 btw).

https://iran-cost-ticker.com/


Cut the duck study and avoid blowing up the world’s energy markets for $1B a day. Nowhere in my comment did I argue for another trip to the sandbox.

The US had a balanced budget as little as 30 years ago. This current state of fiscal profligacy isn’t inevitable, except for the fact that both parties realized they could buy votes with your children’s financial wellbeing.


This assumes regulatory capture is not a thing.


Doesn’t make it ok.

I do wonder how long this foundation has been laid, where graduate students may be conditioned to being exploited by colleges.


Academia already has a well-established structure of exploitation, with menial work falling to grads and some undergrads, while credit for it is captured higher up the tree.


Some institutions, after all, were created for the establishment and preservation of a privileged class.


Someone: it is bad that people are being treated poorly. We should effect changes such that they are no longer treated poorly.

Resident libertarian moron: uuuuhhhhhh have you considered that they voluntarily consented to being treated poorly? Actually this is the least poorly they could possibly be treated.


I’m curious what you are proposing exactly. I see articles even from year 2000 about PhD lifestyles being terrible during and after school.


Also, if they're solving problems to help LLM training in their domain, that's actually a pretty useful contribution to science - and definitely more directly useful than the work that dominates actual research, i.e. chasing grants instead of researching.


"that's actually pretty useful contribution to science"

Why? Serious question. Surely the only people using the LLM for such specific STEM domains are the exact same people who are "chasing grants instead of researching." Certainly I can see how training an LLM on this stuff can help automate the process of grant-chasing, and maybe OpenAI can expand their homework-cheating business to graduate schools. But I do not see how this stuff helps honest researchers, except a bit around the margins (e.g. perhaps Claude isn't so good at the Perl used in bioinformatics; that's a use case justifying some RLHF from a PhD).

It really seems like the main utility of this stuff is getting a higher score on Humanity's Last Exam and showing the customers/investors that actually Opus 4.9 is 2% smarter than GPT 5.5. Separately there are AlphaProof/etc-style LLMs for solving real research problems in math and CS, but those techniques don't even work for theoretical physics, let alone biology.


LLMs are actively used in research all the time, they help with finding and processing existing knowledge, forming and testing hypotheses, analyzing data, writing software, brainstorming, and countless other tasks that form actual research work, as distinct from "grant chasing" and "publishing papers", in which they help, too.

(I mean, OpenAI released GPT-Rosalind just yesterday, and - surprise - it's not meant for chasing grants.)

It's not 2023 anymore, it's 2026. LLMs are good enough to be useful. They have been for at least a year, and they keep getting better. You need to be living under a rock for the past few years to not notice that.


This doesn't even slightly answer my question. The incredibly frustrating thing about the AI discussion is the refusal to consider actual evidence because of shifting targets. In 2026 there is evidence that 2024 LLMs did enormous damage to scientific research in 2025: hallucinated citations, hallucinated experiments, an onslaught of unreadable prose, etc etc. But we can't talk about that, can we? That's old hat, everybody knows 2024 LLMs were stupid and useless. Instead we have to discuss our vibes about 2026 LLMs, and maybe in 2028 we'll be able to tell whether or not our vibes were correct.


LLMs couldn't do any damage with hallucinated citations - on the contrary, this is only ever a problem for people so clueless and uncaring that they didn't even bother reading what the LLM wrote for them. Hallucinated citations are evidence of fraud, or of a level of carelessness unbecoming a scientist, or any professional for that matter.


"LLMs couldn't do any damage with hallucinated citations"

If you're saying stuff like this with a straight face then you are clearly not a scientist and you don't know what you're talking about. In 2021 there were maybe 10 papers with fictional citations. Even the publication mills at least linked to other junk papers. Now there are hundreds of thousands of papers with dishonest and useless bibliographies. This is because LLMs are an unbeatable force multiplier for dishonest and useless scientific work.

I am sure some legitimate academics are getting real use out of them. I am also sure that the net effect of LLMs on science is enormously negative, and it will take decades to fix the mess.


Unlike the industry, science has actual standards of conduct, which puts it in a unique position to fix it quickly - if only the journals were doing the one job they have.

Hallucinated citations == strong evidence of scientific fraud. Name and shame and don't publish.

Alas, what's happening only shows that the emperor has no clothes. If anyone slept through the replication crisis, they surely can't ignore it now. Can't really blame LLMs for lighting up the structural corruption of the scientific process for everyone to see.

If anything, it's doing us a favor - if the journal gatekeeping and peer review can't handle people putting literal, obvious bullshit in their papers today, think what else they aren't handling either, and for how long this has been the case.


Still bad for the scientists. They get little money and zero recognition.


Right. They get to contribute something useful and be paid for it, which is better than nothing, but it's sad that their talent is being wasted.


They already didn’t get money or recognition.


I think they're talking about this bit:

> We finally observed signals of selection for combinations of alleles that today are associated with three correlated behavioural traits: scores on intelligence tests (increasing γ = 0.74 ± 0.12), household income (increasing γ = 1.12 ± 0.12) and years of schooling (increasing γ = 0.63 ± 0.13). These signals are all highly polygenic, and we have to drop 449–1,056 loci for the signals to become non-significant (Extended Data Fig. 10). The signals are largely driven by selection before approximately 2,000 years ago, after which γ tends towards zero

Presumably pressure in different regions led to different combinations of those alleles, which I think they are shorthanding a bit, but the fact that those alleles exist makes blank slate theory a rough assumption at best.


It is important to consider that these alleles are merely correlated with behavior and are not proved to be causal of any behavior. For example, suppose you sample bankers in NYC. You can probably assume you'd get a lot of Semitic genetic background in this dataset. Would you conclude that Jewish people have some inherent gene that draws them to banking like a moth to a lamp? Maybe you would. But the more likely explanation is that people tend to follow the professions of people in their lives who work in them and can tell them about them, and for centuries there were real legal restrictions in many places preventing anyone but Jews from charging interest. So there are pretty good odds today that, as a Jew, you know someone who works in finance and can at least point you toward that field.

So when you select for household income among Western populations, it may be hard to find any signal that is actually causal rather than an artifact of simple demographic and historical factors, given the lack of statistical power you have when sampling rare demographics within a category such as high income.


I haven’t had time to really dig in to the paper but these data (from only one region) are limited in their ability to compare regions, right?

If anything they seem to support homogenization of intellectual capacity/mental health in Eurasia since 2kya.

The methodology, if it holds up, seems to hold a lot of promise for answering questions like this in the future.


No, this paper doesn't seem to talk about regional differences. The implication seems to be that it wouldn't be surprising to find differences between groups that separated more than 2kya, as there were active changes going on before that time - not that it predicts any specific differences.

> If anything they seem to support homogenization of intellectual capacity/mental health in Eurasia since 2kya.

I would be interested in how you came to that conclusion, unless I'm misreading your post and you specifically mean West Eurasia.


I meant West Eurasia, I agree it doesn’t seem to support any broader conclusion.


Yes, they only had data for West Eurasia.

> Just because an allele, SNP, or trait swept into or out of West Eurasia during this time doesn’t mean this happened only in West Eurasia. Researchers can use the new computational methods to look for directional selection in other populations worldwide that have enough ancient DNA sequences and construct a clearer picture of what’s unique to different groups and what generalizes across populations.

> Reich expects that future studies will show that shared selective pressures acted on some of the same core traits across diverse human groups, even as those groups split off and migrated to different parts of the world over tens of thousands of years.

https://hms.harvard.edu/news/massive-ancient-dna-study-revea...


Feature usage can't tell you that.

There's often a checklist of features management has, and meeting that list gets you in the door, but the features often never get used


> how big is the text file? I bet it's a megabyte, isn't it?

The edit in the article says ~1.5kb


Single page on many systems, which makes using mmap() for it even funnier.


Not to mention inefficient in memory use. I would have expected a mention of interning; using string views is fine, but making them views into 4 kB cache pages is not.

Though I believe the “naive” streaming read could very well be superior here.
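For a file this small, that "naive" approach can be sketched as a plain buffered read into a single heap allocation - the file fits in one page regardless, so mmap() buys nothing. This is only an illustrative sketch; the helper name and file path are not from the article:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch of a "naive" streaming read: slurp a small file into one
 * heap allocation instead of mmap()-ing it. For a ~1.5 kB file this
 * touches a single page anyway, and string views into the buffer
 * don't pin a whole mapped page. Names are illustrative. */
static char *slurp(const char *path, size_t *len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return NULL; }
    long n = ftell(f);
    if (n < 0) { fclose(f); return NULL; }
    rewind(f);
    char *buf = malloc((size_t)n + 1);
    if (buf && fread(buf, 1, (size_t)n, f) != (size_t)n) {
        free(buf);
        buf = NULL;
    }
    if (buf) {
        buf[n] = '\0';   /* NUL-terminate so callers can treat it as a C string */
        *len = (size_t)n;
    }
    fclose(f);
    return buf;
}
```

The allocation is sized to the file, so the working set is the data itself rather than page-granular mappings.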


You have made a claim with zero rationale to back it up.

Why shouldn't it look like that? Especially with a law this dumb


It doesn't make strategic sense to make open source projects the enemy of the people. Incentivizing legislation that hurts open source software is not helpful for open source software to thrive.

>Especially with a law this dumb

Allowing software to know whether the user is an adult or a child seems like a useful signal to me, and is not dumb.


You're ignoring the biggest part of SaaS as far as management is concerned.

There's a large, stable entity that management can sue if something goes very wrong.


We don't know. We seem to be hitting diminishing returns, but we don't exactly know where it will stop


Is there a source for this? Scaling laws work, and we have about 4 orders of magnitude of exponential growth before we run into true bottlenecks.


I hate that it's a method. It can get lost in a method chain easily enough during code review.

A function or a keyword would interrupt the chain and make it less tempting.


Well, you can request Clippy to tell you about them. I do that in my hobby projects.

