
> After spending an additional $69 million and years of reverse engineering, they finally produced viable Fogbank. Then discovered the new batch was too pure. The original had contained an unintentional impurity that was critical to its function.

Same thing that happened to the unfortunate Dr. Jekyll!


There is another critique that is not specific to AI but I think is bigger than all of these: that a relatively small number of large companies, and the small number of very wealthy people who control those companies, have an outsize influence on many aspects of society. AI is the poster child for this right now, but tech companies in general are also reviled, and more generally all kinds of companies (media, fossil fuels, etc.) are targets of opprobrium.

From this perspective, the main irritation of AI is that it is the biggest, most intrusive case of "some rich guy is messing with my life". This is driven largely by the willingness of a small number of rich people to lose large amounts of money shoving AI down everyone's throats, in the hope that doing so will eventually let them recoup those losses.

I believe a significant amount of AI criticism is really about this, and that means we need to resolve the overall issues of wealth inequality and economic skewing. People would be much less angry about AI if its development and ownership were more diffuse, and if the patterns of its use were more directly connected to its current observable abilities, rather than based on what some group of insiders thinks about how much AI companies' stock may go up in the future.


> If Altman, Amodei, and their Big Tech peers want to rebuild public trust and create a genuine technology that benefits the public, then the path forward isn’t another white paper or postulating about the existential risks of their technology. It’s sustained, verifiable action: genuine transparency about what their products can do, a willingness to accept meaningful regulation and responsibility even at financial cost, and real democratic input from communities on the growth of data centers.

They need to accept far more than that. They need to accept that they may not be able to "create a genuine technology that benefits the public" at all, and that they therefore may be required to stop completely and totally dissolve all their operations if it turns out that is what is best.


An article with this title needs only three words in the body: Too much money.

> And they put it succinctly: buying from a small innovative company is brave while buying from a big, well recognised name is an insurance policy and the risk-averse buyer must have the insurance.

As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.


But if that is stuff the bottom 90% wouldn't be competing for anyway, it's even worse in a way, because it means that portion of the market is focused entirely on things out of reach for that 90%. If there are people out there who could be making tennis shoes but are instead making luxury cars, that's a problem.

Another case where "starting" is the ha-ha-sob part. There's never been anything good about Palantir.

As usual, "empowering" the FTC to issue fines, or even allowing private suits, is ineffective on its own. The fines need to be required, their levels set by law in a manner proportional to the size of the companies involved, and it needs to be made clear that there is no statute of limitations and that all growth built on ill-gotten gains from past surveillance will (not can) be rolled back when the hammer finally drops. That means, e.g., if you start using surveillance pricing in 2016 and you get caught for it in 2026, everything your company (and its executives and board members) gained in the interim will be rolled back. Current conceptions of punishment for these types of things are simply way too low. The entire tree that has grown from these kinds of activities must be pulled out from the root to adequately deter potential malefactors.

Just eyeballing those graphs, the striking thing is the difference from the beginning (early '70s) to the peak in 2012. During that period, reading scores increased by only 8 points while math scores increased by 19.

I'm definitely in the camp that thinks cell phones have something to do with what's happened since 2012. If we start from the iPhone in 2007, it seems plausible to me that a 5-ish year lag is consistent with the time it took for smartphones to rise in popularity and their effects to filter into society. (I got my first smartphone around 2012, by which time pretty much everyone else I knew already had one.) That's at least a gesture toward explaining why there was a peak around 2012. What it doesn't explain is why the ascent to that peak was so much steeper in math than in reading.


One thing I notice is there seem to be far more students who finish elementary school unable to comfortably do basic math in their head (stuff like 17+36 or 14*4, or even basic multiplication tables like 3*8).
