Hacker News | yobbo's comments

> this law will make phones _worse_ for most people

Not really. The battery just needs a connector rather than being soldered in, with nothing else blocking it once the back case is opened. Realistically, a service shop will do the replacement the way watch batteries are typically replaced.


They are conditions to be met. It's not enough to proclaim them as "your process" and expect results.

When playing piano, the condition you are measured by is acoustic harmonies in the air, not finger movements. The only reasonable advice is either practice more or give up. If you are tone-deaf, it's not reasonable to expect you will learn to play the piano.


Deepmind was not a viable business at the time Google acquired it. Today, it is probably even less viable. It functions as an R&D lab for Google, which has its own products and datacenters.

Start by shifting taxation from worker incomes to corporate incomes?

What do you mean?

"What you do" seems more sympathetic than "who you are" and "who you know". American culture might be more meritocratic at a basic level.

"The only reason people disagree with me is because they are emotionally deficient."


It goes from "LLMs can do everything as well as or better than a human" to "quality doesn't really matter" really fast.


So maybe a charitable interpretation could be that quality does not matter, because LLMs can deal with any complexity that comes with the reduction in quality...


It also went real fast from "GPT hallucinated a library, literally useless" to "this agent has created this entire service up to spec, no notes".


It seems that we are getting bitten by the law that says things that can be measured trump things that cannot be.

How fast it was to create an initial version of a piece of software can be easily measured.

But how efficient it is, how easy it is to make changes to it, how easy it is to debug, how easy it is to extend in the direction the domain requires... none of these can be easily measured or quantified, yet they are ten times more important than that initial creation time. For software that has to run and be maintained for decades, delivering value all that time, it does not really matter whether the initial version was created in 5 minutes or 1 month, as long as the 5-minute version does not detract from all those non-measurable, non-marketable traits of the software.

It is like how camera marketing was mostly about the megapixel count, instead of something vastly more important like low-light performance, dynamic range, or fast autofocus. The lowest common denominator of the market wouldn't grasp the relevance of those and wouldn't act on it. So it was all about megapixels, but at least that didn't have many negative consequences, unlike the marketing around AI...


I said nothing about speed, I said to spec. Speed is a welcome side effect.


As opposed to me, who is perfectly rational.


> No one has ever made a purchasing decision based on how good your code is.

I routinely close tabs when I sense that low-quality code is wasting time and resources, including e-commerce sites. Amazon randomly cancelled my account so I will never shop from them. I try to only buy computers and electronics with confirmed good drivers. Etc.


> We can assume they are doing so at a profit

This is false. We may assume it's the most efficient way of generating revenue given their GPUs, but their overall profitability is just a guess. They would still have an incentive to run hardware at maximum, even when it's uncertain whether they will eventually recoup costs.

> a world where those API prices aren't profitable

A lab with employees and models in training has other costs than the operating expenses of a GPU farm.


Why would a company sell inference on OpenRouter if it weren't profitable? Except for Groq/Cerebras and a few other hardware companies looking to showcase their new chips.

If they're losing money and have no VC backing, they'd just turn off the lights.


The actual inference is operated at a 95%+ margin.
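As a rough illustration of what a 95%+ gross margin on inference would mean (the per-token figures here are made-up assumptions, not disclosed numbers):

```python
# Hypothetical figures to illustrate a 95%+ gross margin on serving tokens.
# Neither number is a disclosed cost or price.
price_per_million_tokens = 10.00  # what the API charges (assumed)
cost_per_million_tokens = 0.45    # GPU + power cost to serve (assumed)

margin = (price_per_million_tokens - cost_per_million_tokens) / price_per_million_tokens
print(f"{margin:.1%}")  # gross margin on the tokens actually served → 95.5%
```

Note this is only the margin on serving; as the comment below points out, it excludes training runs and salaries.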


> Isn't that how LLM models are trained right now

It's neither how computer chess works nor how LLMs are trained.

Computer chess uses various tricks to prune the search space of board states, with the search guided by the "value" of each board state. Neural networks can be used (and probably were at the time) to approximate this value, but hand-coded heuristics with learned statistics, or even lookup tables for games smaller than chess, also work.
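The pruning idea can be sketched as alpha-beta search over a value function; this is a toy illustration over a hand-built game tree, not how a real engine is structured:

```python
# Toy alpha-beta search. A leaf's number stands in for the "value" of a board
# state (from an evaluation function); interior nodes are lists of children.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):  # leaf: an evaluated board state
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent will avoid this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:  # prune
                break
        return value

tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, True))  # → 6, without visiting every leaf
```

The pruning means whole subtrees are never evaluated, which is the "trick" that makes deep game search tractable.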

There's no search in LLM training.


Or, are you using Gooby because debugging in the other language is painful and fraught?

Whereas Gooby "just works" once it compiles?

"Fun" for programmers means satisfaction and achievement in my experience.

