Hacker News: AlexandrB's comments

> Microsoft spent literal decades rehabilitating their reputation.

"Decades" is a stretch. There was a brief window around the Windows 7/8 era and then, like a dog returning to his vomit, they returned to their user-hostile bullshit. Windows 11 is the culmination of that, but Windows 10 was plenty bad. Remember how Windows 10 made Solitaire a subscription service? Sticking copilot into everything is just more of the same.


The insane practice was allowing "O" and "0" to be used in license plate numbers in the first place. Once you do that, you're stuck dealing with the fallout of trying to distinguish confusing glyphs at distance on a moving vehicle. Many places omit letters that can be confused like this for good reason - e.g. Ontario plates can't have the letters G, I, O, Q, and U.

In the font used for British number plates, O and 0 are identical, and so are 1 and I. This link might work for an example:

https://www.dafont.com/uk-number-plate.font?text=OO01+III

Software that handles number plates needs to take account of this. Not all of it does but the glyphs being identical makes it quite clear where the responsibility lies.


My Ontario green plate starts with GV...

I'll grant you smartphones, but smart TVs usually don't have cameras/microphones. The problem with smart glasses is that they constantly capture video and upload it to $VENDOR like in this case.

It's not about us older folk, but the computing environment itself. We're heading into a world of centralized control where your personal computer is mostly a "thin client" for a bunch of online services. Combine that with omnipresent age/identity verification and you will basically need permission from someone to do anything interesting with a computer. Especially on the internet. This is in contrast to the 1990-2010 era, when software was generally "buy once use forever" (plus kept working regardless of what politician you supported online) and general purpose, open hardware was the norm. You could hook your homemade server up to the internet with a minimum of fuss and start running a service or forum or website or whatever.

There are plenty of bright kids out there, but they're going to be operating from a position of dependence on the OpenAIs, Googles, and Apples of the world if they want to ship a product.


>It's not about us older folk, but the computing environment itself.

Whether you like it or not, the computing environment of today is a product of the labor and financial participation of older folk of the past. Google, Facebook and Microsoft weren't built by Zoomers. Everyone contributed to the current state of things, either directly through labor and finance or indirectly by just using the products.

>We're heading into a world of centralized control where your personal computer is mostly a "thin client" for a bunch of online services

And who built those online services?

People make it sound like Azure, AWS, Facebook, X, etc were just snapped into existence one day by Zuckerberg, Bezos, and Musk, and aren't decades of labor by hundreds of thousands of workers who voluntarily did this in exchange for cash.

> This is in contrast to the 1990-2010 era where software was generally "buy once use forever" and general purpose, open hardware was the norm. You could hook your homemade server up to the internet with a minimum of fuss and start running a service or forum or website or whatever.

I know, but how does reminiscing help here? You can't put the toothpaste back in the tube, the same way you can't turn housing affordability back to what it was in 1995, or bring back those lucrative union manufacturing jobs that could support a family from just bolting bumpers to a Chevy on an assembly line.

Those are all one-time things of the past now, never to return again in the same form. You have to work with the cards you've been dealt today, not moan about how much better the past was since that doesn't help anyone.


Are captchas still effective against modern LLMs?

For this use case it matters a lot less if LLMs can solve it. As long as it costs you more to solve the captcha than it costs your adversary to serve it to you, it is still (somewhat) effective.
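That condition can be sketched numerically. All per-request costs below are invented placeholders for illustration, not measured figures:

```python
# Hypothetical per-request costs in dollars (illustrative, not measured).
COST_TO_SERVE_CAPTCHA = 0.0001  # defender: tiny cost to issue a challenge
COST_TO_SOLVE_CAPTCHA = 0.002   # attacker: LLM/vision inference to solve it


def captcha_is_worthwhile(serve_cost: float, solve_cost: float) -> bool:
    """The captcha still helps if solving it costs the attacker more
    than serving it costs the defender."""
    return solve_cost > serve_cost


print(captcha_is_worthwhile(COST_TO_SERVE_CAPTCHA, COST_TO_SOLVE_CAPTCHA))  # True under these assumptions
```

The point is only the asymmetry: the exact numbers don't matter as long as the inequality holds at scale.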

They are if your goal is to burn the attacker's GPU time: instead of making a hundred requests a second, they're stuck solving captchas.

Claude can burn tokens solving the captcha for me. Double the effect.

> but there is nothing fundamentally preventing us from sending AI into the deep end of under-explored territory and perhaps coming back with something new

What's stopping us is that AI works by manipulating tokens and language but has no connection to reality as it exists. Einstein famously conceived of special relativity by imagining what it would be like to fly alongside a light wave[1]. This is a process that integrates spatial reasoning and imagination informed by living in the real universe where you can see objects moving or waves propagating in a pond. The language only comes later as a means of communicating these intuitions to others.

[1] https://sites.pitt.edu/~jdnorton/Goodies/Chasing_the_light/


Not to be too hippy/dippy, but it's only the average of all human knowledge expressible in language. There is plenty of knowledge not expressible this way - for example, the sequence of muscle contraction orchestrated by your brain that allows you to walk. Likewise with feelings like love, pride, etc. There are words for these things, but they're merely labels on an experience that almost all humans know but the specifics of which can't be written down using text.

Expect to see more of these kinds of announcements as companies need to start showing returns on their AI investments. It's hard to say how subsidized the current AI products are[1], but we're definitely getting a free lunch at VCs' expense for the moment.

[1] Ed Zitron speculates the actual prices with token based billing for heavy users will be something like 10x the subscription price, but this seems high.


Not that I give much credence to anything Zitron says, but the amount of inference you can get on a £200 a month OpenAI or Anthropic subscription is easily an order of magnitude more than what you'd get paying the same amount at subscription rate.

Although I would also point out that OpenAI recently tripled the amount of Codex inference you get per month for £200 (and to head off the suggestion, this is distinct from their current 2x promotion on £100/month plans)


Yeah, I'm sure the numbers are a bit inflated compared to API, but with my Claude $200/month subscription I've supposedly consumed 12,160,410,828 tokens in April for a cost of $22,733.03.

Is that taking cache hits into account?

Cache create is 202,746,985 and cache read is 11,998,411,722 from claude-code-monitor

I make that $7000 :o
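Most of the gap between the headline figure and the lower estimate comes from cache pricing. A rough sketch using the cache numbers quoted above; the per-million-token rates are assumptions for illustration, not quoted prices, and actual rates vary by model:

```python
# Rough cost model for cached vs uncached tokens.
# Rates below are assumed $/million-token figures, not quoted prices.
RATE_INPUT_PER_M = 15.00        # uncached input tokens
RATE_CACHE_WRITE_PER_M = 18.75  # cache creation
RATE_CACHE_READ_PER_M = 1.50    # cache reads (the bulk of heavy agent use)


def cost(tokens: int, rate_per_million: float) -> float:
    """Dollar cost of `tokens` billed at `rate_per_million`."""
    return tokens / 1_000_000 * rate_per_million


cache_create = 202_746_985      # figures from the comment above
cache_read = 11_998_411_722

naive = cost(cache_create + cache_read, RATE_INPUT_PER_M)
cached = cost(cache_create, RATE_CACHE_WRITE_PER_M) + cost(cache_read, RATE_CACHE_READ_PER_M)
print(f"billed as plain input: ${naive:,.0f}")
print(f"with cache pricing:    ${cached:,.0f}")
```

Under these assumed rates, billing everything as plain input inflates the apparent cost by roughly an order of magnitude compared to cache-aware pricing, which is why "supposed" API-equivalent costs for heavy subscription users need to be read carefully.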

> Not that I give much credence to anything Zitron says, but the amount of inference you can get on a £200 a month OpenAI or Anthropic subscription is easily an order of magnitude more than what you'd get paying the same amount at subscription rate.

Neither of those is how much it actually costs the company selling the service. And I have a feeling they are running at a loss here, so the play is "get everything possible using LLMs, then jack up the pricing".


There have been plenty of studies which indicate that inference considered by itself is almost certainly quite profitable at all the frontier labs. The problem is amortizing the cost of all the expensive training runs required to train new models into the revenue stream.
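A toy break-even model makes the amortization problem concrete. Every figure below is invented for illustration; none comes from an actual lab's financials:

```python
# Toy model: profitable inference vs amortized training cost.
# All figures are invented for illustration.
monthly_inference_revenue = 100_000_000  # $/month from serving tokens
monthly_inference_cost = 60_000_000      # GPUs, power, serving infra
training_run_cost = 3_000_000_000        # one frontier training run
useful_model_lifetime_months = 18        # before the next run obsoletes it

inference_margin = monthly_inference_revenue - monthly_inference_cost
amortized_training = training_run_cost / useful_model_lifetime_months

print(f"inference margin:   ${inference_margin:,.0f}/month")
print(f"amortized training: ${amortized_training:,.0f}/month")
print("profitable overall:", inference_margin > amortized_training)
```

With these made-up numbers, inference alone clears a healthy margin, yet the business loses money once the training run is amortized over the model's useful life, which is the dynamic the comment describes.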

Does that mean those running the open models are highly profitable since they don't have to do any training?

Yes obviously, otherwise they wouldn't be doing it; they'd just go back to mining shitcoins.

I don’t know about highly, since they have even less of a moat than Anthropic and OpenAI (who have no moat themselves). Anyone with a few hundred thousand dollars or sufficient free GPUs can compete with them. So running an open model should earn a market-rate margin.

*more than what you'd get paying the same amount at usage rate.

Yes, thanks. Too late to edit now, sadly.

> 10x the subscription price, but this seems high

Inference is cheap but training is quite expensive. Plus all the money they've invested and keep investing on hardware, data centers, etc. And evidently they also need to make a profit at some point.


> Inference is cheap

Maybe from the perspective of traditional, turn-based chat. But when you start having developers command an army of agents that work around the clock, those cheap tokens start adding up fast...


If the unit-economics work out and they can sell $0.99 of tokens for $1.00, doesn't matter how many agents you spin up. The flat rate subscriptions can't last though.

> If the unit-economics work out and they can sell $0.99 of tokens for $1.00

I think the margins have to be a lot higher than that in order to give investors the return they're expecting, to continue the never-ending training treadmill, and to build more and more datacenters to accommodate people basically DDOS'ing the GPUs in order to run their workloads.

Yes, in theory what you said makes sense. But the tightrope these companies have to walk is that the per-token costs still have to be low enough that developers and companies don't just say "ehhh I guess we can still do all this work the old-fashioned way" but ALSO high enough to cover the massive expenses AND astronomical returns everyone's expecting.


VC investment isn’t about margins, it’s about finding a unicorn. It doesn’t matter if margins are negative if your product is dominant in the market as you can fiddle with the margins after the fact. You just need to be invested long enough to see everyone else fail.

The problem with AI is that there doesn't seem to be a durable barrier to entry for a "winner take all" dynamic to work. The biggest barrier to entry seems to be the capital needed to train the models, but even free models are getting "good enough" for some uses and there's little friction to stop users from switching between models. Many frontends make this explicit by letting you pick the model you want to run inside the same environment.

If prices go up, I suspect a bunch of folks will jump to cheaper, less capable models instead of eating the added cost. The whole value proposition of AI in enterprise is around cost-cutting, so that mentality is likely to persist when choosing which model to pay for.


I imagine the calculus changes a little bit when you've invested hundreds of billions (trillions?) of dollars in a relatively short period of time. Priority number one is probably getting that money back. I think the fact that providers are RAPIDLY cutting back/jacking up prices points to this being the case.

Yes. A more useful number would be how many employees are working on macOS specifically. Hard to find a definitive number for that.

Less than 1% of that number. Of course this is hard to actually count properly since there is a lot of shared work across platforms.

I can't believe this idiotic project is running so long after the "blockchain for everything" mania ended. Seems like they can't believe it either, since they changed their name from "Worldcoin" to just "World".

I'd love to see some credible reporting on the graveyard of blockchain projects.

So many obviously stupid ideas cropped up on the blockchain in 2021-2022. How many of those are still going concerns?

I guess the problem with blockchain stuff is that often there's no servers to shut down or other clear indication that a project has failed - presumably you can look at on-chain data to see if people have stopped trading various backing tokens, but does trade ever clearly stop or are there always bots exchanging tokens back and forth?


Transactions on a blockchain have a cost, so it's kinda hard to sustain faking usage. Unless they count random bogus blockchains.
