fg137's comments | Hacker News

Only for software engineers who are already familiar with terminals. Most non-tech people I know, including at my company, absolutely hate TUIs. Even a fair fraction of software developers, those who spend most of their time outside terminals (especially ones on Windows and/or using specialized tools/IDEs), prefer to avoid TUIs as well.

Many "non-technical" folks who have interacted with virtual 3270 terminals for all sorts of mission critical tasks would disagree sharply with that assessment. And those are essentially TUIs.

Eh... no. Never underestimate people's ability to make software bloated and slow. You haven't spent enough time with Claude Code or Gemini CLI, I guess.

Oh, I've definitely seen the results. :) But it's nice when people don't do that.

Probably, at some point.

People used to say, "So Windows is bad, but does it matter?" And it seems that it does matter, so much so that Microsoft appears to want to improve Windows' user experience.


Microsoft? Improve Windows? Too little, too late, I say. I've switched to Kubuntu and will never, by choice, go back. And I'll use what choice I have to avoid the situations that force it. Windows has been consistently hostile to me, and increasingly so, for many years now. So I'm hostile back, fuck em. Sorry, it just makes me so upset lol

I'd argue it's often the contrary -- since it's easy to ship features and fixes, people often ship things without questioning whether it makes business sense to support a use case, or whether the design is solid. Now you have exactly the same revenue but more things to maintain.

What if you're the SRE and the code fixes mean the site goes from 99% uptime to 99.9% up? How do you measure the revenue from that?

On this side of the equation, I think you start pulling in customer context and risk analysis on the downside. What is the churn risk of operating at 99% vs 99.9% availability?

If your site is B2B and impacts customers' own operations or revenue, you'll likely want to chase the 99.9%; customers won't tolerate the roughly 1.7 hours per week of downtime that 99% allows, and will churn.

However, if the value your site creates is tolerant of those sorts of disruptions (someone is just inconvenienced and can come back later), a large investment to move from 99% to 99.9% wouldn't be justified; there would be essentially no return on it. The harder part is reality: most investments will be somewhere in the middle, with ambiguity about the impact. IIRC, SRE principles talk about this, in different terms, when setting SLOs.
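For concreteness, here's the downtime budget behind those two targets -- just the standard availability arithmetic, sketched in TypeScript, not anything specific to the cases above:

    // Convert an availability target into an allowed-downtime budget.
    const HOURS_PER_WEEK = 24 * 7;    // 168
    const HOURS_PER_YEAR = 24 * 365;  // 8760

    function downtimeBudget(availability: number) {
      const down = 1 - availability;
      return {
        perWeekHours: down * HOURS_PER_WEEK,
        perYearHours: down * HOURS_PER_YEAR,
      };
    }

    console.log(downtimeBudget(0.99));   // ~1.68 h/week, ~87.6 h/year
    console.log(downtimeBudget(0.999));  // ~0.17 h/week, ~8.76 h/year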

I've heard some companies refer to the concept as "economical thinking", which I think is a great way to frame it. It doesn't mean you'll always get it right, more that being conscious of the ROI is embedded in our work.

This is also an area I've observed several engineers really struggle with, especially when moving from big tech to startups, where it's really easy to import culture from another company. In the earlier stages of startup life... if you don't have product-market fit, it doesn't matter how good your availability is. Attention is a resource; make sure it's allocated to what creates value for the customer.


Depending on whether the site has a direct competitor and non-sticky customers, you can often get accurate loss estimates from outages. For example, friends of mine at DoorDash would know when Uber Eats was down by the corresponding spike in traffic to their own app. The competitor captures all the lost traffic.

Most enterprises will have a harder time quantifying losses, as some percentage of customers will come back later. To understand that, you need to look for a drop in completed purchase rates compared to site visits.
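A back-of-the-envelope version of that (my own sketch; the visit counts, conversion rates, and order value are made-up illustration numbers, not data from anywhere):

    // Estimate revenue lost during an incident from the drop in
    // completed-purchase rate relative to a normal baseline.
    function estimatedLostRevenue(
      visitsDuringIncident: number,
      baselineConversionRate: number,   // e.g. 0.03 on a normal day
      observedConversionRate: number,   // e.g. 0.01 during the incident
      averageOrderValue: number,
    ): number {
      const lostOrders =
        visitsDuringIncident * (baselineConversionRate - observedConversionRate);
      return Math.max(0, lostOrders) * averageOrderValue;
    }

    // 50k visits, conversion drops from 3% to 1%, $40 average order
    console.log(estimatedLostRevenue(50_000, 0.03, 0.01, 40)); // 40000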

For a SaaS, it's even more difficult, as customers are often held captive by long contracts and might tolerate SLA breaches up to a certain point. A reasonable, though fictional, proxy would be the revenue for the contract pro-rated against the uptime during that period.
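As a quick illustration of that pro-rating proxy (the contract value and uptime below are made-up numbers, purely for the sketch):

    // Notional "lost" value = contract revenue for the period,
    // scaled by the fraction of that period the service was down.
    function proRatedLoss(contractRevenueForPeriod: number, observedUptime: number): number {
      return contractRevenueForPeriod * (1 - observedUptime);
    }

    // $120k annual contract, 99.5% observed uptime over that year
    console.log(proRatedLoss(120_000, 0.995)); // 600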


Seems like an unscrupulous operator might be tempted to take down a competitor's site with a DDoS in order to drive business to themselves.

I mean, Firefox is 3rd party, so.

I have learned to always take Willison's words with a giant grain of salt, despite how popular those articles are here.

How can I do better?

Quality over quantity

I aim for both.

My blog is a combination of different content types. "Entries" are the ones I spend the most time on - https://simonwillison.net/entries/

Links and notes are more short form - I try to keep the quality high (especially with regards to accuracy) but they're also much higher volume than entries: https://simonwillison.net/blogmarks/ and https://simonwillison.net/notes/


It doesn't make sense at all. So as a user how do you choose which model to use? There could be 3824 models to choose from. The browser might as well set one as default, and we all know how that goes (see: search engine).

Not to mention the many other UX questions that come with this, most importantly how unusable these local models are on regular 3-year-old laptops that are constrained in RAM, GPU/CPU capability, and likely disk space, despite what enthusiasts say here. (They have a MacBook Pro with 32+ GB of RAM, report that it works great with xyz model -- fine -- but somehow think it works for everyone and that local models are the future.)


The Chrome model requires either "16 GB of RAM or more and 4 CPU cores or more" or "Strictly more than 4 GB of VRAM", and "22 GB of free space" (it only uses around 4.4 GB, but it still insists on the rest being free).

The model is pretty slow on my M4 Pro mac.

The API allows the browser to use a cloud service instead, but then privacy is lower. So, more privacy for the rich.
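For reference, here's a minimal sketch of what calling the built-in model looks like from a page. The Prompt API is experimental and has changed shape across Chrome releases, so the LanguageModel global and method names below are an assumption based on the current explainer, not a stable API:

    // Experimental Chrome Prompt API sketch (TypeScript). The shape below is an
    // assumption based on the current explainer and may differ in other builds.
    declare const LanguageModel: {
      availability(): Promise<"unavailable" | "downloadable" | "downloading" | "available">;
      create(): Promise<{ prompt(input: string): Promise<string> }>;
    };

    async function summarize(text: string): Promise<string | null> {
      if (typeof LanguageModel === "undefined") return null;   // API not exposed at all
      const availability = await LanguageModel.availability();
      if (availability === "unavailable") return null;         // hardware/model unsupported
      const session = await LanguageModel.create();            // may trigger a multi-GB download
      return session.prompt("Summarize in one sentence:\n" + text);
    }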


> It doesn't make sense at all. So as a user how do you choose which model to use? There could be 3824 models to choose from. The browser might as well set one as default, and we all know how that goes (see: search engine).

...what's the exact problem here? Believe it or not, most non-tech-savvy users use the search engine just fine.


With regards to search engines, Google paid billions of dollars [0] to become the default on major browsers. I guess GP's implying that something similar might happen with LLMs.

[0] https://www.reuters.com/technology/google-paid-26-bln-be-def...


If every browser vendor already had their own experimental API that can work with different models, it might be a good idea to standardize this in the WHATWG living standards (which would still be a bad user experience on today's consumer hardware).

But if no browser other than Chrome supports this, and only Google's (proprietary) model (edit: plus Microsoft's Phi-4 mini in Edge), it should be clear it's Google abusing its position. There is nothing worth standardizing.

And we have seen that too many times -- FLoC/Privacy Sandbox/Topics API, Web Environment Integrity just to name a few. Google has been relentless in using its dominant position to push terrible ideas that harm both users and other browser vendors but help only Google's business.

Surprised this did not really come up in the previous discussion: https://news.ycombinator.com/item?id=47917026

PS: looks like Google's fanboys have arrived. Someone had better find good counterarguments, especially technical ones, instead of just downvoting.


They likely refer to "WebSearch", not "WebFetch" (and the original statement is not correct).

Are you sure you have the correct reference?

I think everyone else is referring to

https://futurism.com/artificial-intelligence/microsoft-bans-...


Heh well, that article says it "clearly infuriated executives at the company", and links to [1], which is exactly what I described. But banning it on Discord does kind of retroactively prove their point, I suppose.

[1] https://futurism.com/artificial-intelligence/microsoft-satya...

