Hacker News | sudb's comments

I wonder what the numbers say about desktop applications now, and how much the arrival of Electron changed things up here.

Nowadays, it seems that mobile apps have the "best metrics" for B2C software. I'd be interested to read a contemporary version of this article.


“Metrics”

This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly; it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.

So many people who should be smart based on their job titles and salaries got the causation completely backwards!


This stupidity might go a long way towards explaining the relentless push towards apps.

Hey, I notice this kind of thing all the time. People use "data" to tell the story they want to tell -- similar to how humans seem to make a decision subconsciously and then weave a rational justification to back it up afterwards.

Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.


Review the methodology, if you can, and form your own conclusions. Don't bother trying to change people's minds. It rarely works, and often causes conflict, even in the case of people who say they're data-driven.

Survivorship bias

Some of us are still making a living from desktop apps, 17 years later.

Please tell your tales. We beseech thee of thine humble wisdom.

I've written about it lots at:

https://www.successfulsoftware.net


Electron is the worst of both worlds. I have never paid for an Electron app, and never will. Horrid UX.

> I have never paid for an Electron app

Your employer most likely has.


Sure, and so has my government. But I can only control what I personally pay for.

In 2026, the number of mobile applications in the App Store and Google Play increased by 60% year over year, largely because entry into the market has become much easier thanks to AI.

What 'best metrics'?

I think in this case it can be approximated as 'largest market'

I'd wager there are more people paying for software for their smart phone than any other platform they use.


Having my credit card already is an overwhelming advantage for the Apple App Store and for Steam. I won’t say it is impossible to overcome, but I think I could count on my fingers the number of instances where I, like, typed my card into a website to buy anything, in the last decade.

Yes, but they are mostly paying little or nothing. How much did you spend on phone apps this year? And ads pay a pittance, unless you have massive scale.

Anecdotally, conversion - from free to trial, trial to paid, one-off purchases, etc.

Did you consider websockets? Curious to know if I'm missing something!

If agents are async, is streaming still important? I think the useful set of interactions with an async agent is pretty limited - you'd want to stop it, pause, resume, or interrupt/steer it with a user message?

All of those can be done without needing streams or a session abstraction I think, unless I'm misunderstanding.


I think this post ignores, deliberately or not, the large group of async coding agents that have been GA since around early 2025 - probably the best-known of which is Devin (which has been around since 2024, though it wasn't publicly available at first).

As an aside, I've built and deployed a production system in which disconnecting & reconnecting from an in-progress LLM stream works and resumes from wherever the stream currently is, through a combination of redis/valkey & websockets - it's not all that hard, it turns out!
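The resume-from-wherever-the-stream-is idea can be sketched pretty simply: the producer appends chunks to a log keyed by stream id, and a reconnecting client replays from its last-seen offset. Here's a minimal illustrative sketch in Python, with a plain dict standing in for redis/valkey (in practice you'd use something like Redis Streams, and websockets for transport); all names are hypothetical.

```python
# Resumable streaming sketch: chunks are appended to a per-stream log;
# a client that reconnects with its last-seen offset replays only the tail.
# A dict stands in for redis/valkey here purely for illustration.

class ResumableStream:
    def __init__(self):
        self._logs = {}  # stream_id -> list of chunks

    def append(self, stream_id, chunk):
        """Producer side: append the next chunk from the LLM stream."""
        self._logs.setdefault(stream_id, []).append(chunk)

    def read_from(self, stream_id, offset):
        """Consumer side: return (missed_chunks, new_offset) so a
        reconnecting client catches up from where it left off."""
        log = self._logs.get(stream_id, [])
        return log[offset:], len(log)


store = ResumableStream()
for token in ["Hel", "lo ", "wor"]:
    store.append("chat-1", token)

# Client disconnected after reading 2 chunks, then reconnects:
missed, offset = store.read_from("chat-1", 2)
```

The key design point is that the stream's source of truth lives server-side in the log, not in the socket, so the websocket connection becomes a disposable view onto it.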


LLMs have absolutely made my mechanical ability to write code much worse day-to-day. I'm still not sure if this is a good thing or not.

For you, no. For the services you depend on, which will keep receiving your data and may jack up prices or add limitations knowing your dependency won't be easily broken: yes.

I thought it was against Slack's ToS to exfiltrate data like this?

Also surely most of a startup's Slack activity is just fluff - is there some amount of preprocessing the AI companies have to do, I wonder.


How is the ToS relevant when the company is already bankrupt (IANAL)? Slack can cancel the customer-relationship with the bankrupt company, but that's it, no?

huh, I always assumed they were metal-clad objects with something inside

wikipedia tells me they are machines, but not what they're made of


As cool as this technically is - who is the target market for this? I think people building coding agents and coding agent platforms are for the most part building on non-Cloudflare sandboxes, and can tolerate minutes of latency for setup.

I am not sure what people who roll their own in-house solutions for coding agents do, but I suspect that the easy path is still one of the many sandbox providers + GitHub.

I would love to find out who would use this & why!


You can always keep a dynamic pool of ready-to-use docker containers, sized based on your load; then it takes like 15 seconds and is faster than basically any sandbox provider.

yeah absolutely! the tricky bit here is predicting/forecasting load though - possible for many applications but not all

Let's just say with Artifacts you could create millions of repos every day, one for each agent/chat/user/session.

It's all Durable Objects :)


I think this is a great idea in general - security through obscurity, kinda.

What problem is it that you are confused isn't solved?

I think the codec analogy is neat but isn't the codec here llama.cpp, and the models are content files? Then the equivalent of VLC are things like LMStudio etc. which use llama.cpp to let you run models locally?

I'd guess one reason we haven't solved the "codec" layer is that there doesn't seem to be a standard that open model trainers have converged on yet?


llama.cpp is the ffmpeg/libavcodec equivalent in this story.
