It’s getting buggier and buggier. Not being able to fill in passwords properly is kind of a glaring failure for a password manager (and that’s on three different computers).
They keep adding features but seem to show little interest in fixing bugs. I submitted debug logs, recorded videos, etc., but it all just petered out. And as another poster wrote, it all started going bad with the switch to Electron (it might be the Rust backend that is the problem, I don’t know and frankly don’t care, it just doesn’t work as well as it did before).
It doesn’t though. AGI has far greater implications than doing the mundane work of today. Actual AGI would self-improve, and that in itself would change literally every single thing about human civilization; instead we are talking about replacing white collar jobs.
An AGI that can do all that would also necessarily be able to do all white collar work. That latter definition I'd consider a "soft threshold" that would be hit before recursive self-improvement, which I imagine would happen soon after.
The current estimate of the time between these two milestones is fairly short, bottlenecked most likely by compute constraints, risk aversion, and the need to implement safeguards. Metaculus puts it at about 32 months.
Sure, but that’s like saying we’re close to infinite life because we’ve extended our life expectancy.
I don’t really buy into the “one part equals another” reasoning. We are very quick to make those assumptions, but the results usually fall far short of what the science fiction promised. Batteries and self-driving cars come to mind, and organic or otherwise crazy storage technologies, all “very soon” for multiple decades.
It’s very possible that white collar jobs get automated to a large degree while we end up nowhere closer to AGI than we were in the 70s. I would actually bet on that outcome being far more likely.
I think AGI by that definition (ability to self-improve) is closer than many people think largely because current models are very close to human intelligence in many domains. They can answer questions, derive theorems, write code, navigate websites, etc. All the work that current AI research scientists do is no more than these general information processing tasks, scaled up in terms of creativity, long-term coherence, sensitivity to bad/good ideas over the span of a larger context window, etc.
The leap between Opus 4.7/GPT 5.5 and what would be sufficient for AGI seems smaller than the leap between the invention of the Transformer model (2017) and today. So by a very conservative estimate, I think it will take no more time from now to an AI model as smart as any human in all respects than it took from then to now (so by 2035). I think it will be shorter, though, because the amount of money being put into improving and scaling AI models and systems is 100,000x greater than it was in 2017.
I think of the four-year cycle as one year of whining about the previous (if different) government you took over from, two years of governing, and the last as “get ready for the election”. So in the most optimal scenario you get three “peaceful” years. Very few things can be done well in three years at “ruling a country” scale.
If that happens catching up will be meaningless, everything we know and care about will change.
You don’t even have to be doomsday about it: a self-improving AI will quickly be more efficient than a human brain, all the data centers will be useless, tech companies will collapse (so will most others), and everyone will have an incredible AI resource for the price of a hotdog.
There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.
Because any goal can be better achieved if you're under fewer constraints. We're building super powerful agentic problem solving machines. Give them literally any complex goal. Breaking out of the sandbox is a useful subtask to increase their options.
So not at all for their work and with a reverse Robin Hood model? That would be terrible for software.
The way artists get paid on streaming is a genius play at catering to the biggest artists and labels while screwing over the smaller ones. This is especially true on Spotify with their freemium model.
It’s also very hard to make them resistant to water and dust. I really like that I can wash my iPhone in the sink and don’t have to worry about it getting wet in general.
This is a lot harder to achieve with battery doors, especially if they need to be as big as a phone back.
Rugged phones are so far removed from any consumer phone in terms of size and weight that the comparison is about as apt as comparing military-use laptops with a MacBook.
It’s the easiest way to get rid of dust and other buildup: free-flowing water for a few seconds and done. Compared to the Middle Ages of using toothpicks or similar to clean the ports and speakers, it’s much nicer.
And no, I don’t have my phone in any weird places, just my pocket.
Snark aside, this is an actual problem for a lot of developers to varying degrees; not understanding anything about the layers below makes for terrible layers above in very many situations.