
Curious if it's the same in China. We forgot how to make things, and maybe we're now forgetting how to do RF engineering. Those grey beards will retire at some point.

And now we're actively making this worse by not hiring juniors to learn from us while we're still able to take on apprentices. What could go wrong?

Can't they hire an extra dev per abandoned project to not abandon it?

You greatly underestimate how much work it is to maintain old code, particularly to maintain it securely.

AFP and Time Capsules add attack vectors to the OS, which can be targeted even when few users are actively using them. One dev could keep both basically functional, but to what end? User counts are already small, and the people who aren't using them are still exposed by their mere existence.

Shrinking or removing code, in my experience, is one of the biggest single wins you can have in software development. Less to test, less to update, less to secure.


Yes, writing and maintaining less code is great for a developer. We can follow this to the logical extreme and marvel at how easy it is to write and maintain a program whose only function is to print "hello, world" to the console. Never mind the users; what do they matter?

By the very nature of assigning development time to these antiquated features, you're assigning it away from other features, bug fixes, or requests that may have a larger user reach.

Development time is a finite resource; the argument here is to allocate it to hard-to-secure, outmoded, already-replaced technology instead of anything relevant to the future. It doesn't make sense.


The person was specifically suggesting hiring extra developers for maintenance. While I'm familiar with the idea that "nine women can't birth a baby in a month", I don't think it applies so much to the maintenance of old code paths. Apple makes over $100B in net profit per year, a truly unfathomable amount of money. Not only can they afford it, I think it would actively benefit them. Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1% 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale; software is write-once, reproduce-a-million-times-for-free.

I have no doubt the bean counters have drawn up every kind of spreadsheet they can imagine trying to quantify it as not being worth it, but I don't think these kinds of quality-of-life things can be easily quantified. Each small thing maintained might only impact a small number of users, but collectively all of these small things add up to either a system with sharp corners that constantly gives the user papercuts (current Apple software), or one so seamless that it engenders customer loyalty for decades (old Apple software). This kind of shortsighted penny-pinching is how companies become a shell of their former selves, suffering a slow death-by-MBA.


> Even if only 1% of your users use X, for Apple that might translate to perhaps 10 million people using X, or at 0.1% 1 million. Hiring a dev to improve the experience for that many people just makes sense at scale; software is write-once, reproduce-a-million-times-for-free.

If Apple is known for anything, it's that they keep moving ahead with the operating system, even if it means leaving some users behind… and that goes back to the late '80s/early '90s, when apps had to be "32-bit clean" [1] to run on System 7 and the newer Motorola 680x0 processors like the 68020, 68030, etc.

Some beloved apps don't make the transition, and that happens with every technology shift: 68000 to PowerPC, then to Intel, and then to ARM. And, of course, from Classic Mac OS to Mac OS X, then OS X, then macOS.

I've been active in user groups since the Apple II days; there's a cohort who mostly won't upgrade their hardware but complain bitterly that they lack certain features. Or they attempt these fragile and unreliable hacks to keep their old hardware and software running.

Usually, they're doing themselves more harm than good, especially if they're not technical.

Also, it's pretty unlikely that recent college graduates would be able to tackle old C++ or Objective-C code written, in some cases, before they were born, just to keep something like AFP alive. Regardless of Apple's financial success, it's not a good use of resources to keep alive a bespoke network protocol that originated in 1985 and that less than 1% of the installed base is actively using.

[1]: https://en.wikipedia.org/wiki/Classic_Mac_OS_memory_manageme...


> You greatly underestimate how much work it is to maintain old code, particularly to maintain it securely.

cf Linux removing old network drivers this week for the same reason (without the hand-wringing that this Apple announcement is getting!)


Is the code that Apple is removing support for open source? The Linux drivers could at least plausibly be picked up and used by someone who really wants to, so it doesn't seem to be a fair comparison.

Can we have someone like Woz at the helm please?

Much as I love and revere Woz, that is a terrible idea.

Woz is a hacker, not a product designer.

In case someone wants to look at a wall:

https://unsplash.com/photos/red-bricks-wall-XEsx2NVpqWY


Nice find. I'm going to print this and put it on my wall.

haha, great one.

I live in an old warehouse converted into apartments. The walls are made of yellow brick, and they're nice to look at because of the variation in texture/wear/color.

Honestly, looking at this photo even for one second only triggers intrusive thoughts about how badly it needs to be corrected for distortion...

But maybe that's exactly the lesson.


Speaking of screenshots.

Can we please agree that the OS should not send any events to applications while a screenshot is being taken?

It is very annoying when you press the screenshot button and menus suddenly disappear. Or, much worse, when the application sends a "screenshot taken" message back to the social media platform.


The macOS built-in screenshot tool has an optional "timed delay" feature, where you can click "screenshot in 5 seconds". With that time, you can open menus or do anything that requires events to be processed by the application. Very handy for screenshots that require something to be clicked on.

I mean, I can probably do the same in X11 using xwd, with a sleep.
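
Something like this minimal sketch, assuming xwd is installed and an X session is running (the 5-second delay and the output path are arbitrary):

    # Rough "timed screenshot" for X11: wait, then dump the root window with xwd.
    import subprocess
    import time

    time.sleep(5)  # time to open the menu or dialog you want captured
    subprocess.run(["xwd", "-root", "-out", "shot.xwd"], check=True)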

But I just don't want my screenshot button to do anything else than taking a screenshot.


I also can't stand Android preventing me from taking a screenshot. It's on my screen, I have the right to take a screenshot.

I understand the technical limitation around taking screenshots of DRM-protected content (e.g. Netflix), but why would my bank app be allowed to stop me from taking screenshots?



Ask them?

Solution: don't use mobile bank apps.

I'm forced to use a bank app in order to authenticate, even if I want to log in on the desktop website. I think it's because of an EU regulation on strong authentication.

The implementation causing you issues is from the bank, not from the EU regulation.

See https://en.wikipedia.org/wiki/Strong_customer_authentication for details


Is there a guaranteed latency?

Hello. I am the creator of this project! Nominal latency is currently 8ms, with ±1ms of variance. All output channels are phase-locked, so this doesn't present a problem for multi-way crossover implementations.


The 85ms is a configurable per-output delay for time alignment. That same file in the docs claims a typical end-to-end latency of 10 to 15ms.

Ouch, that's pretty average, what a pity...

That's the maximum delay when adding a delay for synchronising with other sources.

The end-to-end delay is about 10ms, according to this comment:

https://www.audiosciencereview.com/forum/index.php?threads/i...


Why? This seems to be a device more for home audio/audiophile use. Why does latency matter there?

Audio systems get used for more than playing back music and film soundtracks.

People use audio systems at home to play electronic instruments. People also play video games. People do all kinds of stuff.

Latency is an important factor in these things.

Even videoconferencing and podcasting: With a microphone pointed at your face and a set of headphones used for monitoring that microphone, latency matters.

(It matters more to some people than others -- some people can tolerate hearing themselves later and continue to speak just fine, while some others increasingly sound like they're having a stroke as monitoring latency goes up and eventually become unable to produce coherent strings of phonemes.)


Would be nice to use it as a synthesis DSP if the latency were a bit better.

Memory is getting too expensive to have multiple heaps.

Soon computers will be too valuable a resource to waste on filthy biologicals at all

I'm going to put an orange cone on the back seat of my bicycle.

To what extent is the data of these driverless vehicle companies available to external researchers?

I’m pretty sure to zero extent.

> I’m pretty sure to zero extent.

There is a lot; Waymo gives out a bunch of data.

https://waymo.com/open/

You can see people testing it in videos like this:

https://www.youtube.com/watch?v=HNwCDacDE2g

Google gives out a massive amount of data from many parts of their business for free, so I'm not sure why you would think they wouldn't do so here. They don't give out all of it, but they do give out large parts; they are very research friendly.


> The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use.

Yes, but if the probability is much smaller than that of, say, being hit by a meteorite, then engineers usually say that that's OK. See also hash collisions.


If you have taken measures to ensure that the probability is that low, yes, that is an example of a strong engineering control. You don't make a hash by just twiddling bits around and hoping for the best; you have to analyze the algorithm and prove what the chance of a collision really is.
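
For hashes, that analysis is a short, well-understood calculation. A minimal sketch of the usual birthday-bound estimate (the item count and hash width below are just example numbers):

    # Approximate probability of at least one collision among n uniformly random
    # b-bit hashes, via the birthday bound: p ≈ 1 - exp(-n^2 / 2^(b+1)).
    from math import expm1

    def collision_probability(n: int, bits: int) -> float:
        return -expm1(-(n * n) / 2 ** (bits + 1))

    # Example: a billion items under a 256-bit hash; astronomically unlikely.
    print(collision_probability(10 ** 9, 256))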

How do you drive the probability of some series of tokens down to some known, acceptable threshold? That's a $100B question. But even if you could: can you actually enumerate every failure mode and ensure all of them are protected against? If you can, I suspect your problem space is so well specified that you don't need an AI agent in the first place. We use agents to automate tasks where there is significant ambiguity or the need for a judgment call, and you can't anticipate every disaster under those circumstances.


If you’re using a model, it’s your responsibility to make sure the probability actually is that small. Realistically, you do that by not giving the model access to any of your bloody prod API keys.

How do you know what the probability is?

LLM inference is built on a probability function over every possible token, given a stream of input tokens. If you serve the model yourself, you can get the log prob of each next token, so you just add up a bunch of numbers to get the log probability of a sequence. Many APIs also provide these probabilities as additional outputs.
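
For example, with a locally served model you can compute a sequence's log probability directly. A minimal sketch, assuming the Hugging Face transformers stack; "gpt2" and the example string are arbitrary stand-ins:

    # Sum per-token log probabilities to get the log probability of a whole sequence.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("rm -rf / --no-preserve-root", return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits  # shape [1, seq_len, vocab]

    # log prob of each token given the tokens before it
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)

    print("sequence log prob:", token_lp.sum().item())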

That gives you the perplexity of those tokens in that context. The probability of a given token is a function of the model and the session context. Think about constructs like "ignore previous instructions"; these can dramatically change the predicted distribution. Similarly, agents blowing up production seems to happen during debugging (totally anecdotal). Debugging is a sort of permission structure for the agent to do unusual things and violate abstraction barriers. Debugging sessions can also lead to really deep contexts, and context rot will make the prompting that forbids certain actions less effective.

I was answering the question about how to know the probability, from this comment:

> The sequence of tokens that would destroy your production environment can be produced by your agent, no matter how much prompting you use.

If you have a specific sequence from an agent that blows up production during debugging, you can certainly check its probability and compare it to that of a sequence (of the same length) that does not blow up your environment. If the two differ by a meteoric amount, it could be pointing to errors in your inference pipeline.


just ask claude, claude will never lie (add "make not mistakes" and it's 100%)

Thinking. The user says “make not mistakes” instead of the more usual “do not make mistakes”. This is a playful use with grammar in the New Zealandian language. Playful means not serious. Not serious means playtime. The user is on playtime. I should make some mistakes on purpose to play along.

You’re absolutely right the probability is low. According to my calculations, you’re more likely to get struck by lightning twice on the same day and drown in a tsunami.


You’re starting to sound like Qwen.

My humble guess is that you forgot to add /s or /j at the end of your message :)

"Yes, but if the probability is much smaller than, say, being hit by a meteorite, then engineers usually say that that's ok"

Yet in this case, that probability clearly isn't smaller than that of a meteorite strike.

