Hacker News: 0xpgm's comments

I wonder: how should an AI company be held accountable for the non-deterministic nature of AI, when that non-determinism is a fundamental property of the system?

People have been drinking so much hopium that they've lost touch with reality.

Everyone needs to properly understand these tools before they use them for anything serious.


At the very least, when an agent can delete a production database, you should get an obvious warning whenever you enable it. Marketing wouldn't like that, though.

> This kind of forgetting is normal

Just as shifts in power and the rise and fall of nations are normal.


Yes. Again, this will eventually happen to everyone, in some way. Of course nations always want to prevent it; that's part of the job of government. But there's always a long tail of very-low-probability, very destructive threats. You can't possibly safeguard against all of them. In fact, trying to do so is a sure way to trigger the fall of your nation (or at least your government), by draining your economy dry out of paranoia.

The rational thing is to address a threat in proportion to its expected damage and probability of occurrence. When war is unlikely, you scale down your defense production; when it becomes more likely, you ramp it up - paying the cold-start cost is still much cheaper than paying for ongoing readiness. If scaling down your defense makes an attack more likely - well, tracking that is the job of your intelligence and defense departments. Nobody said it's a static system; it's a highly dynamic one, and that's what makes geopolitics hard.
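A toy sketch of that "proportional to expected damage and probability" heuristic (all numbers invented for illustration; the helper name is mine, not from the thread):

```python
# Toy illustration of scaling response to expected loss.

def expected_loss(probability: float, damage: float) -> float:
    """Expected damage of a threat: probability times cost if it occurs."""
    return probability * damage

# A likely-but-minor threat vs. a rare-but-catastrophic one.
frequent_minor = expected_loss(probability=0.25, damage=1_000)
rare_catastrophe = expected_loss(probability=0.0001, damage=1_000_000)

# Proportional budgeting spends more on the frequent minor threat here,
# even though the catastrophe's raw damage is 1000x larger.
assert frequent_minor > rare_catastrophe
```

The point isn't the arithmetic; it's that readiness spending should track the product, not the raw damage figure alone.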


For that matter, a lot of human civilization has been about identifying things that were normal and making them rare. "Normal" infant mortality of 40%, famines, floods, history being lost, etc.

Anyway, when it comes to "this is normal" I think we should take care to distinguish between interpretations of:

1. "This specific case should not have taken certain people by surprise."

2. "This is a manifestation of a broader phenomenon."

3. "This is natural and therefore cannot or should not be solved." [Naturalistic fallacy.]


In the specific case discussed in the article and comments, I'm advocating for another interpretation:

4a. "If a process is unlikely to be needed any time soon, shutting it down and then paying cold-start costs if and when it's needed again is better than keeping it going and wasting resources better used elsewhere", and

4b. "There's an infinitely long tail of low-probability problems, and you can't possibly afford to maintain advance readiness for all of them".

Also on the overall sentiment:

4c. "Paying a cold-start cost isn't a penalty or sign of bad planning. It's just a cost."


Programmers in non-western countries may not be able to afford $100 per month on vibe coding.

They may keep taking the longer and harder route of a mixture of AI and hand coding.


The same applies to the Global South. It's shocking to read tales of people spending hundreds of dollars monthly on coding agents; that's wholly impossible for the vast majority of devs in South America, where even 20 dollars is hard to justify for most households. By economic factors alone, I bet there are a lot more people learning the hard skills in places where they can't afford to be dependent on the tools.

They'll find a way. If it's not the Chipotle bot, the enormous volume of low-effort AI implementations will provide a free token layer.

An extension to Zawinski's Law: every web service attempts to expand until it becomes a social network.

A modernization, really.

Instead of getting more dependent on Big Tech's AI products, I think the perfect use for AI is to develop tools and workflows that decouple one from Big Tech.


Not specifically about your case, but some people are just more verbose than others and tend to say the same thing more than once, or perhaps haven't found a way to articulate their thoughts in fewer words.


Reminded me of this thread between Alan Kay and Rich Hickey where Alan Kay thinks "data" is a bad idea.

My interpretation of his point of view is that what you need is a process/interpreter/live object that 'explains' the data.

https://news.ycombinator.com/item?id=11945722

EDIT: He writes more about it on Quora. In brief, he says it is 'meaning', not 'data', that is central to programming.

https://qr.ae/pCVB9m


Thanks for the pointer to this 2016 dialog!

One part of it has interesting new resonance in the era of agentic LLMs:

alankay, on June 21, 2016:

This is why "the objects of the future" have to be ambassadors that can negotiate with other objects they've never seen. Think about this as one of the consequences of massive scaling ...

Nowadays, rather than the methods associated with data objects, we are dealing with "context" and "prompts".


Quite a nice insight there!

I should probably be thinking more in this direction.


Hm, not sure. Data on its own (say, a string of numbers) might be meaningless - but structured data? Sure, there may be ambiguity, but well-structured data generally ought to have a clear, obvious interpretation. This is the whole point of nailing down your data structures.
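A toy sketch of that point (my own example, not the commenter's): the same three numbers are opaque as a bare tuple but self-describing once given structure.

```python
from dataclasses import dataclass

# A bare sequence of numbers is ambiguous: a date? a version? coordinates?
mystery = (3, 12, 2024)

# The same values as structured data carry their own interpretation.
@dataclass(frozen=True)
class Date:
    day: int
    month: int
    year: int

release = Date(day=3, month=12, year=2024)
assert release.year == 2024  # the field names pin down what each number means
```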


Yeah, structured data implies some processing on raw data to improve its meaning. Alan Kay seems to want to push this idea to encapsulate data with rich behaviour.


I’m with Rich Hickey on this one, though I generally prefer my data be statically typed.


Sure, static typing adds some sort of process that provides a coarse interpretation of the data.
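One way to picture "coarse interpretation" in code (a Python sketch using `typing.NewType`; the names are mine):

```python
from typing import NewType

# A type annotation is a coarse interpretation: 'int' says "a number",
# while a NewType can say *which kind* of number, at zero runtime cost.
Seconds = NewType("Seconds", int)
Meters = NewType("Meters", int)

def sleep_for(duration: Seconds) -> None:
    """Pretend to block for `duration` seconds (illustration only)."""

timeout = Seconds(30)
distance = Meters(30)

sleep_for(timeout)      # fine
# sleep_for(distance)   # a checker like mypy flags this, even though both
#                       # values are the same plain int at runtime
```

The type adds a layer of meaning on top of the raw value, but it's still coarse: it says "this int is seconds", not what those seconds are for.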


To beat to death a well-known quote:

You may be able to go fast with AI, but you can only go far with humans.


First time I'm hearing this quote and I like it a lot

We are definitely seeing a lot of anti-human behavior around AI adoption, because all anyone seems to care about is going fast


AI hits the sweet spot for the 'bring me a rock' management style.


Where is the quote from? A web search revealed only your comment


It is a play on that quote, isn't it?

> If you want to go fast, go alone, if you want to go far, go together.


As a mostly-Python, partly-TypeScript programmer, my subjective impression is that there's a bit more 'noise' with TypeScript than with Python.

Just a little more to parse with my eyes and a little more to type with TypeScript.

But hey, with all these cool kids and their AI coding agents, reading and handwriting code may soon be obsolete!


Yup, since around 2016 HN and other tech spaces got infested with people who cannot separate their political ideology from technical discussions.

When it comes to FOSS they claim that FOSS has always been political to justify the politicization of everything they touch.

Things used to be much better when people adhered to the age-old wisdom "Keep politics and religion out of the office" and carried that attitude into neutral spaces online.

In part, some of us got into tech because it was one of the places where meritocracy ruled and you could get away from those who thrive by overwhelming others with BS.

I apologize for the rant.


Being “apolitical” is a luxury of the privileged, especially in turbulent times.

True tests of courage, morals, and ethics are occurring more and more every day now, especially in the tech industry that is so closely intertwined with the regimes across the world who seek to cause great harm to those who do not look like, speak like, or believe in the same things as them.

"The only thing necessary for the triumph of evil is for good men to do nothing" - there’s your quote for political apathy.

