
yeah, I get that.

But the actual bit that's doing the thinking is restarting from scratch every time. It loads the context, does the next thing, maybe updates the context, shuts down. One second later, the same thing happens again. This is not "highly autonomous" Artificial Intelligence. Just IMHO. Other opinions are also valid.




> But the actual bit that's doing the thinking is restarting from scratch every time.

A sibling comment questions the relevance of this by asking what would change if the same were true of some low-level component of the human thinking engine, which is a good point. But also: what "actual bit" does this? Both commercial backends and even desktop inference software usually do prefix caching in memory, so the claim arguably doesn't even describe the core piece of software running low-level inference, except when the past context changes (e.g., when compacting to manage context, or when one software instance swaps logical histories because it is running multiple agents concurrently, but not in parallel, on one engine). And it obviously doesn't match the system at any higher level.
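The prefix-caching point can be sketched in a few lines. This is a toy illustration of the idea only, assuming a dict keyed by token prefixes stands in for real KV-cache state; the names here are hypothetical and not any specific inference engine's API:

```python
# Toy prefix cache: reuse work for any context whose prefix was seen before.
# The stored value is a stand-in for the per-prefix KV state a real engine keeps.
class PrefixCache:
    def __init__(self):
        self._cache = {}  # tuple of tokens -> "computed state" placeholder

    def process(self, tokens):
        # Find the longest already-computed prefix of this context.
        hit = 0
        for i in range(len(tokens), 0, -1):
            if tuple(tokens[:i]) in self._cache:
                hit = i
                break
        new_work = len(tokens) - hit  # only the suffix needs real computation
        for j in range(hit + 1, len(tokens) + 1):
            self._cache[tuple(tokens[:j])] = True  # record each new prefix
        return hit, new_work

cache = PrefixCache()
cache.process([1, 2, 3])                 # cold start: all 3 tokens computed
hit, new = cache.process([1, 2, 3, 4, 5])
# hit == 3, new == 2: the shared prefix is reused, only the suffix is computed
```

So from the engine's point of view, the second request is not "starting from scratch": the cached prefix is picked up and only the delta is processed.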

> This is not "highly autonomous" Artificial Intelligence.

Even if that were an accurate model of one component of the system, the fact that a component considered in isolation is not highly autonomous artificial intelligence is not an argument that the aggregate system is not highly autonomous artificial intelligence.


If you learned that a piece of the brain where meaningful computation takes place was stateless, would that cause you to question whether the human mind was "highly autonomous"?

Good question.

I don't know enough about neuroscience to really answer your question in depth.

My opinion, uninformed as it is, rests on the intuitive reasoning that something cannot be "highly autonomous" if it has to be kicked every second ;) Autonomous means not needing to be controlled externally, and coupling that part with something as simple as a cron job doesn't solve that in any meaningful way or make it "autonomous".

A batch file coupled with a cron job that triggers it once a day is not an "autonomous system" to my mind. It's a scheduled system, and there's a significant difference between those things.


It seems to me that you are trying to define "autonomy" as a structural property rather than a behavioral one, and then adopting arbitrary rules as to what structures do not count as autonomous whether or not they produce the same behavior as structures which do.

I guess that's fine; autonomy has lots of definitions (some in overlapping domains) and one more doesn't hurt. But I'm pretty sure the intended use in this discussion is the standard mechanical one, where autonomy is a behavioral trait defined by a system's capacity to decide on action without the involvement of another system or operator. On that definition it is something that could be achieved by a system composed of a processing-and-action component called repeatedly by a looping component.
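A minimal sketch of that structure, assuming a hypothetical stateless `step` function driven by an outer loop (the names and the goal check are made up for illustration): the point is only that autonomy, read behaviorally, belongs to the loop-plus-step aggregate rather than to the step alone.

```python
def step(context):
    """Stateless: reads the context, decides one action, returns updated context."""
    count = context.get("count", 0)
    return {**context, "count": count + 1, "done": count + 1 >= 3}

def run(context):
    # The loop itself is trivial, but the combined system decides and acts
    # to completion without an operator kicking it each iteration.
    while not context.get("done"):
        context = step(context)
    return context

result = run({})
# result["count"] == 3 and result["done"] is True: the aggregate ran itself,
# even though every individual step started "from scratch" with only the context.
```

Whether that counts as autonomy is exactly the structural-vs-behavioral disagreement above; behaviorally, nothing external intervened between the first call and the final state.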


yeah, if we're just arguing semantics then I'm happy to let it go ;)

I think I'd say that the batch file is not itself autonomous, but the system as a whole is autonomous (if limited). I'm not prepared to argue that's the correct definition, though.



