
Is there a general name and framing we could apply to these “AIs” that is equally accurate but sheds all of the human biases associated with the terms?

Like… it’s just a really, really, really good autocomplete and sometimes I find thinking of it that way cleans up my whole mental model for its use.



I like something related to "interns" (artificial interns?) because it keeps the implication that you still always have to double-check, review and verify the work they did.


AInterns?


Does that actually clean up your mental model though? At some number of "reallys" that autocomplete starts to sound like intelligence. Like, what is "taking customer requirements and turning them into working code" if not just really really really really really really really good autocomplete with this mental model?


A lot of people are just doing the job of a really good autocomplete, not being asked to make many, if any, nontrivial decisions in their jobs.

Taking requirements and turning them into working code is something some models are adequate at. It’s all the stuff around that, such as deciding when the requirements themselves are wrong, that I think holds the value.


It's really difficult because many of the tasks we use AI for are linguistically tied to concepts of human action and cognition. Much of our everyday language about them implies that AIs are thinking people.



