Hacker News

When OpenAI made its Nov 2022 ChatGPT announcement, why did they try so hard to hype it in anthropomorphic terms? Why did they push the idea that it was a "black box" whose workings they didn't understand? That it was a serious threat to humanity and that government oversight was urgently needed (leading to a meeting at the White House)?

I'm not suggesting GPT has no value. But in hindsight everything I described above was pure bull. I suppose it could be suggested that Sam Altman and his engineers didn't understand their own algorithms. But I don't buy that fairytale.



> why did they try so hard to hype it in anthropomorphic terms?

Marketing, pure and simple

People relate better to something that sounds humanesque (even though it is not) than to something called what it is: in this case, a massively scaled-up (i.e. LLM-based) Markov-chain-style text generator.
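To make the "Markov chain generator" framing above concrete, here is a toy sketch (my own illustration, not anything from OpenAI): a first-order bigram model that picks the next word conditioned only on the current one. A real LLM conditions on a long context window rather than a single token, but the sampling loop is the same shape.

```python
import random
from collections import defaultdict

def train_bigram(tokens):
    # Record, for each token, every token observed to follow it.
    # This is a first-order Markov chain over the vocabulary.
    table = defaultdict(list)
    for cur, nxt in zip(tokens, tokens[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, max_new, seed=0):
    # Autoregressive sampling: each step depends only on the last token.
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_new):
        successors = table.get(out[-1])
        if not successors:  # dead end: token never seen mid-sequence
            break
        out.append(rng.choice(successors))
    return out

corpus = "the cat sat on the mat and the cat ran".split()
table = train_bigram(corpus)
print(" ".join(generate(table, "the", 5)))
```

The point of the analogy is the interface, not the internals: both systems emit one token at a time from a learned conditional distribution; the LLM's "table" is just implicit in billions of parameters and conditioned on far more context.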


> When OpenAI made its Nov 2022 ChatGPT announcement, why did they try so hard to hype it in anthropomorphic terms?

Same reason they hyped up how worried they were about "safety", in the "we have to make sure these things are 'aligned' or they'll become Skynet!" sense: Altman was bullshitting to hype up the company. Even the "safety" stuff was just hype. "It's so capable it's literally scary, or soon will be... better invest in / use our product! Imagine how bad it'll be for you if you don't!"

I was on the fence about how serious they were until I finally got around to skimming the "Attention Is All You Need" paper. LOL. LMFAO. No.



