The first thing I thought when I read the abstract of the underlying paper was that this sounds like "model collapse" at the society level.
I don't feel super confident that we'll "soon" find ourselves in a world where there is no variance left in thought (would that be the net effect of total model/epistemic collapse?). But if you accept that AI could cause any loss of variance at all, it doesn't seem unreasonable to ask how much, and how quickly, that could happen.
All this is by way of saying, I don't think it's wrong to ask these kinds of questions and think deeply about the consequences of societal shifts like this.
Just because someone lets the electrician (LinkedIn) into their home (browser) doesn't mean they can do whatever the hell they want that isn't expressly prohibited. If the electrician wants to rifle through my desk drawers, they should ask for permission, and I will politely tell them to leave.
To this day, I wonder if Google knew that they couldn't be the ones to unleash AI unto the world. They clearly had the wherewithal and expertise to do it (Vaswani et al., 2017), but they were under so much antitrust pressure at the time that it seemed inconceivable they could be the ones to introduce such a polarizing technology. What kind of firestorm would have rained down on them if they had been first?
Or, you might think, if Google had the technology, and they knew how to turn it into a trillion-dollar product, it's beyond ridiculous to think they would just hand the win over to someone else.
I think they just saw it as slop; they were working to make it more reliable and accurate. Releasing it first would have tarnished their name, because it just wasn't ready. OpenAI had no name to tarnish, so people were more willing to put up with the subpar experience while they refined it.