> Majority of AI text, music, images, videos and code is indistinguishable and you use it every day.
I really don't think this is true. If it were, we'd be able to point to countless examples of things assumed not to be AI-generated that actually were, but there's a dearth of such examples.
Examples _are_ countless. Look around yourself - it's simply indistinguishable. Videos are still not quite there (the biggest telltale is how short they are), but we are very, very close.
> There's also the point that LLMs can give you explicit control over features like reading age, social register, metaphor frames/ themes/imagery, sentence structure, grammatical uniqueness, rhythmic variation, and other linguistic markers.
You already have this. Control over your writing is the default position.
What do you mean by wider impact? Model collapse would be the opposite of a wider impact: it's an immediate impact, and I'm fairly sure the people training these models have good incentives to avoid that.
E.g. by filtering data, by procuring better data, or by applying techniques for making do with more limited data (we used to have many of those, and they are still known). You can also adapt the training process itself to be less vulnerable to model collapse. Just because some researchers have shown this happened for the models they tested doesn't mean it has to be a universal thing.
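The data-filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not any lab's actual pipeline: the `ai_score` field stands in for the output of some upstream AI-text detector, and the threshold is arbitrary.

```python
# Minimal sketch of corpus filtering to mitigate model collapse:
# drop documents an upstream detector flags as likely AI-generated.
# `ai_score` is a hypothetical field (0.0 = likely human, 1.0 = likely AI)
# produced by some classifier not shown here.

def filter_corpus(docs, threshold=0.5):
    """Keep only documents whose detector score is below the threshold."""
    return [d for d in docs if d["ai_score"] < threshold]

corpus = [
    {"text": "hand-written essay", "ai_score": 0.1},
    {"text": "model-generated blurb", "ai_score": 0.9},
    {"text": "edited transcript", "ai_score": 0.4},
]

kept = filter_corpus(corpus)
# keeps the two documents scored below the 0.5 threshold
```

In practice the hard part is the detector itself, not the filter - but the point stands that trainers have levers here rather than being passive victims of a polluted corpus.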
Children have a more developed sense of ethics than that.