Hollywood retitles movies based on books all the time[1], for the silliest of reasons ("Sorcerer's Stone" was contemporaneous with LOTR too); so given there's precedent, it follows that those wanting to retain the original title from the books should defend their position.
It's hard to tell if that's what's going on here, but it seems pretty clear that this ability, and more like it, will be quite apparent in the future.
I have seen some poorly considered projections of what the world might look like when this happens, usually assuming that bad actors will use these abilities and that we will be powerless.
Except I don't think that is true.
Imagine if we had a world where nobody had the ability to keep a secret of any sort. Any action that a bad actor might perform would be revealed because they couldn't do it secretly.
You could browse your ex-girlfriend's email, but at the cost of everyone knowing you did it.
I don't really know how humans as a society would react to a situation like that. Nobody has to go snooping for muck, so perhaps the inability to do it secretly would mean people just go about their lives without snooping.
> projections of what the world might look like when this happens
I've done this a few times. A world with zero privacy would definitely be safe (given benign governance), but it would also likely be pretty boring. Crime would become a non-issue: with everything about everyone easily known or knowable by everyone else, the root of any given crime, some desire or need, could be brought to the fore and resolved before it became an actual issue. But there would also no longer be any kind of surprise in anything; everything and everyone would essentially become dull and grey, and humanity isn't about that kind of life experience at all.
In such a world, the government could never be overthrown.
All governments go bad eventually, so the ability to overthrow is critical to prosperity.
Governments are either overthrown internally (revolt, uprising) or by external parties (invasion). A worldwide everyone-knows-everything would prevent both.
What would prevent a bad government from being removed if everything about them is also known? Note this is 100% availability across the board, so the governed would know everything about every potential member of government, which means if they said anything that the many disagreed with at any time, they'd simply be excluded from future candidacy, never gaining the political power/recognition in the first place.
It can be quite expensive to get the models and machines to do this.
That's what the money pays for when the comment above mentions 'that you might have to eventually pay an AI company a large amount of money to ask ChatGPT such a question'.
Putting aside that it won't be a large amount of money for any particular query, that's how the AI companies see themselves: not as providers of information, but as providers of mechanisms that provide information. They aren't selling the information of others; they aren't selling information at all. They are selling the service of running the mechanism.
How close can you get to a verbatim work if you train on an author's style and provide detailed chapter summaries?
If it could produce a close-to-verbatim copy of a work that had not been written when the model was trained, would it still count as a copy?
I feel this would be a continuum that extends in either direction.
Consider the thought experiment of a hypothetically smart model that knew all of an author's work and a detailed background of the author's experiences and psychology. If you ask the model to write a sequel to "Not that Jenny" and it produces a verbatim version of what the author will write next year, does it count as a copy?
Put aside whether you think this would ever be possible; think about how you would regard the book if you found a model had succeeded in this task.
Going in the other direction, you have a model that has been trained on an author's style with very little in the way of knowledge or reasoning, barely more than the ability to speak and an understanding of the idioms and structures the author might use. This can't write a complete novel, but it can correctly guess the next word of a novel 99% of the time.
If you have a map of the 1% of words it gets wrong, you can reproduce the novel from a very small amount of information. Would you say that the model contained the novel, or would you say that the word-error list was a compressed representation of the novel and the model did not contain it?
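To make that concrete, here's a toy sketch in Python. The predict() here is a deliberately dumb stand-in for a real next-word model: the "error map" stores only the words the model guesses wrong, and replaying the model's guesses with those corrections substituted reproduces the text exactly.

    # Sketch of the "model + error map" idea: store only the words the
    # model mispredicts, then replay its guesses to rebuild the text.
    def predict(context):
        """Toy stand-in for a next-word model: always guesses 'the'."""
        return "the"

    def compress(words):
        """Record only the positions where the model's guess is wrong."""
        return {i: w for i, w in enumerate(words) if predict(words[:i]) != w}

    def decompress(length, corrections):
        """Replay the model's guesses, substituting stored corrections."""
        words = []
        for i in range(length):
            words.append(corrections.get(i, predict(words)))
        return words

    text = "the cat sat on the mat".split()
    table = compress(text)
    assert decompress(len(text), table) == text
    print(table)  # {1: 'cat', 2: 'sat', 3: 'on', 5: 'mat'}

The better the model's guesses, the smaller the correction table; at 99% accuracy the table holds roughly 1% of the words, which is why it looks more like a compressed diff than a copy.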
This is where it gets difficult to quantify what exists 'as a copy' in a generative model.
Surely it would be reasonable for a model to know an outline of what happens in a story. If it knows the outline and style, I don't think that would count as containing a copy. As you increase the model's ability to infer, and increase the information it holds to the point that it can reproduce the text verbatim, does it contain a copy? What if you then reduce its ability to infer back to where it was earlier, so that it can no longer reproduce the novel: does it now not contain the novel? The amount of information it holds about the novel has not decreased, just its ability to infer, yet it can never produce a verbatim copy.
In the end I think the notion of whether the model represents a copy in itself becomes too nebulous to be meaningful. It's like an artist who can draw a copyrighted work from memory. They may be able to commit copyright violation but they themselves are not a copyright violation simply for having the ability.
I'm with you on this. There's a difference between exposing wrongdoing and being antagonistic.
Doing them both together increases the amplitude of the signal at the cost of reducing the integrity of the signal.
If you wonder why the world is so informationally loud and noisy these days, it's because everyone who does this is turning up the volume to be louder than everyone else, who are also, in turn, turning up the volume for the same reason.
Perhaps this is a matter of who is being referred to by 'we'.
Obviously someone can do it because it got done.
If the 'we' refers to some team handling issues, it would make more sense. In that case they should have said something along the lines of "I have informed someone who can help".
Does AI using first-person pronouns gross anyone else out? If there's one AI regulation I could get behind, it would be banning the use of computer systems to impersonate a human.
I don't perceive an AI as impersonating a human if it uses first-person pronouns. Emulating is not impersonating: one is behaving similarly, the other is asserting that the similarity implies equivalence.
I have not personally encountered an AI that claimed to be human (as far as I could detect).
I agree with you, but I also envy you for never having encountered an AI scam bot (where someone hacks someone's WhatsApp or other account and uses an AI to get money from their contacts, or even runs the "hey, sorry I missed your call" scam).
I get “loan advisors” calling me at least 2-3x a day, always different names and numbers, different voices, same message about my supposed loan application and how I’m approved for $10k-60k. Started maybe 6 months ago after I’d been free of spam calls/texts for a few years on my current phone number. This is in the US, assuming my number must have been leaked in one breach or another to get me back on the target list.
Wow, these were quite common for me a few years ago. I still get them from time to time, but I used to get them weekly. This is in the US, where scams are pretty rampant.
I have been trying to convince Claude to use "Claude" instead of first-person pronouns, and only recently have gotten it to say stuff like "Claude'll go ahead and take care of that now", but it's very inconsistent (shocking).
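In case it helps, a system prompt tends to stick better than asking mid-conversation. Here's a minimal sketch using the Anthropic Python SDK; the instruction wording is just a guess at what might work, and the model name is only an example:

    # Sketch: steer Claude toward third-person self-reference via a
    # system prompt. Requires the anthropic package and ANTHROPIC_API_KEY.
    import anthropic

    client = anthropic.Anthropic()

    SYSTEM = (
        "Always refer to yourself in the third person as 'Claude'. "
        "Never use first-person pronouns such as 'I', 'me', 'my', or 'mine'."
    )

    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # example model name
        max_tokens=256,
        system=SYSTEM,
        messages=[{"role": "user", "content": "Can you tidy up this paragraph?"}],
    )
    print(response.content[0].text)

Even with a system prompt it drifts; the models are tuned so heavily toward first-person speech that the occasional "I" still slips through.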