Which logic are you saying “can’t encode the speculative moment”?
I think the two logics can emulate one another? Or, at the very least, each can describe what the other concludes. I know classical logic can be embedded in intuitionistic logic through some sort of “put double negation on everything” translation (Gödel–Gentzen, I believe). I think if you add some sort of modal operator to classical logic you could probably emulate intuitionistic logic in a similar way?
You don't even need to add a modal operator since modal logic itself can be embedded in classical logic via possible-world semantics. Of course the whole thing becomes a bit clunky - but that's the argument for starting with intuitionistic logic, where you wouldn't need to do that.
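For concreteness, here is a minimal sketch of the “put double negation on everything” idea (the propositional fragment of the Gödel–Gentzen translation), using plain tuples as a toy formula representation. The names `dn` and `neg` are mine, not from any library:

```python
def neg(a):
    # Negation as a one-place connective on the toy AST.
    return ("not", a)

def dn(f):
    """Gödel–Gentzen translation, propositional fragment: the result is
    intuitionistically provable iff the input is classically provable."""
    if isinstance(f, str):        # atom p  becomes  ~~p
        return neg(neg(f))
    op = f[0]
    if op == "not":
        return neg(dn(f[1]))
    if op in ("and", "imp"):      # conjunction and implication pass through
        return (op, dn(f[1]), dn(f[2]))
    if op == "or":                # A v B  becomes  ~(~A' & ~B')
        return neg(("and", neg(dn(f[1])), neg(dn(f[2]))))
    raise ValueError(f"unknown connective: {op}")

# An atom picks up a double negation:
assert dn("p") == ("not", ("not", "p"))
# Excluded middle, p v ~p, translates to a negated conjunction:
assert dn(("or", "p", ("not", "p")))[0] == "not"
```

The interesting case is disjunction: classically `A v B` is just a disjunction, but its translation is the intuitionistically weaker “not both refutable”, which is exactly where classical and intuitionistic strength come apart.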
This isn’t quite right. Classical logic doesn’t permit going from “it is impossible to disprove” to “true”. For example, the continuum hypothesis cannot be disproven in ZFC, which is formulated in classical logic (indeed, the axiom of choice implies the law of the excluded middle), but that doesn’t let us conclude that the continuum hypothesis is true.
Rather, in classical logic, if you can show that a statement being false would imply a contradiction, you can conclude that the statement is true.
In intuitionistic logic, you would only conclude that the statement is not false.
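The difference can be checked semantically. A sketch, assuming the standard three-element Heyting algebra on the chain 0 < 1 < 2 (with 2 as “fully true”): intuitionistic logic is sound for every Heyting algebra, so any inference that fails here is not intuitionistically valid. The function names are my own:

```python
TOP = 2
VALS = (0, 1, 2)

def imp(a, b):
    # Heyting implication on a linear order: a -> b is true if a <= b,
    # otherwise it collapses to b.
    return TOP if a <= b else b

def neg(a):
    # Intuitionistic negation: ~a is a -> false.
    return imp(a, 0)

# The classical step "from ~~A conclude A" fails at the middle value:
a = 1
assert neg(neg(a)) == TOP   # ~~A is fully true here...
assert a != TOP             # ...but A itself is not.
```

So from “A being false implies a contradiction” (i.e. `~~A`) an intuitionist really does only get `~~A`, not `A`, just as the comment says.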
And, I’m not sure identifying “true” with “provable” in intuitionistic logic is entirely right either?
In intuitionistic logic, you only have a proof if you have a constructive proof.
But, like, that doesn’t mean that if you don’t have a constructive proof, the statement is therefore not true?
If a statement is independent of your axioms when using classical logic, it is also independent of your axioms when using intuitionistic logic, as intuitionistic logic has a subset of the allowed inference rules.
If a statement is independent, then there is no proof of it, and there is no proof of its negation. If a proposition being true was the same thing as there being a proof of it, then a proposition that is independent would be not true, and its negation would also be not true.
So, it would be both not true and not false, and these together yield a contradiction.
Intuitionistic logic only lets you conclude that a proposition is true if you have a constructive/intuitionistic proof of it. It doesn’t say that a proposition for which there is no proof, is therefore not true.
As a core example of this, in intuitionistic logic, one doesn’t have the LEM, but, one certainly doesn’t have that the LEM is false. In fact, one has that the LEM isn’t false.
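Both halves of that claim can be illustrated in the same three-element Heyting algebra (values 0 < 1 < 2, with 2 as “true”). A countermodel here genuinely shows a formula is not an intuitionistic theorem; the validity of ~~(LEM) here merely agrees with the known intuitionistic proof rather than constituting one. Names are mine:

```python
TOP = 2
VALS = (0, 1, 2)

def imp(a, b):
    # Heyting implication on the chain 0 < 1 < 2.
    return TOP if a <= b else b

def neg(a):
    # ~a is a -> false.
    return imp(a, 0)

def lem(p):
    # p v ~p, with disjunction as join (max) on the chain.
    return max(p, neg(p))

# LEM is not valid: at the middle value it is not fully true...
assert lem(1) != TOP
# ...but LEM is also not false: ~~(p v ~p) holds at every value.
assert all(neg(neg(lem(p))) == TOP for p in VALS)
```

So the algebra has a point where `p v ~p` fails to be true, while `~(p v ~p)` is refuted everywhere, matching “one doesn’t have the LEM, but one has that the LEM isn’t false”.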
I agree that it is probably best to speak nicely to them, but, I’m not so sure about the “It’s not like it’s their fault.” justification for this? Not that I think it is their fault. Just, I don’t think the reason to treat these models well is for their sake, but for ours. I don’t think these models have a well-being (y’know, probably..) but when one interacts with one, one often feels as if it does, and it is best to treat [things that one feels have a well-being] well (or, in a way that would be treating them well if they did have a well-being).
Like, if someone mistakes a manikin or scarecrow for an innocent person, and takes action in an attempt to harm that imagined person (e.g. they try to mug the imagined person), they’ve still done something wrong, even though the person they intended to wrong never actually existed.
I guess maybe it kind of depends how strongly and deeply one feels as if the manikin/scarecrow/chatbot is a person? If one is playing make believe using a scarecrow, role playing as a mugger, but only as a game, then that’s probably fine I guess. Like, I don’t want to say that it is immoral to play an evil character in a D&D campaign; I don’t think that’s true.
But if one is messing with some ants, and one conceives of oneself as “torturing some ants”, I think one is fairly likely doing something wrong even though I don’t think the ants have a well-being, and there’s nothing wrong with killing a bunch of ants. And I think this is still true even if one has the belief “ants don’t actually have a well-being” at the same time as one conceives of what one is doing as “torturing some ants”.
I suppose when I say, "It's not like it's their fault", I'm more saying that expressing any frustration you feel towards an imagined AI personhood is wasted effort.
Claude Code has analytics for when you swear at it, so in a sense it does learn, in the same very indirect way that downvoting responses might cause an employee to write a new RL testcase in a future model.
I don’t see why any of those should be exonerating?
Also, I feel like “nothing wrong if it does happen” regarding shooting someone, is the wrong perspective. If shooting someone is necessary, then it is necessary, but that doesn’t mean nothing went wrong. Anytime someone gets shot is a time something has gone wrong.
Yes, something has gone wrong: someone threatened to kill me and my family, and apparently the only way to stop them from doing so was to kill them. That may be the best option available, but it is still a tragedy.
What is the smallest amount of additional security such that, if the TSA provided only that much over the alternative of not having it, you would regard the TSA as worth it?
And, is the actual amount of security provided greater than that amount?