> All of which is to say, I don't see how "what constitutes exposing a pointer" would be a hard question.
It's a hard question because any coherent semantics you come up with breaks optimizations that are obviously correct™. This is difficult to convey in toy examples, because in toy examples, there's clearly enough information present that the optimizer could do the right thing™ if it were smart enough.
The other thing to keep in mind is that optimization generally works on semi-mangled forms of the source code, so if you have a pointer-to-integer that has no use at the IR level, that doesn't necessarily mean that the pointer-to-integer had no use at the source code level (and same for the converse, incidentally--the IR might materialize uses that didn't exist at the source code level).
> It's a hard question because any coherent semantics you come up with breaks optimizations that are obviously correct™. This is difficult to convey in toy examples, because in toy examples, there's clearly enough information present that the optimizer could do the right thing™ if it were smart enough.
I don't buy this (particularly your first sentence). I need to see it to believe it.™ I'm not saying it's not the case, just that it's kind of a tough sell when the illustration fails basic scrutiny.
But if it's actually true, then that's a reason to stop using toy examples in explanations, not a justification for plowing ahead with obviously wrong toy arguments.
Imagine if laws worked this way. "As you can see here, you actually ran a red light." "But the light is literally green?!" "It's difficult to show in this picture, but it is red. Trust me bro, we wouldn't catch obvious criminals if we only addressed the cases where it was obviously red." Uhm, what?
A good mental exercise with these kinds of examples is to imagine splitting up the parts of the program that each optimization looked at, so that they live in different translation units.
This prevents your mind from playing games with what exactly the optimization can know. The article tries to do something similar by fully writing out the output of each optimization as a separate program.
> A good mental exercise with these kinds of examples is to imagine splitting up the parts of the program that each optimization looked at, so that they live in different translation units. This prevents your mind from playing games with what exactly the optimization can know. The article tries to do something similar by fully writing out the output of each optimization as a separate program.
I already do that in my mind, and my mind isn't playing games. If you have a specific point to rebut please do so. Though please read https://news.ycombinator.com/item?id=42907292 in its entirety beforehand. It seems we are reading those intermediate outputs differently.
My point was that toy examples like this are useful as long as you think about them consistently. I wasn't trying to make any claims about your concrete argument in particular.
> My point was that toy examples like this are useful as long as you think about them consistently. I wasn't trying to make any claims about your concrete argument in particular.
Actually I'd suggest the law analogy illustrates the problem with your approach.
"You agree on the principle that I am Innocent until proved Guilty, right?" Sure. "And I haven't been proved guilty, have I? So therefore I'm innocent, right?" And that's why we need a trial. "But it makes no sense to try the innocent."
> Actually I'd suggest the law analogy illustrates the problem with your approach. "You agree on the principle that I am Innocent until proved Guilty, right?" Sure. "And I haven't been proved guilty, have I? So therefore I'm innocent, right?" And that's why we need a trial. "But it makes no sense to try the innocent."
Your statement makes no sense. The semantic analysis is the trial. The optimization is the sentencing. The compiler is making a false assumption during the trial and then proceeding with sentencing as if the code were guilty, when the evidence clearly pointed otherwise.
(I'm not about to digress into a random quibble about an analogy so this is my last comment on that.)