> But anything that gets into bayesians and frequentists ends in a fundamental divide in humanity. Humans don't agree on which interpretation is correct.
There is no divide; there is the illusion of a divide, because we didn't have a rigorous formal model of how to build reliable knowledge, and everyone focused on different but relevant aspects.
Bayesian reasoning is the correct way if you have justifiable priors, but we didn't have a way to calculate the correct prior.
Solomonoff showed us how with his theory of induction: Kolmogorov complexity is a measure of parsimony, and it gives us a formal, rigorous way to select priors.
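To make the idea concrete, here's a toy sketch (my own illustration, not from Solomonoff's papers). True Kolmogorov complexity is uncomputable, so I hand-assign each hypothesis a description length in bits as a stand-in; the prior weight is then 2^(-length), so more parsimonious hypotheses start with more probability mass, and ordinary Bayesian updating takes over from there:

```python
from fractions import Fraction

# Hypothetical coin-flip hypotheses. Each gets a made-up "description
# length" in bits (a stand-in for its Kolmogorov complexity) and the
# probability it assigns to observing heads.
hypotheses = {
    "fair coin":    (2, Fraction(1, 2)),
    "biased 3/4":   (5, Fraction(3, 4)),
    "always heads": (8, Fraction(1, 1)),
}

# Solomonoff-style prior: weight proportional to 2^(-description length),
# then normalize. Shorter descriptions get exponentially more mass.
prior = {h: Fraction(1, 2 ** bits) for h, (bits, _) in hypotheses.items()}
z = sum(prior.values())
prior = {h: p / z for h, p in prior.items()}

def update(posterior, observation):
    """One Bayesian update: multiply each hypothesis by its likelihood."""
    post = {}
    for h, p in posterior.items():
        p_heads = hypotheses[h][1]
        likelihood = p_heads if observation == "H" else 1 - p_heads
        post[h] = p * likelihood
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

posterior = prior
for obs in "HHHHHH":  # observe six heads in a row
    posterior = update(posterior, obs)

for h, p in posterior.items():
    print(h, float(p))
```

After six heads, "always heads" fits the data perfectly but pays for its complexity penalty, while the fair coin keeps its parsimony advantage; the posterior balances fit against description length, which is exactly the trade-off the parsimony prior formalizes.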
Solomonoff induction is to knowledge what Turing machines or the lambda calculus are to computation. Sure, aliens might not discover Turing machines exactly, or the lambda calculus exactly, but whatever they do build that's capable of universal computation must, by necessity, be isomorphic to a Turing machine.
The frequentist/Bayesian divide is a separate issue about how to interpret statistical data in useful ways, not specifically about how we know what we know and what confidence we should have in our knowledge, which is what you were asking about.
Interesting. Do you know of any popular science articles or books that can describe what you're talking about? Academic papers are fine too, just harder to parse.
Hard to find simple articles on such an esoteric topic as algorithmic probability, which cuts across subjects like probability, information theory, and computation. This one seems to hit all the notes, but who knows if it's as accessible as it's aiming to be: