I have a lot of background in programming language theory and mathematical foundations, which is sort of one half of the topic that's explored in this post. Two thoughts:
1. Rewriting systems are very useful tools. One of the things I learned from this post was about the existence of FindEquationalProof [1], which I think is pretty darn neat and super useful.
2. This post is imbued with a latent metaphysics that is somewhat common in formal mathematics and will invariably come out at relevant meetings/conferences after a few glasses of wine. Not this instantiation in particular, but a generally similar sort of metaphysics. I've always gotten church vibes from that sort of thing. (I never "got" church.)
I never got the "emergent properties have special aesthetic and nearly spiritual significance" or "everything is just an <insert structure here>" cognitive confusion that so many mathematicians (especially formalists) seem to have.
But the way that it happens does make sense.
Intellectual curiosity is a valid work selection strategy and sometimes invention/discovery requires a leap of faith. Developing a predisposition to spiritual thought patterns while doing work that requires a leap of faith makes sense, I guess, but it's something I warn young mathematicians to guard against.
But then, I'm also not allergic to fava beans. Maybe math-as-spirituality and believing beans are evil are also just useful tools.
> But I ultimately think of mathematics as just an invented tool whose only reason for existence is to solve concrete problems.
This might be the source of disconnect. I frequently encounter this perspective and worry there's a fundamental problem with how mathematics is taught if so many people walk away believing this. Whether or not humans ever mastered mathematics, what is and isn't mathematically true would not change. Humans can create notation and formalisms, but they do not invent the truths those mathematics represent.
> Humans can create notation and formalisms, but they do not invent the truths those mathematics represent.
The land represented by a map exists independently of humanity. Another intelligent species would have to come up with a roughly isomorphic representation if they wanted a similar tool.
Maps, to be clear, are just invented tools. They can be more or less right or wrong, but they are not the territory.
Moving up the meta stack does sort of confuse this initially if you don't stay grounded. I wonder if there is a field of meta-map-making and whether map makers sometimes confuse maps with territories when they start meta-map-making work.
> I frequently encounter this perspective and worry there's a fundamental problem with how mathematics is taught if so many people walk away believing this.
I didn't walk away (undergrad, PhD, PI, editorial committees, grant reviewer, ...). I'm about as far from walking away as is possible. But I suppose it is possible I'm an impostor :)
'Roughly isomorphic' would be saying 'not isomorphic', so I'm not sure what you're trying to say. Actually, I'm frustrated with how most physicists/math folks misuse the term 'isomorphism' to mean "a bijection".
Which gets to the second point, if there is a true isomorphism between the map and the land, it doesn't matter that one isn't the other. That would mean that the land is constrained by the same axioms as the 'map', which gives some significance to them.
It's not a bijection either. If we take your level of pedantry seriously, it's nothing. There are no mathematical structures in play. I'm using natural language words to describe informal ideas outside of any mathematical system.
Let's use bloogidy-blop to avoid silly arguments over sequences of characters that have clear contextual meaning which you refuse to acknowledge for some reason :)
I'm not sure what your second point is supposed to be. Here's the relevant quotes:
>>> Humans can create notation and formalisms, but they do not invent the truths those mathematics represent.
>> The land represented by a map exists independently of humanity. Another intelligent species would have to come up with a roughly isomorphic representation if they wanted a similar tool.
> Which gets to the second point, if there is a true isomorphism between the map and the land, it doesn't matter that one isn't the other. That would mean that the land is constrained by the same axioms as the 'map', which gives some significance to them.
Right... but I didn't say that there was a bloogidy-blop between maps and land. I said that a useful alien map would have to be roughly bloogidy-blop to our maps. So I'm not really sure what you're trying to say here that is relevant to my original post.
This is clearly not true. If the aliens happened to be 10% or 1000% of our size, their concept of a relevant physical feature would be very different.
Maps are basically feature extraction -> data compression. The feature extraction part is subjective and depends on the experience of a species.
A map of cellphone towers is useless to a cat. A map of blobblytoids is useless to a human who doesn't know what a blobblytoid is or how to recognise one.
A human may have some vague awareness that something is there, but it's also possible that blobblytoids look like random noise, or like weird probabilistic anomalies that travel around inside a multidimensional space, or like something completely unimaginable.
So in the limit features can't be extracted because they are invisible to a different consciousness. They can still be physically present, but their meaning as a feature of interest depends on having a subjective referent for them.
This seems to be something many humans struggle with. We assume everyone else - including other humans - has the same set of referents, and therefore our personal feature maps are somehow universal.
Of course they aren't. They aren't even universal among humans, never mind a completely unknown alien species.
I think there is some sort of threshold analogous to Turing-completeness (it probably is just Turing-completeness) when it comes to intelligence, in that information is eventually accessible to any system that passes it.
If there is a species that thinks in terms of things that humans never could understand, then I would argue that species isn't part of our physical reality. If their expression of concepts is at all rooted in physical reality, then we, as physical beings, would have access to it. Maybe not the first person that sees it, nor the second, nor their great-great-great-grandchildren, but at some point their children could build things/begin to appreciate what was being communicated.
> If there is a species that thinks in terms of things that humans never could understand, then I would argue that species isn't part of our physical reality.
What if they can think in our terms AND they can think thoughts we are physically unable to, thoughts that are literally inconceivable?
If the thoughts have any interaction at all between each other, we would be able to leverage that to understand those higher-level thoughts to the degree that they affect the ones on our level. As an analogy, we can project an N dimensional object onto N! planes and get a complete, but not intuitive, description of what it is.
Maybe they know the exact value of Chaitin's constant, but at least we know its properties.
Yes, the exact sense in which maps are bloogidy-blop to one another may vary considerably, but if they cover the same geographic region there'll be some bloogidy-blop, i.e. common reference points, which would relate the maps. Maybe blobblytoids always go in valleys. Or whatever.
We seem in agreement on the object question in any case.
Well, we can say 'an equivalence' (or adjoint) which would be accurate and still using natural language, no?
Onto my second point, you don't need an isomorphism, an equivalence works fine if you find that the axioms hold in both cases, no? So now you're not just working with a 'formalism' that has no basis in reality.
Otherwise, if reality wasn't constrained by axioms (or even meta axioms) we'd use it to do things we couldn't with our formalisms.
> Well, we can say 'an equivalence' (or adjoint) which would be accurate and still using natural language, no?
No, their maps could be VERY different but be used in roughly the same way and therefore useful in the same way. There is an operator involved -- reading/navigating/interpreting -- which is why I chose "isomorphism" instead of "equivalent".
Also, "isomorphism" IS natural language that is used in many fields outside of mathematics and also has a vernacular sense to it. The word happens to also be used to describe certain formal constructions by mathematicians from time to time, but it is natural language.
Again, this is all very pedantic and silly. I insist on bloogidy-blop. If we're going to have silly arguments, let's use silly words :)
> Onto my second point, you don't need an isomorphism, an equivalence works fine if you find that the axioms hold in both cases, no? So now you're not just working with a 'formalism' that has no basis in reality.
Reality isn't constrained by maps.
Useful maps are constrained by reality.
> Otherwise, if reality wasn't constrained by axioms (or even meta axioms) we'd use it to do things we couldn't with our formalisms.
Huh? I can't use reality the way I use maps because I'm not always able to fly into the air and look around before navigating to the supermarket or a trail head.
Reality is not constrained by my inReach. I promise you it's the other way around. And I promise you I can't fly, which means I need maps, even if flying high into the air would make it way easier to find a trail head than following a not-great trail map.
Maps are useful. Very useful. But they DO NOT constrain reality.
A belief to the contrary in the case of mathematics and physics is quite spiritual. Which was kind of my original point :)
Any species would prove the exact same theorems given the same axioms, and since mathematicians only claim that their axioms imply their theorems, I think they are right to claim absolute truth.
Oh, dear. Truth being the operative word… There is no truth in a set of axioms we cannot even conceive properly (any infinite set has properties beyond what seems reasonable, even “just” the natural numbers). From that comes arithmetic, the “most elementary” form of mathematics, which cannot be proved consistent…
We (I am a working mathematician) do not understand our objects, we can just make do. Only finite graph theory has a chance of being “real”. And it stops being finite very soon.
And we certainly should be honest enough to admit that our “science” says very little about the “real” world, where truth lies.
Maths is just a tool. Funny, exciting and even in some sense beautiful. But “truth” does it not contain. Except, I insist, in very specific finite constructions.
Statements hold but they are not “true” because they do not relate to the real world (otherwise, Frodo reaching Mount Doom would also be “true”).
There are no continuous functions out there. Bolzano’s theorem is not “true”.
I would contend that A -> B can be true even if A is not true or more relevantly to this discussion if A is unknown. That's math's version of objective truth, where "A" is filled by our various axioms and rules of inference.
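To make the vacuous-truth point concrete, here's a minimal truth-table sketch (Haskell; the encoding is mine, not anything from the thread):

    -- Material implication: A -> B is false only when A holds and B fails.
    implies :: Bool -> Bool -> Bool
    implies a b = not a || b

    main :: IO ()
    main = mapM_ print
      [ (a, b, implies a b) | a <- [False, True], b <- [False, True] ]
    -- Whenever a == False, implies a b == True ("vacuous truth"), which
    -- is the sense in which A -> B can hold even when A does not.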
How can you explain that appealing to these “unreal objects” (real numbers, set theory, arithmetic) *does* help science? (Effectiveness, maybe.)
I see you are also a non-realist about science.
But even the methodological naturalist (one who takes natural empirical science to be the best method but not an ontology) must wonder how we are uncovering and putting more precision to more and more of the world.
I don’t think we can currently explain why this made up tool “works”.
OK, fair enough. There is no issue with proving the consistency of arithmetic restricted to a finite number of symbols. Incompleteness relies entirely on the unbounded induction step.
Any species could prove the same theorems given the same axioms, but (besides the fact that they might not choose the same axioms) I'm not sure if they would prove the same subset of theorems that we have proven/will prove. Perhaps they'd have different ideas about what is interesting.
Human mathematicians are already fanning out into other systems of deduction (constructive mathematics being a great example), and given enough time the mathematicians of each galaxy will eventually discover the other galaxy's mathematics, even if it perhaps happens in a different order.
Surely intergalactic mathematicians already know that the only time is now? =)
> As Prigogine explains, determinism is fundamentally a denial of the arrow of time. With no arrow of time, there is no longer a privileged moment known as the "present," which follows a determined "past" and precedes an undetermined "future." All of time is simply given, with the future as determined or as undetermined as the past. With irreversibility, the arrow of time is reintroduced to physics.
This is either an extremely obvious and boring observation or the basis for a metaphysical trip, depending on how pre-disposed you are to "mathematical spiritualism" :)
Nothing can be objectively interesting, only objectively true. Just because math is objectively true does not mean you've been robbed of your license to decide if you think it's interesting. :)
Agreed. Similarly, just because I don't "get" church doesn't mean I can prove God doesn't exist. And it certainly doesn't mean I should stand in the way of others enjoying the experience of going to church regardless of their beliefs. It just means I don't "get" it.
Religion is slightly different though, they claim actual direct truth (not mere truth of implication given certain assumptions) which makes their claims more interesting but prevents them from claiming automatic objective truth. The Formal Gospel would go, "If God so loved the world that he gave his only begotten son, ..." ;)
That raises the question: would any other species pick the exact same axioms? Why would they have the exact same theorems? Are you suggesting there is only one way to think logically?
Any species would face a selective pressure towards theorems which help them understand the world around them (if they have any motivation to prove theorems at all), and they will similarly face a pressure towards choosing the smallest/simplest set of axioms which allows all those theorems to be proven (and new ones to be discovered).
In fact, if we assume that neural networks are the only sorts of intelligence that can occur naturally in the universe and be sophisticated enough for arbitrary abstract calculation[0], then we might be able to infer things about the sorts of concepts they will develop and in what order. For example, having the concept of finite sums would likely occur before having the concept of infinite sums.
[0] I know that cellular automata can emulate a universal Turing machine, but I can't imagine a situation existing in nature where the cells evolve into an arrangement that produces a Turing machine, much less a machine running a program of instructions that lead to it generating mathematical theorems.
There certainly is, although I'm not sure it has a name. Kids get an introduction to it in school, when they have classes in Geography about how to read a map.
> Whether or not humans ever mastered mathematics, what is and isn't mathematically true would not change...[we] can create notation and formalisms, but they do not invent the truths those mathematics represent
This is a big philosophical question. (Kant would agree with you. Some of his detractors would not.)
Setting that aside, I agree with your criticism of the claim that mathematics only exists to solve concrete problems. Mathematical fiction, e.g. exploring how a system following nonsense rules might behave, is perfectly good math. It's interesting, potentially beautiful, first and foremost; it might also be useful, though that's of secondary concern. (It has an uncanny knack for being so [1].)
To say math must serve physical reality is to discard its artistic side, perhaps essence; that's disappointing, debilitating and reductive.
With respect to aesthetics, to each their own. I do find some mathematics beautiful. In fact, "I am bored" is a real problem and mathematics can be used to solve that problem by being a tool that tickles our brains in pleasant ways.
The "tool" and "problem" here are meant as comments on the metaphysical content of mathematics, not some sort of statement that mathematics is for engineering and that's all.
In particular: I'm commenting on the imbued/latent metaphysics of Wolfram's post, which goes beyond mere artistic appreciation. If his framing were "and look how pretty cellular automata are!" then I guess my reaction would be "yeah they are quite cool aren't they?"
I find Church-Rosser quite beautiful and also think Wolfram puts way too much metaphysical weight into the behavior of confluent rewrite systems. Similarly, some Psalms are beautiful and the story of Jesus is very nice but god does not actually exist. There's no contradiction there -- you can take the beauty and spit out the metaphysics.
> Whether or not humans ever mastered mathematics, what is and isn't mathematically true would not change. Humans can create notation and formalisms, but they do not invent the truths those mathematics represent.
We quite literally have no way of ever knowing this. This proposition and its negation are both beyond the scope of human knowledge.
We do have a way of knowing this; it is as simple as saying that the finish line of a symbol game will stay the same given the initial symbols and the rules that can be used to move them around.
How is the Banach–Tarski paradox a truth that exists independent of humanity? It makes a physically implausible assumption (existence of infinitely small objects) and reaches a physically implausible conclusion (violation of conservation of mass). Mathematics is full of things like this. They all look like human inventions to me.
Banach–Tarski relies upon the Axiom of Choice / Law of the Excluded Middle. The Axiom of Choice is independent of Zermelo–Fraenkel set theory, and there's an entire field of mathematics called constructive mathematics which avoids including the Axiom of Choice / Law of the Excluded Middle.
I had a hard time grasping why the Axiom of Choice / Law of the Excluded Middle was so problematic until I heard it translated into a Computer Science context.
The Law of the Excluded Middle sounds very reasonable at first. For all propositions P, P ∨ ¬P. I.e., every proposition is either true or false. Sounds fine, right? But view it in the context of computer science via the Curry-Howard isomorphism: a proposition corresponds to a type, and a proof of it is a program of that type, so deciding the truth of an arbitrary proposition amounts to deciding a question about arbitrary programs. That puts the Law of the Excluded Middle right up against the Halting Problem! It effectively asserts that every such question can be settled true or false, but we know that some programs don't terminate, and some propositions aren't true or false but undecidable.
So circling back around to the Banach-Tarski paradox. I would be very skeptical of any paradoxes resulting from assuming the halting problem doesn't exist!
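If it helps, here's a minimal sketch of that correspondence in Haskell (the encoding is the standard one; the names are mine). No total program can inhabit the type of excluded middle, but its double negation is constructively provable:

    {-# LANGUAGE RankNTypes #-}
    import Data.Void (Void)

    -- Curry-Howard: a proposition is a type, a proof is a (total) program
    -- of that type. Negation of a proposition a is "a implies absurdity".
    type Not a = a -> Void

    -- Excluded middle as a type: for ANY proposition a, either a proof of
    -- a or a refutation of a. Writing a total term of this type would
    -- amount to a universal decision procedure, so none exists.
    type LEM = forall a. Either a (Not a)

    -- What IS provable constructively: the double negation of LEM.
    lemIrrefutable :: Not (Not (Either a (Not a)))
    lemIrrefutable k = k (Right (\a -> k (Left a)))

    main :: IO ()
    main = putStrLn "lemIrrefutable typechecks, so not-not-LEM holds"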
Underrated point. Conservation Laws emerge from symmetries via Noether's theorem. In particular, Conservation of Mass / Energy arises from Time Translation Symmetry. General Relativity doesn't have Time Translation Symmetry because the universe is expanding. Now the question is can we extract free energy from the expansion of spacetime and avoid the heat death of the universe?
>they do not invent the truths those mathematics represent
I got the idea somewhere that, because Principia Mathematica was doomed to failure, any two "islands" of math are not necessarily related to each other.
So I would think that hypothetical aliens in different circumstances could in fact have math that didn't intersect with ours at all.
If they did, wouldn't it be possible to build one system up from foundations?
> I never got the "emergent properties have special aesthetic and nearly spiritual significance" or "everything is just an <insert structure here>" cognitive confusion that so many mathematicians (especially formalists) seem to have.
People just like spiritual beliefs. There's a reason why a majority of human beings hold non-rational beliefs like religion or money having intrinsic value.
Note that "well-behaved" rewriting systems are usually confluent; the nLab wiki has a useful description of confluent categories [https://ncatlab.org/nlab/show/confluent%20category]. In general, category theory has plenty to say about any mathematical structures where simple operations may be arbitrarily "composed" in sequence to build more complex ones, and rewrite systems seem to be one example of this (if perhaps one where physical substrates that directly reflect that structure may be easier to come across).
That's certainly a description, but I'm not sure it's terribly useful for anyone who doesn't already know what confluence means in the context of a rewriting system.
If that work is too dense, all you need to know is the first paragraph. If that's all Greek to you, it can be expanded as follows: Suppose you have a program e and an interpreter that executes e by applying a big set of rules that rewrite e until e becomes a value (such as a number, string, function, etc.).
So, for example, (function add(x) { x+x }; add(5)) -> 5+5 -> 6+4 -> 7+3 -> ... -> 10.
Here, "->" is thedefined by a big set of rules that basically pattern match on the syntax of the left-hand term and produce a corresponding right-hand term. So, for example, we have rules like:
(function F(x) { B[x] }; F(y) -> B[y]
and
n+m -> (increment of n) + (decrement of m)
and
n+0 -> n
where n,m are defined to be non-negative natural numbers.
Your interpreter is just a big set of these sorts of rewriting rules, and there's no "ordering" on the rules. Any rule that is applicable to the left-hand side could be used at any point where it's applicable.
Imagine, now, that you define your rewrite rules and then notice that at some points more than one rule might be applicable to the left hand side!
That could be bad if a single program could compute different values depending on which rule was chosen!
In general: Suppose there are some rewrite rules such that e -> ... -> e1. (Which we sometimes write as e ->^* e1 for readability.) Suppose there are also some rules such that e -> ... -> e2, where e1 and e2 are different and might not be values (i.e., there may still be more rules applicable to e1 and e2).
Your system is confluent -- i.e., not bad in the above sense -- if whenever the above happens there are also some rules such that e1 -> ... -> e3 and e2 -> ... -> e3. Then you know "there are many possible executions of the interpreter but at the end of the day the interpreter always spits out the same value"
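To make that concrete, here's a runnable toy version of the interpreter sketched above (Haskell; the term representation and names are my own). It applies the two addition rules at every applicable position and collects every reachable normal form:

    import Data.List (nub)

    -- Terms: literals and sums. The rules from above are
    --   n + m -> (n+1) + (m-1)   (for m > 0)
    --   n + 0 -> n
    data Term = Lit Int | Add Term Term deriving (Eq, Show)

    -- All single-step rewrites of a term (a rule may fire in many places).
    step :: Term -> [Term]
    step (Lit _) = []
    step (Add (Lit n) (Lit 0)) = [Lit n]
    step (Add (Lit n) (Lit m)) = [Add (Lit (n + 1)) (Lit (m - 1))]
    step (Add l r) = [Add l' r | l' <- step l] ++ [Add l r' | r' <- step r]

    -- Rewrite exhaustively, exploring every choice of redex.
    normalForms :: Term -> [Term]
    normalForms t = case step t of
      [] -> [t]
      ts -> nub (concatMap normalForms ts)

    main :: IO ()
    main = print (normalForms (Add (Add (Lit 2) (Lit 3)) (Add (Lit 1) (Lit 4))))
    -- Both subterms are redexes, so many execution orders exist; confluence
    -- predicts a single normal form, and this prints [Lit 10].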
> I never got the "emergent properties have special aesthetic and nearly spiritual significance" or "everything is just an <insert structure here>" cognitive confusion that so many mathematicians (especially formalists) seem to have.
Say what you like about formalists, but at least they're not goddamn Platonists.
Last I heard about Stephen Wolfram (10-15 years ago) he had gone down a rabbit hole of trying to use cellular automata to model all of reality. I guess this "ruliad" concept is where he ended up. I can never figure out whether there's any actual substance there -- every time something of his gets posted it's a long, abstract article that links to multiple other long, abstract articles, but I never get any sense of something like "...and this is how you get special relativity" or "this is how you get Maxwell's Equations", or even something abstract but relatively concise like Noether's Theorem. Maybe I just don't know enough higher math and physics -- is there a more concrete explanation of any of this?
Same. He really needs to learn how to omit the hype and self-aggrandizement and reduce his articles down to their scientific essence only. Stop burying the lede and put the main ideas in an abstract, then develop, support and critique them in the text.
That would reduce their length by roughly 50% and increase their readability and information density immeasurably. For all its faults, this is one thing the academic scientific publishing system does well.
He needs to just let his work speak for itself, instead of trying to be a salesman for it. His inability or unwillingness to do that is negative signal, no matter how positively he phrases it.
I continue waiting with an open mind to see if his techniques produce any kind of predictive theories. But without testable predictions, there's no way of knowing if ruliads and the like are a new and useful representation of reality, or just the rules for an alternate simulation wholly different from and unconnected to our reality.
It would be really cool if they turn out to be the former, and he's figured out a new symbolic+computational representation of Maxwell's equations and relativity. But we'll see.
There is some interesting material here, yet at a basic level this sounds like it is all steeped in the kind of misunderstanding of mathematics that is common among physicists.
In physics there are real tests and relations that have meaning so it makes sense to ask if String Theory is correct or useful. In mathematics there are complex structures built from axioms and sometimes these structures can be related to each other in interesting ways. Will anything important ever come of String Theory? Maybe, but from a mathematical point of view no theory needs to correctly model what is real or have demonstrated application to be worthy of exploration and study.
The particular thing that comes to mind repeatedly when reading this is the fact that more or less all of mathematics can be derived starting either from set theory or from logic theory. There is no particular reason to choose one or the other, but set theory is the most common foundational substrate. Depending on how this is done, it may be simpler, easier, or more direct to express or prove things regarding some particular idea or domain. But Wolfram seems to be seeing things in terms of there being a big space with defined paths through it. Instead of leaning on set theory or logic theory to get to a particular expression or result, there should be a global basis for expressions and results. That different models can be used for the same phenomena doesn't seem to be considered valuable or even a possibility.
> The particular thing that comes to mind repeatedly when reading this is the fact that more or less all of mathematics can be derived starting either from set theory or from logic theory.
I don't know if this is the actual foundation of mathematics though - we're seeing more advances in category theory, the Russell-Whitehead project of reducing mathematics to pure logic is generally considered a failure, and set theory's bogged down in issues of axioms in the wake of Cohen's proof of the independence of the continuum hypothesis. It's probably better to see these foundational projects as providing windows into the mathematical universe instead of being the actual substance of mathematics.
After all, we do mathematics without pure logic or sets all the time. Axioms are chosen for their elegance and ability to describe conceived mathematical concepts, not the other way around.
> The particular thing that comes to mind repeatedly when reading this is the fact that more or less all of mathematics can be derived starting either from set theory or from logic theory.
No, most of mathematics can be expressed in set theory. It's like saying every program can be written in C. It's more or less true, but the philosophical implications are overblown. That is, it's important that set theory and C are so powerful, but there's nothing[1] special about them in particular, we could just as well choose different foundations/Turing complete languages.
[1] Disclaimer: I am not a set theorist and I presume there's a reason set theorists study ZFC and its more powerful cousins so intensively.
> the Russell-Whitehead project of reducing mathematics to pure logic is generally considered a failure
Sigh. I find these discussions tedious, but oh well...
I've heard this before and it seems wrong and rooted in a misunderstanding. Can someone (not necessarily the person I'm replying to) explain why you believe this?
AFAICT among non-academics it's mostly rooted in pop-sci storytelling. Same genre as “Gödel went insane because of his impossibility result” and nonsense like that.
Among academics (esp. mathematicians) this impression comes from the fact that if you look at almost any mathematics department, there aren't many people working in/on formal logic. But that's mostly because all of the mathematicians working on/in formal logic suffer the humiliation of sitting in the fancy new CS building with higher salaries and lower teaching loads ;)
> In physics there are real tests and relations that have meaning so it makes sense to ask if String Theory is correct or useful. In mathematics there are complex structures built from axioms and sometimes these structures can be related to each other in interesting ways. Will anything important ever come of String Theory? Maybe, but from a mathematical point of view no theory needs to correctly model what is real or have demonstrated application to be worthy of exploration and study.
You say this as if it contradicts some point that the author is trying to make or some key assumption that he holds.
Yet on the contrary, the author himself makes a similar point in the article:
> But the way we’ve modeled mathematics here has been much more about what statements can be derived (or entailed) than about any kind of abstract notion of what statements can be “tagged as true”. In other words, we’ve been more concerned with “structurally deriving” that “1 + 1 = 2” than in saying that “1 + 1 = 2 is true”.
That is, math is not necessarily about truth or usefulness in some (meta)physical sense, it is about entailment from certain axioms via formal rules.
You insert "an observer like us" and see if the universe it generates looks like ours. If it does, we learn about the observer. This was the original project of natural philosophy. - Know thyself.
> But what our Physics Project suggests is that underneath everything we physically experience there is a single very general abstract structure—that we call the ruliad—and that our physical laws arise in an inexorable way from the particular samples we take of this structure.
> I call it the ruliad. Think of it as the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways.
My initial objection is the following. I can imagine a universe where what is computable inside the universe is not sufficient to describe the universe. The universe might, for example, run on real numbers but due to something vaguely resembling the uncertainty principle those can not be fully used for computations within that universe and so the most powerful computational device within the universe ends up being something discrete like a Turing machine.
Admittedly those two quotes are essentially everything I have read about this topic and this might be addressed somewhere, maybe my objection itself is not consistent, but I think one needs a good justification why computability within a universe is essential for understanding or explaining that universe.
I'm not sure that this objection has much practical significance even if it turns out to be true.
I think the more pressing concern with the ruliad program is that a description of all things possible is also a description of nothing in particular.
Other workers developing mega-logical type stuff, frameworks and so on, ran into similar problems when it came time to find actual utility for their work. Sure, you have this super expressive thing... but the things you're supposed to build in it are better built on their own terms and the super expressive framework doesn't buy you enough to be worth engaging with.
No need to imagine this; it's already the case in this universe, at least if we're talking about actually computing something, not just writing down the equations on paper.
For example, from everything we know so far, there are some truly continuous, non-quantized quantities, yet all numerical solutions can ever produce is an ever increasingly good approximation of something.
Some constants are irrational, so we can never get the true values of certain physical constants, etc...
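A tiny illustration of that last point (my own sketch: Newton's method for sqrt 2 in Haskell, nothing from the thread):

    -- Newton iteration for sqrt 2: x' = (x + 2/x) / 2, starting from 1.
    approximations :: [Double]
    approximations = take 6 (iterate (\x -> (x + 2 / x) / 2) 1)

    main :: IO ()
    main = mapM_ print approximations
    -- 1.0, 1.5, 1.4166..., 1.4142156..., 1.4142135..., each closer to
    -- sqrt 2, none ever exact. (In exact arithmetic the iterates approach
    -- sqrt 2 forever; in Double they stall at the nearest representable.)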
Why does Stephen Wolfram have to be so long-winded? I’m sure there is something valuable buried in this post, but it is over 50000 words. I guess this is what happens when a researcher doesn’t frequently publish in peer-reviewed formats with page limits.
> When we do physics, the traditional approach has been to start from our basic sensory experience of the physical world, and of concepts like space, time and motion—and then to try to formalize our descriptions of these things, and build on these formalizations. And in its early development—for example by Euclid—mathematics took the same basic approach.
I don’t think Euclid started from “basic sensory experience”. Euclid based his mathematics on non-existent things, neither point nor line exist in this world as defined by Euclid: A point is that which has no part. A line is breadthless length.
Also, in physics “space” does not exist as a quantity, only distance exists. Space is an abstraction.
I have thought about this thing for some time and I think that the unifying idea of math, physics (quantum shit in particular), programming, type theory, probability, etc. is the idea of fixed points, which is just ubiquitous. Fixed points are based on the idea of an adjoint (1-to-many relationship) and a norm (many-to-1 relationship).
My own attempt at what Wolfram is doing resulted in:
1: f(a,a)
2: f(a,b)
I call the former introspection and the latter is regular old composition. This seems to be a taxonomy of everything my discerning mind is capable of and hence the limit of what aspects of human experience I can peer review. The rest (the bulk) is mysticism and always-valid personal experience.
---
“Whatever we call reality, it is revealed to us only through the active construction in which we participate.”
– Ilya Prigogine, Order Out of Chaos: Man’s New Dialogue with Nature (1984)
I think that identifying the ruliad with the "logos" makes sense - the idea that there's some underlying abstract shape to the universe that makes rationality possible in the first place.
Perhaps we can back-propagate physics through mathematics to discover physical proof of logos? Then we could substantiate the claim that reality is not an illusion composed of ever-fluctuating qualia.
That discovery would certainly fill this one with wonder.
Here’s something I don’t get about Wolfram and insisting on a computation-like underbelly of the universe.
Computation is built on the idea of the Turing machine. But what is reading the tape in Turing's analogy? A human! The tape and Turing machine are designed so that every human agrees upon its formal validity.
It's not a statement about mental states or computation-based "reality". Before all of that, it is about how society can use social rule-following and basic step-by-step processes (using language) to create formal systems.
A human reading the tape or even a human with pencil and paper. That is the main analogy.
So why do so many like Wolfram think computation is reality? It seems from the get go he is headed down the wrong track.
You and I are computers, and computers can think, according to Turing. But TMs came about as a way to develop formal systems.
To me Wolfram and the Churchlands seem to have completely unjustified claims.
Computation is not built on the idea of the Turing machine... That is a very, very strong claim that I assume you did not mean to imbue with such heft.
The Turing machine, rather, is a Platonic ideal -- a model -- of a computing machine which is transparent to humans performing analysis on it. There are other models, and there are other means of computation.
I think you missed the forest for the trees. It may help to familiarize yourself with Turing completeness and what Wolfram calls computational equivalence.
We could be a little more permissive here about what constitutes a "model".
Models are not systems. <=> Maps are not territories.
Formal systems are models. They model the world. Depending on which formal system you use, one may model the world better than another. There is no "perfect"; there is only "comprehensible", "insightful", and, when predicting the future, "accurate". All of these are measured in degrees, not kinds.
Models are not related to systems a priori. You are permitted to apply any model to any system. Whether the application is useful is only posterior to this first step.
To me this is a problematic viewpoint. All these formalisms are not meant to be models of the world. I'd even argue that for anything to be accepted as a formal system, it must be far removed from taking stances on reality.
> that we call the ruliad—and that our physical laws arise in an inexorable way from the particular samples we take of this structure.
What is a physical law? To me a physical law is a proportionality, that is, an equality of ratios. When we find something that stays constant while something else is changing, we call this a law. But it is really a proportionality. So, mathematics and physics are tied by proportionality. They have proportionality in common.
I have no special knowledge, but I've read that Wolfram has been shopping his grand unified theory around for a while with no takers in academia.
He reportedly started his own business to be free to pursue his research program on his own terms, but you have to wonder if the bumpers of peer review and academic respectability aren't there for a reason.
I think he started his company to sell Mathematica - the research agenda stuff has been more recently emphasized. When I first started using Mathematica almost 30 years ago the company was pretty much just selling the tool for technical computing. It was after his book came out around 20 years ago that things drifted more towards his weird research, which really amped up in the last decade.
The biggest complaint I’ve seen about his work, especially the new Wolfram Physics stuff is that it’s a theory without predictions. He pretty much exclusively talks about representing things we already know but in terms of his world of cellular automata and rewriting systems. Until it is applied to make a testable prediction, it’s not as much science as it is gratuitous programming, visualization, and grandiose claims.
The sheer volume of self-congratulating text that he can produce is impressive. It's quite hard to read, since it spends nearly as much time talking about himself and how insightful he is as it does the technical topics. From the things I've read, there isn't much of value buried in there.
It's not complete bullshit, it just fails to engage with the already existing work on the subject. He's creating his own entire system instead of looking at similar work by others and fitting his theory in with those. If you don't play philosophy on their turf, philosophers won't engage with you.
Most responses are clouded by emotion still. This is a kind of war on reality since the proposal is to pass the torch of Truth-saying from one institution to another. That no heads are being lopped off in more than verbiage is truly great progress in our quest for self-knowledge.
Wolfram is a famous crackpot. I wish there were a peer review before I started reading his essay. No doubt he is again advertising his Wolfram* products, A New Kind of Science, etc. No?
It seems to me that 'crackpot' is a bit strong. Even if one thinks Wolfram's foundations of physics project will never bear useful fruit, it's undeniable that he has made progress in other fields that are of interest to many people.
A simple case in point: the study of logic has been an interesting human endeavor for thousands of years, since at least the time of the Greek and Vedic schools. After thousands of years of study, a major breakthrough was made with the first formalization of propositional logic (as Boolean algebra) by George Boole in 1854. Since then, there has been a search for the simplest foundational formal axioms from which all of propositional logic could be derived. This project ended in 2000 with Wolfram's discovery of, and proof that, [1] is the shortest possible single axiom that can be used as a foundation for all of propositional logic.
The tricky bit is that "simplest" has no formal definition. Here Wolfram's claim to simplicity is having the smallest number of axioms, even if that one axiom is very complicated.
Unfortunately, that is a perversion of the idea of axiom. An axiom should be as simple as it can be and ideally self-evident. Being self-evident is a strong requirement because axioms are not proven but accepted as true.
Clearly, no one would say that Wolfram's axiom is self-evident. As a matter of fact, even Wolfram does not find it self-evident: that it is equivalent and sufficient as a basis needed to be proven, via proving it can generate the other known sets of axioms.
Basically, once a field has matured enough and we have found a simple set of axioms, one can come up with a new set. All sets of axioms must encode the same information, so you can either have multiple simple ones or a single complex one. The quantity of information that is encoded must be constant.
So having a single axiom is not simpler, except by the very naive measurement of count.
> Even if one thinks Wolfram's foundations of physics project will never bear useful fruit, it's undeniable that he has made progress in other fields that are of interest to many people.
This is true, I'm sure, in Physics. But I don't think it's true in logic. Did anyone working in logic at the time care about this question? Does it have any practical significance? If experts at the time didn't care and it has no useful purpose, then why should this impress me? Doing novel work is really easy -- just work on stuff other people don't care about.
Workers in the field of logic made steady progress in shortening the axioms throughout the 20th century, after Alfred N. Whitehead's axiomatization in 1898. Edward Huntington reduced it to three axioms using fourteen instances of two operators (OR and NOT) in 1933. Herbert Robbins conjectured it could be reduced to thirteen instances of those operators but could not prove it. Alfred Tarski also investigated and could not prove it. In 1967, Carew Meredith proved the axiomatization could be reduced to twelve instances of OR and NOT (with only two axioms). Later, Meredith showed it could be reduced to two axioms with ten instances of a single operator (NAND). Several single-axiom systems were later found, but they were very long axioms. William McCune proved the Robbins conjecture in 1996. And finally, Wolfram and McCune apparently independently discovered and proved that the shortest possible single axiom has only six instances of a single operator (NAND).
That this list of workers includes names like Whitehead and Tarski suggests that this was in fact something that (some) significant people did care about. I'm not sure how much value to give to this heuristic, but every name on the list of workers above is an 'important enough person' to have their own Wikipedia article. Certainly when I took discrete math classes in university, this was discussed as if it were an interesting topic (and I personally was interested).
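For anyone curious what checking such a claim even involves: if memory serves, the six-NAND candidate is ((a∘b)∘c)∘(a∘((a∘c)∘a)) = c -- treat the exact formula as my assumption, since the footnote above doesn't spell it out. A brute-force truth-table check (Haskell) confirms it's at least a tautology; the hard part Wolfram and McCune settled is that it's also complete as a basis for Boolean algebra, which exhaustive checking can't show:

    -- NAND, written infix as %.
    (%) :: Bool -> Bool -> Bool
    a % b = not (a && b)

    -- Does the candidate axiom hold under every Boolean assignment?
    holds :: Bool
    holds = and [ ((a % b) % c) % (a % ((a % c) % a)) == c
                | a <- [False, True], b <- [False, True], c <- [False, True] ]

    main :: IO ()
    main = print holds  -- prints True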
I wonder if this is related to the shortest one-combinator basis being λx λy λz. x z (y (λ_. z)), whose type is
(a -> b -> c) -> ((d -> a) -> b) -> a -> c.
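For what it's worth, that type checks out: a direct Haskell transcription (naming mine; const z plays the role of λ_. z) gets exactly the quoted type inferred by GHC:

    basis :: (a -> b -> c) -> ((d -> a) -> b) -> a -> c
    basis x y z = x z (y (const z))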
He made novel contributions to computability theory: he was the first to show that one-dimensional cellular automata could be Turing-complete.
That being said, he’s very much a crackpot nowadays. Being sane in the past does not imply you will be sane in the future, no matter how significant your list of accomplishments.
His results on cellular automata are clearly solid contributions. Any academic would be proud to open a whole line of inquiry. I understand that he's also done good work in Physics.
I was just surprised to see this particular result about propositional logic in particular highlighted because, as a logician, I really don't care :)
[1] https://reference.wolfram.com/language/ref/FindEquationalPro...