I loved school math classes, but a decade later I still have a hard time understanding monads and category theory.
Compared to programming (even functional programming), "math" is a completely different way of thinking. Abstract properties are stated, from which some arbitrary (partial) structure is implied. With programming, by contrast, the structure of "doing" is always concrete, regardless of paradigm (algorithmic complexity is ever-present, even when you're ignoring it). Compared to abstract math, school classes are more akin to programming, in that you're mostly following algorithms and applying patterns with a little intuition, even once you get into algebra and calculus.
My biggest hurdle to understanding is the use of completely different terminology for concepts that are the same, at least intuition-wise. My thought process is based on intuitions first, rather than on manipulating symbols in the abstract (which seems to be more common? or maybe it just seems that way?). So when confronted with terms like 'conjunction' and 'disjunction', it throws me off that (1) they're the "same" as AND/OR, and (2) the nuance of the difference is rarely stated. The way I cope is by directly thinking AND/OR, while being painfully aware that there is some distinction I'm missing. This leads to a lot of reading that's completely disconnected from anything, until I find the gem that illustrates the actual difference.
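(For what it's worth, one nuance that does surface in practice involves the same non-termination issue: logical conjunction is symmetric, A AND B = B AND A, but Haskell's `(&&)` short-circuits on its first argument, so the two orders behave differently once a diverging value is involved. A minimal sketch:)

```haskell
-- Logical conjunction is symmetric, but Haskell's (&&) short-circuits:
-- if the first argument is False, the second is never evaluated.
shortCircuit :: Bool
shortCircuit = False && undefined   -- returns False; undefined is never forced

-- The flipped order would diverge, because undefined is forced first:
-- diverges = undefined && False
```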
I'd learned the workings of practical Haskell monads some time ago, but what made monads/category theory finally click was reading Moggi's "Notions of computation and monads". Going back to the "source" let me see the specific motivation and actually understand how Haskell's types differ from the pristine objects of category theory. Apparently the difference is just non-termination, but I had to do a lot of searching to find where that was actually stated, rather than assumed and vaguely referenced. I suspect that to someone who thinks "more mathematically" this is just a small detail, since they deal with each type of structure in isolation. But until I could relate them, I was out in the weeds.
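(To make the non-termination point concrete, here's the standard counterexample as I understand it, assuming GHC's `seq`: every Haskell type contains a bottom element, and `seq` can observe it, so even composition with `id` fails to be a true identity the way the category axioms require. A sketch, not a formal argument:)

```haskell
-- Every Haskell type is inhabited by bottom (a non-terminating value),
-- which the usual sets-and-total-functions picture doesn't have.
bottom :: a
bottom = bottom

-- seq forces its first argument to weak head normal form.
-- `id . undefined` reduces to the lambda \x -> id (undefined x),
-- which is already in WHNF, so seq succeeds here...
observable :: Bool
observable = (id . (undefined :: Int -> Int)) `seq` True   -- True

-- ...whereas forcing undefined itself diverges:
-- diverges = (undefined :: Int -> Int) `seq` True
```

So with `seq` in the language, `id . f` is distinguishable from `f` when `f` is bottom, which is exactly the kind of gap between Haskell and the textbook category.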
Of course, now that I understand it, it seems like quite a simple concept. But I had to do an awful lot of work to get to the point where my range of thinking had "expanded" enough to include this one concept.
So I don't really know what my exact point is. But to connect to something you said: it seems like experience from writing code and the desire to understand abstract concepts ("monads are burritos") come from two different places, and the latter isn't served by increasingly outlandish metaphors, but by making the details accessibly explicit.