pgustafs's comments | Hacker News

The definition of bijection is much more interesting than comparing cardinals. There are many everyday use cases where (structure-preserving) bijections make it clear that two a priori different objects can be treated similarly.

More generally, mathematics is experimental not just in the sense that it can be used to make physical predictions, but also (probably more importantly) in that definitions are "experiments" whose outcome is judged by their usefulness.


Nah, just study linear algebra (Shilov or Hoffman & Kunze) and baby Rudin. Then read the most famous books in geometry, analysis, and algebra (do proofs + get a mentor). All these roadmap things are meaningless. It’s like “how to join the NBA.” Lift weights, condition, and practice fundamentals. Nothing else matters.


Getting a good mentor is the most difficult part for most people following this list.



The books are good, but way too many and wildly varying in difficulty. No one can read all that in 2 years starting without knowledge of linear algebra. Just worry about the fundamentals first and then pick a couple of good books in areas you’re interested in. The main thing is deep understanding, not superficial breadth.


Lol, Landau and Lifshitz as an intro


it's called "angel" investing because it's only one step removed from charity


Can I be a demon investor? That’d be a dream job.


I believe that's the sentiment the "Shark Tank" or "Dragon's Den" series are trying to hint at.


It’s just called investor at that point.


Investors are in for the money. I’d be in for the sadistic side.

But not really. I wish I could be that evil. I’d be a lot richer if I were willing to play that kind of game.


My charity donations are capped with the IRS. But my cap gains (and losses) live forever.


Almost everyone's charity donations are worth zero with the IRS since everyone takes the standard deduction now.


Your comment made me realize what a bubble of privilege I live in.

I raised my eyebrow at "everyone takes" standard deduction. How is that possible with home prices and interest rates? Even a modest 300-400k house at 5-6% interest, property taxes, local sales tax deductions and minimum charity would exceed the standard deduction. Where I live, good luck finding anything more than a condo for less than a million.

Turns out ~90% take the standard deduction. This is another way to track the extreme "wealth" gap emerging. Only the wealthy itemize.


We have a house more expensive than that, an interest rate around the top of that range, live in a state with somewhat high property taxes, and had at least $4k in health care spending on top of insurance premiums (no big problems, that's just what a handful of minor kid-related issues in a year costs in the US) and still wouldn't have done better itemizing. We took the standard for 2024.

We're ~92nd percentile for household income.


It’s more of a reality when home prices are over a million, or in a state with high property taxes. Be married, live in a state with low property taxes, buy a house for 700k… and you’re taking the standard.


Married makes it huge, and many people own their homes on older loans.

I’ve never gotten close since it was raised.


There are three broad subareas of mathematics: geometry, algebra, and analysis. Geometry studies space, algebra studies time, and analysis studies infinity. They are not independent -- most professional mathematicians use some mixture of the three, and virtually every mathematician understands the basics of all three.

The most important object in modern geometry is the manifold. This is a space that looks locally like n-dimensional Euclidean space -- 1-dimensional manifolds are curves, 2-dimensional manifolds are surfaces, and higher dimensional manifolds are simply called n-manifolds. All of physics takes place on manifolds. Differential equations correspond to vector fields on manifolds. The manifold hypothesis says that much of the high-dimensional data we see actually lives on much lower-dimensional manifolds (partially explaining the unreasonable effectiveness of deep learning on very high-dimensional datasets).

The most important object in algebra is the group. The collection of symmetries of any object (e.g. a Rubik's cube, a piece of paper, or three-dimensional space) forms a group under composition. A group that is also a manifold is called a Lie group. These are everywhere -- n-dimensional rotations form groups, fundamental particles correspond to representations of Lie groups, invertible matrices form a group. Spherical harmonics and Fourier series are both naturally viewed in terms of representations of Lie groups.
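As a rough sketch of the group axioms in action, here is a pure-Python check (helper names are mine) that 2D rotations compose by adding angles, and that rotating back undoes a rotation:

```python
import math

def rot(theta):
    # 2x2 rotation matrix as nested lists
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    # composition of the two linear maps
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

a, b = 0.3, 1.1
print(close(matmul(rot(a), rot(b)), rot(a + b)))   # closure: R(a)R(b) = R(a+b)
print(close(matmul(rot(a), rot(-a)), rot(0.0)))    # inverse: R(-a) undoes R(a)
```

Both checks print True: composition stays inside the set of rotations, the zero-angle rotation is the identity, and every rotation has an inverse.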

The most important object in analysis is the limit. Limits first appear in the construction of the real line by adjoining limits of Cauchy sequences to the rational numbers. Using the real line, one can measure volumes, probabilities, and distances in geometric spaces such as manifolds, but also in spaces of functions, sequences, and more abstract objects. The proof of the fundamental theorem of calculus (that derivatives and integrals are roughly inverse operations) requires rigorous analysis of the definitions of derivative and integral as limits.
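As a numerical illustration (function and step sizes chosen by me) of derivatives and integrals as limits, and of the fundamental theorem tying them together:

```python
def f(x):
    return x ** 3  # example function; any smooth f works

def derivative(f, x, h=1e-6):
    # difference quotient: the limit defining f'(x) as h -> 0
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(g, a, b, n=10_000):
    # Riemann (midpoint) sum: the limit defining the integral as n -> infinity
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

# Fundamental theorem of calculus: integral of f' over [a, b] ~ f(b) - f(a)
a, b = 0.0, 2.0
lhs = integral(lambda x: derivative(f, x), a, b)
rhs = f(b) - f(a)
print(abs(lhs - rhs) < 1e-6)  # True
```

Shrinking h and growing n drives the discrepancy toward zero, which is exactly the limit statement the rigorous proof makes precise.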

To learn math, you should begin by understanding what a proof is. All of mathematics is based on proving theorems. A mathematical proof is a sequence of statements that explains the logical steps required to use the assumptions of the theorem to verify the result. Just as a computer program cannot "almost output" the correct answer, there is no such thing as an "almost correct" proof. A proof either describes a correct chain of logic to reach the conclusion, or it does not. The reason math is based on proofs is that more advanced math and science builds upon more basic math. An error in a mathematical theorem or an imprecise definition will lead to bigger problems down the line, so every step must be carefully validated. For an individual student as well, only through proving theorems can one deeply understand a mathematical subject, and a solid understanding of basic subjects is required to understand more advanced topics.

Fortunately, you can learn to prove theorems at the same time as learning the foundations of math. The first books you should work through are "Principles of Mathematical Analysis" by Walter Rudin, and "Linear Algebra" by Georgi Shilov. This will be hard, not for an arbitrary reason, but because assimilating new math into your brain is intrinsically difficult, especially at the beginning. If possible, try to find a teacher.


Follow your innate curiosity, and respect the edifice of knowledge constructed by those who came before us (i.e., don't get sucked into quackery without a deep understanding of the SOTA).


I think problem solving math is definitely fun and can be a huge source of confidence, but I don't see why "racing" through the standard curriculum is a negative. Why should a smart kid do a million multiplication/division problems for 5 years when they would have a ton more fun and get a lot more long-term utility from learning some stat/algebra/geometry? If a kid demonstrates mastery of a concept, it's a lot more bizarre and potentially damaging to force them to relearn the same material over and over.


I think you misunderstand OP’s point and maybe don’t know what problem solving math the OP is referring to. “Art of Problem Solving” and contest math is not rote repetition or basic arithmetic at all. It’s a way of challenging students with difficult math problems that are approachable without comprehensive study of high-level math but require combining concepts, applying logic and deduction, and solving problems in ways that don’t fit a “type” that’s covered in class. Try searching for USAMO problem sets for example.

The reason racing through a math curriculum can be problematic is… what’s the goal? If it’s not “look as advanced as possible to a non-mathematician to get into a tier-2 college” and instead something like “expose the kid to as much math as possible because they enjoy it/find it challenging” or “be a top mathematician for their age so they get into a tier-1 college because mathematicians see promise in them,” you don’t actually want to cram in subjects like typical community college or undergrad calculus, stats, and linear algebra at all. Those are not nearly as helpful for pursuing advanced mathematics as learning how to prove things, apply theorems to problems/reduce problems to theorems, or just generally becoming excellent at “lower” math like in contest math. In fact it might turn a kid off of math to get that far and still be doing mostly rote computational problems, and it won’t help that much in becoming a mathematician because those classes typically focus on the applied aspect (outside of particularly selective math courses at certain universities).


I have nothing against contest math (I was a USAMO qualifier in high school), but contest math isn’t enough if a kid has to sit through years of tedium during regular classroom hours. Also, there is a large difference between contest problems which focus on cleverness and real-world problems which focus on conceptual understanding. Many kids prefer one or the other, and I think it’s a mistake to assume that contest math works for all kids who might be mathematically inclined.

Re: goals -- the goal is to let the kid learn as fast as they want assuming they have solid foundations. If they like proofs let them do proofs, if they like applications let them do that. Just don’t force them to sit in a classroom doing busywork for the most formative years of their lives.


I think it's more that there are massive fields of math that are just left out in the race to calculus.

I had a friend who grew up in Japan and he said he learned a lot more number theory-style stuff.


Agree, but I don't like the framing of "accelerating." Math in school is for the median student. If you want a quantitative career or just want to have quantitative skills, you should be aiming for way above median. Aiming for median outcomes makes zero sense in the current world. Find your niche and hit it hard.

Kids intuitively understand this -- they like doing what they're good at. Unfortunately, most schools are not good at serving this need. A very important part of being a parent is to encourage kids to start compounding positive habits/learning early, and to prevent the schools from dragging them back to the median.


Right! No one talks about basketball acceleration or football acceleration.


Adding to your point, most successful markets are positive-sum -- hedgers gain value from mitigating their structural risks, and speculators get paid to assume price risk. For example, the wheat futures market has two natural participants -- farmers and bakers. Farmers can sell future produce to buy seeds right now. Bakers can hedge their wheat price exposure to reduce their chance of getting ruined by a bad harvest. Speculators get paid to hold onto wheat futures contracts if a farmer wants to sell a future when the bakers aren't around to buy (presumably baking), selling to a baker later for a higher price reflecting the price risk assumed. All of these participants derive value from the market.

It's not clear to me that prediction markets usually have natural hedging participants (maybe political operatives, but the tx costs are probably too high relative to the value at stake).


Excellent point.

Prediction markets have been a thing for at least 25 years. I get the intellectual appeal, and they may be useful in certain niches. But I think their lack of significant uptake or impact is telling.


Prediction markets have huge positive externalities, as they help non-PM participants predict the future! One problem PMs have is that non-PM participants often feel that people betting short (or in some cases long) are 'hoping for failure'.


One point that seems to be missing from this discussion is climate.

Walkable cities: NYC, Chicago, SF, Boston, Seattle

Urban hellscapes: LA, Houston, Atlanta, Phoenix

Simple explanation -- people don't want to walk around in the heat.


Great post, one nitpick -- I wouldn't say that a matrix is a "sparsely defined" function, but rather a function defined on a finite grid. It might also be worth pointing out that the same approach works for any graph, not just a grid.


Also, what's confusing is that algebra usually uses matrices to describe linear functions from n-dimensional to m-dimensional vector spaces. The matrix has m rows and n columns; you give it an n-dim vector and after matrix multiplication you get back an m-dim vector.

The author uses a matrix quite differently. You give it two integer coordinates i and j and it gives you the value at position (i, j) back. That's a valid use, but not quite what you'd expect in a math-oriented article.


Thanks for calling this out, I thought it might cause confusion. Matrices are super weird objects because they don't fit nicely into the {scalar, vector, function, operator} classes that maybe we're used to. A matrix is a function in that it can take in a vector and map it to a new vector. It is also an operator in that it can take in some other matrix (a function!) and give you a transformed matrix (a new function). It is also a function in the sense that it can map index pairs to scalars, where the inputs (x, y) are the coordinates and the scalar stored there is the output. All of this gets further complicated by the fact that the elements of a matrix can be scalars, complex numbers, or even matrices! They are really strange objects and maybe I'll write up a whole post just about that strangeness.
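A quick pure-Python sketch of those three views of one and the same matrix (helper names are mine, not from the post):

```python
M = [[1, 2], [3, 4]]

# View 1: a function from vectors to vectors (matrix-vector product)
def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# View 2: an operator taking a function (matrix) to a function (matrix product)
def compose(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

# View 3: a function from index pairs to scalars
def entry(M, i, j):
    return M[i][j]

print(apply(M, [1, 1]))   # [3, 7]
print(compose(M, M))      # [[7, 10], [15, 22]]
print(entry(M, 0, 1))     # 2
```

Same data, three different call signatures -- which is exactly why the object resists a single classification.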


Just writing down the connections here for myself and maybe others:

A matrix represents a linear function taking a vector and returning a vector, written as

w = M v

Matrix multiplication corresponds to function composition.

Vectors can be indexed, and we can view them as a function i -> v[i] defined on the indexing set. We can also define basis vectors b_i, such that b_i[j] is 1 at index j=i and 0 otherwise. Any vector can be written as a weighted sum of basis vectors, with the vector components as coefficients:

v = Σ_i v[i] b_i

where Σ_i represents summation over the index i.

Matrices can be indexed with two indices, and this is closely related to vector indexing: For a matrix M, we have

M[i, j] = (M b_j)[i]

Each column of the matrix represents its output for a certain basis vector as input. By writing a vector as a sum of basis vectors and using linearity, we get the well-known matrix-vector multiplication formula:

(M v)[i] = Σ_j M[i, j] v[j]
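Both identities above are easy to check numerically; here is a small pure-Python sketch (helper names are mine):

```python
def apply(M, v):
    # matrix-vector product via the formula (M v)[i] = sum_j M[i][j] v[j]
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def basis(n, i):
    # basis vector b_i: 1 at index i, 0 elsewhere
    return [1 if k == i else 0 for k in range(n)]

M = [[1, 2, 3], [4, 5, 6]]

# M[i][j] == (M b_j)[i]: column j of M is the image of basis vector b_j
assert all(M[i][j] == apply(M, basis(3, j))[i]
           for i in range(2) for j in range(3))

# Linearity: M v equals the weighted sum of the columns, since v = sum_j v[j] b_j
v = [7, -1, 2]
cols = [apply(M, basis(3, j)) for j in range(3)]
assert apply(M, v) == [sum(v[j] * cols[j][i] for j in range(3)) for i in range(2)]
print(apply(M, v))  # [11, 35]
```

The second assertion is the whole derivation in one line: expand v in the basis, push M through the sum by linearity, and the matrix-vector formula falls out.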


Can you link to context for this? I learned both in linear algebra, so it seems like either would be just as 'expected'.


Here's a concrete example. The first matrix in the post is f = [[1, 1, 1], [1, 1, 1], [1, 1, 1]].

In linear algebra, we would interpret this as a linear map. A true equation would be f([1, 2, 3]^T) = [6, 6, 6]^T (where I'm using ^T to mean "transpose to a column vector").

But here, the author means f(1, 2) = 1, i.e. the (1,2) coordinate of the matrix is 1.


Thank you! Yes, I agree, thank you for explaining that to me.


And interestingly, both are connected. If d_i somewhat hand-wavingly expresses the vector d_i = (0, ..., 0, 1, 0, ... 0) with the 1 at position i, then given matrix M you can do

f(i, j) := d_i^T * M * d_j

The RHS is using classical matrix multiplication, and the function value will be the matrix's entry at row i, column j.
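A small pure-Python check of that identity (helper names are mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def apply(M, v):
    # matrix-vector product, one dot product per row
    return [dot(row, v) for row in M]

def d(n, i):
    # standard basis vector with a 1 at position i
    return [1 if k == i else 0 for k in range(n)]

M = [[10, 20], [30, 40]]

# f(i, j) := d_i^T M d_j picks out the entry in row i, column j:
# M d_j selects column j, then the dot with d_i selects its i-th component
f = lambda i, j: dot(d(2, i), apply(M, d(2, j)))
print(f(0, 1), M[0][1])  # both 20
```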

