Hacker News

After reading the article, I have but one question: What are eigen values?

A quick lookup on Wikipedia reveals that they are used in linear algebra and, for example, matrix transformations. I think this article failed by trying to relate eigenvalues to everyday scenarios, similar to when beginning calculus books give examples involving total pressure on an underwater dam door or mass of a rod as examples for the usefulness of definite integrals. In those cases, you end up calculating the area under the curve, but the curve happens to be the graph of a constant function or a first-order polynomial. Calculating such areas can easily be done by using the elementary formula for the area of a trapezoid.



In linear algebra, for a given linear transformation M (a matrix, generally representing some operation), an eigenvalue of M is a scalar c for which there exists a non-zero vector v (called an eigenvector) such that Mv = cv, where the left-hand side is matrix multiplication and the right-hand side is multiplication of v by the scalar c.

The definition is significant because it says that for a certain vector v, the transformation M, no matter how complicated it may be, just scales v by a factor of c. This is useful, for instance, if you want to determine an axis by which to evaluate the range of the transformation, because by choosing an eigenvector, you are choosing a "simple" or "natural" perspective from which to evaluate the range.
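To make that concrete, here's a quick numerical check of the definition with NumPy (the matrix is arbitrary, chosen just for illustration):

```python
import numpy as np

# An arbitrary symmetric 2x2 matrix M, just for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(M)

# For each pair (c, v), applying M to v is the same as scaling v by c:
# Mv = cv, exactly as in the definition above.
for c, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(M @ v, c * v)

print(eigenvalues)  # for this M: 3 and 1
```

However complicated M looks, along each eigenvector it acts as nothing more than multiplication by a number.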


Look, I am good at Math. I even love Number Theory. But what you wrote scares me. Can I run away now?

(I hope to one day be able to look at it and say 'my, that is so simple...' like I do with high school math)


Why do you think you're good at math, when confronted with evidence to the contrary? :)

Is there something specific about the parent post that you don't understand that I can try to explain?


This is the problem I had with Linear Algebra. The first half of the class felt like I was just being drilled on definitions. But once all the definitions click, it's rather simple.


Honestly, linear algebra is the next time you have to make a 'jump' in the regular sequence of math.

You hit algebra, new concepts, you have to jump a little. Same with calculus. Linear algebra is that next time.


And it gets easier. Linear Algebra is no more complicated than calculus --- rather less so. But it tends to be more abstract.


The best thing about my linear algebra class was that the professor warned us about this up front. Yet I still was not prepared for the onslaught of new words, and the bad part of the class was that they were all defined in terms of more mathematical words, not concepts like this rubber band example.


That was the point of the "What are eigen values?" article. It was to give the intuition behind the scary mathematical definition.

I find this helpful, as it answers the all-important questions: 1) Why should I care? and 2) What is the basic problem that the scary math is trying to solve?

With answers to those questions in hand, you can return to the scary math and work out how it maps to those answers.


Ok, others have tried a little bit of the math, but this is a difficult form for that. So here is a very intuitive (i.e. hand-wavy) way to think about the type of relationships we are talking about.

Suppose you have a (finite) cloud of points in three dimensions. For example, a bunch of GPS measurements. Let's just imagine that they are roughly ellipsoidal, stretched out like a football. For simplicity we'll subtract the mean, so our co-ordinates are measured from the middle of this cloud.

I can take this set of (x,y,z) points and find the eigen (which means "characteristic") vectors and values for this set of points. The first eigenvector will point along the long axis of the "football" (right through the pointy bits) because this is the strongest, in some sense, single direction that is represented. It won't be perfectly aligned with a finite number of points...

The second and third eigenvectors will between them form a plane perpendicular to this. So essentially what you've done is found a co-ordinate system that is aligned with the data (modulo some details, this is essentially what a procedure called Principal Component Analysis (PCA) does).

Now that's the vectors, what about the values? The eigenvalues tell you the relative importance of each direction. If you stretch the "football" out, the first eigenvalue, which was associated with the first eigenvector pointing along the long axis, will be larger relative to the others. If you use a basketball, they'll all be the same magnitude.

Does that help?

The idea generalizes a lot from what I've told you, but there are a ton of places you can look up the details if you'd like.
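If you want to see this numerically, here's a rough sketch with NumPy (the point cloud is synthetic, with the stretch deliberately placed along the x-axis so we know the answer in advance):

```python
import numpy as np

rng = np.random.default_rng(0)

# A "football"-shaped cloud: 1000 points, stretched 5x along x.
points = rng.normal(size=(1000, 3)) * np.array([5.0, 1.0, 1.0])

# Subtract the mean so co-ordinates come from the middle of the cloud,
# then form the covariance matrix of the centred points.
centred = points - points.mean(axis=0)
cov = centred.T @ centred / len(centred)

# eigh is for symmetric matrices; eigenvalues come back in ascending order.
values, vectors = np.linalg.eigh(cov)

# The eigenvector for the largest eigenvalue points along the long axis
# (close to the x direction here), and its eigenvalue (~25, the variance
# along that axis) dwarfs the other two (~1 each).
long_axis = vectors[:, -1]
print(np.abs(long_axis))  # roughly [1, 0, 0]
print(values)
```

Squash the cloud into a basketball instead (equal scales in all three directions) and the three eigenvalues come out roughly equal, just as described above.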


So there's one part of the explanation on the page that I don't quite get: she uses the example of a coin being turned 360 degrees along some axis as leaving all possible vectors as eigenvectors:

"If you rotate a coin by 360 degrees you preserve all directions and so each direction is an eigenvector. Because no stretching has occurred, all of these eigenvectors have eigenvalue 1. Rotating the coin by 60 degrees destroys all directions and this transformation has no eigenvectors or eigenvalues at all."

But in the 60-degree case, why isn't the axis of rotation an eigenvector? Its direction isn't "destroyed," is it?...


Because the original vector is not pointing in the same direction as the final vector (it is at an angle of 60 degrees to it). If you do a 360 degree rotation you get back to where you started. So any number of successive full rotations will have eigenvectors, each with eigenvalue 1. If you do a 180 degree rotation, the resulting vector will be pointing in the opposite direction, and will have eigenvalue -1.
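A quick way to see all three cases at once is to ask NumPy for the eigenvalues of the 2-D rotation matrices directly (a small sketch; the helper name rot2d is mine):

```python
import numpy as np

def rot2d(degrees):
    """2-D rotation matrix for the given angle."""
    t = np.radians(degrees)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# 360 degrees is the identity: every direction is preserved,
# so every vector is an eigenvector with eigenvalue 1.
print(np.linalg.eigvals(rot2d(360)))  # both 1, up to floating-point rounding

# 180 degrees flips every vector: eigenvalue -1.
print(np.linalg.eigvals(rot2d(180)))  # both -1, up to rounding

# 60 degrees preserves no direction: NumPy reports a complex-conjugate
# pair, which is the algebraic way of saying there are no real
# eigenvectors at all.
print(np.linalg.eigvals(rot2d(60)))
```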


Because the original vector is not pointing in the same direction as the final vector (it is at an angle of 60 degrees to it).

Mmm, no, the axis around which I rotated the coin didn't change at all, by definition.


You are doing a rotation in 3 dimensions. Surely the original example meant two dimensions. In 3 dimensions, the axis of rotation is indeed an eigenvector.
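You can check this in NumPy (a sketch using a 60-degree rotation about the z-axis):

```python
import numpy as np

t = np.radians(60)
# Rotation by 60 degrees about the z-axis, in 3 dimensions.
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# The z-axis itself is unchanged by this rotation...
z = np.array([0.0, 0.0, 1.0])
assert np.allclose(R @ z, z)

# ...so it is an eigenvector with eigenvalue 1. The other two
# eigenvalues are the complex pair from the in-plane 60-degree rotation.
print(np.linalg.eigvals(R))
```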


Eigenvectors are "important" directions with respect to some linear transform (operator). The eigenvalues are constant scaling factors that relate input vectors (inputs to the operator) to output vectors.

Knowing that an operator merely scales (but doesn't rotate) certain input vectors is an important property to characterize it.

Imagine some matrix describing the stresses (strains?) in a continuous medium of material, about a certain point. If you can find special directions from that point where the forces are just scaling, then you know you're just dealing with tension/compression in those directions, not bending.

The same thing applies to fluid flows: it can be useful to find out where the flow is just speeding up / slowing down, but not twisting.
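As a toy illustration of the stress example (the tensor below is made up; for a symmetric stress tensor the eigenvectors are the principal directions and the eigenvalues are the principal stresses):

```python
import numpy as np

# A made-up symmetric 3x3 stress tensor (units arbitrary).
sigma = np.array([[50.0, 30.0,  0.0],
                  [30.0, 20.0,  0.0],
                  [ 0.0,  0.0, 10.0]])

# eigh handles symmetric matrices: real eigenvalues (principal stresses)
# and orthonormal eigenvectors (principal directions), ascending order.
principal_stresses, principal_dirs = np.linalg.eigh(sigma)

# Along each principal direction the tensor just scales the direction
# vector: pure tension/compression, no shear component.
for s, d in zip(principal_stresses, principal_dirs.T):
    assert np.allclose(sigma @ d, s * d)

print(principal_stresses)
```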

... helpful?


Here's another intuitive introduction, from an article explaining the SVD:

http://www.ams.org/featurecolumn/archive/svd.html


Thanks, that one's really good.


If you're doing graphics or physics, calculating the eigenvector for the eigenvalue 1 gives you the axis of rotation of a rotation matrix.

That's a pretty straightforward application that's also obvious from the definition of eigenvalues and eigenvectors.
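A minimal sketch of that in NumPy (the rotation_about helper is a made-up name here, building a test matrix via the Rodrigues formula so we know the true axis):

```python
import numpy as np

def rotation_axis(R):
    """Rotation axis of a 3x3 rotation matrix: the (real) eigenvector
    whose eigenvalue is 1."""
    values, vectors = np.linalg.eig(R)
    i = np.argmin(np.abs(values - 1.0))  # the eigenvalue closest to 1
    axis = np.real(vectors[:, i])        # that eigenvector is real
    return axis / np.linalg.norm(axis)

def rotation_about(axis, degrees):
    """Rotation matrix about a given axis (Rodrigues formula)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    t = np.radians(degrees)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

R = rotation_about([1.0, 2.0, 2.0], 40)
axis = rotation_axis(R)
print(axis)  # proportional to (1, 2, 2)/3, possibly negated
```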


Do you mean, what are eigenvalues used for?

I found this to be an incredibly clear and simple description of what eigenvalues are. I completely missed the point in University. I agree that it was missing a discussion of what they're used for.

You might also want to look at the eigenvalue section of the paper "An Introduction to the Conjugate Gradient Method without the Agonizing Pain" http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradi...



