Very beautiful graphs, but I don't think they're going to make people understand anything. I would start with the problem: easily computing sums of dependent values. Then show a naive computation, then use matrices, vectors and eigenvalues to arrive at a solution, and only then show a graphical representation of the steps performed.
I'm surprised that this post isn't following this method, because I've come to think it's the standard way of explaining scientific things in the US.
Very cool, but as a layman I was very confused by the description of eigenspaces and the S1/S2 lines. I'm just guessing here (reasoning below), but I'd like to suggest phrasing like:
"Eigenspaces are special lines, where any starting-point along them yields an eigenvalue that lands back on the same line. In these examples two exist, labeled, S1 and S2."
"Eigenspaces show where there is 'stability' from repeated applications of the eigenvector. Some act like 'troughs' which attract nearby series of points (S1) while others are like hills (S2) where any point even slightly outside the stable peak yields eigenvalues further away.
______
Original post / detailed-reaction:
> First, every point on the same line as an eigenvector is another eigenvector. That line is an eigenspace.
At first I thought this statement-of-fact meant that the whole tweakable quadrant of the X/Y plot (at a minimum) is an unbroken 2D Eigenspace, because every point within it can be "covered" by a dashed line (a 2D "vector") if I pick the appropriate start-point.
However, the last sentence also says eigenspaces are (despite the "space" in their name) lines, which throws the earlier interpretation into doubt.
> As you can see below, eigenspaces attract this sequence
S1 and S2 were displayed earlier but not explained; now this section implies that those lines are the Eigenspaces? If so, what is the difference between S1 and S2? Playing with the chart, I assume they are the "forward" and "reverse" for repeated applications of the transformation.
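To sanity-check that guess, here's a tiny Python sketch with a made-up 2x2 matrix (not the one from the post). Repeatedly applying the matrix pulls almost any starting point toward the line of the eigenvector with the larger eigenvalue (what I take S1 to be), and away from the other line (S2) unless you start exactly on it:

    import numpy as np

    # Made-up matrix with eigenvalues 2 and 0.5 (hypothetical, not the post's matrix).
    A = np.array([[1.25, 0.75],
                  [0.75, 1.25]])
    vals, vecs = np.linalg.eig(A)
    print(vals)   # 2.0 and 0.5 (order may vary)
    print(vecs)   # columns give the directions of the two invariant lines

    # Start slightly off the "attracting" line and apply A over and over.
    x = np.array([1.0, 0.3])
    for _ in range(10):
        x = A @ x
        x = x / np.linalg.norm(x)   # normalize so we only track the direction
    print(x)   # ~ [0.707 0.707], i.e. the line of the eigenvalue-2 eigenvector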
I have no idea what eigenvectors or eigenvalues are, so this just confused me more. To be fair, I think the author does assume some basic math understanding beforehand, though.
I like the visualization. But there seems to be an error: the non-diagonal elements of the Markov matrix need to be interchanged. You can see this by setting p=1 and q=0. Their formula would result in a total population of 2*California after one step, which is clearly larger than California+New York.
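For anyone who wants to redo the arithmetic, here's a minimal sketch of the conservation check. It's my own reading of the model, with guessed conventions: x = (California, New York), p the fraction leaving CA, q the fraction leaving NY, and one step being x' = A x. The column-stochastic matrix keeps the total population fixed; the version with the off-diagonal entries interchanged does not:

    import numpy as np

    p, q = 1.0, 0.0                  # everyone leaves CA, nobody leaves NY
    x = np.array([10.0, 5.0])        # (California, New York), arbitrary units

    A_ok   = np.array([[1 - p, q],   # each column sums to 1
                       [p, 1 - q]])
    A_swap = np.array([[1 - p, p],   # off-diagonal entries interchanged
                       [q, 1 - q]])

    print(x.sum())            # 15.0 total before
    print((A_ok @ x).sum())   # 15.0, population conserved
    print((A_swap @ x).sum()) # 10.0, population not conserved

Which of the two the post actually uses depends on its row/column convention, so treat this as an illustration of the check rather than a verdict.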
The interactive graph in the section "Complex eigenvalues" has a repeatable crash bug in Chrome 39 on Win 7. There are a number of ways to trigger it, the easiest of which is to adjust a1 and a2 such that both have positive x and y values and the resulting line from v to Av has a slope of approximately 1.
This Wikipedia graphic gives a pretty good graphical explanation of what eigenvalues do and what eigenvectors are:
http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors#me...
First time I've actually sort of understood eigenvectors. Linear algebra was actually the class that made me hate math, after years of loving it in secondary education. Not everyone has the benefit of a good teacher, and the tools that exist now don't help you to self-learn much.
As a counterpoint to most of the comments, let me just say: this is fantastic.
(Nothing wrong with constructive criticism, which most comments are, but it's also nice to just say thanks as well.)
I have to admit I hated the term Eigenvector for two semesters of college and it nearly caused me to drop mathematics altogether. This explanation is very good and helps visualize some of the things I was missing. Apologies to the fantastic professors I had who were talking over my head for 16 weeks.
>> "It turns out that a matrix like A, whose rows add up to zero (try it!), is called a Markov matrix, ..."
Oops, you mean the rows add to one.
I hate to nitpick, but, additionally, the numbers in the matrix can't be negative.
Also, it's not just that 1 is an eigenvalue; it's that 1 is the largest eigenvalue. This is significant because it implies that all other components will die out in time.
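A quick numerical illustration of both corrections (rows summing to one, non-negative entries) and of why it matters that 1 is the dominant eigenvalue; the matrix below is a made-up example, not the one from the article:

    import numpy as np

    # Made-up row-stochastic (Markov) matrix: non-negative, each row sums to 1.
    M = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
    print(M.sum(axis=1))           # [1. 1.]

    print(np.linalg.eigvals(M))    # 1.0 and 0.6; 1 is the largest in modulus

    # Because |0.6| < 1, that component shrinks every step, and only the
    # eigenvalue-1 component (the steady state) survives under iteration.
    x = np.array([100.0, 0.0])     # put everyone in the first state
    for _ in range(50):
        x = M.T @ x                # one step; using M.T so the column vector's total is conserved
    print(x)                       # ~ [75. 25.], the eigenvalue-1 direction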
Wow, almost no positive feedback here? I think the article assumes a certain audience, and for me, this brought a great insight I never got in our college courses.
The graphics are nice, but the explanation is just terrible. There is a huge gap between the first part, explaining vectors, and the part explaining eigenvectors.
"If you can draw a line through (0,0), v and Av, then Av is just v multiplied by a number λ; that is, Av=λv."
That makes no sense. How do you draw a line through a point to "v and Av"? What does "v and Av" even mean in that context?
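My best guess at what that sentence is trying to say, with made-up numbers: "v and Av" are just two points, and the claim is that if the origin, v and Av happen to sit on one straight line, then Av is a plain scalar multiple of v:

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])   # made-up matrix, purely for illustration

    v = np.array([1.0, 0.0])
    print(A @ v)                 # [2. 0.] = 2 * v: (0,0), v and Av line up,
                                 # so v is an eigenvector with lambda = 2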
When I see similar faces that look alike from different people, I wonder if they have similar eigenfaces?
Did you send this to Malcolm Gladwell? :)
Igon send it to him if you can't :)
It takes me an enormous amount of effort to read this font. I had to squint, zoom my browser window to about 200%, and then scroll horizontally to make my way through the paragraphs.
I just tried to figure out the simplest rigorous explanation of linear transformations. Here's one in terms of straight lines. Let's say we have a transformation of the 2D plane, i.e. a mapping from points to points. We will call that a "linear transformation" if these conditions are satisfied:
1) The point (0, 0) gets mapped to itself.
2) Straight lines get mapped to straight lines, though maybe pointing in a different direction.
3) Pairs of parallel straight lines get mapped to pairs of parallel straight lines.
Hence the name "linear transformation" :-) We can see that all straight lines going through (0, 0) get mapped to straight lines going through (0, 0). Let's consider just those straight lines going through (0, 0) that get mapped to themselves. There are four possibilities:
1) There are no such lines, e.g. if the transformation is a rotation.
2) There is one such line, e.g. if the transformation is a skew.
3) There are two such lines, e.g. if the transformation is a stretch along some axis.
4) There are more than two such lines. In this case, you can prove that in fact all straight lines going through (0, 0) are mapped to themselves, and the transformation is a scaling.
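To make the four cases concrete, here's a small Python sketch with one made-up example matrix per case; the eigenvalue pattern matches the count of lines through (0, 0) that get mapped to themselves:

    import numpy as np

    # One made-up example matrix per case; "such lines" = straight lines through
    # (0, 0) that the transformation maps to themselves.
    examples = [
        ("case 1, rotation, no such lines",        np.array([[0.0, -1.0], [1.0,  0.0]])),
        ("case 2, skew/shear, one such line",      np.array([[1.0,  1.0], [0.0,  1.0]])),
        ("case 3, stretch, two such lines",        np.array([[2.0,  0.0], [0.0,  3.0]])),
        ("case 4, scaling, every line through 0",  np.array([[2.0,  0.0], [0.0,  2.0]])),
    ]

    for name, A in examples:
        print(name, "-> eigenvalues:", np.round(np.linalg.eigvals(A), 3))

    # rotation: both eigenvalues are complex, so no real invariant direction
    # shear:    eigenvalue 1 twice, but only one independent eigenvector (the x axis)
    # stretch:  eigenvalues 2 and 3, one invariant line for each
    # scaling:  eigenvalue 2 twice, and every line through (0, 0) maps to itself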
Now let's consider what happens within a single such line that gets mapped to itself. You can prove that within a single such line, the transformation becomes a scaling by some constant factor. (That factor could also be negative, which corresponds to flipping the direction of the line.) Let's call these factors the "eigenvalues", or "own values" of the transformation.
Now let's define the "eigenspaces", or "own spaces" of the transformation, corresponding to each eigenvalue. An eigenspace is the set of all points in the 2D plane for which the transformation becomes scaling by an eigenvalue. Let's see what happens in each of the cases:
1) In case 1, there are no eigenspaces and no eigenvalues.
2) In case 2, there is only one eigenspace, which is the straight line corresponding to the single eigenvalue.
3) In case 3, it pays off to be careful! First we need to check what happens if the two eigenvalues are equal. If that happens, it's easy to prove that we end up in case 4 instead. Otherwise there are two different eigenvalues, and their eigenspaces are two different straight lines.
4) In case 4, the eigenspace is the whole 2D plane.
In this way, eigenvalues and eigenspaces are unambiguously geometrically defined, and don't require coordinates or matrices.
Now, what are "eigenvectors", or "own vectors" of the transformation? Let's say that an "eigenvector" is any vector for which our transformation is a scaling. In other words, an "eigenvector" is a vector from (0, 0) to any point in an eigenspace. The disadvantage is that it involves an arbitrary choice. The advantage is that eigenvectors can be specified by coordinates, so you can find them by computational methods.
Does that make sense?
The pretty animations are nice, and the ability to manipulate the vectors is very welcome; however, I am sorry to say (and I do not mean this negatively) that there's not much "explanation".
The first sentence just describes the utility of the Eigens (so no explanation there). The next lays out the setting for the diagram. And the third says, "if we can do X, then v is an eigenvector and \lambda an eigenvalue". But... what if you can't do "X"? What if v, (0,0) and Av are not collinear?
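One way the article could answer that "what if" (my own reading, with a made-up example): if (0,0), v and Av are not collinear, then that particular v simply isn't an eigenvector; and for some matrices, like a rotation, no real v works at all:

    import numpy as np

    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])    # 90-degree rotation, a made-up example

    v = np.array([1.0, 0.0])
    print(R @ v)                   # [0. 1.]: not on the line through (0,0) and v,
                                   # so this v is not an eigenvector of R

    print(np.linalg.eigvals(R))    # [0.+1.j 0.-1.j]: both eigenvalues are complex,
                                   # so no real vector ends up on its own line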
The skeleton of a great explanation is there, but the meat isn't there yet. A few more sentences would go a long way in making this better.
I appreciate the OP's effort, and I hope this will come across as constructive criticism.