The way I understand this: adding random variables is a smoothing operation on their densities (more generally, on their distributions, but let me speak of densities only).

A little more formally, additions over random variables are convolutions of their densities. Repeated additions are repeated convolutions.

A single convolution can be understood as a matrix multiplication by a specific symmetric matrix. Repeated convolutions are therefore repeated matrix multiplications.

Anyone familiar with linear algebra will know that repeated matrix multiplication by a non-degenerate matrix reveals its eigenvectors.

The Gaussian distribution is such an eigenvector. Being an eigenvector, it is also a fixed point: multiplying again by the same matrix will lead to the same vector, just scaled. The Gaussian distribution convolved is again a Gaussian distribution.

The addition operation in averaging is a matrix multiplication in the distribution space, and the division by the 'total' in the averaging takes care of the scaling.
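A minimal NumPy sketch of this idea (my own illustration, not from the comment): repeatedly convolve a non-Gaussian density with itself and renormalize, and the result becomes symmetric and bell-shaped.

```python
import numpy as np

# Start with a density that is nothing like a Gaussian: uniform on 5 points.
density = np.ones(5) / 5.0
result = density.copy()

# Each convolution is the "matrix multiplication" step described above;
# it gives the density of the sum of one more independent copy.
for _ in range(10):
    result = np.convolve(result, density)

# Renormalize so it remains a probability density (the 'scaling' step).
result /= result.sum()

# result is now symmetric and unimodal, peaking at its center --
# the smoothing that repeated addition performs on densities.
```

This is only the smoothing half of the story; as a reply below the original comment notes, getting an actual Gaussian limit also requires rescaling the support appropriately.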

Linear algebra is amazing.

PageRank is an eigenvector of the normalised web adjacency matrix. The Gaussian distribution is an eigenvector of the infinite averaging matrix. Essentially the same idea.




> Linear algebra is amazing.

The entire field of control systems theory is basically various applications of linear algebra, like the Kalman filter that got us to the moon. Simply amazing.


Convolution alone does not smooth. E.g. consider a random variable supported on the points 0 and 1 (delta masses at two points). No matter how many convolutions you do, you still have support on the integers, which is not smooth at all. You need appropriate rescaling to get a Gaussian.
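This counterexample is easy to check numerically (a sketch of my own based on the comment's two-delta-mass example, which is just a fair coin):

```python
import numpy as np

# Fair coin: mass 1/2 at 0 and mass 1/2 at 1.
p = np.array([0.5, 0.5])

# Convolve it with itself 9 more times: the density of 10 coin flips.
result = p.copy()
for _ in range(9):
    result = np.convolve(result, p)

# The result is Binomial(10, 1/2): still a discrete distribution on the
# integers {0, ..., 10}. Convolution alone never produced a smooth density;
# only after rescaling (CLT-style) does the shape approach a Gaussian curve.
```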

Also, convolving a distribution with itself is NOT a linear operation, hence cannot be described by a matrix multiplication with a fixed matrix.


You are absolutely right. Even edge detection can be written as a convolution. That's why I mention averaging.

I address scaling, very peripherally, towards the end. Of course, depending on how you scale you end up with distinctly different limit laws.


> Anyone familiar with linear algebra will know that repeated matrix multiplication by non degenerate matrices reveals its eigenvectors.

TIL that I'm not "familiar" with linear algebra ;)

But seriously, thanks for sharing that knowledge.


If you are not speaking in jest (I strongly suspect you are), knowledge of linear algebra is one of the biggest bang-for-the-buck investments in mathematical knowledge one can make.

So humble and basic a field. So wide its consequences and scope.


Their point was that "familiarity" apparently means different things for different people :P Someone using linalg in computer graphics applications may say they're familiar with it even though they've never heard the term "eigenvector". I'm actually not sure what you mean – how does repeated multiplication reveal eigenvectors?

Consider a diagonalizable matrix A. For example, a real symmetric matrix. Start with any vector b and keep multiplying it with A.

    A A A ... A b
The vector that the result will converge to is a scaled version of one of the eigenvectors of the matrix A.

But which one? The one with the largest eigenvalue among all eigenvectors not orthogonal to b.

https://en.wikipedia.org/wiki/Power_iteration
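A minimal power-iteration sketch (my own illustration of the linked method, using a small symmetric matrix so the convergence claim holds):

```python
import numpy as np

# Symmetric (hence diagonalizable) matrix with eigenvalues 3 and 1;
# the eigenvector for eigenvalue 3 is along [1, 1].
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Any starting vector not orthogonal to [1, 1] will do.
b = np.array([1.0, 0.0])

# A A A ... A b, normalizing each step so the vector doesn't blow up.
for _ in range(50):
    b = A @ b
    b /= np.linalg.norm(b)

# b has converged to the unit eigenvector for the largest eigenvalue:
# [1, 1] / sqrt(2). Components along the other eigenvector decayed
# by a factor (1/3) each iteration.
```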


Ah… that "diagonalizable" is doing some heavy lifting there! I was wondering how exactly you're going to make, say, a rotation matrix converge anything to anything that's not already an eigenvector. And rotation matrices certainly aren't degenerate! Though apparently non-diagonalizable matrices can be called defective, which is such a dismissive term :( Poor rotation matrices, why are they dissed so?!

Love them, those rotation matrices.

Take the logarithm of the eigenvalues and you get back the angle. This, to me, solidified the notion that angles are essentially a logarithmic notion, made more rigorous by the notion of exponential maps.
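Concretely (a quick check of my own): a 2D rotation by θ has complex eigenvalues e^{±iθ}, so the complex logarithm of the eigenvalues recovers ±iθ.

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Eigenvalues of a rotation are e^{+i*theta} and e^{-i*theta},
# both on the unit circle (|lambda| = 1), so power iteration never
# converges -- which is the point about rotation matrices above.
eigvals = np.linalg.eigvals(R)

# The angle lives in the logarithm: log(e^{+-i*theta}) = +-i*theta.
angles = np.log(eigvals).imag
```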


My first sentence was in jest. I've used LA for various things, but haven't had many dealings with eigenvectors. So that information was genuinely new to me.

My expression of gratitude was sincere.


Understood and thanks for the opportunity of sharing together in the joy of something so amusing.

You're doing this multiple times, but "it's" can only mean "it is" or "it has".

Thanks for the heads up. I meant 'its'.

Phone autocorrect always interferes and I get tired and lazy about correcting it back. It does get it right most of the time.


Yeah, I don't think this was revealed on my undergrad linalg course, and neither during all my years of using linalg in computer graphics =D

I remember my professor talking about eigenvectors in Linear Algebra, and it's been 50 years, though I barely remember anything else from that class. They were taught very early on in the course, and eventually we used them all the time to solve problems.

Yes, I was taught about eigenvectors but not that they’re a fixpoint of matmul. At least I don’t think so.



