Hi Terry,

Interesting question, and a hard one to answer without more specifics. In general, PCA is used when you have so much data that you don't even know what you are looking at. For instance, suppose I had a dataset of customers, and each customer had 50 features associated with them: their longitude, their latitude, their total purchase history, how frequently they purchase, and so on. I might be looking for insight into customer behaviour, but with so many features it is very hard to make sense of the data. We could run PCA on the data and plot PC1 vs PC2 to see if there are any clusters, which would strongly suggest that we have two groups of customers we could cater to specifically. We could then look into the loadings of PC1 and PC2 to try to figure out which features are important in separating our clusters.
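As a rough sketch of that workflow (the customer features and the two-group structure here are made up purely for illustration, and I'm using only four features rather than 50 to keep it short):

```python
import numpy as np

# Hypothetical data: 200 "customers", each with 4 features
# (longitude, latitude, total spend, purchase frequency).
# Two invented groups differ mainly in spend and frequency.
rng = np.random.default_rng(0)
group_a = rng.normal([10.0, 5.0, 100.0, 2.0], 1.0, size=(100, 4))
group_b = rng.normal([10.0, 5.0, 300.0, 8.0], 1.0, size=(100, 4))
X = np.vstack([group_a, group_b])

# PCA via eigen decomposition of the covariance of the centred data.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # sort by variance, largest first
loadings = eigvecs[:, order]        # columns are PC1, PC2, ...
scores = Xc @ loadings              # each customer's PC scores

# Plotting scores[:, 0] vs scores[:, 1] is the "PC1 vs PC2" plot;
# the PC1 loadings tell you which features drive the separation.
print(loadings[:, 0])
```

In a real analysis you'd usually also standardise each feature first (so spend in dollars doesn't swamp everything else), but the structure of the recipe is the same.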

Honestly, I could do a whole post on interpreting PCAs. I've got three videos on my YouTube channel which might help: https://www.youtube.com/watch?v=KuUqOA2LMXc&ab_channel=Bill%27sNeuroscience

]]>Thanks for the comment. The post is about a decade old now. There are much better implementations available on instructables.com, and a nice paper in HardwareX.

]]>I really like the way you have explained everything. It would be great if the video could be shared to give an idea of how it works.

Thank you.

]]>Yes, filtfilt filters the signal first in the normal direction, going forward in time (which delays signals in time), then it takes that filtered signal and filters it again in the reverse direction, going backwards in time (which advances signals in time). This second filtering essentially undoes the time delay caused by the first filter. It also means the filter has double the number of poles of the filter you specified.

Actually, that's not a bad idea for a blog post, seeing as how commonly filtfilt is just used by default.
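You can see the delay-cancelling effect with a quick SciPy sketch (the signal and filter settings here are just illustrative):

```python
import numpy as np
from scipy import signal

# Toy signal: a slow 5 Hz sine buried in noise, sampled at 1 kHz.
fs = 1000
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 5 * t)
x = clean + 0.5 * rng.standard_normal(t.size)

# 4th-order low-pass Butterworth at 20 Hz.
b, a = signal.butter(4, 20, fs=fs)

y_causal = signal.lfilter(b, a, x)   # forward only: output is delayed
y_zero = signal.filtfilt(b, a, x)    # forward + backward: zero phase

# The forward-only output lags the clean sine; the filtfilt output
# stays aligned with it, so its error against the clean signal is smaller.
print(np.mean((y_causal - clean) ** 2), np.mean((y_zero - clean) ** 2))
```

Plot `clean`, `y_causal`, and `y_zero` on the same axes and the phase shift (and its removal) is obvious at a glance.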

]]>Hi Ruth, sorry I didn’t see your comment earlier. I created the animated plots in Python. This tutorial shows how things are done.

https://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/
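This isn't my actual plotting code, but the core pattern that tutorial teaches is matplotlib's `FuncAnimation`: draw an empty artist, then update its data each frame. A minimal sketch (the sine wave and filename are made up):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, fine for saving files
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1.1, 1.1)
(line,) = ax.plot([], [])
x = np.linspace(0, 2 * np.pi, 200)

def update(frame):
    # Shift the sine wave a little each frame.
    line.set_data(x, np.sin(x + frame / 10))
    return (line,)

anim = FuncAnimation(fig, update, frames=30, blit=True)
anim.save("sine.gif", writer="pillow", fps=20)
```

Swap the sine for your own data and you have an animated plot; saving as mp4 instead needs ffmpeg installed.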

I’m not an expert in linear algebra, but to my understanding, if I have a matrix M and I calculate its eigenvectors v, arrange them as the columns of a matrix V, then take the matching eigenvalues and put them on the diagonal of a matrix Λ, we can say M = VΛV⁻¹. This is eigen decomposition. So what I have described is neither singular value decomposition nor eigen decomposition, but it is very closely related to eigen decomposition (and of course, learning eigen decomposition is a good first step towards learning SVD).
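You can check the M = VΛV⁻¹ identity numerically in NumPy (the 2×2 matrix here is just an arbitrary example; this assumes M is diagonalisable, i.e. V is invertible):

```python
import numpy as np

# Arbitrary example matrix, invented for illustration.
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, V = np.linalg.eig(M)   # columns of V are the eigenvectors
Lam = np.diag(eigvals)          # eigenvalues on the diagonal of Λ

# Reconstruct M from its eigen decomposition: V Λ V⁻¹
reconstructed = V @ Lam @ np.linalg.inv(V)
print(np.allclose(reconstructed, M))
```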

]]>