
Neurotech Pub: Trading Spaces // Dimensionality Reduction for Neural Recordings
Mar 18, 2021
Carsen Stringer, Janelia group leader focused on large-scale neural data and dimensionality reduction. Chethan Pandarinath, professor developing latent dynamical models for neural decoding. Konrad Kording, UPenn professor working on computational neuroscience and education. Vikash Gilja, UCSD professor working on neural prostheses and translational neuroengineering. They dive into dimensionality reduction, PCA, noise and reliability, random projections, LFADS, and scaling recordings.
AI Snips
PCA Finds Axes Of Maximal Shared Variance
- Principal component analysis (PCA) ranks orthogonal axes by the variance they explain, so the top components capture the most shared signal relative to noise.
- Konrad Kording explained that the first PC captures coordinated up/down covariance across neurons, while later PCs carry less signal and more noise, as sketched below.
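A minimal sketch of this idea on simulated data, assuming numpy and scikit-learn; the single shared "up/down" latent and the noise levels are illustrative choices, not values from the episode:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_timepoints, n_neurons = 5000, 100

# One shared latent drives coordinated up/down activity across all neurons.
latent = rng.standard_normal(n_timepoints)
loadings = rng.uniform(0.5, 1.5, n_neurons)
noise = rng.standard_normal((n_timepoints, n_neurons))
activity = np.outer(latent, loadings) + noise

pca = PCA(n_components=10)
pca.fit(activity)

# The top component captures most of the shared variance; later
# components are dominated by the independent per-neuron noise.
print(pca.explained_variance_ratio_[:3])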
PCA Can Be Biased By Poisson Spike Variance
- High firing-rate neurons show larger variance under Poisson-like spiking, so naive PCA can bias toward those 'noisier' neurons, as illustrated below.
- Vikash Gilja warned that variance may reflect firing-rate scale rather than meaningful signal.
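A hedged sketch of this rate-variance confound: under Poisson spiking, variance equals the mean rate, so a high-rate neuron with no shared signal can still dominate naive PCA. The specific rates and the square-root variance-stabilizing transform here are illustrative assumptions, not the panelists' procedure:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_timepoints = 5000

# A shared latent drives 50 low-rate neurons; one extra neuron fires
# independently at a much higher baseline rate.
latent = rng.standard_normal(n_timepoints)
low_rates = np.exp(0.3 * latent[:, None] + np.log(2.0) * np.ones((1, 50)))
counts_low = rng.poisson(low_rates)
counts_high = rng.poisson(50.0, size=(n_timepoints, 1))
counts = np.hstack([counts_low, counts_high])

for name, X in [("raw counts", counts), ("sqrt-transformed", np.sqrt(counts))]:
    pca = PCA(n_components=1).fit(X)
    # Weight PC1 puts on the lone high-rate neuron: large for raw counts
    # (its Poisson variance dominates), small after stabilizing variance.
    print(name, "-> PC1 loading on high-rate neuron:",
          round(abs(pca.components_[0, -1]), 3))
```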
Cross-Validate To Measure Reliable Shared Variance
- Use cross-validated approaches to estimate how much reliable shared variance exists across neurons.
- Carsen Stringer described splitting data in time and across neuron groups to measure covariance reproducibility and to set upper bounds on explainable variance; a sketch follows below.
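A minimal sketch of the split-half idea: divide neurons into two groups, estimate the cross-covariance on one half of the timepoints, and test how well it reproduces on the held-out half. The interleaved neuron split and the correlation metric are illustrative assumptions, not the exact procedure from the episode:

```python
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_neurons = 6000, 80

# Shared latent plus independent noise, as in the PCA example above.
latent = rng.standard_normal(n_timepoints)
activity = np.outer(latent, rng.uniform(0.5, 1.5, n_neurons))
activity += rng.standard_normal((n_timepoints, n_neurons))

# Split neurons into two interleaved groups and timepoints into halves.
ga, gb = activity[:, ::2], activity[:, 1::2]
half = n_timepoints // 2

def cross_cov(x, y):
    """Cross-covariance between two neuron groups (mean-subtracted)."""
    xc, yc = x - x.mean(0), y - y.mean(0)
    return xc.T @ yc / len(x)

c1 = cross_cov(ga[:half], gb[:half])
c2 = cross_cov(ga[half:], gb[half:])

# Only reproducible (shared) structure survives: correlated entries
# across the two independent covariance estimates measure reliability.
r = np.corrcoef(c1.ravel(), c2.ravel())[0, 1]
print(f"split-half covariance reliability: r = {r:.2f}")
```

Because noise is independent across neurons and timepoints, it averages out of the cross-group covariance, so the correlation between the two estimates reflects only the reliable shared variance.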