AI Snips
Networks Need Not Imply Mental Representations
- Autoencoders and neural networks need not be framed in representational terms; they can be described as structured transformations of their inputs.
- Representational language is optional for describing what artificial networks do, and it may mislead when mapped onto minds.
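To make the non-representational framing concrete, here is a minimal sketch (not from the episode; the data, sizes, and training loop are illustrative assumptions) of a linear autoencoder described purely as a transformation of inputs: compress, reconstruct, and adjust weights until output structurally matches input, with no appeal to "representations" in the description.

```python
import numpy as np

# Toy linear autoencoder: an 8-dim input is passed through a 3-dim
# bottleneck and back. Everything below is describable as structural
# adjustment of an input-to-output mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 200 toy inputs, 8 dims each

W_enc = rng.normal(scale=0.1, size=(8, 3))   # 8 -> 3 compression
W_dec = rng.normal(scale=0.1, size=(3, 8))   # 3 -> 8 reconstruction

def loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec                    # reconstructed inputs
    return np.mean((R - X) ** 2)             # structural mismatch

lr = 0.05
initial = loss(X, W_enc, W_dec)
for _ in range(500):
    H = X @ W_enc                            # compressed inputs
    R = H @ W_dec                            # reconstructions
    G = 2 * (R - X) / X.size                 # gradient of loss w.r.t. R
    W_dec -= lr * (H.T @ G)                  # adjust decoder weights
    W_enc -= lr * (X.T @ (G @ W_dec.T))      # adjust encoder weights
final = loss(X, W_enc, W_dec)
```

After training, `final` is well below `initial`: the network has adjusted its weights so that outputs match inputs more closely. Nothing in that description requires calling the bottleneck activity a "representation"; whether that vocabulary adds anything is exactly the point under discussion.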
Representations As Later Add-Ons
- Anderson accepts mental representations may exist but treats them as late-evolving add-ons built from sensorimotor traces and language scaffolding.
- The cognitive core remains sensorimotor coupling; detached representations arise later for complex tasks.
Rejecting Strict Hierarchies In The Brain
- The brain is heterarchical with ongoing activity and mutual constraints, not a simple feedforward hierarchy.
- Dynamics, background activation, and context profoundly shape how inputs are processed in real time.

