
The Bayesian Conspiracy 26 – Concept Networks and Hanging Nodes
Jan 18, 2017
They explore concept networks and how attributes form mental categories like birds or planets. The conversation covers fringe cases, lumping versus splitting, and why labels persist for speed and signaling. They discuss category-driven emotions, sporting fairness measures, and how deep learning finds emergent categories like cats. The podcast also touches on signaling in gifts, lab-grown gems, and cultural norms around pronouns.
Labels Are Shorthand For Attribute Networks
- Concept networks explain how labels (bird, planet, boy) are shorthand for many measurable attributes rather than intrinsic properties.
- Eneasz uses the ostrich and Pluto examples to show how a lingering "is it X?" node persists even after every measurable attribute has been answered.
The Feeling Of An Algorithm Running Inside
- Even after you've specified every underlying attribute, the label question still feels unresolved; this is what an algorithm feels like from the inside.
- Eneasz describes this residual urge to pin down the label as the mental sensation of a running algorithm demanding a final verdict.
Ask Practical Measurable Questions Not Labels
- Ask the practical question you actually need answered (e.g., bone density for MMA eligibility) rather than simply "is this person a boy or a girl?"
- Steven and Katrina suggest fairness measures built on measurable criteria, such as bone mass or weight classes.