
Thoughtforms Life: An Active Inference Model of Collective Intelligence, by R. Kaufmann, P. Gupta, and J. Taylor
Aug 7, 2024
Join Rafael Kaufmann, a mathematician at Google focusing on collective intelligence, and Pranav Gupta, a business researcher studying human-machine teams, as they explore fascinating concepts in active inference. They discuss how agent interactions influence collective behavior and the role of goal alignment in team success. Rafael shares insights on modeling dynamics between strong and weak agents, while Pranav highlights the importance of collective memory and attention. They even touch on how motivation can elevate performance in both human and AI collaborations.
Two-Agent Grid-World Simulation
- The team implemented a toy grid world with two agents that share a common goal and also have private goals.
- They observed how adding a partner-modeling loop (theory of mind) and goal-alignment weighting changed behavior in simulations.
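The setup these bullets describe can be sketched as a minimal toy simulation. This is a hypothetical reconstruction, not the authors' code: the grid size, the distance-based cost, the goal positions, the alignment weight `alpha`, and the one-step partner prediction standing in for "theory of mind" are all illustrative assumptions.

```python
# Two agents on a small grid share a common goal and each hold a
# private goal. A weight alpha blends the two; an optional one-step
# prediction of the partner's next position (a crude theory-of-mind
# loop) adds a collision penalty. All specifics here are assumptions.
GRID = 5
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay

def dist(a, b):
    """Manhattan distance between two cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step(pos, move):
    """Apply a move, clipping to the grid boundary."""
    return (min(GRID - 1, max(0, pos[0] + move[0])),
            min(GRID - 1, max(0, pos[1] + move[1])))

def choose(pos, shared_goal, private_goal, alpha, partner_pred=None):
    """Pick the move minimizing an alignment-weighted goal cost."""
    def cost(move):
        p = step(pos, move)
        c = alpha * dist(p, shared_goal) + (1 - alpha) * dist(p, private_goal)
        if partner_pred is not None and p == partner_pred:
            c += 10  # avoid the cell we predict the partner will occupy
        return c
    return min(MOVES, key=cost)

def simulate(alpha, theory_of_mind, steps=20):
    """Run both agents; only agent A models its partner here."""
    shared = (GRID - 1, GRID - 1)
    a_pos, a_priv = (0, 0), (0, GRID - 1)
    b_pos, b_priv = (GRID - 1, 0), (GRID - 1, 0)
    for _ in range(steps):
        pred_b = (step(b_pos, choose(b_pos, shared, b_priv, alpha))
                  if theory_of_mind else None)
        a_pos = step(a_pos, choose(a_pos, shared, a_priv, alpha, pred_b))
        b_pos = step(b_pos, choose(b_pos, shared, b_priv, alpha))
    return a_pos, b_pos

# With full goal alignment, both agents end on or beside the shared goal,
# with the partner-modeling agent yielding the contested cell.
a, b = simulate(alpha=1.0, theory_of_mind=True)
```

Varying `alpha` and toggling `theory_of_mind` is one simple way to probe the interaction the snip describes: alignment without partner modeling produces collisions, partner modeling without alignment produces coordinated pursuit of conflicting goals.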
Theory Of Mind Has A Goldilocks Effect
- Theory of mind alone can produce misleading coordination when environments are ambiguous.
- Combining sufficient theory of mind with goal alignment avoids a "blind leading the blind" dynamic and yields better individual performance.
Collective Free Energy Needs Both Mechanisms
- Collective intelligence was operationalized as an ensemble's ability to minimize variational free energy across many two-agent subsystems.
- Only the combination of theory of mind and goal alignment produced near-exact solutions in their experiments.
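The quantity being minimized can be illustrated with a toy calculation (our own sketch, not from the episode; the two-state numbers are arbitrary). For a discrete hidden state $s$ and observation $o$, the variational free energy is $F = \mathbb{E}_q[\ln q(s) - \ln p(o, s)]$, and it reaches its minimum $-\ln p(o)$ exactly when the approximate belief $q$ equals the true posterior $p(s \mid o)$:

```python
import math

def vfe(q, prior, likelihood, obs):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)] for a
    discrete hidden state and a fixed observation index `obs`."""
    F = 0.0
    for s, qs in enumerate(q):
        if qs > 0:
            joint = prior[s] * likelihood[s][obs]  # p(o, s)
            F += qs * (math.log(qs) - math.log(joint))
    return F

# Two hidden states, two possible observations (illustrative numbers).
prior = [0.5, 0.5]
likelihood = [[0.9, 0.1],   # p(o | s=0)
              [0.2, 0.8]]   # p(o | s=1)
obs = 0

# Exact posterior p(s | o=0) by Bayes' rule, for comparison.
joint = [prior[s] * likelihood[s][obs] for s in range(2)]
posterior = [j / sum(joint) for j in joint]

# F at the exact posterior equals -ln p(o), the minimum;
# any other belief, e.g. a flat one, gives a strictly larger F.
F_post = vfe(posterior, prior, likelihood, obs)
F_flat = vfe([0.5, 0.5], prior, likelihood, obs)
```

In the paper's framing as summarized here, this per-agent quantity is aggregated across many two-agent subsystems, so an ensemble that keeps every pairwise $F$ low counts as more collectively intelligent.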
