
Causal Bandits Podcast Causality, LLMs & Abstractions || Matej Zečević || Causal Bandits Ep. 000 (2023)
Nov 6, 2023
Matej Zečević, an AI and causality researcher who co-organized the NeurIPS causal workshop, dives into the fascinating relationship between large language models (LLMs) and causality. He challenges the assumption that LLMs can genuinely understand causal structures, posing thought-provoking questions about their capabilities. Matej shares insights from his diverse journey, discusses the role of transparency in AI, and emphasizes the importance of collaboration in advancing the field. His passion for literature and its influence on his work adds a delightful touch to the discussion.
Caution on LLM Causal Benchmark Results
- The high accuracy of LLMs on causal benchmarks like the Tübingen cause-effect pairs is questionable: many variable pairs are obscure, and training-data leakage may explain the results.
- Matej urges caution when interpreting results based solely on simple metrics like accuracy, without accounting for dataset quality and relevance.
Scaling and Symbolic AI Combined
- Scale and connectivity both matter for intelligence, as with neurons in the human brain, where capability depends on both the number of units and how they are wired together.
- Matej believes scaling deep learning is necessary but must be combined with conceptual advances and symbolic methods for genuine progress.
White Box Does Not Ensure Explainability
- White box models don't guarantee explainability, as complexity can overwhelm human understanding.
- Matej proposes explaining causality with recursive algorithms that combine graph structure and causal effects to produce practical, actionable insights.
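The idea of a recursive, graph-based explanation can be sketched as follows. This is a hypothetical illustration only, not Matej's actual algorithm: the toy DAG, the linear effect weights, and the traversal scheme are all assumptions made for the example.

```python
# Hypothetical sketch: explain a target variable in a causal DAG by
# recursively walking its parents and reporting edge-level causal effects.
# Graph, weights, and ranking scheme are illustrative assumptions.

def explain(node, parents, effect, depth=0, max_depth=2):
    """Return indented explanation lines: which parents causally drive
    `node`, ordered by the magnitude of an (assumed linear) effect."""
    lines = []
    if depth > max_depth or node not in parents:
        return lines
    # Rank parents by absolute causal effect on `node`.
    ranked = sorted(parents[node], key=lambda p: -abs(effect[(p, node)]))
    for p in ranked:
        lines.append("  " * depth +
                     f"{node} is driven by {p} (effect {effect[(p, node)]:+.2f})")
        # Recurse: explain each parent in terms of its own parents.
        lines.extend(explain(p, parents, effect, depth + 1, max_depth))
    return lines

# Toy DAG: altitude -> temperature -> mosquito_count
parents = {"mosquito_count": ["temperature"], "temperature": ["altitude"]}
effect = {("temperature", "mosquito_count"): 0.8,
          ("altitude", "temperature"): -0.6}

for line in explain("mosquito_count", parents, effect):
    print(line)
```

The recursion mirrors the snip's point: an explanation is not a single edge but a chain, with each step grounded in both the graph structure and a quantitative effect.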
