What if the real danger of AI isn’t superintelligence — but the slow erosion of attention, trust, and human connection?
In this deep-dive conversation with Iliana Grosse-Buening — a global leader in AI ethics and digital well-being, and a World Economic Forum speaker — we explore how to keep AI pro-human in a world that increasingly rewards speed, scale, and engagement over flourishing.
While many conversations about AI focus on existential risk or productivity, Iliana argues that the deeper question is human flourishing:
How do we design AI systems that protect cognition, relationships, agency, and shared reality — rather than quietly degrading them?
We discuss:
🔹 Why the current measure of AI “success” may be based on deception 
🔹 How AI and social platforms may be rewiring attention, memory, and critical thinking 
🔹 Why students are already feeling powerless in a world shaped by a small number of decision-makers 
🔹 Why AI literacy and digital well-being must go hand in hand 
🔹 How metrics like engagement and efficiency can quietly undermine human well-being 
🔹 Why a more global, cross-disciplinary movement is needed to keep AI pro-human 
In this episode, we also explore:
• The IEEE initiative focused on flourishing — not just harm prevention 
• Why different regions of the world are developing radically different AI narratives 
• The cognitive cost of offloading too much thinking to generative AI tools 
• Three practical ways to improve your well-being through better use of technology 
• Why one of the most important questions in AI may simply be: what does good look like? 
This conversation isn’t about rejecting AI.
It’s about refusing to sleepwalk into a future designed around shareholder value, addictive engagement, and passive dependence.
If AI is going to shape humanity, humanity has to shape AI back.