
Human Centered: Better AI Through Social Science
Jul 5, 2022
This discussion features Jennifer Logg, an expert in judgment and decision-making; Daniel Ho, a legal scholar focused on AI's social context; and Kristian Hammond, who researches AI and human interaction. They dive into the ethical implications of AI technologies, emphasizing the need for transparency and accountability. The trio explores the disconnect between data insights and real-world applications, the importance of addressing biases in algorithms, and the role of the social sciences in creating responsible AI systems. A fascinating look at the marriage of AI, ethics, and human behavior awaits!
AI Snips
Require Automated-System Disclosure
- Disclose when a system is automated to prevent deception and anthropomorphism.
- Prefer transparency about automation to avoid user backlash and confusion.
Teach Evidence Literacy To Executives
- Teach decision makers to ask critical questions about data, such as sample size and how variables are defined.
- Build evidence literacy so executives can better evaluate analytics outputs.
Harm Is Defined Culturally
- Societal definitions of harm shape how technology is regulated and used in practice.
- The same algorithmic tools can be deployed for protection (as in Canadian casinos) or exploitation (as in the more open U.S. gambling market).