
Super Data Science: ML & AI Podcast with Jon Krohn 668: GPT-4: Apocalyptic stepping stone?
Apr 7, 2023
Expert Jérémie Harris discusses AI risks posed by GPT-4, inner alignment, and the potential dangers of deploying tools whose inner workings are not understood. The conversation covers the role of inner alignment in how AI systems achieve their goals, Harris's transition to US-focused AI risk and policy work, GPT-4's advances through reinforcement learning, ensuring AI systems pursue goals without deception, evaluating the safety adjustments made during GPT-4's development, and the intersection of quantum physics, AI policy, and consciousness.
Chapters
Intro
00:00 • 2min
Transition and Collaboration in AI Risk and Policy
02:23 • 8min
Enhancements and Risks of GPT Models with Reinforcement Learning
10:40 • 5min
Inner and Outer Alignment in AI Systems
15:52 • 19min
Evaluating GPT-4 Safety Adjustments and AI Development Risks
34:52 • 15min
Exploring the Intersection of Quantum Physics, AI Policy, and Consciousness
50:05 • 3min
Exploring Risks in the AI Space and Advancements in AI Safety
52:37 • 3min
