

The AI Daily Brief: Artificial Intelligence News and Analysis
Nathaniel Whittemore
A daily news analysis show on all things artificial intelligence. NLW looks at AI from multiple angles: the explosion of creativity brought on by new tools like Midjourney and ChatGPT, the potential disruptions to work and industries as we know them, and the great philosophical, ethical, and practical questions of artificial general intelligence, alignment, and x-risk.
Episodes

Aug 17, 2023 • 18min
Snapchat Users Freaked Out as My AI Goes Rogue
Snapchat users were alarmed when the My AI chatbot unexpectedly posted a story, raising privacy issues. The podcast explores OpenAI's acquisition of Global Illumination and its implications for talent and competition in AI. It also discusses McKinsey's new generative AI tool and IT leaders' hopeful views on AI adoption. In addition, the Berkeley study on decoding music from brain activity prompts ethical discussions on advanced AI technologies. Finally, the ongoing debate about open-source AI safety takes center stage.

Aug 16, 2023 • 17min
Why AI Hype Has Peaked (And Why That's A Good Thing)
Discover the intriguing decline of AI hype and what it means for the future. Explore the impact of influencer fatigue and the overwhelming number of tools available. Despite the dip in excitement, ongoing developments hint at a more practical approach to AI. This shift signals a potential for sustained growth rather than fleeting trends.

Aug 15, 2023 • 19min
AGI By 2032? The Most Interesting AI Predictions
Dive into the exciting world of AI predictions and forecasts. Discover how Gulf states are racing for computing power. Learn about an Iowa school district's unusual use of AI for book bans. Explore NVIDIA's innovative project turning 2D videos into 3D models. Hear about Metaculus prediction markets and intriguing forecasts, including Elon Musk's next big move in AI. This conversation captures the rapid developments and implications of technology in our lives!

Aug 14, 2023 • 23min
How The AI Backlash Killed This Literary Startup
The podcast dives into Amazon's new generative AI tools for enhancing customer reviews and its strategy in the AI chip market. It highlights Anthropic's significant investment from SK Telecom and showcases PlayHT's innovative voice cloning technology. A key discussion focuses on the controversy surrounding Prosecraft, a literary startup that faced backlash for using AI to analyze authors' works without permission, raising questions about copyright and creator rights in the age of technology.

Aug 13, 2023 • 10min
As WormGPT Goes White Hat, Evil-GPT Emerges
The podcast dives into the intriguing shifts in AI chatbot development. Initially branded for malicious use, WormGPT is pivoting to ethical applications, sparking a conversation about responsibility in tech. Meanwhile, a new threat surfaces with Evil-GPT, explicitly designed for nefarious purposes. This clash of intentions highlights the ethical dilemmas faced by creators and the ongoing battle between beneficial and harmful AI.

Aug 12, 2023 • 13min
AI and the Turning Point Moment in Human History
The discussion dives into an economist's take on AI, challenging common fears with historical insights. It highlights the transformative power of AI, likening it to the printing press, and emphasizes the need to embrace change while managing risks. By examining past technological shifts, the conversation underscores the importance of a long-term perspective on innovation. It also draws parallels between historical upheavals and modern concerns about AI safety, advocating for balanced regulatory policies to better navigate these challenges.

Aug 11, 2023 • 20min
The Alignment Problem: How To Tell If An LLM Is Trustworthy
New research seeks to define trustworthiness in large language models, highlighting its importance in sectors like healthcare and finance. The podcast also discusses the Federal Election Commission's deliberations on deepfake regulations and the approval of self-driving cars in San Francisco. Furthermore, it touches on the challenges faced by authors due to unauthorized AI-generated books, while innovations like Claude Instant 1.2 raise questions about creativity in the digital age. The event at DEF CON 31 emphasizes the need for robust safety testing in AI.

Aug 10, 2023 • 20min
76% of Americans Think AI Might Kill Us
A recent survey reveals that many Americans are worried about AI's potential dangers and demand stricter regulations. Meanwhile, DARPA kicks off a $20 million cybersecurity challenge to enhance critical infrastructure. Disney delves into AI innovations, while a grocery app's AI meal planner has unexpected failures. Additionally, there's public outcry over Zoom's new terms of service, showing heightened concerns about data privacy in the tech realm. With growing fears of job displacement, Congress faces pressure to establish responsible AI regulations.

Aug 9, 2023 • 20min
The LLM for Coding Competition Heats Up!
Stability AI launches StableCode as Google unveils its AI-powered coding platform. The podcast examines Google's innovative Project IDX aimed at enhancing app development through generative AI. It also tackles the ethical complexities of AI in music and healthcare, from artist rights to medical accuracy. Additionally, NVIDIA reveals its powerful Grace Hopper superchip, while the competitive landscape of AI chips heats up with insights on AMD and emerging startups. Finally, there's a breakthrough in asteroid detection showcasing AI's promising applications.

Aug 8, 2023 • 22min
GPTBot AI Data Controversy and the Remaining Challenges of LLMs
A fresh look at the ethics behind AI and web scraping shines light on OpenAI's GPTBot and its controversial data practices. User privacy concerns are at the forefront as Zoom navigates its updated terms of service. Meanwhile, alarming news about acoustic attacks reveals how hackers can eavesdrop on what you type. The discussion also dives into the ongoing challenges facing large language models, emphasizing the need for technical improvements to pave the way for future AI policies.


