Software Engineering Institute (SEI) Podcast Series

Members of Technical Staff at the Software Engineering Institute
Apr 6, 2026 • 19min

Leadership, Legacy, and the Power of Mentors: Insights from Dr. Paul Nielsen

In February 2026, Paul Nielsen announced that he will transition out of his role as director and chief executive officer of the Software Engineering Institute (SEI) at Carnegie Mellon University. During Nielsen's tenure, the SEI has marked major institutional milestones that underscore its enduring role in strengthening the security, resilience, and reliability of the nation's software- and AI-intensive systems. The institute recently celebrated 40 years of innovation and saw its contract renewed, which paved the way for CMU to operate the SEI for another five years. In our latest SEI podcast, Nielsen sat down with Matthew Butkovic, technical director of Risk and Resilience in the SEI's CERT Division, to discuss his legacy at the SEI, the impact of mentors, and the importance of encouraging scientists and engineers to do their best work.
Mar 20, 2026 • 49min

With a Little Help from Our Civilian Friends: Cybersecurity Reserve Is Both Feasible and Advisable

Chris May, technical director focused on cyber mission readiness and former Air Force communications officer, and Marie Baker, SEI technical manager with decades in cyber workforce research, discuss a civilian cybersecurity reserve. They cover the NDAA request, study methods with surveys and interviews, security clearance needs, logistical and jurisdictional challenges, surprising willingness to serve, and recommendations to pilot a reserve.
Mar 2, 2026 • 26min

Maturing AI Adoption: From Chaos to Consistency

Ipek Ozkaya, technical director for AI-native software engineering at SEI, leads research on AI adoption and modernization. She discusses why AI investments often fail, how an AI adoption maturity model provides a roadmap, assessing fit-for-purpose AI and reengineering workflows, and measuring adoption and ROI beyond simple usage metrics.
Feb 9, 2026 • 24min

Temporal Memory Safety in C and C++: An AI-Enhanced Pointer Ownership Model

In October 2025, CyberPress reported a critical security vulnerability in the Redis Server, an open-source in-memory database that allowed authenticated attackers to achieve remote code execution through a use-after-free flaw in the Lua scripting engine. In 2024, another prominent temporal memory safety flaw was found in the Netfilter subsystem in the Linux kernel: CVE-2024-1086. Bugs related to temporal memory safety, such as use-after-free and double-free vulnerabilities, are challenging issues in C and C++ code. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Lori Flynn, a senior software security researcher in the SEI's CERT Division, and David Svoboda, a senior software engineer, also in CERT, sit down with Tim Chick, technical manager of CERT's Applied Systems Group, to discuss recent updates to the Pointer Ownership Model for C, a modeling framework designed to improve the ability of developers to statically analyze C programs for errors involving temporal memory.
Jan 29, 2026 • 25min

AI for the Warfighter: Acquisition Challenges and Guidance

On November 7, the Department of War released an acquisition transformation strategy that seeks to remove bureaucratic hurdles and streamline acquisition processes to enable even more rapid adoption of technologies, including artificial intelligence. Getting AI into the hands of warfighters requires disciplined AI Engineering. In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, lead of human-centered research in the SEI's AI Division, and Brigid O'Hearn, the SEI's lead of software modernization policy for the Department of War, sit down with Eileen Wrubel, the SEI's technical director of Transforming Software Acquisition Policy and Practice, to discuss AI Engineering challenges and guidance in the defense acquisition space.
Jan 15, 2026 • 36min

Visibility Through the Clouds with Network Flow Logs

Organizations, including the U.S. military, are increasingly adopting cloud deployments for their flexibility and cost savings. The shared security model used by cloud service providers removes some of the adopting organization's responsibility for system administration and security but leaves it on the hook for monitoring hosted applications and resources. Cloud flow logs are a valuable source of data for supporting these security responsibilities and attaining situational awareness. The SEI has a long history of supporting flow log collection and analysis, including tools for collection in Azure and AWS. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), two leading researchers in this area, principal researcher Tim Shimeall and security data analyst Ikem Okafo, both with the SEI's CERT Division, sit down with Dan Ruef, technical manager of the CERT Division's Network Situational Awareness Group, to discuss how to enhance security with cloud flow analysis as well as available tools and resources.
Dec 2, 2025 • 37min

Orchestrating the Chaos: Protecting Wireless Networks from Cyber Attacks

From early 2022 through late 2024, a group of threat actors publicly known as APT28 exploited known vulnerabilities, such as CVE-2022-38028, to remotely and wirelessly access sensitive information from a targeted company network. This attack did not require any hardware to be placed in the vicinity of the targeted company's network, as the attackers were able to execute remotely from thousands of miles away. With the ubiquity of Wi-Fi, cellular networks, and Internet of Things (IoT) devices, the attack surface of communications-related vulnerabilities that can compromise data is extremely large and constantly expanding. In the latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Joseph McIlvenny, a senior research scientist, and Michael Winter, vulnerability analysis technical manager, both with the SEI's CERT Division, discuss common radio frequency (RF) attacks and investigate how software and cybersecurity play key roles in preventing and mitigating these exploitations.
Nov 10, 2025 • 27min

From Data to Performance: Understanding and Improving Your AI Model

Modern data analytic methods and tools—including artificial intelligence (AI) and machine learning (ML) classifiers—are revolutionizing prediction capabilities and automation through their capacity to analyze and classify data. To produce such results, these methods depend on correlations. However, an overreliance on correlations can lead to prediction bias and reduced confidence in AI outputs. Drift in data and concept, evolving edge cases, and emerging phenomena can undermine the correlations that AI classifiers rely on. As the U.S. government increases its use of AI classifiers and predictors, these issues multiply, and users may grow to distrust results. To address erroneous correlations and predictions, we need new methods for ongoing testing and evaluation of AI and ML accuracy. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Nicholas Testa, a senior data scientist in the SEI's Software Solutions Division (SSD), and Crisanne Nolan, an Agile transformation engineer, also in SSD, sit down with Linda Parker Gates, principal investigator for this research and initiative lead for Software Acquisition Pathways at the SEI, to discuss the AI Robustness (AIR) tool, which allows users to gauge AI and ML classifier performance with data-based confidence.
Oct 31, 2025 • 36min

What Could Possibly Go Wrong? Safety Analysis for AI Systems

How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts left on the internet and waiting to be found by system search engines. These attacks and others exploit the complexity of even seemingly secure AI systems. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI's CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic Process Analysis, or STPA, a hazard-analysis technique uniquely suitable for dealing with AI complexity when assuring AI systems.
Oct 23, 2025 • 23min

Getting Your Software Supply Chain In Tune with SBOM Harmonization

Software bills of materials, or SBOMs, are critical to software security and supply chain risk management. Ideally, regardless of the SBOM tool, the output should be consistent for a given piece of software. But that is not always the case, and the divergence of results can undermine confidence in software quality and security. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Jessie Jamieson, a senior cyber risk engineer in the SEI's CERT Division, sits down with Matt Butkovic, technical director of Risk and Resilience in CERT, to talk about how to achieve more accuracy in SBOMs and present and future SEI research on this front.
