Software Engineering Institute (SEI) Podcast Series

Members of Technical Staff at the Software Engineering Institute
Sep 16, 2024 • 45min

Using Role-Playing Scenarios to Identify Bias in LLMs

Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI's AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.
Sep 9, 2024 • 38min

Best Practices and Lessons Learned in Standing Up an AISIRT

In the wake of widespread adoption of artificial intelligence (AI) in critical infrastructure, education, government, and national security entities, adversaries are working to disrupt these systems and attack AI-enabled assets. Drawing on nearly four decades of experience in vulnerability management, the Carnegie Mellon University Software Engineering Institute (SEI) recognized a need to create an entity that would identify, research, and develop mitigation strategies for AI vulnerabilities to protect national assets against traditional cybersecurity, adversarial machine learning, and joint cyber-AI attacks. In this SEI podcast, Lauren McIlvenny, director of threat analysis in the SEI's CERT Division, discusses best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).
Aug 22, 2024 • 19min

3 API Security Risks (and How to Protect Against Them)

The exposed and public nature of application programming interfaces (APIs) comes with risks, including an increased network attack surface. Zero trust principles are helpful for mitigating these risks and making APIs more secure. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), McKinley Sconiers-Hasan, a solutions engineer in the SEI's CERT Division, discusses three API risks and how to address them through the lens of zero trust.
Jul 25, 2024 • 43min

Evaluating Large Language Models for Cybersecurity Tasks: Challenges and Best Practices

How can we effectively use large language models (LLMs) for cybersecurity tasks? In this Carnegie Mellon University Software Engineering Institute podcast, Jeff Gennari and Sam Perl discuss applications for LLMs in cybersecurity, potential challenges, and recommendations for evaluating LLMs.
Jul 18, 2024 • 34min

Capability-based Planning for Early-Stage Software Development

Anandi Hira, a data scientist, and William R. Nichols, an initiative lead at the SEI, discuss Capability-Based Planning (CBP) in software acquisition. They highlight its application in business and government domains, emphasizing the importance of defining capabilities for success. The podcast also touches on the transition from physics to software engineering, tracking progress, and the significance of effective decision-making in CBP.
Jul 1, 2024 • 26min

Safeguarding Against Recent Vulnerabilities Related to Rust

What can the recently discovered vulnerabilities related to Rust tell us about the security of the language? In this podcast from the Carnegie Mellon University Software Engineering Institute, David Svoboda discusses two vulnerabilities, their sources, and how to mitigate them.
Jun 21, 2024 • 31min

Developing a Global Network of Computer Security Incident Response Teams (CSIRTs)

Cybersecurity risks aren't just a national concern. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), the CERT division's Tracy Bills, senior cybersecurity operations researcher and team lead, and James Lord, security operations technical manager, discuss the SEI's work developing Computer Security Incident Response Teams (CSIRTs) across the globe.
May 31, 2024 • 27min

Automated Repair of Static Analysis Alerts

Developers know that static analysis helps make code more secure. However, static analysis tools often produce a large number of false positives, hindering their usefulness. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Svoboda, a software security engineer in the SEI's CERT Division, discusses Redemption, a new open source tool from the SEI that automatically repairs common errors in C/C++ code identified by static analysis alerts, making code safer and static analysis less overwhelming.
Apr 4, 2024 • 38min

Developing and Using a Software Bill of Materials Framework

With the increasing complexity of software systems, the use of third-party components has become a widespread practice. Cyber disruptions, such as SolarWinds and Log4j, demonstrate the harm that can occur when organizations fail to manage third-party components in their software systems. In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Woody, principal researcher, and Michael Bandor, a senior software engineer, discuss a Software Bill of Materials (SBOM) framework to help promote the use of SBOMs and establish a more comprehensive set of practices and processes that organizations can leverage as they build their programs. They also offer guidance for government agencies interested in incorporating SBOMs into their work.
Feb 16, 2024 • 35min

Using Large Language Models in the National Security Realm

At the request of the White House, the Office of the Director of National Intelligence (ODNI) began exploring use cases for large language models (LLMs) within the Intelligence Community (IC). As part of this effort, ODNI sponsored the Mayflower Project at Carnegie Mellon University's Software Engineering Institute (SEI) from May 2023 through September 2023. The Mayflower Project attempted to answer the following questions: How might the IC set up a baseline, stand-alone LLM? How might the IC customize LLMs for specific intelligence use cases? How might the IC evaluate the trustworthiness of LLMs across use cases? In this SEI podcast, Shannon Gallagher, AI engineering team lead, and Rachel Dzombak, special advisor to the director of the SEI's AI Division, discuss the findings and recommendations from the Mayflower Project and provide additional background information about LLMs and how they can be engineered for national security use cases.
