Business Karaoke Podcast with Brittany Arthur

Brittany Arthur
Mar 30, 2026 • 29min

011: Future Signals 2026: The Architecture of Intent

You went into 2025 excited about AI. You ended it exhausted. You weren't alone. And this conversation knows exactly what that cost you.

There's a moment in this episode where everything clicks. It happens when Brittany and Adal land on the question most executives are quietly carrying: if everyone now has access to the same AI tools, who am I going to be? That question is the heartbeat of this entire conversation. And it might be the most important leadership question of 2026.

The Future Signals 2026 report isn't a list of predictions. It's a map built on patents, VC reports, financial data, and a two-layer signal system that separates what is happening now from where things are cooking. Last year's signals? 85% became reality. So when the signals show up in your industry, you'll already know what they mean.

THE FIVE SIGNALS

Signal 01 - The agent that handles it
AI isn't a new hire. It's infrastructure, like electricity. The question isn't who to replace. It's how to wield it.

Signal 02 - A fragmented world
Borders, regulations, and restrictions are multiplying. The companies that prepared for flexibility won't just survive. They'll sprint.

Signal 03 - Programmable biology
AI isn't just accelerating research. It's redesigning from the atomic level up. Every industry that touches living things is affected.

Signal 04 - AI beyond the screen
AI is developing a physical language. It's not smarter robots. It's AI that can understand and navigate the world around you.

Signal 05 - The human premium
When everything can be built, what you choose to build and why becomes the ultimate differentiator. Human intent is the scarce resource.

THE NUMBER THAT WILL STAY WITH YOU

88% of companies are just accelerating old workflows with AI. 6% are fundamentally redesigning how they work and who they are.

The conversation gets honest about something you've probably seen on LinkedIn: influencers claiming their 37 agents made them millionaires overnight while you were just having breakfast. It's almost funny. Until you realize how much that noise is quietly eroding your confidence. The point was never the number of agents. It's the right technology, in the right hands, pointed at something that actually matters.

"When execution becomes abundant, intent becomes scarce."

This episode ends with a question worth sitting with: where are you on the maturity map? Just starting? Building? Scaling? The signals meet you there, designed for the stage you're actually in. Not the stage someone's performing on LinkedIn.

GO DEEPER

→ Download the Future Signals 2026 report: https://designthinkingjapan.com/thinking
→ Explore the open methodology on GitHub: https://github.com/DesignThinkingJapan/future-signals-2026-report
→ AI Leadership Academy, executive fluency for real decisions: https://designthinkingjapan.com/academy

AI is a marathon, but it can feel like a sprint. That's why it's important to go with intent.
Mar 17, 2026 • 1h 10min

S3 E10 | The Lamont Lens: Society, Trust and the AI Age

Dr. Christopher Lamont is Professor of International Relations at Tokyo International University and Deputy Head of Program for AI and Global Governance at the Global Governance Institute in Brussels. His research career has been built around transitional justice, international criminal law, and how societies rebuild after institutional collapse. In this conversation, Brittany brings that experience to the AI age.

This is a conversation about trust. What destroys it, how societies rebuild it, and why it may be the most consequential word in the AI era that nobody is taking seriously enough.

IN THIS CONVERSATION

→ Whether societal change really has a before and after, and what that means for how we approach AI transformation
→ What digital sovereignty actually means in 2026, and why it is often being pursued with the wrong tools
→ Why which foundation model your company adopts is a values decision
→ What thriving post-crisis societies have in common, and how that applies to building AI-ready organisations
→ The accountability question that policymakers and business leaders are not asking

FIND US

→ Instagram: @businesskaraokepodcast
Dec 10, 2025 • 46min

010 | Human Centered AI: Why Netflix is paying $83 billion for the stories we watched before school

Netflix has led AI in entertainment for over a decade. Personalized thumbnails, recommendation engines, rapid production. They're exceptional at it.

So why spend $83 billion on Warner Brothers Discovery?

Because they noticed something interesting about their own catalog. Netflix makes series you watch once. Warner Brothers made the ones you watch with your kids because your parents watched them with you. Emotional compound interest, built over 80 years. That's not a technology problem. It's a time problem. And Netflix decided it was easier to buy than to wait.

This got us thinking about what it means for everyone else. Warner Brothers didn't have the best AI. They had something AI needed: decades of stories, characters, and trust that couldn't be built faster with better technology.

Most organizations have a version of this. Customer relationships measured in decades. Institutional knowledge that lives in people, not systems. A reputation earned by showing up consistently. That's not legacy to modernize away. That's your data. The real kind, built over years, not downloaded. And AI is only as good as what you feed it.

In this episode we explore:

✨ Human value in an AI world - what can't be replicated
✨ Infrastructure reality - the gap between AI's promise and today's reality
✨ Legacy as asset - reframing what "old" means
✨ New roles emerging - how jobs are shifting
✨ Shared responsibility - ethics and safety aren't one person's job

AI multiplies what exists. So let's ask: what have you been building all this time that's about to become even more valuable?
Dec 4, 2025 • 42min

009 | Human Centered AI: What's our AI Iwakura moment?

In 1871, Japan sent half its government overseas. For two years. This isn't ancient history. It's a blueprint for AI transformation.

The Iwakura Mission included sitting cabinet ministers, future prime ministers, and a six-year-old girl who would later appear on the 5,000-yen note. They didn't send junior staff to "figure it out." They sent the decision-makers. The result? Mitsubishi. Mitsui. The Tokyo Stock Exchange. Nearly 500 companies founded by one mission member alone.

Most organizations today respond to AI with a pilot program and a steering committee.

The Japanese have two words for "leaving something to someone":
→ 任せる (makaseru) - entrusting
→ 放置する (hōchi suru) - abandoning
One built modern Japan. The other builds slide decks nobody reads.

Three Critical Insights:
→ Send Decision-Makers, Not Researchers - The people who will implement change need to do the learning. Waiting for a summary is abdication, not delegation.
→ Give It Real Time - The mission lasted nearly two years. Most AI initiatives get a quarter to show ROI. That's not transformation, that's a pilot.
→ Study Systems, Not Just Technology - They visited factories, yes. But also schools, courts, prisons, slums. They learned what NOT to copy as much as what to adopt.

Four Implementation Principles:
→ Literacy Before Strategy. If leadership hasn't personally used the tools, they're not ready to set direction.
→ Document With Intention. The mission produced a 5-volume, 2,000-page report. Most AI pilots end with a deck nobody reads.
→ Filter Through Context. They studied multiple countries, then built something Japanese. "Best practices" from Silicon Valley won't work in Tokyo, or in your organization.
→ Build the Next Generation. The young officials on that mission led Japan for 50 years. Who are your future AI leaders? Not the consultants.

The Iwakura Mission wasn't a project. It was a commitment. 150 years later, we're still talking about it because it worked.

In this episode, Nathan Paterson and Brittany Arthur explore what this 150-year-old voyage teaches us about taking AI transformation seriously, and why most organizations are confusing delegation with abandonment.

How you commit matters more than how much you invest.
Nov 25, 2025 • 36min

008 | Human Centered AI: Microsoft Partnership, Academy Updates, and Spatial Intelligence

Three signals from the AI frontier this week. Each one reshapes how we think about AI readiness.

SIGNAL 1: Microsoft Partnership
DTJ is now Microsoft's official training partner for AI education in Japan, working with government officials and policymakers. When governments invest in AI literacy (not just tools), it confirms: this skill is baseline now.

SIGNAL 2: Academy Confidence
Our graduates walk into interviews ready when asked "How do you think about AI trade-offs?" They've built conviction, not memorized answers. December and January cohorts now open.

SIGNAL 3: Spatial Intelligence Is Live
Dr. Fei-Fei Li's work on AI that understands 3D space just dropped. Take one photo, and AI generates a navigable 3D environment. For manufacturing, logistics, and robotics, the next wave isn't coming. It's here.

THREE CRITICAL INSIGHTS:
1. AI Literacy Moved From Vertical to Horizontal - This isn't specialized anymore. It's baseline. Every role. Every level.
2. Confidence Is the Competitive Advantage - Technical knowledge is optional. Strategic conviction about AI is not.
3. The Frontier Keeps Revealing Itself - While most organizations are figuring out ChatGPT, AI just moved from digital to physical.

We watch the frontier so you don't get blindsided. 37 minutes that translate what's coming into what it means for your work.

LINKS:
🔗 Human-Centered AI Leadership Academy: https://www.designthinkingjapan.com/ai-leadership
🔗 DTJ Website: https://www.designthinkingjapan.com/
🔗 Microsoft Elevate Japan: https://www.microsoft.com/en-us/elevate
🔗 World Labs (Spatial Intelligence): https://www.worldlabs.ai/

ABOUT HUMAN-CENTERED AI PODCAST:
Weekly insights on AI, leadership, and what's actually happening on the frontier. Hosted by Design Thinking Japan (DTJ), a human-centered AI company in Tokyo. We help leaders navigate AI strategy with clarity and confidence, not hype.
Nov 10, 2025 • 35min

S3E9 - The Third Way - The Intrapreneur's Path with Junichi Yamashita

When people talk about innovation, they present two choices: leave and start fresh, or stay and accept the status quo. But there's a third way, proven in Japanese organizations by someone who's walked this path multiple times.

Junichi Yamashita built digital products used by millions, including Coke ON (65M downloads) and multiple Rakuten ventures, all from inside established companies. This isn't theory. It's how innovation actually happens in Japanese organizations, told by someone who's done it repeatedly.

✅ What You'll Learn
How to create momentum when starting with nothing
The two types of "no" in Japanese business, and why it matters
Getting beyond inspiration: the actual logistics of innovation
Creating scenarios that win stakeholder support
Being different as an advantage in your organization
DX lessons that matter for AI transformation

🎯 This Is For You If...
You have ideas but are unclear how to move them forward
You're told "we need innovation" with no clear path
You're wondering if you need to leave to create something new
You're leading transformation but struggling with stakeholder alignment
You're preparing for AI and want to learn from digital transformation success

💡 Key Insights
"Start the ball rolling - once it starts, it doesn't stop easily"
The hardest part isn't execution; it's creating the first moment of momentum. When asked if something is possible: "I think so... shall I take a look?" This creates the next conversation.

"You don't have to do something differently - share what people aren't aware of"
Being different becomes valuable when you find the intersection between what's missing in your environment and what only you can offer.

"Japanese people are great at doing things right - few can show what the right things to do are"
Excellence in execution exists. What's missing is identifying the path forward through uncertainty. Once it's clear, collective power becomes extraordinary.

"Talk about the value, not the technology"
Don't explain what something is. Explain what it creates: "Coffee purchases go from 5 to 20 per month." People need to understand why it matters.

👤 Junichi Yamashita
Senior Director, Coca-Cola Japan | Coke ON Business Leader
Led: Coke ON (65M users, 500K+ vending machines), world's first vending machine subscription
Previously: Rakuten (Ecosystem Strategy, CEO direct report), Korean startup (Country Manager from zero), McKinsey & IBM
LinkedIn: /junichiyamashita

🎙️ Host: Brittany Arthur
Co-founder, Design Thinking Japan
Helping Japanese organizations innovate since 2012
designthinkingjapan.com

🎧 Business Karaoke Podcast
Authentic conversations with leaders navigating innovation in Japan. Real experience, not just theory.
Subscribe for more conversations
Connect: Hello@DesignThinkingJapan.com

The future isn't built by choosing between leaving or staying. It's built by finding the third way forward.
Nov 8, 2025 • 49min

S3E8 - How to Launch a New Business Inside Your Company (社内で新規事業を立ち上げる方法) with Junichi Yamashita

"We need innovation," they say, but how do you actually move it forward? Many people carry the same struggle. In this episode we welcome Junichi Yamashita, who grew Coke ON (65M downloads) at Coca-Cola and launched multiple new businesses at Rakuten, to talk about the realistic way to start something new inside a company. Not theory, but concrete wisdom you can use on the ground. A frank conversation covering both successes and failures.

✅ What You'll Take Away
This 47-minute conversation delivers practical insights such as:
・The first concrete steps when starting a new business
・How to find internal allies (sponsors) and bring them on board
・Using the "30% rule" to win executive understanding
・Why you can lead DX even without being a digital specialist
・How to make a new challenge sustainable while keeping your day job
・How to collaborate effectively with external partners and consultants
All of it grounded in Yamashita-san's own experience.

🎯 Who This Is For
This conversation will be especially useful if you:
・Want to start something new in your organization but aren't sure how to proceed
・Have been handed an innovation project that isn't moving forward
・Are driving DX but struggling to win understanding and cooperation around you
・Are weighing whether to start your own company or keep pushing inside your current one
・Have ideas and plans, but face hurdles with approval processes and securing budget
・Are a leader trying to drive change inside a traditional corporate culture
Of course, anyone interested in innovation inside organizations should find something worth taking home.

📌 Topics Covered
・Intrapreneur vs. entrepreneur: the advantages of each
・Starting from zero: why begin with 2-3 people
・The power of the 30% prototype and how to use it
・How to build a steering committee
・Balancing speed with marathon thinking
・Communicating across the digital literacy gap
・DX success patterns and failure patterns
・The importance of passion and personal interest
・Working effectively with external consultants
・Behind the scenes of building Coke ON

💡 Ideas That Stood Out
A few of the approaches Yamashita-san has put into practice:

"Start with 2-3 people"
Gather a large group from the start and you'll spend too long aligning opinions. Start with the 2-3 people who truly believe, then expand gradually. It ends up being faster.

"Show it at 30% completion"
Rather than spending time chasing perfection, put even a 30%-complete version into concrete form and show it. Talk alone gets forgotten after the meeting, but if there's a "thing," people respond.

"Your internal stakeholders are your first customers"
Before the end user, your internal stakeholders need to understand the value. If they won't cooperate, the project won't move forward.

"Start from Why, not How"
DX fails easily when it starts from the technology. What matters is starting from purpose: why is this needed, and what experience do we want to create?

"Could you explain it to your mother?"
Can you explain it without digital jargon, in words anyone understands? That's the key to bringing diverse stakeholders on board.

These are lessons Yamashita-san has internalized through real experience.

👤 Guest: Junichi Yamashita
Senior Director, Coca-Cola (Japan) Company
Currently:
・Business lead for Coke ON, the loyalty program for vending machines
・65M downloads, connected to 500K+ vending machines
・Launched Coke ON Pass in 2021, Coca-Cola's first vending machine subscription worldwide
Previously:
・Rakuten: Ecosystem strategy and membership strategy (reporting directly to the CEO)
・Rakuten Ichiba: Led the smartphone business and ROOM (shopping SNS)
・Korean startup: Japan Country Manager (built from n=1)
・McKinsey, IBM: Strategy and IT consultant
LinkedIn: https://www.linkedin.com/in/junichiyamashita

💬 We'd Love to Hear From You
In the comments, please share:
・The points that stood out most in this conversation
・If you were to start something new in your organization, what theme would you tackle?
・The challenges you're currently facing with internal innovation

📧 Contact
For corporate workshops, AI adoption consulting, or innovation strategy inquiries, feel free to reach out:
Design Thinking Japan
Email: Hello@DesignThinkingJapan.com
Website: https://designthinkingjapan.com
We look forward to supporting your organization's next challenge.
Oct 22, 2025 • 51min

S3E7 - From Tech to Trust with Daryl Osuch

Guest: Daryl Osuch - Unit Manager, Legal Operations at JERA Co., Inc. | Host of The Legal Ops Podcast
Episode Length: 51 minutes

"I feel like I'm fighting an education battle."

Daryl Osuch identifies what many organizations are missing about AI adoption. Not a technology battle. Not a process battle. An education battle.

In this conversation, Daryl shares what he's learning at the intersection of legal operations, AI implementation, and organizational trust. His perspective, as both mechanic and driver of AI systems, reveals why the gap between capability and comprehension might be the real bottleneck.

Microsoft's research shows 70% of AI transformation involves people, 20% workflows, and only 10% algorithms. Yet many organizations find their resource allocation tells a different story. Daryl brings rare expertise: implementing generative AI at JERA while building frameworks that help people actually trust and adopt it.

Key Themes

The Translation Gap
Legal teams are discovering they're not gatekeepers; they're translators between technical capability and human comprehension. When technical concepts get explained but not understood, that's where adoption stalls.

Trust as Architecture
Trust operates in layers: data, algorithm, company. When one layer doesn't hold, the entire stack can struggle, regardless of technical capability.

The Education Battle
The real challenge isn't teaching people to use AI tools. It's making complexity accessible without losing truth. Translation capability is becoming strategic, not supplementary.

Democratization with Guardrails
"Vibe coding" enables people who've never coded to build solutions. The question becomes: how do you create frameworks that enable exploration while maintaining standards?

The Soft Skills Advantage
When everyone has access to similar AI tools, what creates distinction? Humanity, authenticity, judgment, empathy, wisdom: the entirely human elements.

Key Insights from Daryl

💭 "I feel like I'm fighting an education battle. That's literacy. It's not technical, it's not procedural."
💭 "If a company does it right and allows democratization with simple guardrails, users have more autonomy, feel more in control, and stay connected to the process."
💭 "I think one of the necessary roles and powerful functions of a lawyer is to be some kind of translator."
💭 "The technology almost always outpaces regulation."
💭 "People will start actively putting humanity and authenticity first when they are looking for something."

The Reframing Question

"What is the impact of the work you're doing right now, and how can you improve or magnify that impact?"

Not "should we use AI?" but "what am I trying to accomplish, and could AI help me accomplish it better?" Purpose first. Tool second.

About Daryl Osuch

Daryl Osuch solves problems most organizations don't see yet. As Unit Manager of Legal Operations at JERA Co., Inc. in Tokyo, he automates workflows, implements generative AI, and helps legal teams understand what their technology actually does. Host of The Legal Ops Podcast and fluent in both law and code, Daryl's philosophy is simple: be both mechanic and driver. Know how it works, not just that it works.

Connect with Daryl:
The Legal Ops Podcast on Spotify
LinkedIn: linkedin.com/in/daryl-osuch
Oct 14, 2025 • 31min

007 | Human Centered AI: Validation Architecture, Not Validation Effort

Deloitte Australia delivered a $440,000 AI-assisted report. The client discovered fake citations, non-existent authors, and books that were never written.

This isn't about criticizing Deloitte; they're tackling what we're all facing: how do you validate AI output without destroying the speed advantage?

The Speed Paradox
AI generates a 100-page report in 3 hours. Human validation takes 2 weeks. You can't slow back to human speed (that defeats the purpose). You can't trust blindly (Deloitte proved that costs $440,000). So what's the answer?

In This Episode:
→ What actually broke at Deloitte (and why it's a process problem, not a technology problem)
→ Why LLMs are eloquence engines, not truth engines
→ The validation architecture we use for AI-assisted reports
→ How to build checkpoints that preserve the speed advantage
→ Why transparency about AI use becomes a competitive advantage
→ Managing AI agents vs. managing humans (completely different principles)
→ Four implementation guidelines you can use immediately

Key Insights:
The validation bottleneck is real. If you're reading every word, you're back to human speed with added risk.
Transparency must come first. The AI conversation happens before the project, not after someone finds hallucinations.
Speed without checkpoints is just risk. Build validation milestones throughout creation, not just at the end.

Our Approach:
- Declare sources first (set boundaries or you'll get books that don't exist)
- Cross-validate patterns, not sentences
- Build checkpoints throughout (like data packets: check key milestones, not every byte)
- Human expertise where it matters (evaluate output quality, don't proofread words)

Three Questions for Your Practice:
- What's your validation framework that doesn't require reading every word?
- Have you told clients HOW you use AI before they discover it themselves?
- Are you validating during creation or only after?

How you validate matters more than how much you validate. Deloitte paid $440,000 for this lesson publicly. Learn it here for free.

RESOURCES:
📊 AI Future Signals 2025 Report (with full methodology): https://www.designthinkingjapan.com/#futuresignals
Sep 25, 2025 • 40min

006 | Human Centered AI: We tested the "AI Conbini" (Real×Tech Lawson) at Takanawa Gateway

"This is the next generation AI-powered convenience store that will become the standard."

When Lawson and KDDI made this bold claim about their Real×Tech store in Tokyo, we had to see it ourselves. As AI implementation practitioners, we learn as much from ambitious attempts as we do from polished successes.

The press releases promised 14 AI cameras for personalized recommendations, intelligent avatars, robot food prep, and adaptive shopping experiences.

Unfortunately, what we found was no camera disclosure, an "AI avatar" that was a human on a video call, a branded Roomba, staff manually counting inventory beside computer vision equipment, non-interactive screens, and zero personalization.

We spoke English to the "smart avatar." It replied, "A little." That's when we realized we were talking to a person, not AI.

This isn't about criticizing Lawson or KDDI; they tackled an impossible challenge. Japanese convenience stores are already efficiency masterpieces. The promise gap between AI marketing and AI reality is widening across industries.

Three Critical Insights
1. Marketing promises can sabotage good work. Customers felt misled by "AI-powered" experiences that were actually human-powered, even though human solutions might be better.
2. Integration trumps innovation. 14 cameras don't automatically create personalized experiences. The hard work is connecting cameras to inventory systems, recommendation engines, and displays in ways that actually help customers.
3. Expectation management matters. When you promise "the future," customers expect something genuinely different from everywhere else.

The technology exists: facial recognition for greetings, real-time inventory tracking, gaze detection, automated checkout. The challenge isn't capability; it's system integration and user experience design.

Four Implementation Guidelines
1. Start with specific friction. Not "AI-powered store," but "What customer problems can technology solve? Long lines? Product discovery? Language barriers?"
2. Test quietly, announce loudly. Build it, validate it works, then tell people. Order matters.
3. Be honest about automation. Customers can handle knowing humans help remotely. They can't handle feeling deceived.
4. Under-promise, over-deliver. Surprise beats disappointment every time.

Lawson and KDDI deserve credit for pushing boundaries publicly. Most companies play it safe. But their experience reminds us that customer trust comes from honest, valuable experiences, not impressive press releases.

The future of retail will involve AI. But it'll be shaped by companies solving real customer problems, not showcasing impressive technology.
