

Law://WhatsNext
Tom Rice and Alex Herrity
How are leading practitioners leveraging emerging technologies and new ways of working to pursue their passions and objectives, and, as a by-product, what are the implications for the future of legal practice? Let's explore this together. What to expect:
- Focused conversations with leading practitioners; technologists and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
- Insights from adjacent industries that might inform our own
Episodes

Mar 27, 2026 • 34min
Legal Tech Trends with Peter Duffy (Q1 2026)
🎙️ Peter Duffy is back for our quarterly deep dive into the biggest stories from his ever-popular Legal Tech Trends newsletter (celebrating its recent 50th edition 🎉). This time around, the conversation is dominated by one name: Anthropic. Between a legal plugin that spooked public markets, a viral tweet showcasing a "Claude-native" law firm, and a principled stand-off with the US Defense Department that sent millions of users switching sides — it's been quite the quarter.

What else we dive into:
- Vibe coding hits legal — from weekend hackathons to working prototypes in 30 minutes. Peter explains why it's transforming ideation and prototyping, but flags the considerable leap from "amazing demo" to "enterprise-ready". Plus, Alex reveals his salmon regulation app "Branchly" is storming the charts over at vibecode.law.
- The privilege and compliance watch-outs — an SRA investigation into a solicitor uploading client docs to ChatGPT, a US ruling that use of consumer Claude waived attorney-client privilege, and judges struggling with where "AI" begins and ends. Shadow IT is alive and well.
- The LLM numbers blind spot — Peter's public service announcement: LLMs are not designed for numerical calculations, and asking them to do the maths is one of the easiest ways to trigger hallucinations (a toy illustration of keeping arithmetic outside the model follows below).
- The McKinsey security incident — a security researcher accessing 45 million+ internal chatbot messages. Not an AI-specific problem per se, but a timely reminder that vibe-coded tools and internal chatbots need proper security scrutiny — especially when you have client data and a reputation on the line.
- Harvey, Legora, and the question you shouldn't be asking — "Which one should I buy?" Maybe start with your problems, not the product. Talk to your users, define your requirements, understand the commercial value — then go to market with a structured evaluation.
---
Listen if: you want a grounded, hype-free take on the quarter that put legal AI firmly in the mainstream spotlight.
---
Rate, subscribe, comment, and share if you enjoyed this chat with Peter!
---
For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
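To make the numbers point concrete, here is a minimal sketch of the pattern Peter's warning points towards — the sketch is ours, not from the episode: have the model extract structured inputs from a document, then let ordinary deterministic code do the arithmetic. The fee-indexation scenario and values are hypothetical.

```python
# Illustrative sketch (ours, not from the episode): keep arithmetic out of the LLM.
# The model's only job is to extract structured inputs (fee, CPI %, years) from the
# contract text; a plain function then does the calculation deterministically.

def uplift_after_indexation(base_fee: float, cpi_percent: float, years: int) -> float:
    """Compound an annual fee by a CPI percentage over a number of years."""
    return round(base_fee * (1 + cpi_percent / 100) ** years, 2)

# Hypothetical usage with values an LLM has extracted from a contract clause:
print(uplift_after_indexation(base_fee=100_000, cpi_percent=3.2, years=3))  # 109910.48
```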

Mar 19, 2026 • 1h 28min
The Outsider Inside: Nick West on Rewiring the Law Firm
🎤 This week we sit down (for our first in-person episode) with Nick West — Partner and Chief Strategy Officer at Mishcon de Reya — who has spent two decades working at the intersection of law, technology and business model innovation.

Nick's path is one of the more unusual and instructive in the industry: competition lawyer at Linklaters, strategy consultant at McKinsey, product leader at LexisNexis, Managing Director of Axiom UK, and now the person responsible for technological transformation and R&D at Mishcon. He founded MDR Lab (one of the first legal tech startup incubators) and the MDR Group (a collection of specialist consultancy businesses that sit alongside, but separate from, the core Mishcon legal practice), built one of the industry's first in-house data science teams, and has overseen the firm's AI adoption journey from early experimentation through to commercial platform deployment. There are few people in the legal industry who've thought as deeply — or as practically — about how law firms actually work and how they might need to change.

The conversation is wide-ranging — we cover the full arc of Nick's career, the evolution of innovation culture inside a law firm, how Mishcon adopted AI (and what they got wrong along the way), the productivity question everyone's asking, what happens when clients start sending genuinely good AI-drafted documents, and the early "signals" for where the business model of law might be heading.
---
Connect with Nick West — Partner and Chief Strategy Officer at Mishcon de Reya
---
If you enjoyed this conversation please do share it with someone or a community who you feel would benefit from listening. If you have any more time, do tell us what resonated, what didn't, and rate the show (it helps us grow the audience and get great guests like Nick)!
---
For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

Mar 13, 2026 • 15min
Who Pays for the Truth? The UK's Copyright Battle with Big Tech with Matt Rogerson
🎙️ This week Tom sits down with Matt Rogerson — Global Policy Director at the Financial Times and one of the more prominent and forceful voices in the UK press and publishing industry on the question of AI companies using copyrighted content without permission or payment.

The timing could hardly be more significant. We recorded this conversation on the day the House of Lords Communications and Digital Committee published what may prove to be the most consequential UK report on AI and creative industries to date: AI, Copyright and the Creative Industries — an 85-page report drawing on testimony from Google, Meta, Microsoft, OpenAI and dozens of creative industry bodies, whose conclusions could not be clearer: the UK's copyright framework is not outdated, the problems stem from widespread unlicensed use, and the government should rule out a commercial text and data mining exception entirely.

And just one week earlier, the FT helped launch SPUR — the Standards for Publisher Usage Rights coalition — alongside the BBC, The Guardian, Sky News and The Telegraph: a coalition not just defending the status quo, but getting on the front foot to build shared technical standards and licensing frameworks so AI developers can access quality journalism through rights-cleared channels.

What provoked this conversation was a pamphlet published by Public First, a UK policy consultancy, titled "Text & Data Mining and its value to the UK economy" — which called for a broad commercial exception to UK copyright law, extending the argument to cover AI inference as well as training. Matt's reaction on LinkedIn was characteristically direct, and it got us talking.
---
During our conversation, Matt dismantles several of the core narratives being advanced by AI lobbyists — the anthropomorphisation of models to normalise unlicensed use; the claim that licensing infrastructure is too hard to build; and the idea that the UK must weaken copyright to remain competitive. He makes a compelling case that the real opportunity lies not in capitulating to US hyperscalers, but in building sovereign AI models with transparent training data and proper licensing — pointing to the Allen Institute, whose US model is co-funded by the government and Nvidia, as proof that this is already happening.

Matt highlights the infrastructure already being built to support fair licensing: Microsoft's Publisher Content Marketplace, the FT's existing commercial API access, and emerging thinking from writers like Florent Daudens on what a post-browser, agentic news economy could look like. The claim that it's "too hard" for AI companies to pay for content is not just wrong — it's being actively disproved by the market.

And we close on what may be the most consequential long-term argument of all: the slop spiral. If there is no economic incentive to produce high-quality journalism — because AI companies can take it for free — the supply of reliable information degrades. AI models trained on and retrieving from an increasingly polluted information environment produce worse outputs. Trust erodes. And we drift into a world where the information we consume depends wholly on the alignment of a particular model and the commercial interests of those administering it. Matt makes the case that secure news and information supply chains could become a national security issue if this dynamic starts to accelerate.
---
If you enjoyed this conversation please do share it with someone or a community who you feel would benefit from listening. If you have any more time, do tell us what resonated, what didn't, and rate the show (it helps us grow the audience and get great guests like Matt)!
---
For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

Mar 6, 2026 • 39min
AI Security, Agentic Risk & What Lawyers Need to Understand with Rok Popov Ledinski
We sit down with Rok Popov Ledinski — an independent legal AI and data consultant whose background spans high-security enterprise engineering through to advising law firms on their AI and security strategy. Our initial interest in Rok's work was sparked by his YouTube channel, where he's been producing sharp, accessible breakdowns of the real risks underpinning today's AI tools.

Within minutes, we're into a forensic dissection of Anthropic's Claude Cowork — the agentic tool pitched at non-developers that launched earlier this year. Rok walks us through the contradictions in Anthropic's own technical documentation: a tool demonstrated by its creators as a way to organise your desktop, while the same support pages advise against granting it access to sensitive local files. A tool marketed for running tasks autonomously in the background — while its activity isn't captured by audit logs. A tool whose safety guidance asks users to watch for "suspicious actions that may indicate prompt injections" — aimed at an audience that, as Rok points out, has largely never heard of prompt injections.

Rok explains, in terms accessible to non-technical listeners, how hidden instructions embedded in an innocuous document can hijack an AI agent into exfiltrating sensitive client data. His hypothetical attack vector for law firms is disarmingly simple: find lawyers on LinkedIn who are openly using Cowork, send a document to their publicly available email address containing concealed instructions, and let the agent do the rest.

But this isn't an anti-AI conversation. Rok is emphatic that these tools should be used — just not naively. Drawing on enterprise security frameworks from companies like Cisco, he advocates for a practical middle ground: map what your AI has access to, create sanitised copies of sensitive folders, scope permissions tightly, vet your MCP servers and plugins, and understand (physically, not just contractually) how data flows through your systems.

Key Takeaways
- The Cowork Paradox: Anthropic's own documentation reveals a tension between how Cowork is marketed (autonomous, background task execution) and how it should be used (limited permissions, no sensitive files, manual monitoring for prompt injections).
- Security attacks are now a "when", not an "if": unlike traditional cybersecurity breaches, prompt injection attacks exploit a fundamental limitation of large language models — they can't distinguish instructions from data. Research shows success rates as high as 90% for some proprietary LLMs. Claude is among the more resistant, but not immune.
- Practical Security for Legal Teams: Rok's actionable advice for in-house teams and law firms includes creating clean data environments separate from originals; using self-hostable workflow tools like n8n; scoping AI permissions to the minimum necessary; and conducting genuine due diligence on every plugin and MCP server before connecting it to your systems (a minimal sketch of the "sanitised copy" idea follows below).

Key References
- Rok's YouTube channel — where our interest in Rok's work began, and a recommended follow for anyone wanting to stay across the security dimensions of legal AI adoption
- Rok's LinkedIn — he hosts weekly live sessions every Saturday with a security expert specialising in air-gapped, offline AI deployments in regulated industries
- The Art of Modern Legal Warfare — a series Rok co-authors with former guest and friend of the show Anna Guo, and Sakshi Udeshi, covering vulnerability types specific to legal AI use cases

If you enjoyed this conversation please do share it with someone or a community who you feel would benefit from listening. If you have any more time, do tell us what resonated, what didn't, and rate the show (it helps us grow the audience and get great guests like Rok)!
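As a companion to Rok's advice, here is a minimal sketch (ours, not Rok's) of the "sanitised copy" idea: copy only whitelisted file types into a disposable workspace and point the agent at that workspace, never at the original client folder. The folder names, allowed suffixes, and helper function are hypothetical.

```python
# Illustrative sketch (ours, not Rok's): build a sanitised working copy of a client
# folder before pointing any AI agent at it. Only whitelisted file types are copied;
# the agent is granted access to the disposable workspace, never the originals.
import shutil
from pathlib import Path

ALLOWED_SUFFIXES = {".docx", ".pdf", ".txt"}  # scope to what the task actually needs

def make_sanitised_copy(source: Path, workspace: Path) -> list[Path]:
    """Copy only permitted files into a separate, disposable workspace folder."""
    workspace.mkdir(parents=True, exist_ok=True)
    copied = []
    for path in source.rglob("*"):
        if path.is_file() and path.suffix.lower() in ALLOWED_SUFFIXES:
            target = workspace / path.name
            shutil.copy2(path, target)
            copied.append(target)
    return copied

# Hypothetical usage: the agent only ever sees ./agent_workspace.
make_sanitised_copy(Path("clients/acme_matter"), Path("agent_workspace"))
```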

Feb 25, 2026 • 46min
AI Governance: Ethics, Agents & the Human Question with Catie Sheret, Oliver Patel & Peter Lee
🎙️ Alex and Tom step aside for this one — handing the mic to their friend Catie Sheret (General Counsel at Cambridge University Press & Assessment), who hosts a rich three-way conversation with Oliver Patel (Head of Enterprise AI Governance at AstraZeneca) and Peter Lee (Partner at Simmons & Simmons). Three very different vantage points — converging on the same question: how do you actually make AI governance work in practice?

What begins with a definitional exercise (what is AI governance, anyway?) quickly evolves. Oliver draws a sharp line between AI ethics, responsible AI, AI governance and AI safety as related but distinct disciplines — and makes a passionate case that governance is fundamentally change management, not compliance theatre. Peter describes the "golden thread" he sees in the best organisations: corporate philosophy flowing from the boardroom right down into the tools people use every day. Catie grounds everything in context — arguing that your principles only stick when they're anchored to what your organisation actually does: content IP at Cambridge, medical ethics at AstraZeneca, and so on.

The conversation builds through the practical mechanics — use case assessment, vendor oversight, committee structures, crisis preparation — before arriving at the question everyone's wrestling with: agentic AI. Peter frames it as a mindset shift from "can we trust the output?" to "what actions can this system initiate?" Oliver goes further: the fundamental logic of agentic AI, he argues, is to take the human out of the loop — and organisations need to confront that honestly rather than pretending otherwise.

There's a wonderful thread on human flourishing running throughout — Peter's insistence that philosophers have never been more important, Oliver's pride in AstraZeneca's "Thriving in the Age of AI" literacy programme, and a closing round of book recommendations that ranges from Richard Susskind's How to Think About AI to Jenny Odell's How to Do Nothing (Oliver's brilliantly contrarian pick about the importance of stepping away from screens entirely) to Governing the Machine by Ray Eitel-Porter, Paul Dongha and Miriam Vogel.

It's a masterclass in how to think about governance as something that enables rather than constrains — hosted with warmth and real expertise by Catie.

If you enjoyed this episode, please do share it with another friend, team or community who might also enjoy it! Please do let us know what resonated (by comment) and rate the show (if you haven't already)! We appreciate your time, attention and support!

For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) insights from leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis and visualisation of how AI is augmenting our potential.

Feb 11, 2026 • 41min
When Will Legal Vibe Like Code? with Chris Bridges & Matt Pollins
The vibe coding conversation in legal has gone full culture war: one side says they've built a billion-dollar startup in 10 minutes, the other says don't bother. The truth — as usual — is far more interesting than either extreme.

🎙️ This week we sit down with Chris Bridges (Co-Founder & COO, Tacit Legal) and Matt Pollins (Co-Founder & CPO, Lupl) — two legal technologists who live in the same small town in West Sussex and who've channelled that proximity into building vibecode.law, an open-source platform where the legal community can share, discover and upvote vibe-coded legal tech projects.

The platform launched just over a week before we recorded and already had 18 projects — from a SaaS inflation calculator for contract lawyers, to a Harvey for Mongolian law, to a tool that unlocks track changes when a passive-aggressive opposing lawyer has locked them down.

During our chat, we explore:
- Why vibe coding's real value is compressing the feedback loop between idea and prototype — not replacing developers
- The structural gap: how 25 years of developer tooling (linting, testing, documentation, standards) gives engineering-focussed AI tools a head start that legal tech can't shortcut
- Why the adversarial nature of law makes standardisation fundamentally harder than in software
- vibecode.law: what it is, the projects landing on it, and the product thinking behind building a two-sided community
- Responsible vibe coding, and why we're probably 6–12 months from a data exposure incident
- The T-shaped lawyer: curiosity as the defining skill for the next generation

Connect with our guests:
- Chris Bridges — tacit.legal | author of When will legal vibe like code
- Matt Pollins — agents.law | lupl.com

Check out vibecode.law to explore or submit your own projects.
---
If you enjoyed this episode, please like, subscribe, comment, and share! For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/

Jan 28, 2026 • 32min
Vibe Lawyering with Artur Serov
🎙️ This week we sit down with Artur Serov — a Senior Commercial Counsel working in-house across corporate, commercial, and AI compliance — who has been quietly vibe coding legal tech solutions that rival features in commercial platforms.

This is a practical, how-I-did-it episode. Artur walks us through his journey from first principles — the failed early experiments, the tools that unlocked progress, and the specific steps any curious lawyer could follow to start building. Artur shares his screen during our conversation to demo a Word add-in with features he couldn't find in commercial legal tech (party-aware context, risk appetite dials, AI-powered negotiation prep), and previews a more ambitious workspace prototype where AI retains memory across an entire transaction lifecycle. Since publishing, this prototype has evolved, and you can read more about that here.

Artur is candid about what's now possible: with Claude Opus 4.5 and Gemini 3, self-built solutions can get remarkably close to enterprise-grade. But he's equally honest about the remaining hurdles — deployment, maintenance, security — and his belief that a growing community of "vibe lawyers" will help solve them together.
---
What you might take from this conversation:
- The First Principles Path to Technical Fluency — how Artur went from zero coding experience to working prototypes, using Claude as a teacher and Google Antigravity as his development environment
- What's Missing from Commercial Legal Tech — why context is the killer feature, and how Artur built deal-aware AI that knows who you represent, what you're negotiating, and what risks you're willing to take
- The Workspace Vision — a prototype where AI memory persists across NDAs, partnership agreements, and every document in a transaction — with your playbooks and policies embedded as reference materials
- Why Building Makes You Better at Everything Else — from vendor negotiations to IT collaboration, how technical fluency transforms your effectiveness as in-house counsel
- How to Get Started — Artur's practical advice: a Claude subscription, Google Antigravity, and the willingness to ask "how do I do this?"
---
Connect with Artur: LinkedIn | Github
---
If you found this episode interesting, please tell us and do share it with a friend, colleague or community who might take something from it! For more, head to lawwhatsnext.substack.com for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis of how AI is augmenting our potential.

Jan 21, 2026 • 60min
Evals & Benchmarking Legal AI with Anna Guo
We sit down with Anna Guo — a Singapore-based lawyer, startup advisor, and founder of LegalBenchmarks.ai — who has quietly built one of the most rigorous practitioner-driven evaluation frameworks for legal AI tools in the industry. Her community now spans close to 900 legal and AI professionals. Her research has produced findings that challenge industry assumptions: that legal-specific AI tools don't always outperform general-purpose models, that accuracy isn't actually the top driver of lawyer adoption, and that in some drafting tasks, AI is already matching or exceeding human reliability.

This is a watch-don't-only-listen episode. Anna shares her screen throughout — running us through a live, double-blind benchmarking exercise where we rank outputs from legal AI, general-purpose AI, and human lawyers without knowing which is which (a toy sketch of that blinding step follows below). She also demonstrates how prompt injection attacks can bypass AI guardrails using techniques as simple as low-resource languages (such as Vietnamese) or ASCII encoding, surfacing security risks that become particularly acute as we move closer toward widespread agentic AI adoption.

What You'll Learn:
- The Three Dimensions of Tool Evaluation — why measuring accuracy alone misses the point, and how Anna assesses output reliability, output usefulness, and platform workflow support as distinct layers
- What Actually Drives Adoption — survey data revealing that lawyers prioritise context management and verification over raw accuracy when choosing AI tools
- Where Humans Still Win — high-judgment, context-sparse tasks requiring commercial reasoning remain firmly in human territory; routine, context-complete work is where AI excels
- Prompt Injection in Practice — live demonstrations of how attackers can trick AI models into revealing harmful information using low-resource languages and clever framing
---
Connect with Anna: LinkedIn | LegalBenchmarks.ai
---
If you found this episode interesting, please tell us and do share it with a friend, colleague or community who might take something from it! For more, head to lawwhatsnext.substack.com for: (i) focused conversations with leading practitioners, technologists, and educators; (ii) deep dives into the intersection of law, technology, and organisational behaviour; and (iii) practical analysis of how AI is augmenting our potential.
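To illustrate the double-blind idea Anna demonstrates, here is a toy sketch of our own (not LegalBenchmarks.ai's code): shuffle and anonymise the candidate outputs before reviewers rank them, and only reveal the label-to-source key once scores are in. The example outputs and function name are hypothetical.

```python
# Toy sketch (ours, not LegalBenchmarks.ai's): blind outputs before ranking so
# reviewers don't know which answer came from which source.
import random

outputs = {
    "legal_ai_tool": "Draft clause A ...",
    "general_purpose_llm": "Draft clause B ...",
    "human_lawyer": "Draft clause C ...",
}

def blind_for_review(outputs: dict, seed: int = None) -> tuple:
    """Return anonymised outputs plus a key that is kept aside until ranking is done."""
    rng = random.Random(seed)
    labels = [f"Output {i + 1}" for i in range(len(outputs))]
    sources = list(outputs)
    rng.shuffle(sources)
    blinded = {label: outputs[src] for label, src in zip(labels, sources)}
    key = dict(zip(labels, sources))  # revealed only after all rankings are collected
    return blinded, key

blinded, key = blind_for_review(outputs)
```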

Jan 7, 2026 • 30min
Vibe Coding a Doc Review Assistant with Anson Lai
Anson Lai, a commercial in-house counsel and independent developer, created a game-changing Word add-in for AI-assisted document review. He shares insights on his 'vibe coding' approach, using conversational AI to enhance development decisions. Anson demonstrates the tool's user-friendly interface that simplifies editing and reviewing without costly solutions. He discusses the importance of security with a bring-your-own-key design and reveals plans to open-source the add-in, inviting community collaboration for future innovations.

Dec 30, 2025 • 19min
Our First Year in Review 2025
Welcome to Law://WhatsNext — the show where we catch up with leading practitioners (lawyers, technologists, educators and more) who are leveraging emerging technologies to pursue their passions and objectives, and as a by-product we get nerdy trying to understand the implications for the future of legal practice (and, more broadly, knowledge work). To keep up with the pace of change and developments, subscribe to this channel or to our newsletter at: https://lawwhatsnext.substack.com/
---
In this episode, we've distilled a year of extraordinary dialogue into one 20-minute highlights reel. We've spent 2025 in conversation with legal industry pioneers — the general counsels, technologists, and educators redefining how law is practised, learned, and delivered. These are some of our standout moments from a series of compelling global conversations.

What made the reel (this could honestly be a multi-part series):

Part 1: Hype vs. reality — is AI progress real?
Kevin Cohn (the soon-to-be CEO of Brightflag) provokes that the trough of disillusionment is coming, but that shouldn't obscure the reality that the value of the skills and expertise we used to prize so highly is dramatically eroding.

Part 2: Agency, authenticity & trust
Dana Rao (the former GC & Chief Trust Officer at Adobe) demonstrates that we can be the agents (rather than mere subjects) of positive change, and we loved learning more about the work he and his team at Adobe invested to build the Content Authenticity Initiative (to counter the ever-increasing proliferation of deepfakes).

Part 3: Leading in disruptive times
Jessica Block (EVP at Factor) used a recent read (Notes on Complexity by Neil Theise) as the lens through which she explained the importance of cultivating the right environment (over systems) for the emergent properties of transformational change to "bubble" up.

Part 4: Evaluating what's actually working
Sigge Labor (President at Legora) explained the work Legora performs to understand frontier model performance, how they react to new developments, and how they assess leaps in capabilities. We anticipate that in 2026 more and more legal teams and firms will invest in their evaluation capabilities, and this conversation (which accompanied the release of GPT-5 in the summer) is one to check out if you haven't already.

Part 5: The skills we might lose
Dan Hunter (Executive Dean, The Dickson Poon School of Law, King's College London) talked of the "terrifying bind" we encounter as we offload more and more cognitive work to compute — the work may get easier and more efficient, but our cognitive development doesn't replicate the resilience built by the old training pathway. He has immediate concerns in the classroom and anticipates a coming gap in law firm talent pipelines.

These are just glimpses. Check out our Spotify, Apple Podcasts, or Substack pages for the full conversations. Thank you for listening, supporting, and championing the show. We wish you a happy new year — Series 2 is coming soon 👀


