
Law://WhatsNext: Who Pays for the Truth? The UK's Copyright Battle with Big Tech, with Matt Rogerson
🎙️This week Tom sits down with Matt Rogerson — Global Policy Director at the Financial Times and one of the more prominent and forceful voices in the UK press and publishing industry on the question of AI companies using copyrighted content without permission or payment.
The timing could hardly be more significant. We recorded this conversation on the day the House of Lords Communications and Digital Committee published what may prove to be the most consequential UK report on AI and the creative industries to date: AI, Copyright and the Creative Industries. The 85-page report draws on testimony from Google, Meta, Microsoft, OpenAI and dozens of creative industry bodies, and its conclusions could not be clearer: the UK's copyright framework is not outdated, the problems stem from widespread unlicensed use, and the government should rule out a commercial text and data mining exception entirely.
And just one week earlier, the FT helped launch SPUR — the Standards for Publisher Usage Rights coalition — alongside the BBC, The Guardian, Sky News and The Telegraph: a coalition not just defending the status quo, but getting on the front foot to build shared technical standards and licensing frameworks so AI developers can access quality journalism through rights-cleared channels.
What provoked this conversation was a pamphlet published by Public First, a UK policy consultancy, titled "Text & Data Mining and its value to the UK economy" — which called for a broad commercial exception to UK copyright law, extending the argument to cover AI inference as well as training. Matt's reaction on LinkedIn was characteristically direct, and it got us talking.
---
During our conversation, Matt dismantles several of the core narratives being advanced by AI lobbyists — the anthropomorphisation of models to normalise unlicensed use; the claim that licensing infrastructure is too hard to build; and the idea that the UK must weaken copyright to remain competitive. He makes a compelling case that the real opportunity lies not in capitulating to US hyperscalers, but in building sovereign AI models with transparent training data and proper licensing — pointing to the US's Allen Institute, whose models are co-funded by the government and Nvidia, as proof that this is already happening.
Matt highlights the infrastructure already being built to support fair licensing: Microsoft's Publisher Content Marketplace, the FT's existing commercial API access, and emerging thinking from writers like Florent Daudens on what a post-browser, agentic news economy could look like. The claim that it's "too hard" for AI companies to pay for content is not just wrong — it's being actively disproved by the market.
And we close on what may be the most consequential long-term argument of all: the slop spiral. If there is no economic incentive to produce high-quality journalism — because AI companies can take it for free — the supply of reliable information degrades. AI models trained on and retrieving from an increasingly polluted information environment produce worse outputs. Trust erodes. And we drift into a world where the information we consume is wholly dependent on the alignment of a particular model and the commercial interests of those administering it.
Matt makes the case that secure news and information supply chains could become a national security issue if this dynamic starts to accelerate.
---
If you enjoyed this conversation, please share it with someone or a community you feel would benefit from listening. If you have a moment more, tell us what resonated and what didn't, and rate the show (it helps us grow the audience and get great guests like Matt)!
---
For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
