PING

APNIC
Apr 1, 2026 • 27min

What does “BCP” really mean?

In this episode of PING, APNIC Chief Scientist Geoff Huston discusses an emerging concern about how the IETF interprets the concept of ‘Best Current Practice’ (BCP).

In his previous episode, Geoff explored emerging questions around DNS provisioning over IPv6, including experimental observations on its performance characteristics. Towards the end of the discussion, we turned to how the Internet Engineering Task Force documents standards and protocols through Best Current Practice (BCP) publications.

In the traditional view, BCPs are intended to provide clear, implementation-focused guidance that remains neutral across different operational contexts. This includes networks of all scales — from home environments to enterprise, national, and international infrastructure. These systems rely on a shared understanding of protocol behaviour, and BCPs help define the set of practical, interoperable choices available for deployment today.

To be designated as a BCP, an IETF RFC undergoes additional review and scrutiny. This is similar to the ‘Standard’ (STD) designation, which elevates an RFC beyond a general publication and gives it greater weight in operational and procurement contexts. While RFCs are not legally binding, they are frequently referenced in national regulations, procurement frameworks, and service specifications. In contrast, Experimental and Informational RFCs explicitly indicate that they are not intended for such use. BCPs, along with Proposed Standards and Standards, carry greater normative weight due to the more rigorous processes required for their approval, making them more suitable as references for real-world deployment and decision-making.

An emerging perspective suggests that BCPs may be treated as aspirational as well as definitional — potentially describing behaviours that are not yet widely deployed, or in some cases not deployed at all. This represents a subtle shift from the traditional view of BCPs as documenting established, operational practice.

This tension has surfaced in discussions around the DNS/IPv6 draft mentioned in the earlier PING episode. The conversation involves multiple parts of the IETF process, including the DNSOP Working Group, the DNS Directorate (which reviews DNS-related work across Working Groups), and the Internet Engineering Steering Group (IESG), which is responsible for final document review and status approval before publication as an RFC.

These discussions reflect an evolving question: Should BCPs remain a record of established operational consensus, or can they also serve to guide and shape future practice? Advocacy is an important part of the IETF process, but it is important to distinguish between advocating for a new path and specifying what the current best choice is in the specification of system behaviour.

Read more about the RFC process and document status on the IETF website:
RFC statuses
The Internet standards process
The role of the Internet Engineering Steering Group (IESG) in the standards process
“Not All RFCs are Standards” (RFC 1796)
Mar 18, 2026 • 27min

bgproutes.io: A next-generation BGP data collection platform

This episode of PING features Thomas Alfroy and Thomas Holterbach from the University of Strasbourg, talking about bgproutes.io, a new approach to BGP data collection and analysis.

We've featured bgproutes.io on PING before, when we discussed GILL and DFOH with Professor Cristel Pelsser from Louvain University. At that stage, the project was at an early stage, and we focussed on the machine learning approaches used to select the "Most Valuable Vantage Point" (MVP) among the available data sources.

This time, the two Thomases discuss the operational deployment of the service: how they have designed the system to provide fast visibility into a three-month window of data, with an API for selecting prefixes and origin ASes of interest, to show the BGP transactions seen in the wild. They have been designing "dashboards" to show both the data and a sense of what logic determined the inferences made about it.

bgproutes.io has been written to process the newer BGP Monitoring Protocol (BMP), which provides visibility into the discrete states of the individual BGP speakers that peer at the BMP collection point. At an IXP, this means that a single feed can supply 50 or more distinct views of BGP, which has permitted the project to grow to over 300 points of view worldwide.

The service is complementary to the University of Oregon Route Views and RIPE RIS projects, and includes data from these sources along with PCH and CGTF.

Thomas Alfroy presented at the SIGCOMM meeting in Sydney, where the system was described in the 2024 SIGCOMM ‘best paper’ award-winning research: “The Next Generation of BGP Data Collection Platforms”. Thomas Holterbach is no stranger to the APNIC community, having spent time at the IIJ Research Laboratory in Tokyo.

Read more about bgproutes.io on the APNIC Blog and on the web:
bgproutes.io (project website)
bgproutes.io: A next-generation BGP data collection platform (APNIC Blog)
[Podcast] DFOH, MVP, and GILL: New ways of looking at BGP
Mar 4, 2026 • 53min

Measuring the use of DNS over IPv6

In this episode of PING, APNIC Chief Scientist Geoff Huston discusses running advertising-based experiments and a problem of interest in the modern DNS.

The DNS fundamentally requires end users, their chosen resolver provider, and the authoritative servers for the names they query to cooperate in a coordinated exchange over IP protocols to answer DNS questions. The specifics of how these queries are encoded and transmitted become complex very quickly, but a particular issue is emerging in how we define, in normative and strongly binding terms, the way the protocol is expected to behave. This will shape future deployment decisions, implementation choices, and operational dependencies.

The question centres on the use of IPv6 within the DNS ecosystem as a whole. Can we yet say that IPv6-only DNS can be relied upon in operational practice? And if so, should that position be written into the guidance an RFC may define, were it to be elevated to the status of a Best Current Practice (BCP)?

Geoff is exploring how to measure this by exploiting a DNS model known as ‘glueless’. In this approach, the additional ‘glue’ records that are typically passed around behind the scenes to keep DNS resolution working are not provided by the authoritative server to the resolver. As a result, the resolver is forced to issue further queries, which can in turn be constrained to use IPv6 only.

These queries are conducted without many of the usual measurement artefacts — such as error introduced when users close a browser session prematurely, or when attention drifts away from the web page that triggered an advertisement-based test.

Some interesting variances emerge when you look at this data by geographic region and origin AS.

Read about Geoff's measurement of DNS over IPv6 on the APNIC Blog:
Measuring The Use Of DNS Over IPv6
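The resolution mechanics behind the ‘glueless’ technique can be sketched in a few lines. This is an illustrative model only, not APNIC Labs' code: all zone data and addresses are invented, and a real resolver does far more. It shows why withholding glue forces the resolver into a second, separately controllable query — the query the experiment can constrain to IPv6.

```python
# Illustrative sketch (invented data): why a "glueless" delegation forces a
# resolver to issue an extra query, which a measurement can then constrain
# to IPv6-only transport.

# A conventional delegation ships glue: the NS name AND its address.
GLUED = {"child.example.": {"NS": "ns.child.example.", "glue_AAAA": "2001:db8::53"}}

# A glueless delegation names a server outside the delegated zone, so no
# glue can be attached; the server's address must be resolved separately.
GLUELESS = {"child.example.": {"NS": "ns.other-zone.example.", "glue_AAAA": None}}
OTHER_ZONE = {"ns.other-zone.example.": {"AAAA": "2001:db8::64"}}

def resolve(delegations, target="child.example."):
    """Return the child zone's server address, and the queries needed."""
    queries = ["NS " + target]
    deleg = delegations[target]
    if deleg["glue_AAAA"] is not None:
        return deleg["glue_AAAA"], queries  # glue present: done in one step
    # Glueless: a second query is required -- this is the one the
    # experiment can force over IPv6 to test IPv6-only resolution.
    queries.append("AAAA " + deleg["NS"])
    return OTHER_ZONE[deleg["NS"]]["AAAA"], queries

addr, q = resolve(GLUELESS)
print(q)  # two queries: the delegation lookup, then the server address
```

Because the second query names a server the experimenter controls, its arrival (or absence) over IPv6 is the observable signal.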
Feb 18, 2026 • 21min

Internet measurement in Thailand

This episode of PING features two members of the Thai academic and research community and was recorded last year at IETF 122 in Bangkok.

With a population of more than 70 million, Thailand has around 80 publicly funded universities and a further 70 or more private institutions, and undertakes substantial research in telecommunications and computing. A leading example is the Asian Institute of Technology (AIT), which has run the AINTEC conference across the Asia Pacific. Both of our guests on this episode are associated with AINTEC.

First, we hear from Sukumal Kitisin of Kasetsart University, a state-funded institution. She has been working on an Internet measurement programme for the Thai Consumer Council (TCC). The project leverages well-known tools such as Ookla Speedtest and OpenSignal to capture end-user experiences of mobile Internet services in Thailand. These measurements support modelling of broadband speeds and service quality, which the TCC can then use to present consumer evidence to the national telecommunications regulator, the National Broadcasting and Telecommunications Commission (NBTC). While we are often accustomed to discussing measurement using Atlas probes and other ‘insider’ approaches, this work represents a consumer-friendly, simple, and defensible method for capturing the lived experience of real-world users.

Second, we hear from Adisorn Lertsinsrubtavee, Director of the Internet Education and Research Laboratory (InterLab) at AIT. Adisorn describes a decade-long measurement effort known as HAZEMON, which focuses on low-bandwidth Internet of Things (IoT) devices used to measure suspended particulate matter such as smoke and fumes from diesel and charcoal fires, as well as carbon monoxide and carbon dioxide.

Thailand sits at the heart of a South East Asian climate system that is heavily affected by forest fires and persistent smog, with serious health implications. PM2.5 particles can penetrate deep into the lungs and cause lasting respiratory and other health problems as they are absorbed into the body. Monitoring these levels enables the project to model pollutant density, identify significant events such as forest fires, and support on-the-ground responses by firefighting teams and health professionals.

The measurement system uses small, low-energy computing platforms with off-the-shelf particulate sensors, integrated with low-bandwidth radio systems and powered by solar or battery sources where mains electricity is unavailable. These units are deployed across forests and urban areas, with data federated back to AIT for analysis.

For more information on these projects:
Internet Speed Measurements in Thailand (ISOC Pulse presentation, 2025)
SEA-HAZEMON (project website)
SEA-HAZEMON (AINTEC22 conference paper)
Feb 4, 2026 • 58min

BGP in review for 2025

In this episode of PING, APNIC Chief Scientist Geoff Huston returns with his annual review of BGP, reflecting on developments across 2025. Geoff has been publishing this year-in-review analysis of BGP dynamics for more than a decade, and this time he has uncovered some genuinely surprising shifts. His 2025 analysis has been published in two parts on the APNIC Blog.

Border Gateway Protocol (BGP) is the mechanism by which network operators announce their Internet address space to the rest of the world and, in turn, learn about the addresses announced by others. Operators participating in the global default-free zone receive all publicly announced routes, each expressed as an IP prefix and associated with its originating Autonomous System Number (ASN). Every BGP speaker has a unique ASN, and all routing information is exchanged and interpreted through this fundamental identifier. In effect, the ASN is the basic unit of interdomain routing.

BGP also carries path information that describes how routing announcements traverse the network. This data informs routing policy decisions — which paths to prefer, and through which commercial or technical relationships. While the protocol itself is well understood, the system as a whole is anything but simple. When more than 100,000 ASes are continuously exchanging routing information, complexity is unavoidable.

Speaking BGP is about telling things and learning things, but it's also about deciding what to do with what has been learned. This is the work behind a router: holding all of the information and performing routing decisions on it, so the ‘size’ of the information shared and learned has a direct impact on the ‘cost’ of operating as a BGP speaker (cost here ultimately means memory and CPU). For most of the Internet's history, BGP growth has been relentless, forcing operators to continually ask whether their current routing infrastructure can accommodate future growth.

All technology adoption follows a life cycle, often referred to as the ‘technology adoption curve’. New technologies start out expensive and scarce, become cheaper and widely adopted, and eventually reach a point of saturation where growth slows and replacement becomes the dominant driver. For much of its existence, the Internet has remained firmly in the rapid growth phase of this curve, with sustained increases in users, networks, and routing information.

Geoff has detected changes in the pace of growth for both IPv4 and IPv6, which suggest that the underlying economics of investment in the Internet, and growth in customers, have reached a saturation point. We are entering a time where BGP growth may not have the same dynamics we've been used to, and questions about capital investment in BGP routing and the underlying Internet addressing are not the same as they once were.

Read about BGP and Internet addressing in 2025 at the APNIC Blog:
BGP Updates in 2025 (Geoff Huston, APNIC Blog, January 2025)
BGP in 2025 (Geoff Huston, APNIC Blog, January 2025)
IP addresses through 2025 (Geoff Huston, APNIC Blog, January 2025)
Jan 21, 2026 • 31min

NITK Students at IETF: Fresh Minds for standards development

Welcome back to PING for 2026 and season 6.

This time on PING, we have a pair of interviews with students from the National Institute of Technology Karnataka, Surathkal (NITK), recorded last year at IETF 122. This is the second time we've heard from NITK students. We previously heard from Vanessa Fernandes and Kavya Bhat when they attended IETF 119 in 2024.

NITK is a large, technically focused university located on India's south-western coast in the state of Karnataka. The state is home to major technology hubs, such as Bengaluru and Mangaluru, alongside institutions like NITK, which play a key role in developing technical talent. Against this backdrop, it is unsurprising that NITK students show a strong interest in network technologies and Internet protocol development.

Dr Mohit Tahiliani, Associate Professor at NITK, has led a multi-year program involving undergraduate, postgraduate, and postdoctoral researchers to engage with emerging Internet standards. Through this program, participants explore new ideas, contribute code, and take part in IETF hackathons and Working Group activities. This work has been supported in part by the APNIC Foundation.

Last time with Vanessa and Kavya, we explored NITK's multi-year campus IPv6 deployment, which has been underway for some time. That work has included direct engagement with the IETF, with Dr Mohit Tahiliani's students attending alongside Nalini Elkins, who is involved both in the IPv6 deployment at NITK and in IPv6 standards work within the IETF. Since then, both students have gone on to work in networking roles or to pursue further study, reflecting the longer-term impact of sustained involvement in operational and standards-based Internet engineering.

This time, we've got two different projects and nine students to hear from.

The first group is Rati Preethi Subramanian, Shriya Anil, Mahati Kalale, Anuhya Murki and Supradha Bhat, who explored fair queuing disciplines: FQ_CoDel, a derivative FQ_CoDel++, and a newly proposed model, FQ_PIE. They worked with the ns-3 network simulator and CCPerf, exploring how these queueing disciplines compare, and discussed their project with me at IETF 122.

The second group is Vartika T Rao, Hayyan Arshad, Siddharth Bhat and Bharadwaja Meherrushi Chittapragada, who looked at the YANG data model in the network management space, and at more efficient ways to manage data coming out of networking systems using YANG. They wrote a producer-consumer model in Python, and explored time-series databases, using interface packet-count collections in CBOR encoding as an example YANG dataset.

Finally, I spoke with Dr Mohit Tahiliani, who has been leading this project. He is strongly committed to bringing new and younger voices into IETF work, recognizing the value of exposing students to real-world protocol development early in their careers. This experience benefits participants by grounding their learning in practical standards work, while also helping the IETF engage with new contributors who may return to protocol development in the future. This sustained engagement has already had tangible outcomes: the students involved have gone on to roles in the ICT sector or to further academic study, demonstrating the long-term value of this collaborative model.

Read more about the NITK IETF work on the Internet Society website, YouTube, and the IETF archives:
A Journey from Surathkal to the IETF (IETF Blog)
IETF CCWG at IETF 122 Bangkok (YouTube video)
IETF TSVWG at IETF 122 Bangkok (YouTube video)
CBOR Encoding for HTTPS-Based Transport for YANG (IETF 122 Presentations Archive)
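The producer-consumer pattern the second group used can be sketched with Python's standard library. This is an illustrative sketch, not the students' code: the interface name and sample counts are invented, and a real pipeline would feed a time-series database rather than an in-memory dict.

```python
# Illustrative producer-consumer sketch (invented data): a producer emits
# interface packet-count samples, loosely modelled on a YANG interfaces
# counter, and a consumer drains them into an in-memory time series.

import queue
import threading
import time

def producer(q, samples):
    # Emit (timestamp, interface, packet-count) tuples, then a sentinel.
    for count in samples:
        q.put((time.time(), "eth0", count))
    q.put(None)

def consumer(q, series):
    # Drain the queue into a per-interface series until the sentinel.
    while True:
        item = q.get()
        if item is None:
            break
        ts, ifname, count = item
        series.setdefault(ifname, []).append(count)

q = queue.Queue()
series = {}
t1 = threading.Thread(target=producer, args=(q, [100, 250, 410]))
t2 = threading.Thread(target=consumer, args=(q, series))
t1.start(); t2.start(); t1.join(); t2.join()
print(series)  # {'eth0': [100, 250, 410]}
```

Decoupling the collector from the store this way lets either side run at its own pace, which is the point of the model when telemetry arrives in bursts.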
Dec 10, 2025 • 54min

Going Dark: measurement when the Internet hides the detail

In the final podcast for 2025, APNIC Chief Scientist Geoff Huston discusses the problem of independent measurement in an Internet that is increasingly “going dark”.

Communication has always carried a risk of snooping, and a matching body of work to enhance privacy: from the simplest ciphers used in ancient times, techniques for hiding and discovering messages, and attempts to prevent and detect intrusion into the mail, to the adoption of telegraph codes, the cutting of telegraph wires in wartime (to force messages onto radio, where they could be listened to), and the development of modern encryption algorithms, typically built on the public-private keypair model. There has always been a story of “attack” and “response” in how we communicate privately.

Aside from matters of state security, banking and finance at large depend on a degree of privacy, and now require it under legislation to enable the use of credit card information online. Many other contexts carry an assumption of privacy and use technology to try to preserve it. Fundamentally, individuals in their use of the Internet are entitled to expect a level of privacy where the state permits it.

The publication of RFC 7258, “Pervasive Monitoring Is an Attack”, in 2014 formalised a belief that the intrusion of third parties into a communication between two ends demanded a technology response to exclude them, where possible. Protocol designers and Internet engineers took up the challenge. This position led, over time, to a marked increase in the adoption of privacy-enhancing protocol features. For example, the web moved from HTTP-denoted URLs to HTTPS, where the content is protected by the Transport Layer Security (TLS) encryption protocol, which now overwhelmingly predominates.

However, significant aspects of Internet communications still “leak” information to third parties. Between an individual and a web service lie their provider, unknown numbers of intermediate providers, and typically a content distribution system hosting a local copy of the website, all of whom have opportunities to see and understand what is being done, and by whom. In particular, the DNS typically exposes the name and address of the site being connected to, across all kinds of protocols (not just the web), and exposes it to unknown intermediary systems as the DNS lookup is processed.

In response, services are emerging which break the DNS lookup into dissociated parts: what is being looked for, and who is looking for it, using intermediary services which may know one, but not both. Questions are seen to be asked, but who is asking is now hidden; if you know who is asking, you don't know what they are asking for.

Combined with newer network protocols like QUIC, which imposes a strong end-to-end encryption model that even hides inter-packet size and timing information (another form of leak which can be used to reconstruct what kind of traffic is flowing), it has become increasingly hard for an independent researcher to see inside the network: it's going dark.

Geoff explores the nature of privacy in the Internet at large, and how APNIC Labs gets around this problem with its measurement system.

PING will return in January 2026 with another season of episodes. Until then, enjoy this final recording of 2025, and see you online in the new year.
Nov 26, 2025 • 38min

Adjusting for data source bias in Internet Measurements

This time PING features Emile Aben from the RIPE NCC R&D Department. Emile is a Senior Research Engineer, and for over a decade and a half has been working on Internet measurement at the RIPE NCC, in the Atlas system and in the RIPE RIS BGP data collection.

Emile and his collaborator Romain Fontugne from IIJ Labs in Tokyo have been exploring a model of the influence that different ASes have on global connectivity in BGP, based on the impact they have on other ASes' transit choices. They call this “AS Hegemony”, and Emile has been using it to adjust for sample bias in the data collected by RIPE RIS and the Atlas network. This approach to re-balancing the sources helps Emile to understand changes in network topology and routing under rapid shocks like cable cuts, and he has been applying it to the recent spate of cable outages in the Baltic and around Africa, and to the power outage on the Iberian Peninsula.

Emile has also been looking at new ways of holding and visualising data. His RIPE NCC colleague Ties de Kok has explored the use of “Parquet” as a data abstraction tool, which has allowed Emile to perform rapid analysis and experiment with new 3D data visualisations, rendered in-browser.

Read more about AS Hegemony and the new data visualisations on the web:
How RIS Saw the Iberian Power Outage (RIPE90 plenary presentation, May 2025)
A Deep Dive Into the Baltic Sea Cable Cuts (RIPE Labs, December 2024)
AS Hegemony: A Robust Metric for AS Centrality (SIGCOMM 2017 poster)
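The core of the AS Hegemony idea can be sketched in a few lines. This is a simplified reading of the metric from Fontugne et al.'s SIGCOMM work, with invented data: for a target AS, compute per vantage point the fraction of observed AS paths that traverse it, then take a trimmed mean across vantage points so that over- or under-represented viewpoints do not dominate the score.

```python
# Simplified AS Hegemony sketch (invented data, after Fontugne et al.):
# per-vantage-point fraction of AS paths traversing the target AS,
# then a trimmed mean across vantage points to damp sampling bias.

def hegemony(paths_by_vp, target, trim=0.1):
    """paths_by_vp: {vantage_point: [AS path as a list of ASNs, ...]}"""
    fractions = []
    for paths in paths_by_vp.values():
        seen = sum(1 for path in paths if target in path)
        fractions.append(seen / len(paths))
    fractions.sort()
    k = int(len(fractions) * trim)  # drop the extremes on each side
    trimmed = fractions[k:len(fractions) - k] or fractions
    return sum(trimmed) / len(trimmed)

paths_by_vp = {
    "vp1": [[64500, 64510, 64520], [64500, 64510, 64530]],
    "vp2": [[64501, 64510, 64520], [64501, 64540, 64530]],
    "vp3": [[64502, 64510, 64520]],
}
print(round(hegemony(paths_by_vp, 64510), 2))  # 0.83
```

The trimming step is what makes the metric robust to a vantage point that sees an unusually skewed slice of the routing table, which is exactly the sample-bias problem described above.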
Nov 12, 2025 • 52min

The Realpolitik of undersea cables

In this episode of PING, APNIC Chief Scientist Geoff Huston explores the complex landscape of undersea cables. Cables have always carried a component of strategic interest: communications, and snooping on communications, have been constants since writing was invented, and the act of connecting two independent nation states by a telegraph wire invoked questions of ownership and jurisdiction right from the start.

After the initial physics of running a long-distance wire to make an electric circuit was worked out, telegraph services became a vital part of a state's economic and information-gathering processes. This is why, at the beginning of World War 1 and again in World War 2, the submarine cables linking Europe to the world were cut by the British Navy: forcing communications flows onto radio meant it was possible to listen in and, with luck (and some smart people), decode the signals.

Modern-day fibre optic communications are no different in this regard. Many incidents of cable cutting have simple explanations: not all the paths subsea cables run through are especially deep, and in shallow waters near landfall, with lots of fish, trawlers cause a lot of damage. But there is now good reason to believe that state actors are also disrupting fibre communications by breaking links, and there is a strong trend to direct which sources of equipment (from the physical fibre up to the active routing systems) are used for a landfall into any given economy. This in turn is influencing the flow of capital, and the paths taken by subsea fibre systems, as a result of these competing pressures.
Oct 29, 2025 • 37min

Greasing the wheels

In this episode of PING, Shumon Huque from Salesforce discusses how protocols with extensible flag fields can benefit from regular testing of the values possible in the packet structure. This technique is known as "greasing", and carries the strong metaphorical sense of "greasing the wheels" to ensure future uses aren't blocked by mistaken beliefs about the possible values.

Intermediate systems (so-called "middleboxes") have to try to identify "risky" packet flows, and one of the mechanisms they use is to treat unexpected values in known packet flows as possibly dangerous. This is an over-simplistic approach, and risks "ossifying" a protocol into the range of values actively in use today. Protocols usually include extra potential values for flag fields, settings, options and the like, and these frequently have a large range of "reserved" values held in trust in an IANA registry for future use. Greasing is a proposed mechanism to test out some of these values and see what happens "on the wire" for the protocol in question.

Shumon and his co-author and collaborator Mark Andrews from ISC have been applying the greasing model to the DNS, and we talked about its history in other protocols, and how in practice greasing can be applied on the global Internet.

Read more about Shumon, Mark and Roy Arends' greasing activity on the web:
DNS Grease (IETF draft, in the IETF Datatracker)
The TLS DNSSEC Chain Extension (IETF DANE WG, IETF RFC)
DELEG Testing Report (with Roy Arends, DNSOP WG interim meeting presentation, IETF)
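The best-known existing use of greasing is in TLS, where RFC 8701 reserves a fixed set of two-byte values that clients deliberately sprinkle into handshakes; a correct peer must ignore them rather than fail. The sketch below generates that value set. It illustrates the general principle only; the DNS greasing draft discussed in this episode applies the same idea to DNS fields, not these TLS values.

```python
# The GREASE principle, using the TLS value pattern from RFC 8701:
# 16 reserved two-byte values (0x0A0A, 0x1A1A, ..., 0xFAFA) that clients
# inject at random, so peers stay tolerant of unknown values.

import random

def tls_grease_values():
    """Return the 16 reserved GREASE values: both bytes equal, low nibble 0xA."""
    return [(b << 8) | b for b in (0x0A + 0x10 * i for i in range(16))]

def pick_grease():
    """Choose one at random, as a client would when composing a hello."""
    return random.choice(tls_grease_values())

vals = tls_grease_values()
print([hex(v) for v in vals[:3]])  # ['0xa0a', '0x1a1a', '0x2a2a']
```

Because the injected values vary from handshake to handshake, a middlebox that rejects "unexpected" values breaks visibly and immediately, rather than silently ossifying the protocol.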
