8 min read

What Is MTBBC and Why Your Team Should Track It

Mean Time Between Breaking Changes is the missing engineering metric for API-first teams. Here's how to calculate it, what good looks like, and how to improve it.

Engineering teams have gotten remarkably good at measuring what happens after things go wrong. Mean Time To Recovery (MTTR) tells you how fast you bounce back from incidents. Mean Time To Failure (MTTF) tells you how long your systems run before they break. DORA metrics — deployment frequency, lead time, change failure rate, and MTTR — give you a comprehensive picture of your delivery pipeline's health. But there is a glaring gap in this measurement landscape: nobody is tracking the metric that most directly predicts integration stability.

How often do your API contracts break?

We call this metric MTBBC: Mean Time Between Breaking Changes.

Defining MTBBC

Mean Time Between Breaking Changes is the average time interval between commits, merges, or releases that introduce a breaking change to an API contract. It is measured in days for most teams, or in hours for high-velocity organizations shipping multiple times per day.

A "breaking change" in this context is any modification to an API's interface that would cause existing consumers to fail without updating their integration. This includes removed or renamed fields, type changes on existing fields, new required parameters, removed endpoints, changed response status codes, altered authentication requirements, and modified error response structures. If a consumer's working integration would stop working after the change, it counts.

MTBBC is not a measure of how many changes you ship. It is a measure of how often your changes break the contract your consumers depend on.

Why MTBBC Matters

It is a leading indicator, not a lagging one

Most reliability metrics are reactive. MTTR tells you how fast you recovered from the last incident. Change failure rate tells you what percentage of your deploys caused problems. These are valuable, but they describe the past. MTBBC is predictive. A declining MTBBC — breaking changes happening more frequently — tells you that integration failures are coming, whether or not they have materialized yet. Consumer teams may be silently absorbing the pain, papering over issues with defensive coding, or they may simply not have deployed their side yet. The breaking changes are already in production, waiting to detonate.

It correlates directly with consumer trust

Every API is a promise. When you expose an endpoint, you are telling consumers: this is the contract, build on it. Partners, internal teams, and third-party developers build confidence when breaking changes are rare, well-communicated, and intentional. When breaking changes are frequent and accidental, trust erodes. Teams start pinning to old versions, wrapping your API in defensive layers, or — worst case — looking for alternatives. MTBBC gives you a number that reflects this trust dynamic.

It reveals process health

A declining MTBBC is a signal that something in your engineering process is degrading. It might indicate that code reviews are not catching contract-breaking changes. It might mean that developers lack awareness of which parts of the codebase constitute the API surface. It could signal growing technical debt — rushed refactors that inadvertently alter response shapes. Or it could reflect a team that has grown beyond its coordination capacity, where the left hand no longer knows what the right hand is shipping. Whatever the root cause, MTBBC makes the symptom visible.

It is actionable

A metric like "number of production incidents" conflates dozens of potential causes — infrastructure failures, configuration errors, capacity issues, application bugs, and yes, breaking API changes. MTBBC isolates one specific, improvable failure mode. If your MTBBC is declining, you know exactly what kind of problem to solve: contract discipline. That specificity makes it far more useful for driving targeted improvements.

How to Calculate MTBBC

The formula is straightforward:

MTBBC = Total time period / Number of breaking changes detected

For example, if your team detected 6 breaking changes over a 90-day period:

MTBBC = 90 days / 6 breaking changes = 15 days

This means that, on average, your API contract breaks every 15 days. A consumer integrating with your API can expect a breaking change roughly every two weeks.

For more granular tracking, you can calculate MTBBC per service or per endpoint. A platform-wide MTBBC of 20 days might mask the fact that your payments API has an MTBBC of 60 days while your user management API breaks every 5 days. Per-service MTBBC helps you identify which surfaces need the most attention.
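The calculation above, including the per-service breakdown, is simple enough to sketch in a few lines. This is a minimal illustration assuming you already have a log of detected breaking changes tagged by service; the service names and counts here are hypothetical:

```python
from collections import Counter

def mtbbc(period_days: float, breaking_changes: int) -> float:
    """Mean Time Between Breaking Changes over an observation window."""
    if breaking_changes == 0:
        return float("inf")  # no breaks detected in the window
    return period_days / breaking_changes

# Hypothetical log: 6 breaking changes detected over 90 days, by service.
events = ["payments", "users", "users", "users", "payments", "users"]

overall = mtbbc(90, len(events))  # 90 / 6 = 15.0 days
per_service = {svc: mtbbc(90, n) for svc, n in Counter(events).items()}

print(overall)      # 15.0
print(per_service)  # {'payments': 45.0, 'users': 22.5}
```

Note how the platform-wide figure of 15 days hides the spread: the hypothetical payments API breaks every 45 days while the users API breaks every 22.5.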

The harder question is what counts as a breaking change. A practical checklist includes:

  • Removed fields from response bodies
  • Renamed fields in requests or responses
  • Changed data types on existing fields
  • Added required parameters to requests
  • Removed or restructured endpoints
  • Changed HTTP status codes for existing behavior
  • Modified authentication or authorization requirements
  • Changed error response formats
  • Narrowed accepted input ranges

Additive, backward-compatible changes — new optional fields, new endpoints, expanded input ranges — do not count. The distinction matters: MTBBC specifically measures contract-breaking changes, not overall API evolution.
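A subset of that checklist — removed fields and type changes — can be detected mechanically by diffing two versions of a schema. Here is a sketch under a simplifying assumption: each response schema is modeled as a flat `{field_name: type_name}` map, which real detectors (working on OpenAPI or GraphQL schemas) would replace with proper schema traversal:

```python
def diff_fields(old: dict, new: dict) -> list[str]:
    """Flag contract-breaking differences between two response schemas,
    modeled here as flat {field_name: type_name} maps."""
    breaks = []
    for field, old_type in old.items():
        if field not in new:
            breaks.append(f"removed field: {field}")
        elif new[field] != old_type:
            breaks.append(f"type change on {field}: {old_type} -> {new[field]}")
    # Fields present only in `new` are additive and backward-compatible,
    # so they are deliberately not flagged.
    return breaks

old = {"id": "string", "amount": "int", "currency": "string"}
new = {"id": "string", "amount": "float", "note": "string"}
print(diff_fields(old, new))
# ['type change on amount: int -> float', 'removed field: currency']
```

The asymmetry in the loop is the point: only removals and mutations count, matching the rule that additive changes do not affect MTBBC.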

What Good Looks Like

There is no universal standard for MTBBC, but based on observed patterns across teams of varying size and velocity, these benchmarks offer a useful starting framework:

  • MTBBC greater than 30 days: This indicates a stable, well-governed API surface. Breaking changes happen, but they are infrequent enough that consumers can plan around them. This is the target for any API with external consumers or cross-team dependencies.
  • MTBBC between 14 and 30 days: Typical for fast-moving teams, especially those in earlier stages of API design. Not alarming on its own, but worth monitoring for downward trends. If MTBBC is in this range and stable, the team likely has reasonable contract awareness but could benefit from more systematic detection.
  • MTBBC below 14 days: Contract drift is a real risk. Consumers are likely feeling it, even if they are not reporting it. At this frequency, integration maintenance becomes a meaningful drag on consumer teams' productivity.
  • MTBBC below 7 days: This signals a contract discipline problem that needs immediate attention. At this rate, breaking changes are likely accidental, unreviewed, or both. Consumer teams are either constantly firefighting integration issues or have given up on staying current with your API.
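If you want to surface these bands on a dashboard, they reduce to a small lookup. The thresholds below are the suggested starting points from this article, not industry standards, and the labels are illustrative:

```python
def assess_mtbbc(days: float) -> str:
    """Map an MTBBC value (in days) onto the benchmark bands above."""
    if days > 30:
        return "stable, well-governed"
    if days >= 14:
        return "typical for fast-moving teams; watch for downward trends"
    if days >= 7:
        return "contract drift risk; consumers are likely feeling it"
    return "contract discipline problem; needs immediate attention"

print(assess_mtbbc(45))  # stable, well-governed
print(assess_mtbbc(15))  # typical for fast-moving teams; watch for downward trends
print(assess_mtbbc(5))   # contract discipline problem; needs immediate attention
```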

Context matters. An internal API consumed by one other team has different tolerance thresholds than a public API with hundreds of integrators. But in every case, knowing where you stand is better than operating blind.

How to Improve MTBBC

If your MTBBC is lower than you want it to be, there are four high-leverage improvements to consider:

Automated detection at the PR level. The single most effective intervention is catching breaking changes before they merge. When a developer opens a pull request that removes a field or changes a type, that change should be flagged during review — not discovered in production by a consumer. This shifts breaking changes from accidental to intentional, which is the entire goal.

Clear ownership of API surfaces. Every API endpoint should have an identifiable owner — a person or team accountable for the contract. When nobody owns the contract, nobody feels responsible for maintaining it. Ownership does not prevent breaking changes, but it ensures that someone is making a conscious decision when they happen.

Consumer awareness. Many breaking changes happen because the developer simply did not know anyone was using that field or endpoint. Maintaining visibility into who depends on what — even at a rough level — creates a natural check before modifying shared surfaces.

Deprecation-first culture. Teams with high MTBBC rarely remove things outright. They deprecate first, give consumers explicit migration windows, and only remove after confirming adoption of the replacement. This does not prevent breaking changes entirely, but it transforms them from surprises into planned transitions.

How MTBBC Fits Alongside Other Metrics

MTBBC is not a replacement for existing engineering metrics. It fills a specific gap in the measurement landscape.

DORA metrics measure deployment health — how efficiently your team ships software. MTBBC measures contract health — how reliably your API surface maintains its promises to consumers. These are related but distinct concerns. A team can have excellent DORA metrics — deploying frequently, with low lead time and fast recovery — while simultaneously having terrible MTBBC, breaking their consumers with every other release. From the perspective of the team shipping, everything looks great. From the perspective of the teams consuming their APIs, the experience is painful.

MTBBC is the metric that bridges internal engineering health with external integration reliability. It is the missing piece for any organization that describes itself as API-first.

You might also consider tracking MTBBC alongside a breaking change intentionality rate: what percentage of detected breaking changes were deliberate and communicated versus accidental and discovered after the fact. A high MTBBC with a high intentionality rate is ideal — you rarely break contracts, and when you do, it is on purpose. A low MTBBC with a low intentionality rate is the worst case — you break contracts often, and you do not even know you are doing it.
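Tracking the two numbers together is straightforward. A minimal sketch, reusing the worked example from earlier (6 breaking changes over 90 days) and a hypothetical count of 2 communicated changes:

```python
def intentionality_rate(intentional: int, total: int) -> float:
    """Share of detected breaking changes that were deliberate and
    communicated ahead of time."""
    return intentional / total if total else 1.0  # no breaks: vacuously intentional

# Hypothetical quarter: 6 breaking changes over 90 days, 2 of them communicated.
mtbbc_days = 90 / 6                  # 15.0 — breaks every two weeks
rate = intentionality_rate(2, 6)     # ~0.33 — most breaks are accidental

print(round(mtbbc_days, 1), round(rate, 2))  # 15.0 0.33
```

A team in this position sits in the worst quadrant described above: frequent breaks, mostly accidental, which is exactly the combination the paired metrics are designed to expose.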

Start Measuring

Every engineering team that exposes an API — whether to external partners, internal services, or frontend clients — should know their MTBBC. It is a single number that captures a dimension of engineering quality that existing metrics miss entirely: how well you honor the contracts your consumers depend on.

If you do not know your MTBBC today, you are operating blind on one of the most consequential dimensions of API quality. The first step is simple: look at your commit or release history, count the breaking changes over the past 90 days, and divide. That number is your baseline. Whether it surprises you or confirms what you already suspected, you now have something concrete to improve against — and a metric that will tell you, unambiguously, whether your API contract discipline is getting better or worse over time.

Catch API breaking changes before they ship

RiftCheck monitors every commit and PR for API contract changes. Free to start.
