Observability isn’t about the tool. It’s about the truth

Published July 3, 2025

An enterprise client reports latency. Your dashboards say everything is fine. They blame you. You blame them. Nobody can prove it either way.  

This is where most monitoring efforts hit a wall. Too often, the conversation gets stuck on dashboards and tools instead of the one thing that really matters: truth.

Observability isn’t about collecting metrics or building pretty dashboards. It’s about knowing the truth — the ability to quickly get to the root of a problem when your reputation and revenue are on the line.

Not vanity metrics. Not checkbox features. Just fast, end-to-end, and undeniable truth.

What happens when two companies see the same issue differently?

A leading financial services provider (let’s call them Company A) was suddenly under pressure. A key enterprise client—Company B—reported delays of 3 to 6 seconds when calling Company A’s APIs from their own customer-facing apps.

  • Company B: "Your APIs are slow. It’s impacting our customer experience."
  • Company A (relying on Datadog APM): "Everything looks fine on our side."

A stalemate. And a textbook case of observability failure.

Why couldn’t Datadog find the issue?

This isn’t a knock on Datadog. It’s an excellent Application Performance Monitoring (APM) tool—but it wasn’t built to see beyond your own infrastructure.  

So even though Company A had robust APM and logging, they couldn’t see anything outside their own walls. They couldn’t install agents in Company B’s infrastructure, and they certainly couldn’t drop Real User Monitoring (RUM) scripts into someone else’s codebase.

Here’s what each tool can (and can’t) do:

  • APM (like Datadog): Great inside the app — once traffic arrives.
  • RUM: Excellent for frontend insights — but only if you own the app.
  • Logs: Useful for what already happened — but not where packets got stuck in transit.

The common denominator with all three is that none of them can see what’s happening between systems. Let’s get into why.

Why do APIs create blind spots between companies?

APIs are the interface between companies, the digital waiters of the software world. Just like you don’t walk into a restaurant kitchen to talk to the chef, companies don’t peek behind each other’s firewalls. They interact through APIs, exchanging structured requests and responses without ever seeing what’s really cooking on the other side.

And that’s where blind spots creep in.  

When two systems communicate through APIs, they have no visibility into each other’s inner workings. The moment a request leaves your infrastructure, it enters the black box of “someone else’s problem”: infrastructure, networks, and dependencies you don’t own and can’t instrument.

The root problem is that the Internet isn’t instrumentable. You can’t deploy agents or RUM scripts across the networks and infrastructure you don’t control. That’s why traditional observability tools stop at the edge. Beyond that lies the unknown.

But delivering a great digital experience depends on multiple networks, protocols, agents, and sub-systems working together in concert. These dependencies form what we call the Internet Stack: DNS, CDN, BGP, ISP, last mile, backbone, and more.

The Internet Stack

When performance breaks down somewhere in that chain, it doesn’t matter if it’s your fault or not—your customers still feel it. APIs, after all, were designed for efficiency, not visibility.

This is where Internet Performance Monitoring (IPM) becomes essential. IPM enables deep visibility into every layer of the Internet that can impact your service. Think of it as APM for the Internet Stack: purpose-built for the systems you don’t own but still rely on.

How do you get to the truth when APM falls short?  

When traditional observability tools couldn’t explain the latency, IPM filled the gap. Instead of guessing, Company A used IPM to run synthetic API tests across real-world networks:

  • From user ISPs: major U.S. carriers and fiber providers
  • From backbone and enterprise vantage points
  • From inside Company A’s own infrastructure

Each test simulated actual API calls, complete with traceable request IDs and timestamps. And the results were undeniable.
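For the curious, here’s a minimal sketch of what one such synthetic check might look like. It assumes a hypothetical endpoint (api.example.com) and uses pycurl to pull per-phase timings from libcurl; it’s an illustration of the kind of measurement an IPM agent runs from each vantage point, not Catchpoint’s actual agent code.

```python
# Minimal sketch of a synthetic API check (hypothetical endpoint, pycurl assumed).
import io
import time
import uuid

import pycurl


def run_synthetic_check(url: str) -> dict:
    request_id = str(uuid.uuid4())   # traceable ID shared with backend teams
    buffer = io.BytesIO()

    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.HTTPHEADER, [f"X-Request-ID: {request_id}"])
    c.setopt(pycurl.WRITEDATA, buffer)
    c.perform()

    # libcurl reports cumulative timestamps; subtract to get per-phase durations.
    timings_ms = {
        "dns":     c.getinfo(pycurl.NAMELOOKUP_TIME) * 1000,
        "connect": (c.getinfo(pycurl.CONNECT_TIME) - c.getinfo(pycurl.NAMELOOKUP_TIME)) * 1000,
        "ssl":     (c.getinfo(pycurl.APPCONNECT_TIME) - c.getinfo(pycurl.CONNECT_TIME)) * 1000,
        "wait":    (c.getinfo(pycurl.STARTTRANSFER_TIME) - c.getinfo(pycurl.PRETRANSFER_TIME)) * 1000,
        "total":   c.getinfo(pycurl.TOTAL_TIME) * 1000,
    }
    status = c.getinfo(pycurl.RESPONSE_CODE)
    c.close()

    return {
        "request_id": request_id,
        "timestamp": time.time(),
        "status": status,
        "timings_ms": {k: round(v, 1) for k, v in timings_ms.items()},
    }


if __name__ == "__main__":
    print(run_synthetic_check("https://api.example.com/v1/quotes"))  # hypothetical endpoint
```

Run the same check from last-mile, backbone, and internal agents, and the per-phase numbers line up into the breakdown below.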

This diagram maps the full path of an API call — from the client through Akamai, to internal proxy infrastructure and upstream systems. It clearly shows where latency accumulates:

  • DNS, connect, and SSL times are negligible.
  • Akamai’s edge processing is fast (~48 ms).
  • Major delays occur during origin fetch (3,143 ms) and proxy fetch (2,364 ms)—both inside the server infrastructure.
  • This confirms the problem isn’t with the client or CDN, but deep in the backend (the quick math below bears this out).
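A quick sanity check on those numbers: the edge hop (~48 ms) is negligible next to the origin fetch (3,143 ms) and the proxy fetch (2,364 ms). Whether those two backend phases overlap or stack, the backend alone accounts for roughly 3 to 5.5 seconds per call—squarely in the 3–6 second range Company B was reporting.
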
Latency breakdown across cities

This chart tracks average response and wait times across major U.S. cities. The key insight:

  • Latency patterns are remarkably consistent across geography.
  • A single spike appears across multiple regions, ruling out a location-specific issue.
  • This supports the conclusion that the bottleneck lives within the origin infrastructure, not in external networks.

ISP breakdown

Here, performance is analyzed by ISP (e.g., AT&T, Comcast, Verizon):

  • Despite some noise, the pattern is stable across providers, with no single ISP showing consistently worse performance.
  • This helps eliminate ISP-side routing or congestion as a root cause.
  • The brief AT&T spike aligns with the same moment seen in city-level data.

The result: Consistent 3–6 second latency, internally and externally.

With that intelligence, they could rule out the usual suspects:

  • It wasn’t the ISP
  • It wasn’t the CDN
  • It wasn’t DNS
  • It wasn’t the proxy (Envoy)

The process of elimination worked like a proper diagnostic: isolate each layer, eliminate what’s clean, and close in on the source. Parsing response headers like x-envoy-upstream-service-time confirmed the latency was occurring further upstream, deep within Company A’s own service environment. This pointed engineers in the right direction without them needing to sift through endless log lines. Trace IDs and timestamps were shared with internal teams to help pinpoint issues around application dependencies—eventually confirmed to be the root cause.
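Here’s a minimal sketch of that header check, using Python’s requests library against a hypothetical endpoint. Envoy’s x-envoy-upstream-service-time response header reports, in milliseconds, how long the proxy waited on the upstream service, so comparing it to the overall round trip shows whether the delay sits in the network and proxy or further upstream.

```python
# Sketch: split round-trip latency into "upstream service" vs "everything else"
# using Envoy's x-envoy-upstream-service-time header. Endpoint and request ID
# are hypothetical, for illustration only.
import requests

resp = requests.get(
    "https://api.example.com/v1/quotes",                  # hypothetical endpoint
    headers={"X-Request-ID": "example-trace-id"},         # correlate with internal traces
    timeout=10,
)

# elapsed = time from sending the request until response headers were parsed
total_ms = resp.elapsed.total_seconds() * 1000
upstream_ms = int(resp.headers.get("x-envoy-upstream-service-time", 0))

print(f"time to response headers : {total_ms:.0f} ms")
print(f"upstream service time    : {upstream_ms} ms")
print(f"network + proxy overhead : {total_ms - upstream_ms:.0f} ms")

# If upstream_ms accounts for nearly all of the round trip (as it did here),
# the bottleneck sits behind the proxy, not in DNS, the CDN, or the network path.
```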

This methodical approach, including initial discussion and setup, took just three hours and about 15 test runs. There was no guesswork. Just clarity.

After internal validation, teams began work on the improvements, which are still ongoing but already measurable where it matters most.

Backend latency has dropped significantly: both upstream service time and overall wait time have been cut nearly in half. These gains reflect steady optimization efforts that are clearly moving in the right direction.

What IPM delivers that APM can’t

Let’s be clear: Datadog, New Relic, and Dynatrace are outstanding at what they do — inside your infrastructure. But they weren’t designed to monitor the Internet itself.

Catchpoint IPM was. Here’s how:

A vast Global Agent Network

  • 3000+ agents across last-mile, backbone, cloud, enterprise, and on-prem environments
  • Real-user network emulation, not cloud-only testbeds

Full synthetic coverage

  • HTTP/S, APIs, Browser, DNS, SSL, BGP, MQTT, QUIC, Custom scripts

Advanced diagnostics

  • Packet loss, jitter, path tracing, hop analysis
  • Region-specific degradation detection

Frontend visibility

  • WebPageTest for in-depth frontend performance analysis
  • Browser + mobile RUM SDKs for teams who can instrument the frontend

Seamless integration

  • Feeds directly into Datadog, Splunk, New Relic, Dynatrace
  • Enhances existing observability stacks without replacing them (see the sketch below for the idea)
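
To make that last point concrete, here’s a minimal sketch of one way external measurements can land next to your existing APM data, using Datadog’s custom-metrics (v1 series) API. The metric name and tags are made up for illustration; Catchpoint’s built-in integrations handle this for you—this just shows the shape of the idea.

```python
# Sketch: push an externally measured latency value into Datadog as a custom
# gauge metric so IPM data can sit alongside APM dashboards. Metric name and
# tags are hypothetical.
import os
import time

import requests


def push_latency_metric(latency_ms: float, vantage_point: str) -> None:
    payload = {
        "series": [
            {
                "metric": "synthetic.api.latency",          # hypothetical metric name
                "points": [[int(time.time()), latency_ms]],
                "type": "gauge",
                "tags": [f"vantage:{vantage_point}", "check:company_b_api"],
            }
        ]
    }
    resp = requests.post(
        "https://api.datadoghq.com/api/v1/series",
        headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()


push_latency_metric(3143.0, "att_last_mile")
```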

Why teams cling to familiar tools even when they’re not fit for purpose

Familiar tools are comfortable. They’re already deployed, widely understood, and politically safe. But too often, comfort wins out over capability—especially in large, mature organizations where tooling decisions are driven by inertia rather than fitness for purpose. When seconds matter and customers are impacted, you need clarity, not comfort.

Who takes the blame when APIs are slow?

In this case, Company B blamed Company A. Company A blamed Company B. But neither had data to prove their case.

Meanwhile, users just saw a slow experience.

End users don’t know an API call is crossing company boundaries. They only see the brand they’re interacting with. If it’s slow, they assume that brand is to blame. That’s why solving performance issues quickly is about more than technical hygiene. It’s about protecting business relationships and customer trust.

Final thought: What’s the real job of observability?

Observability isn’t about the coolest UI or the biggest vendor budget. It’s about getting to the truth, fast. And often, the truth lies outside your four walls.

In an AI-driven world, data powers decisions. But if your data is incomplete or your telemetry is limited to your own infrastructure, your AI is just guessing.

Catchpoint IPM gives teams the ability to:

  • Validate performance from the outside in
  • Prove or disprove internal assumptions with independent data
  • Pinpoint root causes in minutes, not days

Because the point of observability isn’t the tool.

It’s the truth.  

Got a latency mystery your tools can’t solve? Let’s talk.

