Why it’s time to move beyond APM: Monitoring from the user’s perspective

Published September 9, 2025

For years, organizations have relied on Application Performance Monitoring (APM) as the backbone of their observability strategy. The idea was simple: collect as many logs, metrics, and traces as possible, then sift through the data to uncover insights.

But as applications have shifted to the cloud and become increasingly API-driven, that model has broken down. And according to Gartner’s April 2025 report Get Your Observability Spend Under Control, by Padraig Byrne, Martin Caren, and Matt Crossley, the financial impact of sticking with old approaches is now impossible to ignore.

The staggering reality of observability spend

The Gartner research paints a stark picture of runaway costs.  

A review of observability platform vendor proposals submitted to Gartner between 2023 and 2024 showed the following:  

  • The median Gartner client spend on observability platforms now exceeds $800,000 annually with a single vendor.  
  • Over that period, the median spend with a single vendor increased by more than 20%.  
  • Four percent of clients spend more than $10 million with a single vendor.

Much of this exponential growth stems from the explosion in telemetry data, particularly logs, which are notoriously verbose and costly to store and analyze. The report states: "Gartner clients spend significantly more on log analysis tools than other parts of observability. For larger entities, this can be more than half of total spend. However, few indicate that these solutions deliver half the value of observability."

One client case study highlighted in the research shows the severity of this trend: their observability costs grew from less than $40,000 in 2009 to nearly $10 million by 2024—a compounding annual increase of approximately 40% over 15 years.
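
As a sanity check on those figures, the implied compound annual growth rate can be computed directly. The $40,000 and $10 million endpoints come from the report; the script below is just arithmetic:

```python
# Implied compound annual growth rate (CAGR) from the case study:
# observability spend grew from ~$40,000 (2009) to ~$10,000,000 (2024).
start, end, years = 40_000, 10_000_000, 15

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 44%, in line with the report's ~40% figure
```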

Traditional telemetry strategies miss the mark

Don't get me wrong—logs and traces have their place. They can be invaluable when diagnosing technical issues deep inside your systems.

But here's the fundamental problem: traditional telemetry doesn't tell you about your user's actual experience. As Gartner observes, "Clients often indiscriminately apply observability in their systems rather than targeting spending on the most important applications and services."

Your customers don't care about a spike in traces or whether debug mode was accidentally left on. What they care about is whether your service works when they need it. Can they log in? Can they make a purchase? Can they access the features they're paying for?

If the answer is "no," then it doesn't matter how sophisticated your telemetry pipeline is—you've already lost.

Why DEM and IPM are the way forward

This is why we believe organizations need to start shifting their investments toward Digital Experience Monitoring (DEM) and Internet Performance Monitoring (IPM).

  • DEM provides a direct view into the customer experience. With synthetic monitoring and real user monitoring, you can see exactly what your customers see—in real time. The Gartner research describes these technologies as providing "an efficient and fast time-to-value method for providing near-real-time feedback on service performance at a fraction of the cost of full observability."
  • IPM extends visibility into the critical infrastructure your applications depend on — CDNs, ISPs, third-party APIs, BGP routing, and the broader internet ecosystem. In today's interconnected world, application performance is only as strong as its weakest external dependency. When a critical API goes down or an ISP experiences routing issues, traditional APM tools provide little insight into these external factors affecting user experience.
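
To make the DEM idea concrete, here is a minimal sketch of a synthetic check: it times a user-facing probe (in practice a scripted login or checkout; here a stand-in callable) and judges it against a latency budget. The function names and the budget value are our own illustration, not any specific product's API:

```python
import time
from typing import Callable

def run_synthetic_check(probe: Callable[[], bool], budget_ms: float) -> dict:
    """Time a user-facing probe (e.g. 'can a user log in?') and
    judge it against a latency budget, as a synthetic monitor would."""
    start = time.perf_counter()
    ok = probe()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "success": ok,
        "latency_ms": elapsed_ms,
        "within_budget": ok and elapsed_ms <= budget_ms,
    }

# Stand-in probe; a real check would drive a browser or call an API endpoint.
result = run_synthetic_check(probe=lambda: True, budget_ms=500)
print(result["within_budget"])
```

A real deployment would run checks like this from multiple geographies on a schedule, which is exactly the "see what your customers see" view DEM provides.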

Together, DEM and IPM shift the focus away from telemetry overload and toward what truly drives business outcomes: user experience, reliability, and trust. This approach aligns with modern application architecture, where the application stack no longer sits fully within your control or your own data center.

Implementing smarter spending through tiered observability

Gartner makes a critical point in their research: not every application deserves the same level of investment. Their recommended tiered approach provides a strategic framework for aligning spending with business value. The framework below is based on Gartner's application tiers, combined with our interpretation of how an observability stack should align with each tier.

Note: Application characteristics and percentages are from Gartner's research. The "Observability Stack" column reflects our recommendations based on the tiered approach.

The way we see it, this tiered approach recognizes that observability costs can differ by almost an order of magnitude between tiers. By aligning spend with business value instead of treating every system as equally critical, organizations can improve overall visibility while controlling costs.
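
One way to operationalize the tiered approach is a simple mapping from application tier to observability stack. The tier names and stack contents below are illustrative (our interpretation, not Gartner's table):

```python
# Illustrative mapping of application tiers to observability investment.
# Tier names and stack contents are our own example, not Gartner's table.
TIER_STACKS = {
    "mission_critical": ["DEM", "IPM", "APM", "logs", "traces"],
    "business_critical": ["DEM", "IPM", "logs"],
    "business_operational": ["synthetic checks", "basic metrics"],
    "administrative": ["uptime checks"],
}

def stack_for(tier: str) -> list[str]:
    """Return the observability stack for a tier, defaulting to the cheapest."""
    return TIER_STACKS.get(tier, TIER_STACKS["administrative"])

print(stack_for("mission_critical"))
```

The point of the default is deliberate: anything not explicitly classified gets the minimum spend, not the maximum.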

Gartner's study notes, "Gartner clients have indicated that they can reduce costs associated with telemetry ingest and storage by more than 30% by successfully implementing a telemetry pipeline."
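
That 30% figure is plausible given how much typical ingest is low-value debug logging. A telemetry pipeline applies rules like the sketch below before data reaches paid ingestion; the levels and the 1-in-10 sampling rate are illustrative assumptions, not a specific vendor's configuration:

```python
from typing import Iterable, Iterator

def filter_telemetry(logs: Iterable[dict], info_sample_every: int = 10) -> Iterator[dict]:
    """Drop DEBUG records, sample INFO records, keep WARN/ERROR.
    Levels and the 1-in-N sampling rate are illustrative defaults."""
    info_seen = 0
    for record in logs:
        level = record.get("level", "INFO")
        if level == "DEBUG":
            continue  # drop entirely before paid ingestion
        if level == "INFO":
            info_seen += 1
            if info_seen % info_sample_every != 0:
                continue  # keep only a deterministic 1-in-N sample
        yield record  # WARN/ERROR (and sampled INFO) pass through

# Example mix: 600 DEBUG + 300 INFO + 100 ERROR records.
logs = ([{"level": "DEBUG"}] * 600 + [{"level": "INFO"}] * 300
        + [{"level": "ERROR"}] * 100)
kept = list(filter_telemetry(logs))
reduction = 1 - len(kept) / len(logs)
print(f"Ingest reduced by {reduction:.0%}")
```

With this (debug-heavy) mix, the pipeline keeps only the errors and a sample of the informational records, which is how reductions well beyond 30% become achievable.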

The path forward: strategic action steps

The old model of observability—collect everything, analyze later—no longer makes financial or operational sense. Gartner's study projects: "By 2028, 80% of enterprises that do not implement observability cost controls will overspend by more than 50%."

To avoid this fate, we believe organizations should:

  1. Audit existing log collection and implement governance frameworks to eliminate unnecessary telemetry and reduce costs
  2. Deploy telemetry pipelines to filter, route, and transform data before expensive ingestion
  3. Categorize applications into business-value tiers and align observability investments accordingly
  4. Leverage vendor-specific cost controls that many organizations don't realize they already have access to
  5. Rationalize existing tool portfolios using frameworks like TIME and PAID models to eliminate redundancy

The bottom line

What matters most is the experience of your users. If you're not measuring and optimizing for that, you're investing in the wrong metrics.

It's time for organizations to evolve their mindset: from APM's endless telemetry collection to DEM and IPM's user-first perspective. Not only is this approach more cost-effective — it's the only way to ensure you're investing in the performance that truly matters: the performance your customers experience every single day.

Gartner states, "Organizations must be aware that this is not a one-time exercise. Staying in control of costs requires an ongoing change in behaviors, methodologies and technologies related to observability." Ultimately, companies that make this shift now will be better positioned to weather the continued growth in observability complexity while delivering measurable business outcomes.

For actionable next steps to optimize IT observability spend, check out our expert guide: A 7-Step Approach to Optimize Observability IT Expenditure.

