2025 SaaS Website Performance Benchmark Report

Why is digital experience critical for SaaS?

In the race to win trust and retain customers, milliseconds matter more than ever. In today’s SaaS-first world, a slow or unreliable experience doesn’t just frustrate users—it disrupts productivity and can drive entire teams to switch platforms. Whether someone is collaborating on a project, analyzing data, or communicating with their team, they expect speed, reliability, and stability—everywhere, every time.

The stakes are high:

  • SaaS products lose roughly 70% of their users within three months; only 30% remain active by month 3, underscoring the cost of poor performance.
  • Enterprises now deploy an average of 106 SaaS apps each, intensifying competition for user attention and performance budgets.
  • A 100 ms delay can cut sign-up conversions by ~7%, eroding growth at scale.

That’s why Catchpoint has benchmarked the performance of leading SaaS platform websites to understand how they’re delivering on this digital promise. This isn’t about naming and shaming. It’s about spotlighting the real experiences users face, surfacing opportunities to improve, and starting a deeper conversation about what best-in-class SaaS looks like in 2025.

Whether you’re a global SaaS giant or a fast-growing disruptor, one thing is clear: in the modern SaaS economy, a fast, resilient, and globally consistent digital experience isn’t just a differentiator—it’s the foundation of customer trust, adoption, and growth.

Key takeaways

Tableau and Trello set the SaaS gold standard

Tableau loaded a full page in just 1.1 seconds; Trello delivered key content in 0.8 seconds and maintained perfect availability.

Fast content is rare: just 6 out of 19 load in under 3 seconds

Just 32% of companies—including Tableau, Trello, and Microsoft 365—loaded key content in under 3 seconds. ClickUp took 8.4 seconds; Monday.com, 4.5 seconds.

Slow full page loads are the norm

Just 42% (8 of 19) met the 5-second Total Page Load target. The slowest (ClickUp) took 9.6 seconds on average.

Monday.com is consistent, but consistently slow

Monday.com’s layout never shifts (CLS: 0.00019), but slow server response (1,755 ms) and main content load (3.8 s) drag down performance.

Microsoft 365 quietly excels at scale

Strong across the board with fast document load (2.2 s), quick DNS (41 ms), and 96.7% uptime—despite its size.

Regional performance gaps persist—even for top brands

Trello, Tableau, and Oracle shine in North America and Europe, but nearly all platforms—including leaders—deliver their slowest experiences in Middle East & Africa.

No SaaS company meets every benchmark

No company hit all 8 benchmarks. There's room for every brand to improve—especially outside North America and Europe.

Testing methodology

Catchpoint’s Professional Services team conducted Internet Performance Monitoring (IPM) on SaaS websites. Here’s how the data was collected and scored.

Timeframe

All data in this report was collected between June 5 and June 19, 2025, providing a consistent two-week snapshot of performance across the industry.

Monitored Pages

For each company, we monitored the public homepage, focusing on the initial user experience for typical visitors. This allowed for consistent comparison across companies. It should be noted that this analysis is limited to homepage performance and may not reflect the performance of the broader web experience.

Locations

Tests were executed from 123 global monitoring agents, simulating end-user traffic across a diverse range of geographies.

  • 26 agents were based in the United States
  • 97 agents were located internationally, including locations in the UK, Germany, South Africa, Brazil, India, Hong Kong, Japan, and Australia

This approach enabled us to capture both global averages and regional disparities in performance.

Metrics tested

Each website was evaluated across eight critical performance metrics, normalized and weighted to reflect real-world impact.

Metric | Definition | Weight | Recommended Target
Availability | Uptime percentage | 16% | ≥99.9%
Document Complete | Time until all key page elements are loaded | 12% | ≤3 seconds
Page Load Time | Time until the entire page is fully loaded | 12% | ≤5 seconds
Response Time | Time to complete a full request | 12% | ≤500 ms
Time to First Byte (TTFB) | Time to receive the first byte from the server | 12% | ≤200 ms
Largest Contentful Paint (LCP) | Time to load the main content block | 12% | ≤2.5 seconds
Cumulative Layout Shift (CLS) | Visual layout movement during load | 12% | <0.1
DNS Lookup Time | Time to resolve domain to IP address | 12% | ≤100 ms (ideal <50 ms)

Ranking methodology

Each website was scored using a weighted composite model that reflects both uptime and user experience.

  • Availability (16%) was given additional weight to reflect its critical importance in SaaS, where even brief outages can disrupt work and erode customer trust.
  • The remaining seven performance metrics (each weighted at 12%)—including page load time, TTFB, and visual stability—were treated equally.

Why? Because in today’s digital world, slow is the new down. A perfectly available site that takes too long to load still delivers a poor user experience. This balanced approach rewards both reliability and real-world performance.
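To make the model concrete, here is a minimal sketch of how such a weighted composite can be computed. The min-max normalization bounds below are illustrative assumptions; the report does not publish the exact normalization Catchpoint used.

```python
# Weights from the methodology table; normalization bounds are assumptions.
WEIGHTS = {
    "availability": 0.16,       # uptime carries extra weight
    "document_complete": 0.12, "page_load": 0.12, "response_time": 0.12,
    "ttfb": 0.12, "lcp": 0.12, "cls": 0.12, "dns_lookup": 0.12,
}

def normalize(value, best, worst):
    """Map a raw metric onto 0..1, where 1 is best-in-class."""
    score = (worst - value) / (worst - best)
    return max(0.0, min(1.0, score))

def composite_score(site):
    """Weighted 0-100 composite; lower times are better, higher uptime is better."""
    parts = {
        # availability: higher is better, so negate before normalizing
        "availability": normalize(-site["availability_pct"], -100.0, -99.0),
        "document_complete": normalize(site["document_complete_ms"], 1000, 9000),
        "page_load": normalize(site["page_load_ms"], 1000, 10000),
        "response_time": normalize(site["response_ms"], 50, 2000),
        "ttfb": normalize(site["ttfb_ms"], 100, 1800),
        "lcp": normalize(site["lcp_ms"], 800, 8500),
        "cls": normalize(site["cls"], 0.0, 0.3),
        "dns_lookup": normalize(site["dns_ms"], 0, 300),
    }
    return 100 * sum(WEIGHTS[k] * parts[k] for k in WEIGHTS)

# Illustrative numbers only, not measurements from this report:
example = {"availability_pct": 99.95, "document_complete_ms": 2200,
           "page_load_ms": 3100, "response_ms": 400, "ttfb_ms": 250,
           "lcp_ms": 2100, "cls": 0.05, "dns_ms": 45}
print(f"composite: {composite_score(example):.1f}")
```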

Full rankings: SaaS Website Performance Benchmark

The table below shows all evaluated SaaS websites ranked by their overall performance score (higher composite scores denote better performance). The composite score (0–100) encapsulates each site’s availability and performance, according to our weighted model.

How to read the scores:

  • Leading (85–100): Best-in-class digital experience — fast, reliable, and seamless.
  • Strong (75–84): Solid performance with opportunities for optimization, especially on front-end metrics.
  • Competitive (60–74): Functional but with clear areas for improvement to meet user expectations.
  • Challenged (<60): Performance gaps that may impact user satisfaction and engagement.

SaaS web performance rankings (composite score)

  • Tableau: 96.6%
  • Trello: 94.6%
  • Microsoft 365: 91.4%
  • Oracle: 90.4%
  • Salesforce: 85.7%
  • Slack: 85.2%
  • ServiceNow: 84.8%
  • SAP: 84.6%
  • Asana: 82.7%
  • HubSpot: 81.9%
  • Jira: 79.6%
  • Microsoft Dynamics: 78.3%
  • Mailchimp: 77.3%
  • Zoom: 77.0%
  • Dropbox: 75.8%
  • Notion: 74.3%
  • Zendesk: 71.4%
  • ClickUp: 61.8%
  • Monday.com: 58.5%

SaaS performance: what the rankings reveal

Our benchmark reveals a clear hierarchy among the 19 SaaS platforms tested, shaped by how well each balances speed, reliability, and stability across eight critical web performance metrics. The most successful platforms aren’t just fast—they’re consistent across geographies and resilient under pressure.

#1 End-to-end excellence separates the leaders

Some companies stand apart by doing everything well—not just excelling in a single area, but balancing infrastructure, rendering, and stability. Four companies have achieved the coveted "Balanced Champions" status by demonstrating excellence across multiple performance dimensions without sacrificing any critical metric.

  • Tableau (1st): Blazing fast page load, near-perfect layout stability, and flawless DNS.
  • Trello (2nd): Perfect across availability, DNS, document completion, and CLS.
  • Microsoft 365 (3rd): Proof that enterprise-scale platforms can perform with no weak spots—just consistent, enterprise-grade delivery.
  • Oracle (4th): Backend speed and DNS leadership, with some trade-offs in page rendering.

Takeaway: These balanced champions demonstrate that sustainable competitive advantage comes from avoiding the trap of single-metric optimization. Performance isn’t about being the fastest at one thing—it’s about not being slow anywhere.

#2 Optimizing in isolation creates performance blind spots

While top performers balanced speed, stability, and responsiveness, others fell short by optimizing just one layer of the experience. Fast servers, perfect uptime, or clean layouts meant little when other fundamentals lagged. Here are the platforms most affected by these trade-offs—starting with ClickUp.

Speed demon, rendering laggard: ClickUp

ClickUp (18th) leads on raw server metrics but falters on front-end delivery.

The Speed Demon Paradox: ClickUp achieves the fastest backend performance but delivers the slowest user experience.

Under the hood: where the time goes

The bottom of the table tells a very different story: Monday.com took nearly 1.8 seconds just to respond with the first byte, while ClickUp needed nearly 10 seconds to fully load its homepage.

Site | DNS (ms) | Connect (ms) | SSL (ms) | Wait (ms) | Load (ms) | Response (ms)
ClickUp | 0 | 6 | 16 | 13 | 11 | 52
Monday.com | 1 | 4 | 20 | 1,717 | 14 | 1,767
  • Initial server handshake happens in under 100 ms for ClickUp.
  • The Document Complete and Page Load metrics spike because the page must download and execute 144 resources, 115 of them JavaScript.

Site | # Items (Total) | Webpage Response (ms, GM) | # Scripts (GM)
ClickUp | 144 | 9,570 | 115
Dropbox | 177 | 4,335 | 82
Notion | 101 | 4,242 | 74
Asana | 100 | 5,444 | 63
Zoom | 96 | 6,144 | 49
Mailchimp | 121 | 6,042 | 49
Salesforce | 61 | 3,933 | 36
Oracle | 63 | 3,750 | 31
Monday.com | 41 | 4,603 | 29
Jira | 44 | 3,818 | 28
Trello | 42 | 2,868 | 28
ServiceNow | 48 | 4,425 | 13
HubSpot | 30 | 3,181 | 12
Slack | 31 | 3,109 | 12
Tableau | 19 | 1,059 | 8
Zendesk | 22 | 2,209 | 7
Microsoft 365 | 18 | 2,218 | 6
Microsoft Dynamics | 15 | 2,146 | 5
SAP | 9 | 2,057 | 1
(GM = geometric mean)

Root cause: Heavy client-side rendering (CSR)

  • ClickUp’s 144 total requests (115 JavaScript files) must be downloaded and executed before Document Complete and LCP can occur.
  • By contrast, Tableau with 19 resources (only 8 JS) completes in just ~1 second.
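For teams that want to reproduce this kind of resource audit, here is a minimal sketch that counts total requests and JavaScript files from a HAR capture (exportable from browser DevTools or a synthetic test). The file name is a placeholder, and the MIME-type heuristic for detecting scripts is an assumption.

```python
import json

def count_resources(har_path):
    """Total requests and JavaScript files in a HAR 1.2 capture."""
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    scripts = [e for e in entries
               if "javascript" in e["response"]["content"].get("mimeType", "")]
    return len(entries), len(scripts)

total, js = count_resources("clickup_home.har")  # hypothetical capture file
print(f"{total} requests, {js} JavaScript files")
```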

Takeaway: ClickUp’s performance reveals a critical flaw in single-metric optimization strategies. Their servers respond faster than any competitor, yet users wait 8-9 seconds for usable pages, demonstrating that backend speed alone cannot deliver superior user experience. Organizations must balance backend efficiency with frontend delivery excellence to achieve meaningful performance outcomes.

Consistency chameleon: Monday.com

Monday.com (19th) flips the script—rock-solid visual stability but glacial server response.

Metric | Monday.com Value | Rank
TTFB | 1,717 ms | 19th (slowest)
Response Time | 1,767 ms | 19th

Root Cause: Exceptionally high wait times during the initial HTML response are the main driver of Monday.com’s sluggish server performance.

Site | DNS (ms) | Connect (ms) | SSL (ms) | Wait (ms) | Load (ms) | Response (ms)
ClickUp | 0 | 6 | 16 | 13 | 11 | 52
Monday.com | 1 | 4 | 20 | 1,717 | 14 | 1,767
(all values are geometric means)

While DNS, Connect, and SSL steps are on par with competitors, wait time alone accounts for over 1.7 seconds—compared to just 13 ms for ClickUp.
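To illustrate where such phase timings come from, here is a rough standard-library sketch that times the DNS, connect, SSL, and wait phases of a single HTTPS request. It is far cruder than a real IPM agent, and the hostname is a placeholder.

```python
import socket, ssl, time

def phase_breakdown(host, path="/"):
    """Time the DNS, TCP connect, TLS, and server wait phases (ms)."""
    t0 = time.perf_counter()
    ip, port = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][:2]
    t1 = time.perf_counter()                          # DNS resolved

    sock = socket.create_connection((ip, port), timeout=10)
    t2 = time.perf_counter()                          # TCP connected

    ctx = ssl.create_default_context()
    tls = ctx.wrap_socket(sock, server_hostname=host)
    t3 = time.perf_counter()                          # TLS handshake done

    tls.sendall((f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                 "Connection: close\r\n\r\n").encode())
    tls.recv(1)                                       # first byte received
    t4 = time.perf_counter()                          # server "wait" over
    tls.close()

    ms = lambda a, b: round((b - a) * 1000)
    return {"dns": ms(t0, t1), "connect": ms(t1, t2),
            "ssl": ms(t2, t3), "wait": ms(t3, t4)}

print(phase_breakdown("www.example.com"))  # placeholder hostname
```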

Takeaway: Perfect layout stability is commendable, but if users are left waiting for the page to start loading, the experience still suffers. Performance must begin at the first byte—not just at the visual finish line.

Uptime illusionist: HubSpot

HubSpot boasts perfect availability yet stumbles on visual stability:

Metric | HubSpot Value | Rank
Availability | 100% | 1st
CLS | 0.285 | 18th (worst)

Root cause: Infrastructure reliability overshadows frontend health. Server and network upkeep is excellent, but third-party scripts and CSS load patterns introduce layout shifts.

Takeaway: “Always up” is meaningless if your page jumps all over the screen. CLS must be treated as seriously as uptime in any holistic performance strategy.

Strategic insight: no single metric can define success

True competitive advantage comes from balancing backend speed, frontend rendering, and visual stability—avoiding extremes in any single dimension.

  1. Server speed without render optimization (ClickUp) leaves users waiting.
  2. Render stability without server performance (Monday.com) frustrates with lag.
  3. Reliability without UI health (HubSpot) erodes trust through janky experiences.

Organizations should prioritize comprehensive performance optimization rather than focusing on individual metrics. The analysis reveals that balanced performance across all eight dimensions correlates with higher overall rankings. Companies like Tableau and Trello succeed by maintaining consistently high scores rather than achieving perfect performance in select areas while neglecting others.

For enterprise decision-makers evaluating SaaS platforms, these performance metrics directly impact user experience, productivity, and operational efficiency. The top-tier performers demonstrate that excellent web performance is achievable and should be considered a critical factor in platform selection.

#3 Availability is table stakes, not a differentiator

Four companies—Asana, HubSpot, Trello, and Zoom—achieved perfect 100% availability. Once a headline-worthy feat, this level of uptime is now simply expected.

Yet, as the data shows, availability alone doesn’t guarantee a great user experience:

  • HubSpot posted a concerning CLS of 0.285, ranking 18th—well below acceptable thresholds for visual stability.
  • Asana delivered 100% uptime, but struggled with 4,560 ms Document Complete and 5,444 ms Page Load—slower than many lower-ranked peers.
  • Zoom, another uptime leader, faced a similar paradox with 5,409 ms Document Complete and 6,144 ms Page Load.
  • Trello stands out as the exception—pairing perfect uptime with top-tier performance across DNS, document load, and layout stability.

Takeaway: Uptime is no longer a differentiator—it’s the baseline. What separates leaders like Trello is their ability to maintain perfect availability while excelling in front-end speed, stability, and user experience.

#4 Web performance varies dramatically by region

Users in North America and Europe enjoyed significantly faster experiences, with average TTFB around 313 ms and 365 ms respectively. Conversely, users in Middle East & Africa faced an average TTFB of over 725 ms, more than twice as slow. Latin America and parts of Asia also experienced notable slowdowns.

Regional TTFB performance (milliseconds; lower is better)

Performance struggles in emerging markets

  • Middle East & Africa: The region presents the most significant challenges for SaaS providers, with 13 out of 19 companies experiencing their worst performance here. Companies showing performance gaps exceeding 200% in this region include Trello (236% worse), Slack (214% worse), and Notion (252% worse). The sketch after this list shows how such gap percentages can be derived.
  • Latin America: Several companies faced performance penalties of 150-200% compared to North America, indicating significant optimization opportunities.
  • Asia-Pacific: Poses particular challenges for infrastructure-heavy services, with SAP, Tableau, and ServiceNow all experiencing their worst global performance in this region. This pattern suggests that companies with complex backend architectures face greater difficulties optimizing for Asian markets.
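As a reference for how such gap percentages can be derived, here is a small sketch. It assumes the gap is computed against each company's fastest region; the report does not spell out the exact baseline, so treat the formula as illustrative.

```python
def regional_gap_pct(regional_ttfb_ms):
    """Percent worse than the company's fastest region."""
    best = min(regional_ttfb_ms.values())
    return {region: round((value - best) / best * 100)
            for region, value in regional_ttfb_ms.items()}

# Illustrative numbers only, not measurements from this report:
print(regional_gap_pct({"NA": 300, "EU": 360, "MEA": 1010, "LATAM": 650}))
# -> {'NA': 0, 'EU': 20, 'MEA': 237, 'LATAM': 117}
```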

#5 Consistency matters: uneven performance undermines global reach

Regional coefficient of variation (CV) by SaaS company

  • SAP: 85.2%
  • Salesforce: 67.4%
  • Tableau: 42.3%
  • Microsoft 365: 36.7%
  • Trello: 29.4%
  • Oracle: 26.9%
  • ClickUp: 19.8%
  • Monday.com: 16.5%

A high-performing SaaS platform isn’t just fast in one market—it delivers reliably across geographies. To measure this, we calculated the Coefficient of Variation (CV) for each company’s web performance across regions. CV quantifies performance consistency, with lower values indicating steadier delivery across global users, and higher values revealing greater disparity.
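For clarity, here is a minimal sketch of the CV calculation, using the population standard deviation (a sample standard deviation variant is equally common). The regional values are placeholders, not measurements from this report.

```python
from statistics import mean, pstdev

def regional_cv(values):
    """CV (%) across regions; lower means more consistent delivery."""
    return 100 * pstdev(values) / mean(values)

# Hypothetical per-region response times (ms) for one company:
print(f"CV: {regional_cv([320, 410, 980, 600, 450]):.1f}%")
```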

The results expose a critical divide:

  • SAP (85.2%) and Salesforce (67.4%) show the highest variability, meaning users in some regions experience vastly slower performance than others. This level of inconsistency can damage trust and adoption in emerging markets.
  • In contrast, ClickUp (19.8%) and Monday.com (16.5%) exhibit remarkably consistent regional performance. While their overall speed lags, users globally receive a relatively uniform experience.
  • Trello (29.4%) and Oracle (26.9%) strike a strong balance—pairing global consistency with high overall performance.

Takeaway: Global SaaS success isn’t just about peak speed—it’s about predictable, reliable delivery across all regions. Consistency builds user confidence, reduces friction, and scales trust worldwide. Platforms like ClickUp and Monday.com prove that even with performance challenges, regional uniformity can be a hidden strength—while SAP and Salesforce highlight the risks of uneven optimization.

Recommendations for improving web performance

Based on our analysis of 19 global SaaS websites, the following recommendations are designed to help digital, DevOps, and performance teams deliver faster, more reliable, and more consistent web experiences for users across the globe.

  1. Deliver a fast, consistent experience everywhere
    The top-ranked SaaS providers combined near-perfect uptime with page load times under 3 seconds. This proves that fast, reliable performance is not only possible—it's expected.
    • Track all 8 metrics equally—no single "hero metric" guarantees success.
    • Set performance baselines of 99.9%+ availability and sub-3-second document complete times.
    • Eliminate weak points in server response, frontend delivery, and visual stability.
  2. Shift from availability-only monitoring to Experience Level Objectives (XLOs)
    Most SaaS companies maintain strong uptime, but many fail to detect the real-world issues users face.
    • Experience Level Objectives (XLOs) shift the perspective. They measure from where users really are—on backbone networks, regional ISPs, or mobile connections—offering visibility into real-world issues that cloud checks can't see.
    • If SLOs tell you your system is available, XLOs tell you whether it's usable. This outside-in view turns monitoring data into actionable business insight. (A minimal sketch of an XLO-style check appears after this list.)
  3. Optimize for global reach, not just local performance
    Many SaaS providers perform well in their home region but falter abroad due to latency, poor DNS resolution, or lack of CDN coverage.
    • Optimize DNS globally. Companies with DNS lookup times >300 ms lose 4.1 ranks on average vs. sub-200 ms peers.
    • Use anycast DNS and distributed CDN infrastructure to serve international users efficiently.
    • Deploy Internet Performance Monitoring from global locations for deep and wide visibility into regional performance.
  4. Prioritize front-end optimization
    Backend response time is crucial—but poor frontend practices can erase that advantage.
    • Fix layout shifts (CLS) before speed optimizations—visual instability often correlates with lower user satisfaction and site rankings.
    • Aim for LCP ≤2.0s to stay competitive.
    • Compress and defer non-critical assets, reduce third-party scripts, and streamline DOM complexity.
  5. Monitor APIs as mission-critical infrastructure
    SaaS apps are powered by APIs. But poor-performing APIs can silently degrade customer experience.
    • Proactively monitor internal and third-party APIs for reachability, availability, performance, and reliability.
    • Monitor from where users actually are—not just cloud regions—using backbone, cloud, and last-mile nodes.
  6. Don't overlook emerging market gaps
    African and South American users regularly experienced 2–4x slower speeds than those in North America or Europe.
    • Treat underserved regions as strategic growth areas, not afterthoughts.
    • Extend CDN and DNS coverage into these markets and evaluate user journeys region by region.
  7. Benchmark continuously and learn from leaders
    Performance is a moving target. Small improvements can elevate a mid-ranked company into the top tier.
    • Monitor against both competitors and your historical baselines.
    • Establish performance budgets tied to KPIs and routinely test for peak demand scenarios.
  8. Make web performance a cultural priority
    Digital-first companies outperform legacy competitors not just because of better tools—but because they treat performance as core to customer experience.
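As a minimal sketch of recommendations 2 and 7 combined, the snippet below evaluates an XLO as "the share of probe measurements that met every budget" and fails a build when attainment drops below target. The thresholds mirror the methodology table; the XLO target, sample data, and pass/fail policy are illustrative assumptions.

```python
import sys

# Budgets mirror the recommended targets from the methodology table.
BUDGETS = {
    "document_complete_ms": 3000,
    "page_load_ms": 5000,
    "ttfb_ms": 200,
    "lcp_ms": 2500,
    "cls": 0.1,
}
XLO_TARGET = 0.95  # assumed: 95% of probe measurements must meet every budget

def xlo_attainment(probe_results):
    """Share of probe measurements that met every performance budget."""
    ok = sum(all(r[m] <= limit for m, limit in BUDGETS.items())
             for r in probe_results)
    return ok / len(probe_results)

if __name__ == "__main__":
    # Illustrative probe measurements; in CI you would load real data from
    # your monitoring platform here instead.
    results = [
        {"document_complete_ms": 2100, "page_load_ms": 4000, "ttfb_ms": 180,
         "lcp_ms": 2000, "cls": 0.02},
        {"document_complete_ms": 3400, "page_load_ms": 5200, "ttfb_ms": 420,
         "lcp_ms": 2900, "cls": 0.12},
    ]
    attainment = xlo_attainment(results)
    print(f"XLO attainment: {attainment:.1%}")
    if attainment < XLO_TARGET:
        sys.exit("XLO violated: budgets missed on too many probes")
```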

Visualizing dependencies with Catchpoint’s Internet Stack Map

Understanding performance isn't just about measuring page load or uptime. Catchpoint’s Internet Stack Map enables you to visualize the Internet dependencies impacting service or application performance at a glance.

The stack map below shows how various services—including DNS providers, CDNs, APIs, microservices, and third-party trackers—interact in real time to deliver a modern SaaS experience.

Internet Stack Map visualizing DNS, frontend, backend, and third-party dependencies

Powered by our industry-leading Internet Performance Monitoring platform, you can correlate global Internet Sonar signals with your own application tests, giving you a unified view of availability, service time, and incidents across the Internet Stack.

This level of visibility helps teams:

  • Reduce MTTI/MTTR and save money by diagnosing issues in minutes—not hours
  • Visualize the Internet Stack from the perspective of your applications, not scattered alerts
  • Investigate issues instantly by drilling into live, interactive maps
  • Share application status through simple, real-time dashboards anyone can understand

What is ECN?

Explicit Congestion Notification (ECN) is a longstanding mechanism in the IP stack that allows the network to help endpoints "foresee" congestion between them. The concept is straightforward: if a nearly congested piece of network equipment, such as an intermediate router, could tell the endpoints, "Hey, I'm almost congested! Can you two slow down your data transmission? Otherwise, I'm worried I will start to lose packets...", then the two endpoints could react in time to avoid the packet loss, paying only the price of a minor slowdown.

What is ECN bleaching?

ECN bleaching occurs when a network device at any point between the source and the endpoint clears or “bleaches” the ECN flags. Since you must arrive at your content via a transit provider or peering, it’s important to know if bleaching is occurring and to remove any instances.

With Catchpoint’s Pietrasanta Traceroute, we can send probes with IP-ECN values different from zero to check hop by hop what the IP-ECN value of the probe was when it expired. We may be able to tell you, for instance, that a domain is capable of supporting ECN, but an ISP in between the client and server is bleaching the ECN signal.
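As a toy illustration of the idea (not of Pietrasanta Traceroute itself, which inspects the ECN field hop by hop as probes expire), the Linux-only sketch below sends a UDP probe marked ECT(0) to a reflector you control and checks whether the reply still carries ECN bits. The host and port are placeholders, and we assume the reflector copies the received ToS byte into its replies (a plain echo service will not do this).

```python
import socket

IP_RECVTOS = 13   # Linux constant; not exposed by the socket module everywhere
ECT0 = 0b10       # ECN codepoints: Not-ECT=00, ECT(1)=01, ECT(0)=10, CE=11

# Placeholder reflector (RFC 5737 documentation range); assumed to copy the
# ToS byte of each received packet into its reply.
HOST, PORT = "198.51.100.7", 9000

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)  # mark outgoing probe ECT(0)
s.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)        # ask kernel for reply ToS
s.settimeout(5)
s.sendto(b"ecn-probe", (HOST, PORT))

data, ancdata, flags, addr = s.recvmsg(512, 64)
for level, ctype, cdata in ancdata:
    if level == socket.IPPROTO_IP:        # ToS byte of the echoed packet
        ecn = cdata[0] & 0b11             # low two bits are the ECN field
        print(f"reply ECN bits: {ecn:02b}",
              "(00 means the marking was bleached somewhere on the path)")
```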

Why is ECN important to L4S?

ECN is an essential requirement for L4S since L4S uses an ECN mechanism to provide early warning of congestion at the bottleneck link by marking a Congestion Experienced (CE) codepoint in the IP header of packets. After receipt of the packets, the receiver echoes the congestion information to the sender via acknowledgement (ACK) packets of the transport protocol. The sender can use the congestion feedback provided by the ECN mechanism to reduce its sending rate and avoid delay at the detected bottleneck.

ECN and L4S need to be supported by the client and server, but also by every device within the network path. It only takes one instance of bleaching to remove the benefit of ECN: if any network device between the source and endpoint clears the ECN bits, the sender and receiver won't find out about the impending congestion. Our measurements examine how often ECN bleaching occurs and where in the network it happens.

Why are ECN and L4S in the news all of a sudden?

ECN has been around for a while, but with the growth in data volumes and rising user-experience expectations, particularly for streaming, it has become vital to the success of L4S, and major technology companies worldwide are investing heavily in it.

L4S aims to reduce packet loss (and hence the latency caused by retransmissions) and to make services as responsive as possible. We have also seen significant momentum from major companies lately, which always helps push a new protocol toward deployment.

What is the impact of ECN bleaching?

If ECN bleaching is found, this means that any methodology built on top of ECN to detect congestion will not work.

Thus, you cannot rely on the network to avoid congestion before it occurs: potential congestion is marked with the Congestion Experienced (CE = 3) codepoint when detected, and bleaching wipes out that information.

What are the causes behind ECN bleaching?

The causes of ECN bleaching are numerous and hard to identify, ranging from network equipment bugs to questionable traffic-engineering choices, packet manipulation, and human error.

For example, bleaching can result from mistakes such as overwriting the whole ToS field when manipulating DSCP instead of changing only the DSCP bits (remember that DSCP and ECN together compose the ToS field in the IP header).

How can you debug ECN bleaching?

Nowadays, network operators have a good number of tools to debug ECN bleaching from their end (such as those listed here) – including Catchpoint’s Pietrasanta Traceroute. The large-scale measurement campaign presented here is an example of a worldwide campaign to validate ECN readiness. Individual network operators can run similar measurement campaigns across networks that are important to them (for example, customer or peering networks).

What is the testing methodology?

The findings presented here are based on running tests using Catchpoint's enhanced traceroute, Pietrasanta Traceroute, through the Catchpoint IPM portal to collect data from over 500 nodes located in more than 80 countries all over the world. By running traceroutes on Catchpoint's global node network, we are able to determine which ISPs, countries, and/or specific cities are having issues when passing ECN-marked traffic. The results demonstrate the view of ECN bleaching globally from Catchpoint's unique, partial perspective. To our knowledge, this is one of the first measurement campaigns of its kind.

Beyond the scope of this campaign, Pietrasanta Traceroute can also be used to determine if there is incipient congestion and/or any other kind of alteration and the level of support for more accurate ECN feedback, including if the destination transport layer (either TCP or QUIC) supports more accurate ECN feedback.

The content of this page is Copyright 2024 by Catchpoint. Redistribution of this data must retain the above notice (i.e. Catchpoint copyrighted or similar language), and the following disclaimer.

THE DATA ABOVE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS OR INTELLECTUAL PROPERTY RIGHT OWNERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THIS DATA OR THE USE OR OTHER DEALINGS IN CONNECTION WITH THIS DATA.

We are happy to discuss or explain the results if more information is required. Further details per region can be released upon request.