2025 Athletic Footwear and Apparel Digital Experience Benchmark Report

Executive Summary

Athletic brands are facing a troubling gap between what their dashboards report and what customers actually experience online. This risk is magnified heading into one of the busiest retail seasons of the year. This report analyzes the top 20 athletic footwear and apparel brands by revenue, measuring performance from more than 120 locations around the globe and ranking each brand by its Digital Experience Score, a holistic view of website and application performance from the user's perspective.

Digital experience is no longer optional — it’s survival. Poor website performance doesn’t just kill online sales; it destroys in-store revenue too. Nike's digital platforms generate $12.1 billion of its $49.3 billion total revenue. Meanwhile, digitally influenced sales now represent 62% of all U.S. retail and are projected to reach 70% by 2027. When your website fails, customers don't wait—they buy from competitors.

A single slow-loading page can hand customers directly to competitors who've mastered digital-first performance. The findings in this report challenge assumptions, expose blind spots, and reveal why only a handful of brands are truly delivering for customers.

Key Takeaways

Experience Score Rankings

The table below shows all evaluated athletic brand websites ranked by their overall Digital Experience Score.

Understanding the Digital Experience Score

The Digital Experience Score is a single, user-centric metric (0–100) that reflects how customers actually experience a brand’s digital touchpoints. Unlike raw infrastructure metrics, it combines device, network, and application factors into a holistic view of experience quality.

How it’s calculated

The score is built from three dimensions:

  • Endpoint score – end-user device performance (e.g., CPU, memory constraints)
  • Network score – connectivity quality (packet loss, latency, round-trip time)
  • Application score – web/app performance (load times, CLS, responsiveness, error rates)

How to read the scores

  • Leading (90–100): Seamless, fast, globally consistent experiences
  • Strong (83–89): Solid performance, with room to optimize consistency or stability
  • Competitive (66–82): Acceptable but with risks, especially on mobile or in certain regions
  • Challenged (<66): Digital friction is likely hurting satisfaction, conversion, and loyalty
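To make the tiers concrete, here is a minimal Python sketch of how three dimension scores could roll up into a single 0–100 score and tier. The equal weighting and the function names are illustrative assumptions, not the published model.

```python
# Illustrative only: the real Digital Experience Score weighting is not
# spelled out here, so equal weights are assumed for demonstration.

def digital_experience_score(endpoint: float, network: float, application: float) -> float:
    """Combine the three dimension scores (each 0-100) into one 0-100 score."""
    return (endpoint + network + application) / 3  # assumed equal weighting

def tier(score: float) -> str:
    """Map a score onto the bands defined above."""
    if score >= 90:
        return "Leading"
    if score >= 83:
        return "Strong"
    if score >= 66:
        return "Competitive"
    return "Challenged"

score = digital_experience_score(endpoint=88, network=93, application=90)
print(round(score, 1), tier(score))  # 90.3 Leading
```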

For full details, see our guide to the Digital Experience Score.

Insight 1: Performance separates winners from losers

Most brands are underperforming, and revenue doesn’t guarantee performance.

Only 20% of the tested brands delivered exceptional digital experiences.  

  • Just four — Fila (#1, 96), Under Armour (#2, 95), New Balance (#3, 91), and HOKA (#4, 89) — land in the “Leading” or “Strong” tiers.  
  • The remaining 16 brands fall into the “Competitive” or “Challenged” tiers; all but Gymshark (77) score below 66.

The industry's two largest players dramatically underperform:

  • Nike, despite generating nearly $50 billion annually, ranked #16 with a 52.6 experience score.  
  • Adidas was firmly mid-table at #11 with a 57.8 experience score and a concerning 92.2% availability.

So what?

Some might ask: if even the leaders score modestly, does this matter? The answer is yes, because disruption doesn’t wait. Challenger brands like On and HOKA are already growing fast by capturing the revenue that giants leave behind.

When giants stumble, losses are amplified at global scale and create a strategic opening, one the disruptors are already exploiting.

The performance crisis is real  

Most brands simply aren’t fast enough. Only a handful clear the 3-second industry threshold while the majority crawl well beyond customer tolerance.

  • Only 3 brands (15%) load in under 3 seconds, the threshold customers expect
  • The median site takes 6.6 seconds, more than double that standard
  • 70% of brands exceed 5 seconds
  • 85% of brands fail the 3-second industry standard

Speed and experience move in perfect harmony

The data makes it clear: faster sites consistently deliver better experiences. The pattern is too strong and too consistent to be coincidence.

The faster the site, the higher the experience score

  • Correlation between load times and experience scores: -0.829 — the slower a site loads, the worse the customer experience
  • The performance elite — New Balance, Under Armour, Fila, HOKA — average 2.6s load times and a 92.6 score.
  • Bottom performers average 10.8s loads and a 50.7 score — load times roughly 320% slower than the elite’s.
  • The fastest performer loads 8.2x faster than the slowest.
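The correlation quoted above is a standard Pearson coefficient. A minimal sketch of the computation, using hypothetical placeholder numbers rather than the measured dataset:

```python
# Pearson correlation between load time and experience score.
# The arrays below are hypothetical placeholders, not the benchmark data.
from statistics import correlation  # Python 3.10+

load_times_s = [2.1, 2.6, 2.8, 3.0, 6.6, 8.4, 10.8, 12.9]  # hypothetical
experience   = [96, 95, 91, 89, 64, 58, 53, 50]            # hypothetical

print(f"Pearson r = {correlation(load_times_s, experience):.3f}")
# Strongly negative, in the spirit of the -0.829 reported above.
```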

Why the performance elite wins

The top performers haven't just optimized their websites—they've recognized that in today's instant-gratification economy, slow is the new down. When customers can abandon a slow site and reach a competitor in seconds, speed becomes the ultimate competitive advantage.

Curious how your brand compares?  

Get a Free Retail Assessment with one of our Internet Performance Monitoring experts. 

Insight 2: The giants of the industry are struggling to stay online — and it’s costing them

The biggest names (Nike, Adidas, Puma, and Asics) operate with subpar reliability, well below enterprise standards.

  • The gap is huge: Gymshark runs at 99.93%, Adidas just 92.24%
  • The giants are in crisis: Nike, Adidas, Puma, and Asics all fall under 93% availability.
  • Challengers excel: Gymshark, On, and Brooks prove near-perfect uptime is possible, consistently above 99.8%.

Challengers hit near-perfect uptime. Giants like Adidas and Nike lag dangerously behind.

The downtime bill for giants

When availability slips below enterprise standards, the financial impact is staggering. Nike and Adidas, the two biggest names in athletic retail, are losing hundreds of millions each year simply because their sites don’t stay online.

  • Nike (92.9% availability)
    • ~51 hours of downtime per month
    • $17M lost monthly
    • $200M+ lost annually
  • Adidas (92.2% availability)
    • ~56 hours of downtime per month
    • $19M lost monthly
    • $225M+ lost annually

* Downtime costs calculated using Gartner’s $5,600/minute industry benchmark.
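That footnote's arithmetic is easy to reproduce. A quick sketch, assuming a 730-hour average month and applying the Gartner benchmark directly:

```python
# Downtime cost from availability alone, per Gartner's $5,600/minute benchmark.
HOURS_PER_MONTH = 730      # average month (8,760 hours / 12)
COST_PER_MINUTE = 5_600    # USD, Gartner benchmark cited above

def downtime_cost(availability_pct: float) -> tuple[float, float, float]:
    hours = (1 - availability_pct / 100) * HOURS_PER_MONTH
    monthly = hours * 60 * COST_PER_MINUTE
    return hours, monthly, monthly * 12

for brand, avail in [("Nike", 92.9), ("Adidas", 92.2)]:
    hours, monthly, annual = downtime_cost(avail)
    print(f"{brand}: ~{hours:.0f} h/month, ${monthly/1e6:.0f}M/month, ${annual/1e6:.0f}M/year")
# Nike: ~52 h/month, $17M/month, $209M/year
# Adidas: ~57 h/month, $19M/month, $230M/year
```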

The availability paradox

Perfect uptime doesn’t guarantee good experiences. Over half of high-uptime brands still fail customers.

  • 16 of 20 brands (80%) run at ≥99.5% availability.
  • Yet more than half still deliver weak experiences (<66).
  • Lululemon, Decathlon, and Skechers illustrate this paradox clearly: high uptime, weak experiences.

Insight 3: The monitoring blind spot - when location determines reality

In the same city, for the same website, real customers waited up to 15× longer than cloud dashboards suggested.

Cloud dashboards don’t match customer reality. In our snapshot tests of Nike, Adidas, and Lululemon, the same websites showed radically different load times, sometimes 8–15× slower experiences, depending on where performance was measured. Cloud agents running in controlled, data-center environments tended to show fast results, while last-mile monitoring (simulating real users on consumer ISPs) exposed much slower, less reliable experiences.

Same-city monitoring blind spots

This chart shows how monitoring vantage point alone can change the performance story.  

Cloud monitoring shows 3–5s load times, but local ISP users in the same cities waited 15–16s — up to 14× slower.

Why this matters: same-city comparisons remove geography. The only difference is how monitoring is done:

  • Cloud and backbone vantage points run on optimized infrastructure and premium routes.
  • Real customers connect over broadband, mobile, or satellite with variable performance.  

Lululemon: The global expansion blind spot

3.9s from a U.S. cloud vantage vs. 19.7s for PLDT broadband users in the Philippines — a 15.8s gap.

Beyond city-level tests, the gaps widen globally. U.S.-based monitoring creates a false sense of security: what looks fine at home is unusable abroad. These aren’t edge cases; they appear in key expansion markets.

The last-mile ISP lottery

Even within one city, user experience can vary wildly based on ISP.

  • AWS Cloud shows ~4.6s. Vodafone users load in 2.8s (39% faster), while Sky customers wait 16.3s (3.6× slower).
  • Nike users on Sky wait 28s vs. 3.5s for Vodafone — an 8× difference in the same city.

The blind spot: Cloud dashboards flatten these differences into a “steady” number, masking the 5–8× swings real customers feel.
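A tiny worked example of that flattening, reusing the same-city figures quoted above:

```python
# Averaging across vantage points produces one steady-looking number
# that hides the per-ISP spread real customers experience.
from statistics import mean

load_s = {"AWS cloud": 4.6, "Vodafone": 2.8, "Sky": 16.3}  # figures quoted above

print(f"blended average: {mean(load_s.values()):.1f}s")     # 7.9s, looks 'fine'
best, worst = min(load_s.values()), max(load_s.values())
print(f"real spread: {best}s to {worst}s ({worst/best:.1f}x swing)")  # 5.8x swing
```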

Insight 4: When dashboards lie - the technical vs. reality gap

The most dangerous assumption in digital performance management? That green dashboards mean happy customers.

We ranked all 20 brands twice: once by customer experience, once by traditional technical metrics. The results expose a disconnect that explains why so many digital teams fail despite "perfect" infrastructure monitoring.

| Exp Rank | Brand | Experience Score | Traditional Rank | Rank Difference |
|---|---|---|---|---|
| 1 | Fila Holdings | 96 | 8 | +7 |
| 2 | Under Armour | 95 | 3 | +1 |
| 3 | New Balance | 91 | 1 | -2 |
| 4 | HOKA | 89 | 2 | -2 |
| 5 | Gymshark | 77 | 6 | +1 |
| 6 | Brooks Running | 64 | 9 | +3 |
| 7 | On | 64 | 14 | +7 |
| 8 | Asics | 62 | 18 | +10 |
| 9 | Columbia Sportswear | 59 | 16 | +7 |
| 10 | Saucony | 59 | 12 | +2 |
| 11 | Adidas | 58 | 17 | +6 |
| 12 | Puma | 57 | 10 | -2 |
| 13 | Lululemon | 56 | 7 | -6 |
| 14 | Mizuno | 56 | 11 | -3 |
| 15 | Decathlon | 54 | 5 | -10 |
| 16 | Nike | 53 | 19 | +3 |
| 17 | Anta Sports | 51 | 13 | -4 |
| 18 | Li-Ning | 51 | 20 | +2 |
| 19 | Reebok | 50 | 15 | -4 |
| 20 | Skechers | 50 | 4 | -16 |

  • Positive Rank Difference: real users experience the site better than the technical metrics suggest (over‑performer).
  • Negative Rank Difference: the site looks better on paper than it feels to users (under‑performer).

What are technical metrics?

In addition to the Digital Experience Score, we evaluated each brand’s website across eight core technical metrics. These capture how a site performs from an infrastructure and browser perspective, and are commonly used in digital performance monitoring.

Metrics tested:

  • Availability (16%) – Uptime percentage (target ≥99.9%)
  • Document Complete (12%) – Time until all key page elements are loaded (≤3s)
  • Page Load Time (12%) – Time until entire page is fully loaded (≤3s)
  • Response Time (12%) – Time to complete a full request (≤500ms)
  • Time to First Byte (12%) – Time to receive first byte from server (≤200ms)
  • Largest Contentful Paint (12%) – Time to load main content block (≤2.5s)
  • Cumulative Layout Shift (12%) – Visual layout stability (<0.1)
  • DNS Lookup Time (12%) – Time to resolve domain to IP address (≤100ms, ideally <50ms)

Ranking methodology:

  • Availability was weighted more heavily (16%) because downtime directly translates into lost sales and trust.
  • The other seven metrics were weighted equally (12% each) to reflect the balance of speed, responsiveness, and stability that shape customer experience.
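As a sketch, here is how those weights produce a composite technical score, and how the Rank Difference column in the table above is derived. The normalization step (scaling each raw metric to 0–100 against its target) is an assumption for illustration:

```python
# Weighted technical score per the methodology above. Assumes each metric
# has already been normalized to a 0-100 score against its target.
WEIGHTS = {
    "availability": 0.16,            # weighted highest: downtime = lost sales
    "document_complete": 0.12,
    "page_load_time": 0.12,
    "response_time": 0.12,
    "time_to_first_byte": 0.12,
    "largest_contentful_paint": 0.12,
    "cumulative_layout_shift": 0.12,
    "dns_lookup_time": 0.12,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%

def technical_score(normalized: dict[str, float]) -> float:
    return sum(WEIGHTS[m] * normalized[m] for m in WEIGHTS)

# Rank Difference as used in the table: traditional rank minus experience rank.
def rank_difference(traditional_rank: int, experience_rank: int) -> int:
    return traditional_rank - experience_rank

print(rank_difference(8, 1))   # Fila: +7 (over-performer)
print(rank_difference(4, 20))  # Skechers: -16 (under-performer)
```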

The dashboard deception: when good metrics mask bad experiences

40% of athletic brands shift by 5+ places between technical and experience rankings — proving dashboards don’t tell the whole story.

This gap explains why digital teams struggle to improve business metrics despite green dashboards—they're optimizing for the wrong measures while customers suffer in silence.

The green dashboards, red reality club

Some brands shine in technical checklists but fall apart in real life.  

Skechers (-16), Decathlon (-10), and Lululemon (-6) all look strong on paper but deliver weak experiences in practice.

The hidden gems: modest tech, exceptional experience

Asics: the biggest comeback story

  • Technical rank: #18 → Experience rank: #8 (+10 positions)
  • Shows how customer-focused optimization beats raw infrastructure

Fila: from middle pack to market leader

  • Technical rank: #8 → Experience rank: #1 (+7 positions)
  • Demonstrates that user-centric execution trumps technical perfection

The alignment champions: strong on both fronts

Only two brands successfully convert technical strength into customer experience:

  • Under Armour: Technical #3 → Experience #2 (+1 position)
  • New Balance: Technical #1 → Experience #3 (-2 positions)

Both prove that well-designed systems can excel across multiple performance dimensions simultaneously.

Conclusion: Digital experience decides the winners

Across performance, availability, technical metrics, and monitoring blind spots, one truth cuts through: what customers actually experience online is often far worse than dashboards suggest.

  • Performance separates winners from losers: only three brands load in under 3 seconds, and 85% fail the industry benchmark.
  • Availability is table stakes: giants like Nike and Adidas suffer hundreds of millions in downtime costs while smaller challengers stay online.
  • Technical metrics mislead: nearly half of brands look “green” on dashboards yet fail customers in practice.
  • Cloud monitoring hides reality: in the same city, real users wait up to 15× longer than cloud dashboards report.

The lesson is clear: digital excellence doesn’t come from infrastructure metrics or perfect uptime alone. It comes from monitoring and optimizing the real customer journey, across devices, networks, and geographies.

In today’s fragmented and unforgiving market, digital experience is no longer optional. It is the ultimate competitive advantage.

Testing Methodology

This benchmark evaluated 20 of the world’s largest athletic footwear and apparel companies, selected from global revenue rankings to ensure representation of the industry’s most influential brands.

Timeframe

All data was collected between August 1 and August 31, 2025, providing a consistent one-month snapshot of real-world performance across all monitored sites.

Monitored Pages

We tested the public homepages of each company — the first touchpoint for most shoppers. This provided a standardized basis for comparison, capturing how a typical visitor experiences each brand’s digital storefront.

Testing Locations

Tests were conducted from 123 global monitoring locations across six continents:

  • 26 North American agents
  • 97 international agents, with locations including the UK, Germany, India, Japan, Australia, South Africa, and Brazil

Agent types used

Catchpoint’s Global Agent Network includes cloud, wireless, last-mile ISP, and backbone agents, each offering a distinct vantage point to measure performance across regions:

  • Cloud agents operate within major public cloud providers (AWS, Azure, Google, etc.), detecting performance issues inside cloud data centers.
  • Wireless agents simulate real-world mobile access (3G/4G/5G) across carriers, revealing issues unique to cellular users.
  • Last-mile ISP agents run on actual residential broadband networks, capturing the true end-user experience with each local ISP.
  • Backbone agents are placed in Tier 1 and Tier 2 ISPs, providing a core Internet perspective to spot global trends, routing anomalies, and CDN-level outages.  

Why variety matters

Using these varied agents delivers comprehensive visibility, ensuring both global averages and regional nuances in performance are accurately detected and differentiated.

What is ECN?

Explicit Congestion Notification (ECN) is a longstanding mechanism in the IP stack that lets the network help endpoints "foresee" congestion between them. The concept is straightforward: if a nearly congested piece of network equipment, such as an intermediate router, could tell the endpoints, "Hey, I'm almost congested! Can you two slow down your data transmission? Otherwise, I’m worried I will start to lose packets...", then the two endpoints can react in time to avoid the packet loss, paying only the price of a minor slowdown.
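In packet terms, ECN lives in the two low-order bits of the IP header's ToS/traffic-class octet (RFC 3168). A simplified sketch of the four codepoints and of a router re-marking a packet instead of dropping it:

```python
# ECN occupies the bottom two bits of the ToS/traffic-class octet (RFC 3168).
ECN_CODEPOINTS = {
    0b00: "Not-ECT (not ECN-capable)",
    0b01: "ECT(1)  (ECN-capable transport)",
    0b10: "ECT(0)  (ECN-capable transport)",
    0b11: "CE      (Congestion Experienced)",
}

def ecn_bits(tos: int) -> int:
    """Extract the ECN codepoint from the ToS octet."""
    return tos & 0b11

tos = 0b000000_10                     # a packet marked ECT(0)
tos = (tos & ~0b11) | 0b11            # congested router re-marks it CE
print(ECN_CODEPOINTS[ecn_bits(tos)])  # CE      (Congestion Experienced)
```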

What is ECN bleaching?

ECN bleaching occurs when a network device at any point between the source and the endpoint clears or “bleaches” the ECN flags. Since you must arrive at your content via a transit provider or peering, it’s important to know if bleaching is occurring and to remove any instances.

With Catchpoint’s Pietrasanta Traceroute, we can send probes with IP-ECN values different from zero to check hop by hop what the IP-ECN value of the probe was when it expired. We may be able to tell you, for instance, that a domain is capable of supporting ECN, but an ISP in between the client and server is bleaching the ECN signal.

Why is ECN important to L4S?

ECN is an essential requirement for L4S since L4S uses an ECN mechanism to provide early warning of congestion at the bottleneck link by marking a Congestion Experienced (CE) codepoint in the IP header of packets. After receipt of the packets, the receiver echoes the congestion information to the sender via acknowledgement (ACK) packets of the transport protocol. The sender can use the congestion feedback provided by the ECN mechanism to reduce its sending rate and avoid delay at the detected bottleneck.

ECN and L4S need to be supported by the client and server, but also by every device within the network path. It only takes one instance of bleaching to remove the benefit of ECN: if any network device between the source and endpoint clears the ECN bits, the sender and receiver won’t find out about the impending congestion. Our measurements examine how often ECN bleaching occurs and where in the network it happens.

Why are ECN and L4S in the news all of a sudden?

ECN has been around for a while, but with the growth in data volumes and rising expectations for user experience, particularly for streaming, ECN is vital for L4S to succeed, and major technology companies worldwide are making significant investments.

L4S aims to reduce packet loss - and hence the latency caused by retransmissions - and to provide as responsive a set of services as possible. In addition, we have seen significant momentum from major companies lately, which always helps push a new protocol toward deployment.

What is the impact of ECN bleaching?

If ECN bleaching is found, this means that any methodology built on top of ECN to detect congestion will not work.

Thus, you cannot rely on the network to achieve what you want to achieve, i.e., avoiding congestion before it occurs – potential congestion is marked with the Congestion Experienced (CE = 3) codepoint when detected, and bleaching wipes that information out.

What are the causes behind ECN bleaching?

The causes behind ECN bleaching are multiple and hard to identify, ranging from network equipment bugs to debatable traffic engineering choices, packet manipulations, and human error.

For example, bleaching could occur from mistakes such as overwriting the whole ToS field when dealing with DSCP instead of changing only DSCP (remember that DSCP and ECN together compose the ToS field in the IP header).
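That failure mode is easy to see in bit arithmetic. A hypothetical sketch of a DSCP rewrite that accidentally bleaches ECN next to one that preserves it:

```python
# DSCP is the top six bits of the ToS octet, ECN the bottom two.
# Overwriting the whole octet to set DSCP silently zeroes the ECN bits.
def set_dscp_buggy(tos: int, dscp: int) -> int:
    return dscp << 2                        # clobbers ECN -> bleaching

def set_dscp_correct(tos: int, dscp: int) -> int:
    return (dscp << 2) | (tos & 0b11)       # rewrite DSCP, keep ECN

tos = (0b101110 << 2) | 0b11                # DSCP EF, ECN = CE (congestion seen)
print(bin(set_dscp_buggy(tos, 0b001010)))   # 0b101000: ECN now 00, CE mark lost
print(bin(set_dscp_correct(tos, 0b001010))) # 0b101011: CE mark preserved
```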

How can you debug ECN bleaching?

Nowadays, network operators have a good number of tools to debug ECN bleaching from their end (such as those listed here) – including Catchpoint’s Pietrasanta Traceroute. The large-scale measurement campaign presented here is an example of a worldwide campaign to validate ECN readiness. Individual network operators can run similar measurement campaigns across networks that are important to them (for example, customer or peering networks).

What is the testing methodology?

The findings presented here are based on running tests using Catchpoint’s enhanced traceroute, Pietrasanta Traceroute, through the Catchpoint IPM portal to collect data from over 500 nodes located in more than 80 countries. By running traceroutes on Catchpoint’s global node network, we are able to determine which ISPs, countries, and/or specific cities are having issues when passing ECN-marked traffic. The results represent the view of ECN bleaching globally from Catchpoint’s unique, partial perspective. To our knowledge, this is one of the first measurement campaigns of its kind.

Beyond the scope of this campaign, Pietrasanta Traceroute can also be used to determine if there is incipient congestion and/or any other kind of alteration and the level of support for more accurate ECN feedback, including if the destination transport layer (either TCP or QUIC) supports more accurate ECN feedback.

The content of this page is Copyright 2024 by Catchpoint. Redistribution of this data must retain the above notice (i.e. Catchpoint copyrighted or similar language), and the following disclaimer.

THE DATA ABOVE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS OR INTELLECTUAL PROPERTY RIGHT OWNERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THIS DATA OR THE USE OR OTHER DEALINGS IN CONNECTION WITH THIS DATA.

We are happy to discuss or explain the results if more information is required. Further details per region can be released upon request.