Aligning Web Performance Goals with Business Initiatives

Maintaining a healthy balance between IT and the business by meeting the needs of both your users and your IT team is imperative when building a web performance strategy.

The following post originally appeared as a guest byline on Multichannel Merchant.

Ever since the internet and eCommerce websites were created, the needs of the end users have been at the forefront of the mind of every IT operations professional. It’s a perfectly logical prioritization to make; the customer is always right, so it only makes sense to put the end users’ needs above all others. And hey, at the end of the day we’re end users as well, so we know how our own users want and deserve to be treated.

However, that only tells part of the story. In addition to the needs of the end users, the needs of the business and IT teams within the company must be taken into account as well. After all, these are the people who are ultimately setting the goals for the site, and making sure that those goals are reached. With responsibilities like that, the need for efficiency and prioritization becomes clear. It’s not that the customers’ needs should take a backseat; rather, their needs will be better served if the business and IT teams are able to meet their goals in the fastest and most efficient manner possible.

Here are four steps that any company can take to accomplish this:

Filter out the ambient noise

You can’t fix what you don’t measure, nor can you fix what you can’t identify. Many web performance initiatives produce a firehose effect when the data starts coming in, but too much information is often just as bad as no information. Sometimes a problem will even be detected, but the sheer volume of data means that hours of sifting through it are required to pinpoint the root cause of that problem and solve it.

The bottom line is that data alone is not enough; it must be accompanied by analytics that isolate the various systems making up the complex infrastructure of a page. This also helps you establish baselines, identify historical trends, and see the impact of different optimization techniques.

It’s also necessary to save that data and export it to the company’s own internal systems. The ability to analyze data going back as far as 2-3 years will help you to compare trends month-over-month and year-over-year in order to understand seasonal impact, and help to identify problems that may only arise once or twice in any given calendar year.
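As a rough sketch of the kind of comparison this retained data enables, the Python snippet below computes a year-over-year change for one month. The monthly load-time figures and the store they come from are hypothetical, standing in for data exported to a company's internal systems.

```python
# Hypothetical monthly median page-load times (seconds) exported to an
# internal store; comparing the same month across years exposes seasonal
# effects that a single year of data would hide.
history = {
    ("2013", "Nov"): 2.1,
    ("2014", "Nov"): 2.6,
    ("2014", "Oct"): 2.0,
}

def year_over_year(history, month, year, prev_year):
    """Percent change in load time for `month` between two years."""
    now, then = history[(year, month)], history[(prev_year, month)]
    return (now - then) / then * 100

change = year_over_year(history, "Nov", "2014", "2013")
print(f"Nov load time changed {change:+.0f}% year-over-year")
```

The same lookup extended month-by-month would surface problems that only recur once or twice in a given calendar year, such as holiday-season slowdowns.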

Proactive testing around the clock

Your monitoring strategy needs to deliver data on performance as it’s experienced by end users across major U.S. markets. The key, of course, is to catch problems before those users encounter them through round-the-clock testing.

Synthetic (or active) monitoring is extremely useful for achieving this goal, alerting businesses to problems within their infrastructure by testing from a ‘clean lab’ environment. By simulating web traffic to show how quickly (or slowly) end users are able to reach a site or a particular page, synthetic monitoring offers a proactive approach.
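A minimal sketch of such a scripted check, using only the Python standard library, is shown below. The URL and 2-second threshold are illustrative; a real synthetic monitoring product would run probes like this on a schedule from many clean-lab locations rather than a single machine.

```python
import time
import urllib.request

def synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Fetch a page the way a scripted monitor would and record timing.

    A simplified, single-location probe; commercial synthetic monitoring
    runs tests like this around the clock from many vantage points.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            elapsed = time.monotonic() - start
            return {"ok": resp.status == 200, "seconds": elapsed, "bytes": len(body)}
    except Exception as exc:  # DNS failure, timeout, connection refused, etc.
        return {"ok": False, "seconds": time.monotonic() - start, "error": str(exc)}

# Flag the check if the page is unreachable or slower than an example 2 s budget.
result = synthetic_check("https://www.example.com/")
if not result["ok"] or result["seconds"] > 2.0:
    print("ALERT:", result)
```

Because the probe runs whether or not any customer is on the site, it can catch an outage before real users encounter it.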

However, synthetic monitoring gives little insight into what users are actually doing once they enter your site, or into the impact of any outages that do occur. By collecting data from actual end users (known as Real User Measurement, or RUM) in conjunction with your synthetic strategy, site managers can identify areas in which to optimize the user experience once end users actually enter the site. After all, there’s no use in having a highly reliable, fast homepage if your critical conversion paths deeper within the site are slow. Nor is there any point in only measuring user activity if you can’t catch an outage before it impacts users.
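The point about conversion paths can be made concrete with a small aggregation sketch. The beacon data below is invented for illustration, but the pattern — grouping real-user load times by page and reporting medians and percentiles — is how RUM exposes a slow checkout hiding behind a fast homepage.

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile; enough for a monitoring dashboard sketch."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Hypothetical page-load times (seconds) beaconed from real users' browsers,
# grouped by page so slow conversion paths are visible, not just the homepage.
rum_beacons = {
    "/": [0.8, 0.9, 1.1, 1.0, 0.7, 1.2, 0.9, 3.5],
    "/checkout": [2.9, 3.1, 4.2, 3.8, 5.0, 3.3, 4.7, 3.6],
}

for page, samples in rum_beacons.items():
    print(f"{page}: median={statistics.median(samples):.2f}s "
          f"p95={percentile(samples, 95):.2f}s")
```

In this made-up data the homepage looks healthy at the median while the checkout path is several times slower, which is exactly the blind spot a synthetic-only strategy would miss.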

This approach can also be done in any international markets that represent growth potential for your company – China and India being two major markets where eCommerce companies are trying to establish a foothold – so that you can monitor your performance in the face of the obstacles that are unique to different regions around the globe.

Be wary of who you let into your house

A modern website – particularly that of an eCommerce or media company – is often rife with third party tags and elements that can have a marked effect on the performance of the page. Hosting all of these third parties means that you also have to monitor their performance as well as your own systems, because the two are interconnected; poor performance on the part of just one third party service is sometimes all it takes to drag down performance for a whole site. And as the number of these services grows, the risks grow higher, and websites become harder and more complex to manage.

With that in mind, clear performance baselines are needed for every third party element, with SLAs in place for when they fail to meet the minimum requirements. By rigorously monitoring these elements alongside your own, you can quickly and easily demonstrate the performance of the various hosts on your pages. This allows you to understand overall web performance both before and after a third party service is added, and to work with third party providers to continually optimize performance. And, in the event of a third party failure, companies can implement contingency plans quickly, suspending the service temporarily from the site.
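As a sketch of what checking those baselines might look like, the snippet below compares per-provider response-time samples against illustrative SLA thresholds. Provider names, sample values, and thresholds are all hypothetical.

```python
# Hypothetical per-provider response-time samples (ms), collected by the
# same monitoring that watches first-party systems; thresholds stand in
# for negotiated SLA baselines.
third_party_samples = {
    "analytics-tag": [110, 95, 130, 120],
    "ad-network": [900, 1500, 2400, 1800],
    "cdn-fonts": [40, 55, 60, 45],
}
sla_ms = {"analytics-tag": 300, "ad-network": 1000, "cdn-fonts": 200}

def sla_breaches(samples, slas):
    """Return providers whose average response time exceeds their SLA."""
    breaches = {}
    for name, times in samples.items():
        avg = sum(times) / len(times)
        if avg > slas[name]:
            breaches[name] = avg
    return breaches

# A contingency plan might temporarily suspend each breaching service.
for name, avg in sla_breaches(third_party_samples, sla_ms).items():
    print(f"{name}: avg {avg:.0f} ms exceeds SLA of {sla_ms[name]} ms")
```

A report like this, generated continuously, is what lets you show a provider its impact on the page before and after its tag was added.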

Stay in bed when you can

Ask any IT professional what their biggest job-related pet peeve is, and the most likely answer you’ll get is being woken up in the middle of the night by what turns out to be a false alert. Often these false positives are the result of network problems as opposed to website problems, or problems with a third party as opposed to a first party. Having an alerting system in place that can tell the difference between these and verify failed tests before issuing an alert can be the difference between a night of restful sleep and one of scrambling for no good reason.
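The verify-before-alerting idea can be sketched in a few lines. The retry logic below is a deliberately simplified stand-in: production alerting systems would also retest from a different network location to rule out network problems masquerading as site problems.

```python
import time

def verified_alert(check, retries=2, delay=0.0):
    """Re-run a failing check before alerting, to filter transient blips.

    `check` is any zero-argument callable returning True on success.
    `delay` would be seconds between retries in a real system.
    """
    if check():
        return None  # healthy, no alert
    for _ in range(retries):
        time.sleep(delay)
        if check():
            return None  # transient failure, alert suppressed
    return "ALERT: failure confirmed after retries"

# A flaky check that fails once and then recovers: the retry absorbs the
# blip, so no one gets woken up.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    return state["calls"] > 1

print(verified_alert(flaky))  # prints None
```

Only a failure that persists across every retry (and, in practice, across vantage points) would page the on-call engineer.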

The ultimate goal is to empower IT teams to find and fix the widest range of problems possible in the least amount of time. Having those teams overwhelmed with excess data in order to fix a problem is a colossal waste of time and resources, as is having them spending time on problems that are ultimately outside of their control. By freeing that time up to work on initiatives to improve the existing platforms, all three groups – the IT department, the business team, and the end users – will be able to enjoy a richer, more rewarding web experience.

