Last week, I took time from my “day job” to spend at the Gartner ITOSS conference in Orlando, and it was worth the investment. From the education sessions and keynotes to engaging with our customers to meeting with Gartner analysts, I walked away with a sense of rejuvenation regarding the future role of IT operations and infrastructure in digital business.
My key takeaways can be summed up as follows:
- The monitoring requirements of the “new world of digital business” mean your tooling strategy must change
- I&O must change focus from building and running to delivering on business outcomes
Change your tooling strategy
Let’s start with what’s driving the big changes in monitoring requirements: the cloud. In his presentation, “The Cloud Computing Scenario – The Last and the Next 10 Years,” Milind Govenkar described cloud computing as a “virtualized infrastructure” and shared Gartner’s forecast that by 2020 a quarter of all IT spend will be in the cloud. He added that today most companies, with the exception of what Gartner defines as “Mode 2” companies (which I take to mean companies born in the cloud, such as Google, LinkedIn, and Salesforce.com), are in a “discovery, cloud first” state of maturity. But this will change dramatically, and fast. Milind explained that the primary driver over the next ten years will be “exploiting new capabilities with a cloud only” approach to gain a competitive advantage. And according to Gartner analyst Hank Marquis, who co-presented the Gartner keynote “Digital Business Platforms: The I&O Perspective,” by 2020, “50% of CEOs say their industries will be digitally transformed.” We already see this happening across industries with the rise of fintech, with software vendors shifting to cloud-only deployment such as Office 365, and of course with the biggest disrupter in retail and beyond, Amazon. In a signal of the coming disruption, both Milind and Hank admonished I&O leaders to “join the API economy.”
So what does this mean to your monitoring requirements and tooling strategy?
My takeaway from Milind’s and Hank’s presentations is that an equally daunting change will continue in monitoring. In the era when services were largely controlled inside a company’s data center, much of the monitoring focus was on internal variables like server utilization, fan speed, and database resources. With the shift to the cloud, the API economy, and increasingly IoT, however, you essentially have a “black box” with regard to visibility into all the moving parts that can impact your service, whether it is online shopping, digital banking, or live sports streaming. This means augmenting your tooling strategy: moving from looking inward to a customer- or user-in approach to monitoring your services, collecting telemetry on the external variables that can degrade performance or cause outages, as we recently witnessed with the AWS S3 outage. I&O leaders need to monitor external components and services like APIs, network protocols, CDNs, SaaS providers, and even DNS, which can itself come under attack, as Dyn unfortunately experienced recently.
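To make the outside-in idea concrete, here is a minimal sketch of an external synthetic check using only Python’s standard library. The hostname is a placeholder, and a real digital experience monitoring setup would probe from many global vantage points rather than a single machine; this only illustrates the kind of external telemetry (DNS lookup time, end-to-end HTTP response time) the approach collects:

```python
import socket
import time
from urllib.request import urlopen


def dns_resolution_ms(hostname: str) -> float:
    """Time a DNS lookup for hostname, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443)  # raises socket.gaierror on failure
    return (time.perf_counter() - start) * 1000.0


def http_response_ms(url: str, timeout: float = 10.0):
    """Fetch url and return (HTTP status, total time in milliseconds)."""
    start = time.perf_counter()
    with urlopen(url, timeout=timeout) as resp:
        resp.read()  # include the full body transfer in the measurement
        status = resp.status
    return status, (time.perf_counter() - start) * 1000.0


# Usage against a real endpoint (hypothetical hostname):
#   print(dns_resolution_ms("www.example.com"))
#   status, ms = http_response_ms("https://www.example.com/")
```

A production setup would run checks like these on a schedule from multiple geographies and alert when latency or status codes cross a threshold, which is exactly the gap the vendor tools in this space aim to fill.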
Catchpoint’s Dennis Callaghan said it best: “today’s web applications rely on so much more than these two protocols. They traverse a global network of Internet service providers, cloud infrastructure providers, content delivery networks, internal and external domain name services (DNS); they call on third-party hosts and APIs, they deploy tags for advertising and personalization services.” Gartner calls this type of performance monitoring “Digital Experience Monitoring,” or DEM, and expects that “by 2020, 30% of global enterprises will have strategically implemented DEM technologies or services, up from fewer than 5% today.”
My next blog will cover my second key takeaway from Gartner’s ITOSS 2017 conference: the need to “focus on outcomes.”