Blog Post

Is It Time to Rethink Your Network Monitoring Strategy?

As networks become more complex each year, organizations need to re-examine their network monitoring strategies.

Networks have become more complex over the years, with organizations having to support BYOD initiatives, hybrid cloud environments, third-party content, and a geographically dispersed workforce. If the trend continues, networks will only grow more complex in the coming years, which means more potential headaches for network professionals. As networks become more complex, organizations need to re-examine their network monitoring strategy.

What worked when applications and servers were centralized and controlled by the enterprise may not work as well in a decentralized environment. According to Shamus McGillicuddy from EMA, network teams “need to determine whether their existing tools can provide them with end-to-end visibility across internal, private cloud infrastructure and external, public cloud infrastructure, as well as visibility into the network connectivity that links the two.”

It’s not about collecting all the data; organizations are already drowning in data. It’s about collecting the right data and making the most of it.

Traditional network monitoring tools collect information via:

  • Device polling. Devices are queried via SNMP to collect data on interface status, traffic statistics, CPU load, and memory utilization.
  • Active probing. Network properties are measured from an application perspective, by sending traffic onto the network to gather information. Requests can be as simple as a ping or more complex application-level queries such as a request for a web page or the initiation of a VoIP stream.
  • Internet Protocol Flow Information Export (IPFIX). An IETF standard based on Cisco’s NetFlow to export IP flow information from routers and network devices. Information is captured at a device and sent to a centralized analyzer or network management system.
  • Packet captures. Traffic transmitted on the network is intercepted, logged, decoded, and analyzed according to protocol RFCs.
  • Data logging or log analysis. Data is collected from logs or audit trails to correlate events or understand user behavior.
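Active probing, the second method above, can be sketched in a few lines of Python. The snippet below times an arbitrary probe callable and summarizes the latencies; the TCP-connect probe stands in for a ping or an application-level request, and the host and port shown are illustrative placeholders, not specifics from this post:

```python
import socket
import time
from typing import Callable


def run_probe(probe: Callable[[], None], samples: int = 5) -> dict:
    """Run an active probe several times and summarize latency in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        probe()  # e.g., a ping, an HTTP GET, or the initiation of a VoIP stream
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "min_ms": min(latencies),
        "avg_ms": sum(latencies) / len(latencies),
        "max_ms": max(latencies),
    }


def tcp_connect_probe(host: str = "example.com", port: int = 443,
                      timeout: float = 2.0) -> None:
    """One simple probe: measure how long a TCP handshake to a service takes."""
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; closing it ends the measurement
```

In practice the probe targets would be the SaaS endpoints and internal applications your users actually depend on, measured from the vantage points where those users sit.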

These methods all work well when you own all the network devices, but what happens when this isn’t the case? You suddenly lose visibility into how applications are performing across the network, and the time spent troubleshooting issues increases.

If you are considering or already dealing with any of the following, now is a perfect time to rethink your monitoring strategy:

Moving to the cloud: Performance and availability are now heavily influenced by the performance of your SaaS, IaaS, and/or PaaS provider. Can you identify, from the end-user perspective, how your application performs across different geographies and providers? As you plan, migrate, or support applications in the cloud, you need to understand how or if performance will change and independently validate SLAs. Recent research from Digital Enterprise Journal reveals that 51% of organizations do not have effective performance monitoring tools for dynamic and hybrid environments.

Implementing BYOD policies: As employees increasingly use personal devices at work, networks may see increased traffic and congestion, which can have a negative impact on business-critical applications. How do applications perform on the network? Can users access internal and SaaS applications? Being able to monitor critical business applications from the corporate network reduces help desk calls about poor application performance.

Supporting remote employees: With remote employees, it’s not just your network you have to worry about, but also the networks at airports, hotels, coffee shops, employees’ homes, etc. Are SaaS and internal applications available from all geographies where you have employees? If a regional ISP is having problems, how does that impact your employees? Having visibility into application performance and availability from their vantage point results in happy, productive employees.

Refreshing monitoring solutions: Even if you aren’t currently considering or supporting one of the above situations but are looking to refresh or replace existing solutions, now is the time to choose a toolset that will grow with your organization.

What is needed today is the ability to integrate, correlate, and analyze data collected from monitoring things, people, applications, networks, and infrastructure across complex environments. Look at the solutions in place and see if they give you the end-to-end visibility needed to ensure an optimal digital experience, regardless of the hardware installed or who owns the network.

