Real-time detection of BGP blackholing and prefix hijacks

Published May 22, 2025

Border Gateway Protocol (BGP) remains the backbone of inter-domain routing on the Internet, but its fundamental trust model leaves it vulnerable to misconfigurations, hijacks, and blackholing. When these issues occur, they often go undetected by the impacted networks—until users report degraded performance or service outages.

This post walks through a real-world incident in which a legitimate traffic spike led to an upstream provider mistakenly blackholing a critical IP address. The scenario illustrates how BGP blackholing can silently disrupt service and how external observability enables rapid diagnosis and resolution.

Understanding BGP blackholing

BGP blackholing is a commonly used DDoS mitigation tactic. A network under attack announces a more specific route for the targeted IP or subnet, directing that traffic to a null interface to prevent it from reaching the intended service infrastructure. While effective in protecting resources during volumetric attacks, this approach can inadvertently block legitimate traffic when applied too aggressively.

Let's walk through an example:

Figure: AS2 creates a BGP blackhole; no traffic reaches the intended server.

In this case, the /24 prefix 1.0.0.0/24 was owned and announced by one autonomous system (AS1). A specific point-of-presence (PoP) within this prefix was responsible for live-streaming a global event. The virtual IP for this PoP—1.0.0.100—saw a surge in traffic from viewers worldwide.

The traffic passed through an upstream provider (AS2), which monitored for DDoS patterns. Seeing the sudden spike, AS2’s automated mitigation system assumed the traffic was malicious. It responded by injecting a more specific /32 route for 1.0.0.100 into the global routing table and directing it to a null interface.

The effect was immediate: traffic destined for the live-streaming service was dropped silently by AS2, resulting in widespread loss of availability for users across multiple regions.
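
For illustration, a remotely triggered blackhole (RTBH) of this kind is typically implemented with a static route to a null interface plus a BGP announcement tagged with the well-known BLACKHOLE community (65535:666, RFC 7999). A Cisco-IOS-style sketch, using the addresses from the example above (the neighbor address is hypothetical, and exact syntax varies by platform and vendor):

```
! Drop all traffic for the targeted /32 locally
ip route 1.0.0.100 255.255.255.255 Null0

! Announce the /32 so neighbors can also discard matching traffic
router bgp 2
 network 1.0.0.100 mask 255.255.255.255
 neighbor 203.0.113.1 send-community
 neighbor 203.0.113.1 route-map SET-BLACKHOLE out
!
route-map SET-BLACKHOLE permit 10
 set community 65535:666
```

The static route to Null0 both discards the traffic and places the /32 in the routing table so the `network` statement can advertise it. In the incident described here, this mechanism fired on legitimate traffic.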

Challenges in diagnosing upstream blackholing

From AS1’s perspective, the service infrastructure remained operational, and no anomalies were observed in internal telemetry. However, users were unable to access the stream.  

Figure: Why traditional monitoring misses upstream blackholing

Because the traffic was dropped before reaching AS1’s infrastructure, no logs or packet traces indicated a problem.

This is a common limitation when relying solely on internal monitoring. In upstream blackholing scenarios, routing changes happen outside of the origin network’s control, and the only observable symptom may be an unexplained drop in traffic or availability.

The diagnostic challenge is further complicated by the specificity of the blackhole route. While the legitimate route for 1.0.0.0/24 remained active, the injected /32 for 1.0.0.100 took precedence due to BGP’s longest-prefix match rule, causing traffic to be rerouted and dropped at AS2.
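
The longest-prefix match behavior is easy to demonstrate. A minimal sketch using Python's standard `ipaddress` module, with the routes from the example above:

```python
import ipaddress

# Candidate routes in the table: the legitimate /24 and the injected /32.
routes = {
    ipaddress.ip_network("1.0.0.0/24"): "AS1 (legitimate)",
    ipaddress.ip_network("1.0.0.100/32"): "AS2 (blackhole)",
}

def best_match(dst: str) -> ipaddress.IPv4Network:
    """Return the most-specific route covering dst (longest-prefix match)."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return max(matches, key=lambda net: net.prefixlen)

print(best_match("1.0.0.100"), "->", routes[best_match("1.0.0.100")])  # the /32 wins
print(best_match("1.0.0.50"), "->", routes[best_match("1.0.0.50")])    # only the /24 matches
```

The victim IP selects the /32 and is blackholed at AS2, while every other address in the /24 continues to route normally, which is exactly why the outage was confined to a single service.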

Detecting origin AS mismatches and route hijacks

The incident was identified through external route monitoring that detected an origin AS mismatch—the /32 prefix was being originated by AS2 instead of the expected AS1. This deviation triggered an alert, which prompted further analysis of the BGP path and propagation behavior.
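
The core of an origin-mismatch check is simple: compare each externally observed announcement against the expected origin for the covering prefix. A minimal sketch (the monitoring configuration and data shapes are hypothetical, not Catchpoint's actual implementation):

```python
import ipaddress

# Hypothetical monitoring config: prefixes we own and their expected origin AS.
EXPECTED = {ipaddress.ip_network("1.0.0.0/24"): 1}

def origin_mismatches(announcements):
    """Given (prefix, origin_as) pairs seen by external route collectors,
    return alerts for announcements covered by a monitored prefix but
    originated by an unexpected AS."""
    alerts = []
    for prefix, origin in announcements:
        net = ipaddress.ip_network(prefix)
        for monitored, expected_as in EXPECTED.items():
            if net.subnet_of(monitored) and origin != expected_as:
                alerts.append(
                    f"{prefix} originated by AS{origin}, expected AS{expected_as}"
                )
    return alerts

seen = [("1.0.0.0/24", 1), ("1.0.0.100/32", 2)]
print(origin_mismatches(seen))  # flags only the injected /32
```

Note that the check must cover sub-prefixes of monitored space, not just exact prefixes: the injected /32 never appears in the owner's own configuration, only inside its address range.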

Figure: Catchpoint platform BGP alert for ASN origin mismatch

An inspection of the AS path confirmed that certain regions were receiving the incorrect /32 advertisement and routing traffic through AS2, which blackholed the packets. The blackhole route had propagated only in select geographies, which explained the regional pattern of the outage reported by users.

Mapping the propagation of the erroneous route helped identify the scope of the impact and enabled coordination with AS2 to withdraw the blackhole announcement. Once removed, traffic to 1.0.0.100 resumed normal routing, and the live-streaming service was restored.

Figure: Catchpoint platform showing the BGP path, clearly identifying where traffic was split due to the blackhole

Broader implications

This incident highlights the fragility of the global routing layer and the potential for automated systems to cause collateral damage, even when operating as designed. It also underscores the limitations of relying solely on internal data to understand end-to-end Internet performance.

Figure: Visibility into prefix propagation across the globe

External BGP monitoring allows operators to observe how their prefixes are being routed across the Internet and to detect anomalies such as:

  • Prefix hijacks by unintended or malicious ASes
  • Upstream blackholing through more-specific announcements
  • AS path divergence and propagation anomalies

Such visibility is critical for large-scale services that rely on third-party transit and upstream providers to reach global users.
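
As a sketch, these anomaly classes can be distinguished mechanically from externally observed updates by checking each announcement against a per-prefix policy. The policy shape and data format below are illustrative assumptions, not any particular vendor's schema:

```python
import ipaddress

# Hypothetical per-prefix policy: expected origin AS and expected first-hop upstreams.
POLICY = {"1.0.0.0/24": {"origin": 1, "upstreams": {10, 20}}}

def classify(update):
    """Classify an externally observed BGP update against local policy.
    `update` has a 'prefix' and an 'as_path' (origin AS is the last hop)."""
    net = ipaddress.ip_network(update["prefix"])
    origin = update["as_path"][-1]
    for owned, rules in POLICY.items():
        owned_net = ipaddress.ip_network(owned)
        if not net.subnet_of(owned_net):
            continue
        if origin != rules["origin"]:
            # Wrong origin: a more-specific route suggests upstream blackholing
            # or a targeted hijack; an exact-prefix mismatch suggests a hijack.
            if net.prefixlen > owned_net.prefixlen:
                return "more-specific announcement with unexpected origin"
            return "origin AS mismatch (possible hijack)"
        if update["as_path"][0] not in rules["upstreams"]:
            return "unexpected AS path (propagation anomaly)"
        return "ok"
    return "unmonitored"

# The /32 from the incident: wrong origin on a more-specific prefix.
print(classify({"prefix": "1.0.0.100/32", "as_path": [30, 2]}))
```

In practice a real monitor consumes live feeds from many vantage points, so the same announcement can classify differently per region, which is how the partial, geography-dependent outage in this incident became visible.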

Looking forward

BGP remains a powerful but fragile protocol, and incidents like this illustrate the importance of proactive, third-party observability into Internet routing. As automated mitigation systems become more prevalent, it is increasingly important for network operators to verify not just whether their services are available, but whether their prefixes are being routed as intended.

For a detailed exploration of BGP monitoring techniques and best practices, check out our in-depth guide.
