Affected
Degraded performance from 7:25 PM to 12:12 PM, Operational from 12:12 PM to 10:31 AM
- Postmortem
Reason for Outage (RFO) - Network Incident, 17 April 2026
Summary
On 17 April 2026 our network experienced two related DDoS events. A short-duration event at 02:01 caused brief service degradation and was resolved by 02:30; we now assess this to have been a reconnaissance probe by the same actor.
The main attack began at 20:15 the same day, peaking above 2 Tbps. It employed a carpet bombing strategy, spreading malicious traffic across a wide range of our IP space rather than concentrating on a single host, which reduced the effectiveness of conventional destination-based mitigation.
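To illustrate why carpet bombing blunts destination-based mitigation, the sketch below aggregates ingress volume per covering prefix rather than per destination host. It is a minimal, hypothetical example: the flow records, prefix size, and thresholds are illustrative and are not taken from our monitoring stack.

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

# Illustrative flow records: (destination IP, observed Mbps).
# Values are made up; no single host looks notably hot on its own.
flows = [
    ("198.51.100.7", 400),
    ("198.51.100.23", 380),
    ("198.51.100.61", 410),
    ("198.51.100.144", 395),
    ("198.51.100.201", 405),
]

PER_HOST_THRESHOLD_MBPS = 1000    # classic destination-based trigger
PER_PREFIX_THRESHOLD_MBPS = 1500  # trigger on the covering /24 instead

per_host = defaultdict(int)
per_prefix = defaultdict(int)

for dst, mbps in flows:
    per_host[dst] += mbps
    # Aggregate by the covering /24 so traffic spread across many
    # hosts in the same block is counted together.
    prefix = ip_network(f"{ip_address(dst)}/24", strict=False)
    per_prefix[prefix] += mbps

host_alerts = [h for h, v in per_host.items() if v > PER_HOST_THRESHOLD_MBPS]
prefix_alerts = [p for p, v in per_prefix.items() if v > PER_PREFIX_THRESHOLD_MBPS]

print("per-host alerts:  ", host_alerts)     # [] -> nothing trips
print("per-prefix alerts:", prefix_alerts)   # [IPv4Network('198.51.100.0/24')]
```

In this toy example every per-host counter stays under its threshold while the prefix-level total clearly exceeds its own, which is the pattern a carpet-bombing attack exploits against purely destination-based triggers.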
Service stability was restored by 20:41 after we withdrew all peering and transit sessions and routed 100% of traffic via our upstream scrubbing path. The network continued to operate in this scrub-only configuration over the weekend as a precaution, with full capacity restored at approximately 09:00 on Monday 20 April once peering and transit sessions were safely reinstated.
During the scrub-only period, the network remained fully operational but some customers may have seen sub-optimal routing and modestly elevated latency compared to normal conditions.
Timeline
Early hours, 17 April 2026 (precursor event)
| Time | Event |
| --- | --- |
| 02:01 | Smaller-scale attack traffic detected against our network; brief service degradation |
| 02:30 | Attack traffic subsides; service restored |
Based on attack vectors, targeting, and timing, we now assess this earlier event to have been a reconnaissance probe by the same actor ahead of the main attack approximately 18 hours later.
Friday 17 April 2026 (main attack)
| Time | Event |
| --- | --- |
| 20:15 | Automated monitoring detects anomalous ingress traffic; alerts fire |
| 20:17 | Upstream scrubbing engaged; attack confirmed as distributed volumetric |
| 20:21 | Traffic peaks above 2 Tbps across multiple prefixes |
| 20:22 | All peering and transit sessions withdrawn; 100% of traffic routed via upstream scrubbing path |
| 20:35 | Traffic volumes begin to decline; scrubbing effectiveness improves |
| 20:41 | Service stability restored; network operating in scrub-only configuration |
Saturday 18 – Sunday 19 April
Network continued to operate in scrub-only configuration. Attack traffic monitored for resumption; none observed at prior scale.
Monday 20 April 2026
| Time | Event |
| --- | --- |
| 09:00 | Peering and transit sessions progressively reinstated; full network capacity restored |
Impact
Precursor event (02:01 - 02:30, 17 April)
A short period of degraded service was observed during this event. Impact was limited due to the smaller scale of the traffic.
Main attack (20:15 - 20:41, 17 April)
The incident caused network-wide degradation of service. Customers experienced packet loss, elevated latency and jitter, and intermittent connectivity issues. The network remained online throughout, but performance was significantly impacted during the peak of the attack due to widespread congestion.
Scrub-only period (20:22 on 17 April - 09:00 on 20 April)
From the point at which peering and transit sessions were withdrawn, all inbound traffic transited our upstream scrubbing path. The network remained fully operational, but customers may have observed:
- Modestly elevated latency for some destinations due to non-optimal routing
- Minor changes in path selection visible via traceroute
- No packet loss or connectivity issues attributable to this configuration
Full performance was restored once peering and transit were reinstated on Monday morning.
Attack Characteristics
Analysis of the main attack traffic showed:
- Peak volume above 2 Tbps
- Carpet bombing across multiple prefixes, impacting large portions of our IP space simultaneously
- Predominantly UDP-based flood traffic (IP protocol 17)
- High volumes of fragmented UDP packets, increasing processing overhead and reducing mitigation efficiency
- Traffic distributed across a large number of global source IPs and ASNs, consistent with a distributed botnet
- Concurrent targeting of multiple destination IPs within the same subnet
The combination of fragmentation and wide distribution is a well-known technique designed to bypass destination-based filtering, increase load on mitigation infrastructure, and reduce the effectiveness of single-path scrubbing.
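One reason fragmentation complicates filtering is that only the first fragment of a UDP datagram carries the UDP header; non-initial fragments expose nothing beyond the IP header, so port-based rules cannot match them and they must be handled by other means. A minimal sketch of that distinction follows, using synthetic header values rather than captured attack traffic.

```python
# Bits 0-12 of the IPv4 flags/fragment-offset field are the offset
# (in 8-byte units); bit 0x2000 is MF ("more fragments").
def describe_fragment(frag_field: int, protocol: int) -> str:
    more_fragments = bool(frag_field & 0x2000)
    offset = frag_field & 0x1FFF
    if protocol != 17:                 # 17 = UDP
        return "not UDP"
    if offset == 0 and not more_fragments:
        return "unfragmented UDP: ports visible, normal filtering applies"
    if offset == 0:
        return "first fragment: UDP header present, ports still visible"
    return "non-initial fragment: no UDP header, port-based filters cannot match"

# Synthetic examples (values are illustrative only):
print(describe_fragment(0x0000, 17))  # unfragmented datagram
print(describe_fragment(0x2000, 17))  # first fragment, MF set
print(describe_fragment(0x00B9, 17))  # offset 185 -> non-initial fragment
```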
The earlier 02:01 event exhibited similar characteristics at significantly smaller scale, which is consistent with reconnaissance intended to test our detection and response profile before committing the main attack.
Cause
At peak, the scale and distribution of the attack exceeded the capacity of our single upstream mitigation path. This is a known architectural limitation that this incident brought into sharp focus. The result was:
- Saturation of upstream scrubbing capacity
- Upstream packet drops and queueing
- Spillover congestion affecting broader network performance
Because traffic was spread across many IPs and prefixes, it could not be isolated to a single target and required network-wide mitigation handling.
Mitigation & Response
The attack was detected immediately via automated monitoring. Traffic was initially routed through upstream scrubbing, and filtering rules were adjusted dynamically as the attack pattern evolved.
As the attack continued to scale and spill over onto peering and transit ingress paths, a decision was taken at 20:22 to withdraw all peering and transit sessions and route 100% of inbound traffic through our upstream scrubbing path. This took the remaining attack traffic off our direct ingress and placed it behind dedicated mitigation capacity, which restored stability within minutes.
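As a rough illustration of that decision logic only: the session names, threshold, and `withdraw_session` hook below are hypothetical and do not reflect our actual automation. The essence is that once attack traffic arriving over direct peering and transit exceeds what can be absorbed locally, those sessions are withdrawn so the only remaining ingress is the scrubbed path.

```python
from dataclasses import dataclass

@dataclass
class IngressPath:
    name: str
    kind: str           # "peering", "transit", or "scrubbed"
    attack_gbps: float  # attack traffic currently arriving on this path

# Hypothetical limit on direct-path attack traffic we can absorb
# before congestion spills over into the wider network.
DIRECT_ABSORB_LIMIT_GBPS = 200.0

def withdraw_session(path: IngressPath) -> None:
    # Placeholder for the real action (shutting the BGP session or
    # withdrawing announcements); here we only record the intent.
    print(f"withdrawing {path.kind} session {path.name}")

def enter_scrub_only(paths: list[IngressPath]) -> None:
    direct = [p for p in paths if p.kind in ("peering", "transit")]
    direct_attack = sum(p.attack_gbps for p in direct)
    if direct_attack > DIRECT_ABSORB_LIMIT_GBPS:
        # Take every direct session down so all ingress is forced
        # through the upstream scrubbing path.
        for p in direct:
            withdraw_session(p)

# Illustrative state at the height of the attack (numbers invented):
enter_scrub_only([
    IngressPath("ix-peer-1", "peering", 350.0),
    IngressPath("transit-a", "transit", 500.0),
    IngressPath("scrub-upstream", "scrubbed", 900.0),
])
```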
The network was held in this configuration deliberately over the weekend to ensure the attack had fully subsided before reintroducing direct paths. Peering and transit sessions were progressively reinstated from approximately 09:00 on Monday 20 April, with full capacity restored shortly thereafter.
Improvements Implemented
Following this incident, we have made immediate changes:
- Additional upstream scrubbing capacity has been provisioned
- Multi-provider mitigation is now in place, removing reliance on a single scrubbing path
- Mitigation thresholds and escalation behaviour under extreme load have been re-tuned, including earlier engagement of secondary capacity
- Correlation of smaller precursor events with subsequent activity has been strengthened, so probe-style traffic is flagged and tracked proactively (see the sketch after this list)
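As a sketch of the last two points, earlier engagement of secondary capacity and precursor tracking both come down to keeping state between events rather than treating each alert in isolation. The thresholds, time window, and event fields below are illustrative assumptions, not our production tuning.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AttackEvent:
    start: datetime
    peak_gbps: float
    vector: str        # e.g. "udp-fragment-flood"

# Illustrative tuning only: engage secondary scrubbing sooner when a
# recent probe-style event with the same vector has been seen.
PROBE_CEILING_GBPS = 50.0
NORMAL_ESCALATION_GBPS = 800.0
PRIMED_ESCALATION_GBPS = 300.0
CORRELATION_WINDOW = timedelta(hours=24)

def engage_secondary(event: AttackEvent, history: list[AttackEvent]) -> bool:
    """Return True if secondary mitigation capacity should be engaged."""
    primed = any(
        prior.vector == event.vector
        and prior.peak_gbps <= PROBE_CEILING_GBPS
        and event.start - prior.start <= CORRELATION_WINDOW
        for prior in history
    )
    threshold = PRIMED_ESCALATION_GBPS if primed else NORMAL_ESCALATION_GBPS
    return event.peak_gbps >= threshold

# Shape of the 17 April sequence (probe volume is a placeholder):
probe = AttackEvent(datetime(2026, 4, 17, 2, 1), 30.0, "udp-fragment-flood")
main = AttackEvent(datetime(2026, 4, 17, 20, 15), 2000.0, "udp-fragment-flood")
print(engage_secondary(main, [probe]))  # True, and at a lower threshold
```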
Conclusion
This was a large-scale, highly distributed volumetric attack designed to stress both capacity and mitigation systems simultaneously through scale, fragmentation, and distribution, and it was preceded earlier the same day by a smaller reconnaissance event. Service stability was restored within 26 minutes of detection by withdrawing peering and transit sessions and placing all traffic behind upstream scrubbing. The network was held in this configuration over the weekend as a precaution, with full capacity reinstated on the morning of Monday 20 April.
While the network remained operational throughout, the event exposed limitations in single-path mitigation under extreme conditions. Those limitations have now been addressed.
We remain committed to transparency about incidents of this nature and will continue to evolve our defensive posture as attack techniques develop. If you have specific questions about how this incident affected your services, please contact support.
- Resolved
This incident has been resolved.
- Update
As part of our initial mitigation, peering was temporarily taken down to help contain the attack. We are now beginning to bring peering back up in stages. Upstream filtering remains active and connections should be stable, though users may notice slightly elevated ping times while peering is restored. Further updates to follow.
- Monitoring
We have identified a large-scale volumetric DDoS attack targeting our infrastructure, which is causing intermittent connection drops, timeouts, and performance fluctuations. Mitigation is actively in progress and traffic is being filtered upstream. Some users may continue to experience degraded performance while we work to fully stabilise the service. Further updates will follow as mitigation progresses.
