<?xml version="1.0" encoding="UTF-8"?>
<feed xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom">
  <id>tag:status.olilo.co.uk,2005:/history</id>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk"/>
  <link rel="self" type="application/atom+xml" href="https://status.olilo.co.uk/history.atom"/>
  <title>Olilo Status - Incident history</title>
  <updated>2026-04-17T19:25:00.000+00:00</updated>
  <author>
    <name>Olilo</name>
  </author>
  
<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmo3cguva00kgem2bd0wrfqdt</id>
  <published>2026-04-17T19:25:00.000+00:00</published>
  <updated>2026-04-17T19:25:00.000+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmo3cguva00kgem2bd0wrfqdt"/>
  <title>Intermittent Connectivity Issues</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 2 days, 15 hours and 6 minutes</p>
    <p><strong>Affected Components:</strong> Core Router - 01</p>
    <p><small>Apr <var data-var='date'> 17</var>, <var data-var='time'>19:25:00</var> GMT+0</small><br /><strong>Monitoring</strong> -
  We have identified a large-scale volumetric DDoS attack targeting our infrastructure, which is causing intermittent connection drops, timeouts, and performance fluctuations. Mitigation is actively in progress and traffic is being filtered upstream. Some users may continue to experience degraded performance while we work to fully stabilise the service. Further updates will follow as mitigation progresses.</p>
<p><small>Apr <var data-var='date'> 18</var>, <var data-var='time'>12:12:09</var> GMT+0</small><br /><strong>Monitoring</strong> -
  As part of our initial mitigation, peering was temporarily taken down to help contain the attack. We are now beginning to bring peering back up in stages. Upstream filtering remains active and connections should be stable, though users may notice slightly elevated ping times while peering is restored. Further updates to follow.</p>
<p><small>Apr <var data-var='date'> 20</var>, <var data-var='time'>10:31:06</var> GMT+0</small><br /><strong>Resolved</strong> -
  This incident has been resolved.</p>
<p><small>Apr <var data-var='date'> 20</var>, <var data-var='time'>10:32:36</var> GMT+0</small><br /><strong>Postmortem</strong> -
  # Reason for Outage (RFO) - Network Incident, 17 April 2026

## Summary

On **17 April 2026** our network experienced two related DDoS events. A short-duration event at **02:01** caused brief service degradation and was resolved by **02:30**; we now assess this to have been a reconnaissance probe by the same actor.

The main attack began at **20:15** the same day, peaking above **2 Tbps**. It employed a carpet bombing strategy, spreading malicious traffic across a wide range of our IP space rather than concentrating on a single host, which reduced the effectiveness of conventional destination-based mitigation.

Service stability was restored by **20:41** after we withdrew all peering and transit sessions and routed 100% of traffic via our upstream scrubbing path. The network continued to operate in this scrub-only configuration over the weekend as a precaution, with **full capacity restored at approximately 09:00 on Monday 20 April** once peering and transit sessions were safely reinstated.

During the scrub-only period, the network remained fully operational but some customers may have seen sub-optimal routing and modestly elevated latency compared to normal conditions.

## Timeline

### Early hours, 17 April 2026 (precursor event)

| Time  | Event                                                                                |
| ----- | ------------------------------------------------------------------------------------ |
| 02:01 | Smaller-scale attack traffic detected against our network; brief service degradation |
| 02:30 | Attack traffic subsides; service restored                                            |

Based on attack vectors, targeting, and timing, we now assess this earlier event to have been a reconnaissance probe by the same actor ahead of the main attack approximately 18 hours later.

### Friday 17 April 2026 (main attack)

| Time  | Event                                                                                          |
| ----- | ---------------------------------------------------------------------------------------------- |
| 20:15 | Automated monitoring detects anomalous ingress traffic; alerts fire                            |
| 20:17 | Upstream scrubbing engaged; attack confirmed as distributed volumetric                         |
| 20:21 | Traffic peaks above 2 Tbps across multiple prefixes                                            |
| 20:22 | All peering and transit sessions withdrawn; 100% of traffic routed via upstream scrubbing path |
| 20:35 | Traffic volumes begin to decline; scrubbing effectiveness improves                             |
| 20:41 | Service stability restored; network operating in scrub-only configuration                      |

### Saturday 18 – Sunday 19 April

Network continued to operate in scrub-only configuration. Attack traffic monitored for resumption; none observed at prior scale.

### Monday 20 April 2026

| Time  | Event                                                                                 |
| ----- | ------------------------------------------------------------------------------------- |
| 09:00 | Peering and transit sessions progressively reinstated; full network capacity restored |

## Impact

### Precursor event (02:01 - 02:30, 17 April)

A short period of degraded service was observed during this event. Impact was limited due to the smaller scale of the traffic.

### Main attack (20:15 - 20:41, 17 April)

The incident caused **network-wide degradation of service**. Customers experienced packet loss, elevated latency and jitter, and intermittent connectivity issues. The network remained online throughout, but performance was significantly impacted during the peak of the attack due to widespread congestion.

### Scrub-only period (20:22 on 17 April - 09:00 on 20 April)

From the point at which peering and transit sessions were withdrawn, all inbound traffic transited our upstream scrubbing path. The network remained fully operational, but customers may have observed:

* Modestly elevated latency for some destinations due to non-optimal routing
* Minor changes in path selection visible via traceroute
* No packet loss or connectivity issues attributable to this configuration

Full performance was restored once peering and transit were reinstated on Monday morning.

## Attack Characteristics

Analysis of the main attack traffic showed:

* Peak volume above **2 Tbps**
* **Carpet bombing** across multiple prefixes, impacting large portions of our IP space simultaneously
* Predominantly **UDP-based flood traffic** (IP protocol 17)
* High volumes of **fragmented UDP packets**, increasing processing overhead and reducing mitigation efficiency
* Traffic distributed across a **large number of global source IPs and ASNs**, consistent with a distributed botnet
* Concurrent targeting of multiple destination IPs within the same subnet

The combination of fragmentation and wide distribution is a well-known technique designed to bypass destination-based filtering, increase load on mitigation infrastructure, and reduce the effectiveness of single-path scrubbing.
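For illustration only, the fragment-plus-UDP signature described above can be recognised from the IPv4 header alone. The sketch below is a hypothetical classifier, not our production mitigation code:

```python
import struct

def is_fragmented_udp(ip_header: bytes) -> bool:
    """Heuristic check: does this IPv4 header describe a UDP fragment?

    A packet is a fragment when its More Fragments (MF) flag is set or its
    fragment offset is non-zero; IP protocol 17 is UDP.
    """
    # 20-byte IPv4 header without options: version/IHL, TOS, total length,
    # ID, flags+fragment offset, TTL, protocol, checksum, src addr, dst addr
    (_, _, _, _, flags_frag, _, proto, _, _, _) = struct.unpack(
        "!BBHHHBBHII", ip_header[:20]
    )
    mf_set = bool(flags_frag & 0x2000)   # More Fragments flag (bit 13)
    offset = flags_frag & 0x1FFF         # fragment offset, in 8-byte units
    return proto == 17 and (mf_set or offset != 0)

# A synthetic first fragment of a UDP packet (MF set, offset 0).
sample = struct.pack("!BBHHHBBHII", 0x45, 0, 1500, 1, 0x2000, 64, 17, 0,
                     0xC6120001, 0xC6120002)
print(is_fragmented_udp(sample))  # True
```

Real mitigation works on aggregate flow telemetry rather than per-packet checks like this, but the same header fields drive the classification.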

The earlier 02:01 event exhibited similar characteristics at significantly smaller scale, which is consistent with reconnaissance intended to test our detection and response profile before committing the main attack.

## Cause

At peak, the scale and distribution of the attack exceeded the capacity of our **single upstream mitigation path**. This is a known architectural limitation that this incident brought into sharp focus. The result was:

* Saturation of upstream scrubbing capacity
* Upstream packet drops and queueing
* Spillover congestion affecting broader network performance

Because traffic was spread across many IPs and prefixes, it could not be isolated to a single target and required network-wide mitigation handling.

## Mitigation &amp; Response

The attack was detected immediately via automated monitoring. Traffic was initially routed through upstream scrubbing, and filtering rules were adjusted dynamically as the attack pattern evolved.

As the attack continued to scale and spill over onto peering and transit ingress paths, a decision was taken at **20:22** to **withdraw all peering and transit sessions** and route 100% of inbound traffic through our upstream scrubbing path. This took the remaining attack traffic off our direct ingress and placed it behind dedicated mitigation capacity, which restored stability within minutes.

The network was held in this configuration deliberately over the weekend to ensure the attack had fully subsided before reintroducing direct paths. Peering and transit sessions were progressively reinstated from approximately **09:00 on Monday 20 April**, with full capacity restored shortly thereafter.

## Improvements Implemented

Following this incident, we have made immediate changes:

* **Additional upstream scrubbing capacity** has been provisioned
* **Multi-provider mitigation** is now in place, removing reliance on a single scrubbing path
* **Mitigation thresholds and escalation behaviour** under extreme load have been re-tuned, including earlier engagement of secondary capacity
* **Correlation of smaller precursor events** with subsequent activity has been strengthened, so probe-style traffic is flagged and tracked proactively

## Conclusion

This was a large-scale, highly distributed volumetric attack, preceded earlier the same day by a smaller reconnaissance event. It was designed to stress both capacity and mitigation systems simultaneously through scale, fragmentation, and distribution. Service stability was restored within 26 minutes of detection by withdrawing peering and transit sessions and placing all traffic behind upstream scrubbing. The network was held in this configuration over the weekend as a precaution, with full capacity reinstated on the morning of Monday 20 April.

While the network remained operational throughout, the event exposed limitations in single-path mitigation under extreme conditions. Those limitations have now been addressed.

We remain committed to transparency about incidents of this nature and will continue to evolve our defensive posture as attack techniques develop. If you have specific questions about how this incident affected your services, please contact support.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmo2o7y430cgs4ds27jh4fo9t</id>
  <published>2026-04-17T08:53:19.858+00:00</published>
  <updated>2026-04-17T08:53:19.858+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmo2o7y430cgs4ds27jh4fo9t"/>
  <title>DDoS Attack - 17/04/2026</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    
    <p><strong>Affected Components:</strong> Radius - 02, Radius - 01, Database - 01, Database - 02, Core Router - 01</p>
    <p><small>Apr <var data-var='date'> 17</var>, <var data-var='time'>08:53:19</var> GMT+0</small><br /><strong>Resolved</strong> -
  Just a quick update - earlier today we experienced a large-scale targeted DDoS attack.

The impact was minimal, and our team handled it quickly. Most users shouldn’t have noticed anything beyond possible brief disruption.

Everything is now operating normally, and we’re continuing to keep an eye on things.

Thanks for your patience 💙.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmhjp32pn0027ykd6r7r7f6fn</id>
  <published>2025-11-03T22:11:40.733+00:00</published>
  <updated>2025-11-03T23:31:25.738+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmhjp32pn0027ykd6r7r7f6fn"/>
  <title>Edinburgh - CityFibre Outage</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 1 day, 13 hours and 12 minutes</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Nov <var data-var='date'> 3</var>, <var data-var='time'>22:11:40</var> GMT+0</small><br /><strong>Investigating</strong> -
  The Major Incident Team are currently investigating an incident that is impacting a number of services in Edinburgh. CityFibre are working to identify and resolve the issue as quickly as possible. Next update at: 22:30 03/11/2025.</p>
<p><small>Nov <var data-var='date'> 3</var>, <var data-var='time'>23:06:54</var> GMT+0</small><br /><strong>Identified</strong> -
  The Major Incident Team have identified the cause of the incident as a fibre strike and are currently working to restore services. ERS engineers are currently on-site to repair the fibre. The Major Incident Team are closely monitoring the situation to ensure the issue is fully resolved. Business Impact: Approximately 3000 FTTH, FTTP and Dark Fibre services are down. Next update at: 23:30 03/11/2025.</p>
<p><small>Nov <var data-var='date'> 3</var>, <var data-var='time'>23:31:25</var> GMT+0</small><br /><strong>Identified</strong> -
  The Major Incident Team has identified the root cause of the incident as a fibre break, resulting from a high-voltage power issue that caused the cables to melt. Due to the extent of the damage, engineers are required to remove the existing cable and install a new one. The estimated time for full service restoration is approximately 8 hours; however, some services are expected to restore before the works are complete. The Major Incident Team is closely monitoring the situation to ensure the issue is fully resolved. Next update at: 03:30 04/11/2025.</p>
<p><small>Nov <var data-var='date'> 4</var>, <var data-var='time'>08:01:47</var> GMT+0</small><br /><strong>Identified</strong> -
  Engineers remain on-site and are continuing with the splicing works. CityFibre’s NOC has confirmed that the majority of services have now been restored. However, due to some delays encountered in the field, full restoration is now expected by approximately 07:30. The Major Incident Team is closely monitoring the situation to ensure the issue is fully resolved. Next update at: 08:00 04/11/2025.</p>
<p><small>Nov <var data-var='date'> 4</var>, <var data-var='time'>08:02:12</var> GMT+0</small><br /><strong>Monitoring</strong> -
  CityFibre believes that all services have now been restored. However, we are currently awaiting confirmation from the on-site engineers that the splicing works have been completed and that they are fully hands-off. Next update at: 08:30 04/11/2025.</p>
<p><small>Nov <var data-var='date'> 5</var>, <var data-var='time'>11:23:44</var> GMT+0</small><br /><strong>Resolved</strong> -
  This incident has been resolved.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Maintenance/cmh99w4wn00jxlzh60ohi6cxu</id>
  <published>2025-10-27T23:00:00.000+00:00</published>
  <updated>2025-10-27T23:00:00.000+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/maintenance/cmh99w4wn00jxlzh60ohi6cxu"/>
  <title>Planned Maintenance - IP Range Split</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 10 minutes</p>
    <p><strong>Affected Components:</strong> Core Router - 01</p>
    <p><small>Oct <var data-var='date'> 27</var>, <var data-var='time'>23:00:00</var> GMT+0</small><br /><strong>Identified</strong> -
  Hey everyone - we’ll be performing a quick maintenance tonight at **23:00** to split our **/23 network into two /24s**. This helps us scale and improve routing for future deployments.

During the change, you might notice a **short interruption in connectivity**. Once it’s complete, your router will simply need to **request a new DHCP lease** - don’t worry, **your IP address won’t change**.</p>
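To illustrate the split (using a stand-in prefix, since we’re not publishing our real ranges here), Python’s `ipaddress` module shows why no addresses move:

```python
import ipaddress

# Hypothetical prefix for illustration only (benchmarking range, not Olilo's real space).
parent = ipaddress.ip_network("198.18.0.0/23")

# Splitting a /23 yields two adjacent /24s; every host address stays inside
# the same half it occupied before, which is why leases keep their IPs.
halves = list(parent.subnets(new_prefix=24))
print(halves)  # [IPv4Network('198.18.0.0/24'), IPv4Network('198.18.1.0/24')]

# A host that held 198.18.1.25 before the split still sits in the second /24.
host = ipaddress.ip_address("198.18.1.25")
print(any(host in net for net in halves))  # True
```

Only the prefix length advertised to routing changes; the usable address space is identical.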
<p><small>Oct <var data-var='date'> 27</var>, <var data-var='time'>23:00:01</var> GMT+0</small><br /><strong>Identified</strong> -
  Maintenance is now in progress.</p>
<p><small>Oct <var data-var='date'> 27</var>, <var data-var='time'>23:10:00</var> GMT+0</small><br /><strong>Completed</strong> -
  Maintenance has completed successfully.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmghzwqa200e210vfr2arw807</id>
  <published>2025-10-08T12:59:25.691+00:00</published>
  <updated>2025-10-08T12:59:25.691+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmghzwqa200e210vfr2arw807"/>
  <title>High ping and disconnects</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 19 hours and 29 minutes</p>
    <p><strong>Affected Components:</strong> Core Router - 01</p>
    <p><small>Oct <var data-var='date'> 8</var>, <var data-var='time'>12:59:25</var> GMT+0</small><br /><strong>Investigating</strong> -
  We are currently investigating this incident.</p>
<p><small>Oct <var data-var='date'> 8</var>, <var data-var='time'>13:30:15</var> GMT+0</small><br /><strong>Identified</strong> -
  We’re currently being targeted by a DDoS attack which is causing some disruption.

Our network team is actively mitigating and things should start stabilising soon.

Please bear with us while we get everything back to normal 💪.</p>
<p><small>Oct <var data-var='date'> 8</var>, <var data-var='time'>14:00:20</var> GMT+0</small><br /><strong>Monitoring</strong> -
  We’re now monitoring the issue closely, and the reduced peering will remain in place while we make sure everything stays stable.</p>
<p><small>Oct <var data-var='date'> 9</var>, <var data-var='time'>08:28:42</var> GMT+0</small><br /><strong>Resolved</strong> -
  The DDoS attack has been fully mitigated. Services are now confirmed stable and performing as expected.

We’ve implemented some changes to our DDoS defence system to mitigate similar events in the future.

Thanks for your patience while we dealt with this one 💪.</p>
<p><small>Oct <var data-var='date'> 9</var>, <var data-var='time'>08:30:40</var> GMT+0</small><br /><strong>Postmortem</strong> -
  ### **RFO – DDoS Incident**

**Date/Time:** 8th October 2025

**Impact:** Intermittent connectivity and packet loss across parts of the network.

**Duration:** ~45 minutes

**Status:** Resolved

**Summary:**

At approximately 14:00, our network began experiencing a large-scale Distributed Denial of Service (DDoS) attack targeting our edge infrastructure. The attack caused intermittent packet loss and service degradation for some customers.

**Actions Taken:**

* Identified malicious traffic patterns.
* Temporarily withdrew our route server sessions at **LINX** and **LONAP** to isolate and contain the impact.
* Engaged automated DDoS filtering, which successfully mitigated the attack and restored stability.

**Preventative Measures:**

We’ve implemented changes to our DDoS defence system and filtering rules to mitigate similar attacks in the future.

**Status:**

All services are now stable and fully operational.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmfzxylux01d113snaojirypz</id>
  <published>2025-09-25T21:45:03.040+00:00</published>
  <updated>2025-09-25T21:45:03.040+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmfzxylux01d113snaojirypz"/>
  <title>CityFibre - Scotland - Increased Latency</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 3 days, 16 hours and 52 minutes</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Sep <var data-var='date'> 25</var>, <var data-var='time'>21:45:03</var> GMT+0</small><br /><strong>Investigating</strong> -
  We are seeing reports of slightly increased latency in Scotland.

This does not appear to be isolated to Olilo, as there are reports from outside our network as well.

The issue has been raised with CityFibre and we’ll provide updates as we receive them.</p>
<p><small>Sep <var data-var='date'> 29</var>, <var data-var='time'>14:36:45</var> GMT+0</small><br /><strong>Resolved</strong> -
  We’ve run checks on our side, and the increased latency only seems to be affecting certain CityFibre areas (Scotland).

Since there’s no actual downtime, this isn’t considered a fault on their end.

The best we can do for now is wait for CityFibre to resolve it, likely as part of ongoing upgrades or network improvements.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Maintenance/cmfmidn7b000kzsj5w80jegcz</id>
  <published>2025-09-23T22:00:00.000+00:00</published>
  <updated>2025-09-23T22:00:00.000+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/maintenance/cmfmidn7b000kzsj5w80jegcz"/>
  <title>Minor Service Interruption</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour</p>
    <p><strong>Affected Components:</strong> Core Router - 01, CityFibre</p>
    <p><small>Sep <var data-var='date'> 23</var>, <var data-var='time'>22:00:00</var> GMT+0</small><br /><strong>Identified</strong> -
  We’ll be carrying out scheduled maintenance on **Tuesday, 23/09/2025 between 23:00 and 00:00**.  
During this window, an FPC reboot will take place, which may cause a short service interruption. Service will automatically restore once the reboot is complete, and we expect the overall impact to be minimal.

This work is an exciting next step in bringing **Openreach customers onto the Olilo network**. Once complete, we’ll be ready to start welcoming you onto our **Core Network**.

Thanks for bearing with us while we continue building out the network!</p>
<p><small>Sep <var data-var='date'> 23</var>, <var data-var='time'>22:00:01</var> GMT+0</small><br /><strong>Identified</strong> -
  Maintenance is now in progress.</p>
<p><small>Sep <var data-var='date'> 23</var>, <var data-var='time'>23:00:00</var> GMT+0</small><br /><strong>Completed</strong> -
  Maintenance has completed successfully.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Maintenance/cmf9zh2mi000s13e7g54dl9bl</id>
  <published>2025-09-14T23:30:00.000+00:00</published>
  <updated>2025-09-14T23:30:00.000+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/maintenance/cmf9zh2mi000s13e7g54dl9bl"/>
  <title>Minor Service Interruption</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 30 minutes</p>
    <p><strong>Affected Components:</strong> Core Router - 01, CityFibre</p>
    <p><small>Sep <var data-var='date'> 14</var>, <var data-var='time'>23:30:00</var> GMT+0</small><br /><strong>Identified</strong> -
  We will be performing planned maintenance on **Monday 15/09/2025** between **00:30 and 01:00**.  
During this time, an FPC reboot will take place which may cause a brief interruption in service.  
We expect the impact to be minimal and service will automatically restore once the reboot is complete.

Thank you for your patience.</p>
<p><small>Sep <var data-var='date'> 15</var>, <var data-var='time'>00:00:00</var> GMT+0</small><br /><strong>Completed</strong> -
  Maintenance has completed successfully.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cme0305zo00vone80seu02d34</id>
  <published>2025-08-06T14:46:49.162+00:00</published>
  <updated>2025-08-06T20:49:31.335+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cme0305zo00vone80seu02d34"/>
  <title>CityFibre Outage | Glasgow &amp; Inverness</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 16 hours and 33 minutes</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Aug <var data-var='date'> 6</var>, <var data-var='time'>14:46:49</var> GMT+0</small><br /><strong>Investigating</strong> -
  The Major Incident Team are currently investigating an incident that is significantly impacting services in Glasgow and surrounding areas.

CityFibre are working to identify and resolve the issue as quickly as possible.

**Next update at: 16:30 06/08/2025**.</p>
<p><small>Aug <var data-var='date'> 6</var>, <var data-var='time'>15:46:11</var> GMT+0</small><br /><strong>Investigating</strong> -
  CityFibre have engineers on-site and our team will continue to triage the issue.

**Next update at: 17:30 06/08/2025**.</p>
<p><small>Aug <var data-var='date'> 6</var>, <var data-var='time'>17:50:57</var> GMT+0</small><br /><strong>Investigating</strong> -
  Engineers are currently on-site and actively investigating the issue, with all relevant support teams fully engaged.

To ensure the teams have the space to focus on resolution efforts, we are managing communications carefully and will provide an update as soon as further information becomes available.

Major Incident Management are closely monitoring the situation to ensure the issue is resolved as quickly as possible.

**Next update at: 20:00 06/08/2025**.</p>
<p><small>Aug <var data-var='date'> 6</var>, <var data-var='time'>19:02:47</var> GMT+0</small><br /><strong>Identified</strong> -
  Major Incident Management has held an additional bridge call with our NOC, Planning team, and on-site personnel.

It has been confirmed that the issue was caused by third-party civil works taking place in the Glasgow area. The ground team is currently developing a restoration plan to reinstate services between Glasgow and Inverness. Updates to follow as the incident progresses.

**Next update at: 22:00 06/08/2025**.</p>
<p><small>Aug <var data-var='date'> 6</var>, <var data-var='time'>20:49:31</var> GMT+0</small><br /><strong>Identified</strong> -
  Major Incident Management can confirm that a restoration plan is now in place. The strategy involves initiating a new cable route by tapping into the CF joint. From there, the team will install a new length of cable leading directly to a track joint positioned within one of the designated carriageway chambers. This track joint will serve as an intermediary connection hub, allowing for flexible routing and, in turn, restoring services to the affected customers. Due to the level of work required, it has been confirmed that this work will carry into the night. Updates to follow as the incident progresses.

**Next update at: 01:00 07/08/2025**.</p>
<p><small>Aug <var data-var='date'> 7</var>, <var data-var='time'>07:19:29</var> GMT+0</small><br /><strong>Resolved</strong> -
  CityFibre is pleased to confirm that the service disruption was fully resolved as of 03:08 on 07/08/2025.

The incident was caused by ongoing third-party civil works in the Glasgow area, which impacted multiple services across our network. We sincerely appreciate your patience and understanding during this time.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmdyh8zcr04rj2tefch51hg7h</id>
  <published>2025-08-05T11:50:02.905+00:00</published>
  <updated>2025-08-05T11:50:02.905+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmdyh8zcr04rj2tefch51hg7h"/>
  <title>CityFibre Outage | Glasgow</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 6 hours and 1 minute</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Aug <var data-var='date'> 5</var>, <var data-var='time'>11:50:02</var> GMT+0</small><br /><strong>Investigating</strong> -
  We have had reports of outages in the Glasgow area.

This has been reported to CityFibre and we will update once we get confirmation.</p>
<p><small>Aug <var data-var='date'> 5</var>, <var data-var='time'>11:55:33</var> GMT+0</small><br /><strong>Investigating</strong> -
  CityFibre are aware of an ongoing incident affecting a number of services. Engineers are conducting preliminary investigations to determine the comprehensive Service Impact Analysis. CityFibre apologise for the inconvenience caused. Our main priority is to deliver the levels of service that our customers deserve and as such we have invoked our Major Incident Process. CityFibre are working to effect a full restoration as soon as possible. They will continue to issue regular updates until all services have been restored. Further details will be issued within 15 minutes.</p>
<p><small>Aug <var data-var='date'> 5</var>, <var data-var='time'>12:45:41</var> GMT+0</small><br /><strong>Identified</strong> -
  The Major Incident Team are currently investigating an incident that is significantly impacting services in Glasgow and surrounding areas. We are working to identify and resolve the issue as quickly as possible.

**Next update at: 14:30 05/08/2025**.</p>
<p><small>Aug <var data-var='date'> 5</var>, <var data-var='time'>13:36:14</var> GMT+0</small><br /><strong>Identified</strong> -
  CityFibre have identified the cause of the incident as a broken fibre; engineers are on-site repairing the fibre to restore the service. CityFibre apologise for any inconvenience and are closely monitoring the situation to ensure the issue is fully resolved.

**Next update at: 15:30 05/08/2025**.</p>
<p><small>Aug <var data-var='date'> 5</var>, <var data-var='time'>14:32:29</var> GMT+0</small><br /><strong>Monitoring</strong> -
  CityFibre have now confirmed splicing has been completed.

Services are now back online.</p>
<p><small>Aug <var data-var='date'> 5</var>, <var data-var='time'>17:50:51</var> GMT+0</small><br /><strong>Resolved</strong> -
  This incident has been resolved.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmdvfz5sf03svt1h2sqkez1h3</id>
  <published>2025-08-03T08:51:06.110+00:00</published>
  <updated>2025-08-03T08:51:06.110+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmdvfz5sf03svt1h2sqkez1h3"/>
  <title>CityFibre Outage | Glasgow</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 7 hours and 34 minutes</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Aug <var data-var='date'> 3</var>, <var data-var='time'>08:51:06</var> GMT+0</small><br /><strong>Investigating</strong> -
  We have had reports of outages in the Glasgow area.

This has been reported to CityFibre and we will update once we get confirmation.</p>
<p><small>Aug <var data-var='date'> 3</var>, <var data-var='time'>10:39:40</var> GMT+0</small><br /><strong>Investigating</strong> -
  We have contacted CityFibre support again, who said they are unaware of any issues but have raised it with their networks team.

We are aware of the rumour that a damaged cable is being replaced, but we are waiting on CityFibre to confirm this. If this is planned work that was not communicated, we will be raising a complaint.</p>
<p><small>Aug <var data-var='date'> 3</var>, <var data-var='time'>11:46:19</var> GMT+0</small><br /><strong>Monitoring</strong> -
  We have had reports that members are now back online. We will keep monitoring this and resolve the incident if we don’t hear anything further.</p>
<p><small>Aug <var data-var='date'> 3</var>, <var data-var='time'>16:25:11</var> GMT+0</small><br /><strong>Resolved</strong> -
  As we have had no further reports, we will be marking this as resolved.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmdobniuw00dsvah5jc9cwe8f</id>
  <published>2025-07-29T09:15:41.800+00:00</published>
  <updated>2025-07-29T09:15:41.800+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmdobniuw00dsvah5jc9cwe8f"/>
  <title>CityFibre | Major Incident | Sheffield</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 10 hours and 9 minutes</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Jul <var data-var='date'> 29</var>, <var data-var='time'>09:15:41</var> GMT+0</small><br /><strong>Investigating</strong> -
  CityFibre are aware of an ongoing incident affecting a number of services. Engineers are conducting preliminary investigations to complete a comprehensive Service Impact Analysis. CityFibre apologise for the inconvenience caused and are working to effect a full restoration as soon as possible.</p>
<p><small>Jul <var data-var='date'> 29</var>, <var data-var='time'>09:30:14</var> GMT+0</small><br /><strong>Investigating</strong> -
  The Major Incident Team are currently investigating an incident that is causing a high level of degraded service across Sheffield.  
They are working to identify and resolve the issue as quickly as possible.  
**Completed Actions:**  
- Major Incident Process Invoked.  
- CityFibre NOC Engaged.  
- CityFibre Ground Team Engaged.  
**Next update at: 11:30 29/07/2025**.</p>
<p><small>Jul <var data-var='date'> 29</var>, <var data-var='time'>10:41:22</var> GMT+0</small><br /><strong>Investigating</strong> -
  The Major Incident Team held an additional bridge call with internal support teams and can confirm that our field engineers are currently on-site conducting triage testing at the affected location. Once the exact fault is identified using OTDR (Optical Time Domain Reflectometer) diagnostics, ground crews will be mobilised to the suspected break point to initiate repairs and restore degraded customer services.  
  
**Current Impact:** There is no total outage at this time. However, customers may be experiencing severe latency and reduced service performance. Further updates will follow as restoration efforts progress.  
  
**Completed Actions:**  
- CityFibre Ground Team at the affected FEX.  
  
**Next update at: 13:30 29/07/2025**.</p>
<p><small>Jul <var data-var='date'> 29</var>, <var data-var='time'>12:34:50</var> GMT+0</small><br /><strong>Identified</strong> -
  CityFibre engineers have conducted an OTDR trace at our Rotherham FEX (rot36), focusing on **Rack 9, ODF Ports 5 and 6**. The trace has identified a possible fault impacting **Port 6**, which is now under detailed investigation.

Major Incident Management is actively engaged, and ground teams have been authorised to proceed with repair activities once the fault location is confirmed. Communications have been approved for release as progress is made.

There is **no total outage** at this time. However, customers may experience **severe latency** or **degraded service performance**.

Ground teams are preparing for repair work following fault confirmation. Major Incident Management will oversee all updates and changes to ensure timely resolution.

**Next update at: 15:00 – 29/07/2025**.</p>
<p><small>Jul <var data-var='date'> 29</var>, <var data-var='time'>14:38:52</var> GMT+0</small><br /><strong>Identified</strong> -
  As of now, restoration efforts are actively progressing. Major Incident Management remains closely engaged and is working in collaboration with support teams to coordinate a comprehensive resolution to the issue. Further updates will be shared as progress continues.

  
**Current Impact:** There is no total outage at this time. However, customers may be experiencing severe latency and reduced service performance.

  
**Next update at: 17:30 29/07/2025**.</p>
<p><small>Jul <var data-var='date'> 29</var>, <var data-var='time'>19:25:07</var> GMT+0</small><br /><strong>Resolved</strong> -
  CityFibre are pleased to inform you that the issue has been resolved. The issue was due to damaged fibre; spare fibre was used to restore service. Thank you for your patience during this time.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Maintenance/cmcm4lmrd001k1lazhxv69yc3</id>
  <published>2025-07-04T00:00:00.000+00:00</published>
  <updated>2025-07-04T00:00:01.000+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/maintenance/cmcm4lmrd001k1lazhxv69yc3"/>
  <title>LNS/BNG session rebalancing</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Maintenance</p>
    <p><strong>Duration:</strong> 1 hour</p>
    <p><strong>Affected Components:</strong> Openreach, CityFibre</p>
    <p><small>Jul <var data-var='date'> 4</var>, <var data-var='time'>00:00:01</var> GMT+0</small><br /><strong>Identified</strong> -
  Maintenance is now in progress.</p>
<p><small>Jul <var data-var='date'> 4</var>, <var data-var='time'>00:00:00</var> GMT+0</small><br /><strong>Identified</strong> -
  During this window our partner will be introducing three new LNS routers in London to support terminating broadband subscribers.  
  
As a result they will need to re-balance sessions onto the new LNS routers by disconnecting currently online tails.  
  
Subscribers may observe a brief disconnect and reconnect during this window.</p>
<p><small>Jul <var data-var='date'> 4</var>, <var data-var='time'>01:00:00</var> GMT+0</small><br /><strong>Completed</strong> -
  Maintenance has completed successfully.</p>

        ]]>
  </content>
</entry>

<entry>
  <id>tag:status.olilo.co.uk,2005:Incident/cmcjbbepz00bh2i7og0ttxcta</id>
  <published>2025-06-30T12:59:00.000+00:00</published>
  <updated>2025-06-30T12:59:00.000+00:00</updated>
  <link rel="alternate" type="text/html" href="https://status.olilo.co.uk/incident/cmcjbbepz00bh2i7og0ttxcta"/>
  <title>CityFibre Outage | Slough</title>

  <content type="html">
  <![CDATA[
    <p><strong>Type:</strong> Incident</p>
    <p><strong>Duration:</strong> 17 hours and 48 minutes</p>
    <p><strong>Affected Components:</strong> CityFibre</p>
    <p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>12:59:00</var> GMT+0</small><br /><strong>Investigating</strong> -
  CityFibre has declared a **Major Incident** due to a core link failure between London (LON5) and Slough (SLO664). This is affecting all services in the area.</p>
<p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>13:00:00</var> GMT+0</small><br /><strong>Investigating</strong> -
  Engineers from both FLM and ERS have been dispatched to investigate. We&#039;re currently waiting on ETAs for both teams. At the same time, CityFibre has escalated to the vendor to check for any hardware or software faults in the active network equipment.</p>
<p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>15:40:00</var> GMT+0</small><br /><strong>Investigating</strong> -
  We are aware of an ongoing incident affecting a number of services in Slough. Engineers are carrying out preliminary investigations to assess the full service impact. We apologise for the disruption; restoring service is our top priority, and our Major Incident Process remains active. CityFibre is working urgently to achieve a full restoration. Further details will be issued within 15 minutes. Thanks for bearing with us!</p>
<p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>16:08:00</var> GMT+0</small><br /><strong>Investigating</strong> -
  The Major Incident Team has received ETAs for ERS and FLM of 18:45 and 18:40 respectively. In the meantime, the vendor responsible for the active equipment is continuing to investigate the hardware. The Major Incident Team continues to closely monitor the situation and will ensure that all necessary actions are taken to achieve a full resolution.</p>
<p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>18:48:00</var> GMT+0</small><br /><strong>Identified</strong> -
  ERS and FLM engineers on-site have identified that the issue lies with the active hardware.  
The Major Incident Team has engaged the vendor, who is continuing remote diagnostics via the FLM engineer on-site.  
The vendor has also dispatched a technician, with an ETA of approximately 21:00.  
The Major Incident Team continues to closely monitor the situation and will ensure that all necessary actions are taken to achieve a full resolution.  
**Next update at: 21:30 30/06/2025**.</p>
<p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>21:04:31</var> GMT+0</small><br /><strong>Identified</strong> -
  The vendor’s technician has arrived on-site and is working with their remote teams and CityFibre’s Technical teams to troubleshoot the issue.  
The Major Incident Team continues to closely monitor the situation and will ensure that all necessary actions are taken to achieve a full resolution.  
**Next update at: 23:30 30/06/2025**.</p>
<p><small>Jun <var data-var='date'> 30</var>, <var data-var='time'>22:18:08</var> GMT+0</small><br /><strong>Identified</strong> -
  The vendor’s technician has determined that the fan modules and power modules have failed in the chassis. New fan modules and power modules have been requested and have an ETA of 02:00.  
The Major Incident Team will continue to closely monitor the situation and will ensure that all necessary actions are taken to achieve a full resolution as quickly as possible.  
**Next update at: 02:30 01/07/2025**.</p>
<p><small>Jul <var data-var='date'> 1</var>, <var data-var='time'>06:46:41</var> GMT+0</small><br /><strong>Resolved</strong> -
  CityFibre is pleased to confirm that the issue was successfully resolved at 01:14 on 01/07/2025.

The issue was resolved by reloading the configuration, restoring full service functionality.

The Major Incident Team will continue to monitor for stability.</p>

        ]]>
  </content>
</entry>

</feed>