Find out how the digital infrastructure we put in place coped with Storm Ophelia.
Posted on 24 October 2017. When the weather forecasts first indicated that Northern Ireland would be getting a battering from former hurricane Ophelia last week, we immediately knew that would create a lot of work for our client NIE Networks, which is responsible for the upkeep of the electricity network in Northern Ireland.
We also knew that among the side-effects of Storm Ophelia would be a huge increase in demand on the NIE Networks website, which we host. We had previously blogged about how bad weather results in a massive uplift in the use of energy company websites. On Monday morning, hours before Ophelia was due to hit Northern Ireland, we wrote about the strain that would be placed on NIE Networks’ digital infrastructure by the imminent storm.
And so it proved. The NIE Networks website received 10 times more traffic on Monday than it would get on a normal day. More than 26,000 people viewed the site on the day of the storm, a 1,394.8% increase on the equivalent Monday in September. And with many of those users checking the site regularly throughout the day, there were more than 88,000 pageviews over the course of the day.
Of course, many well-known websites get 88,000 daily pageviews or more. What’s harder to achieve is the flexibility and scalability to cope with that huge jump from the average day, caused by people wanting to report damage to the network, check on known faults before reporting a problem, or find out when power was due to be restored in their area. Would your hosting infrastructure cope with a 1,000% increase in traffic from its usual levels?
It was important for the website to remain stable and available to allow NIE Networks to perform its public service duties. CPU and memory usage ticked over nicely all day. The next graph shows network traffic on Monday 16 October. The website got particularly busy around the time that most of the properties affected by the storm lost their power, peaking at just shy of 100Mbps around 6pm.
You can compare that with the graph below, which covers a 24-hour period between 11am on Wednesday and 11am on Thursday. During that period, network traffic peaked at just over 15Mbps while a backup was being performed. Other than that, traffic did not go beyond 10Mbps.
So, anybody visiting the website at any stage during the storm was able to provide or find valuable information. NIE Networks performed its public service functions admirably, providing regular website updates on the situation (24 in total, from a final warning on Monday morning through to details of the final faults being repaired on Wednesday).
In truth, we were quietly confident the NIE Networks website would cope admirably with the onslaught because we perform regular and rigorous testing in which we subject it to precisely the sort of scenario caused by a storm like Ophelia. We probably wouldn’t have written a blog about it beforehand if we had been concerned.
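To give a flavour of what that kind of testing involves, here is a minimal, illustrative load-test sketch in Python. It is not our actual tooling, and the endpoint is a throwaway local server standing in for a staging copy of a real site; real exercises use dedicated load-testing tools and far higher volumes. It simply shows the shape of the exercise: fire many concurrent requests and count how many succeed.

```python
# Minimal load-test sketch (illustrative only): start a throwaway local
# HTTP server, fire concurrent requests at it, and count successes.
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def start_stub_server():
    # A stand-in endpoint; a real test would target a staging copy of the site.
    server = http.server.ThreadingHTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


def hit(url):
    # One request; treat any non-200 response or network error as a failure.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def load_test(url, requests=200, concurrency=20):
    # Fan the requests out across a pool of worker threads.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(hit, [url] * requests))
    ok = sum(results)
    return ok, requests - ok


if __name__ == "__main__":
    server = start_stub_server()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    ok, failed = load_test(url)
    print(f"{ok} succeeded, {failed} failed")
    server.shutdown()
```

In a real exercise you would ramp the request rate up past the expected storm-day peak and watch the same metrics we monitored on the day: response codes, CPU, memory, and network throughput.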
Essentially, the hard work had already been done. We just had to monitor the situation to make sure everything was going to plan. Thankfully, it was.
What is harder to replicate on a computer are the real-world scenarios that NIE Networks staff faced in responding to the damage caused by Ophelia. Training and preparation are one thing, but the NIE team had difficult and dangerous work to carry out last Monday and throughout the following week. They did awesome work, backed up by their colleagues in the office, and we thank them for that.