Sponsored

Keeping the lights on all the time

Today, with data driving every transaction and interaction, if you go offline then, in effect, you are out of business

Ciaran Forde, business segment manager for data centre and ICT (EMEA), Eaton: the company has opened a dedicated cybersecure lab to help clients assess the risk to their operations and how to take preventative measures

The concept of mission-critical infrastructure (MCI) rather implies the existence of frivolous infrastructure. Given that businesses do not spin up servers for a laugh, however, what it really means is that some infrastructure is so crucial that its loss, even temporarily, will grind operations to a halt. But what kind of systems are typically designated as mission critical?

Any system whose failure causes the primary function or operation of a business or facility to completely cease can be designated as mission critical, said Ciaran Forde, business segment manager for data centre and ICT for Europe, the Middle East and Africa (EMEA) at Eaton.

“In a data centre, that primary function relates to the IT compute and the associated inbound and outbound telecommunications,” he said.

For example, perimeter security might fail, or even the lighting, but so long as the servers and storage devices function and communicate, the primary function is maintained.

“So, top of the charts is, of course, the IT systems themselves, hardware and software, and the power that feeds them. Then the connecting telecom networks, be that the transmission equipment, physical connection or service. Cooling is up there too as, eventually, loss of a controlled environment and overheating will cause operations to cease. More dramatically of course would be actual fire or activation of fire life safety systems causing complete or partial shutdown,” he said.

Power on

When it comes to protecting crucial systems, preference must be given to power, however.

“Megabits only survive in the presence of megawatts. If the grid fails, what keeps the data centre running is backup power systems,” said Forde.

This will consist of an uninterruptible power supply (UPS) with two functions. Firstly, it conditions the incoming power, converting the raw grid supply, whose imperfections can adversely affect data centre equipment, into clean, stable power.

Secondly, Forde said, it is the ‘power brain’ sensing any grid outage and instantly taking action to maintain power through its large battery energy storage.

“This provides enough time for either the supply to return or the activation of backup generators on site. It can be said to be the primary power element to sustain a mission-critical operation,” he said.
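As a rough illustration of that bridging role, the sketch below estimates how long a UPS battery bank could carry a given IT load and compares that with the time generators need to start and accept the load. The figures and the calculation are illustrative assumptions made for this article, not Eaton specifications.

```python
# Back-of-envelope check: can the UPS batteries bridge the gap until the
# backup generators are ready? All figures are illustrative assumptions.

def ups_autonomy_minutes(battery_kwh, it_load_kw,
                         inverter_efficiency=0.95, usable_fraction=0.8):
    """Estimate how many minutes the battery can carry the IT load."""
    usable_kwh = battery_kwh * usable_fraction   # avoid deep discharge
    draw_kw = it_load_kw / inverter_efficiency   # account for conversion losses
    return usable_kwh / draw_kw * 60

battery_kwh = 500          # assumed battery energy storage
it_load_kw = 1200          # assumed critical IT load
generator_start_min = 1.0  # assumed time for generators to start and take load

autonomy = ups_autonomy_minutes(battery_kwh, it_load_kw)
print(f"Estimated autonomy: {autonomy:.1f} min")
print(f"Margin over generator start-up: {autonomy - generator_start_min:.1f} min")
```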

Forde said that there are a number of recurring threats when it comes to system failure in mission-critical infrastructure.

“The Uptime Institute notes that, in data centres, the rate of outages is generally falling relative to the increasing number of data centres. But still, of operators surveyed, 60 per cent reported outages in the last three years. Of these, 66 per cent were negligible to minimal, with 30 per cent ranking as significant to severe,” he said.

Human factors tend to feature high on the list of causes of outages, and after-the-fact analysis points to the majority being avoidable, whether through better practice or better design.

“Next would be the data centre design itself, perhaps not having the correct level of resilience or redundancy built into some or all of the systems deployed. Then there are issues that are also design related, but associated with the interdependencies between systems, be that along the power train, IT systems and load, cooling, operational controls and management,” he said.

Finally, individual component and product reliability is a factor.

“Intrinsic product quality and an understanding of potential failure mechanisms at component and product level are part of the data centre engineer’s toolkit.”
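To make the redundancy and reliability arithmetic more concrete, the short sketch below shows the standard steady-state availability calculation from MTBF and MTTR, and how duplicating a unit (assuming independent failures) reduces expected downtime. The MTBF and MTTR values are invented for the example, not real product data.

```python
# Illustrative availability arithmetic: single unit vs. a redundant pair.
# MTBF/MTTR figures are invented for the example, not real product data.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

single = availability(mtbf_hours=100_000, mttr_hours=8)

# Two independent units in parallel: the system is down only if both are down.
pair = 1 - (1 - single) ** 2

for label, a in (("single unit", single), ("redundant pair", pair)):
    downtime = (1 - a) * 365 * 24 * 60   # expected minutes of downtime per year
    print(f"{label}: availability {a:.8f}, ~{downtime:.2f} min downtime/year")
```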

To best address continuity of power, businesses need to look beyond the traditional building blocks of design, where individual components and equipment are mixed and matched. Instead, taking a systems approach, where the whole end-to-end infrastructure is considered at the design phase, produces a finely tuned, engineered system.

“A systems approach considers, on top of the electrical infrastructure, the influences of mechanical and data systems, allowing simplified compatibility between products and creating seamless integration of digital and power demands. It takes the concept of design, build, operate and maintain to create a self-aware, self-optimised, intelligent infrastructure that connects the electrical and data domains across the end-to-end system,” he said.

Finally, Forde said, in light of cyber attacks on power systems, the days of cyber security being seen as a separate issue are over.

“The reason for this is that today’s operational technology [OT] is increasingly reliant on IT-type smart devices, protocols, sensors, actuators, drivers and controllers that are in themselves digital and networked,” he said.

Indeed, even if such OT networks are ‘out of band’ and kept off the main IT and telecom networks, they are still vulnerable.

“The extent of the OT network may also make it difficult to spot interconnections or points of access between the two. This is why not only does Eaton design and test its products to UL cybersecurity standards, it has also opened a dedicated cybersecure lab to help clients assess the risks to their operations and take preventative measures,” he said.