Delivering breakthroughs at breakneck speed

High-performance computing represents a vital, but lesser-known, corner of the IT world, writes Jason Walsh

Lee Rand, director of high-performance computing and artificial intelligence at HPE EMEA

Bigger, better, faster, more is not always merely a statement of ego; sometimes these things matter, and nowhere is that truer than in the case of computational power. And however fast a workstation or desktop computer, or however much grunt can be leased from a cloud provider, there is nothing that can match the performance of supercomputing, also known as high-performance computing (HPC).

There is also a significant geopolitical dimension. Historically, the leading nations in HPC have been major political players: the United States, Japan, France, Germany and Britain — most of which were, incidentally, home to major research institutions and computer manufacturers in the key periods of the 1970s and 1980s. Considering their use in the defence sector, as well as in key industries such as oil and gas, this dominance is hardly surprising — but it also means that supercomputing installations serve as a rough gauge of technological and economic power.

It should come as no surprise then, that of the top 500 known supercomputers in the world today, 206 are in China. The US is in second place with 124, and Japan third with 36.

This year, however, the US regained the lead for the single most powerful machine: Summit, built by IBM for Oak Ridge National Laboratory, a US Department of Energy research lab, was hailed as the fastest in the world with a peak performance of 200 petaflops (a petaflop being a thousand million million, or 10¹⁵, floating-point operations per second).

Not unlike the relationship between Formula One racing cars and family saloons, early supercomputers bore little relationship to desktop PCs. Developments did filter down, however — and eventually started to filter up, too, with parallel processing meaning some supercomputers have contained significant amounts of high-end commodity hardware.

Not all supercomputers are located in major world powers, though, and Ireland has its own supercomputing centre in the form of ICHEC, the Irish Centre for High-End Computing.

“Nowadays almost every country has supercomputing resources. Nations today simply cannot operate without them,” said ICHEC’s director, professor Jean-Christophe Desplat.

“This is not only true in order to support national research, but also to develop commercial activities as more and more companies are processing large amounts of data; so-called ‘big data’.

“For sure, countries like the US, Germany and Japan have the largest supercomputing processing capabilities, but other, smaller countries, like Ireland, are necessarily increasing their use of, and indeed reliance on, supercomputers,” he said.

Indeed, over the summer the various EU institutions approved the European High-Performance Computing Joint Undertaking (EuroHPC), of which Ireland is a founder member.

ICHEC, meanwhile, was founded in 2005 with funding from Science Foundation Ireland (SFI) and the Higher Education Authority (HEA), with the original aim of establishing and operating Ireland’s first national academic HPC service for the benefit of all Irish higher education and research institutions.

“Since its humble beginnings as a seed project grant, ICHEC has grown to an organisation of over 30 staff and is recognised internationally by its peers and by industry as a partner of choice for HPC services and R&D collaborations,” said Desplat.

ICHEC has since directly supported over 1,400 academic researchers, he said, and public sector engagement followed in 2007 when ICHEC established a strategic partnership with the national weather forecasting agency Met Éireann, later followed by the Central Statistics Office (CSO).

An active programme of industry engagement was initiated in 2009 with support from Enterprise Ireland, he said. Clients have included Tullow Oil, while work for Opening.io has involved neural network research, including the use of GPUs to augment CPUs for faster performance.

But what does ICHEC do, exactly, and what HPC applications are in use in Ireland?

Dr Simon Wong, who leads ICHEC’s education, training and outreach, said that ICHEC had just brought online Ireland’s new national supercomputer.

“The applications researchers run on the system typically require lots of computing power to run larger or longer simulations, and/or to process large amounts of data,” he said.

“This allows research to operate in domains including nanotechnology, where researchers simulate interactions between atoms and molecules to develop new materials. Users of the system also run simulations for medical device development, and process large amounts of biological data for genomics to better understand diseases and treatments. The new system is also a powerful platform in the field of artificial intelligence, allowing users to train and deploy the large neural networks which have become the foundation of many new discoveries. Finally, the machine will be deployed to create ICHEC’s quantum computing environment for the development of quantum applications and libraries.”
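
For a flavour of what such work looks like in code, the sketch below trains a small neural network, running on a GPU when one is available and falling back to the CPU otherwise, the principle behind using GPUs to augment CPUs mentioned earlier. It is a minimal illustration only, assuming the open-source PyTorch library; the model and data are invented stand-ins rather than anything ICHEC has described.

```python
# Minimal sketch, not ICHEC code: train a tiny neural network, using a GPU
# when one is available and falling back to the CPU otherwise.
# Assumes the PyTorch library; the model and data are illustrative only.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2)).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in data: 256 samples with 64 features each, two classes.
inputs = torch.randn(256, 64, device=device)
labels = torch.randint(0, 2, (256,), device=device)

for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()   # gradients are computed on the GPU when present
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```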

Niall Wilson, infrastructure manager at ICHEC, said that despite the explosion in processing power following the cloud revolution, HPC still needed to be understood as a discrete sector. The tasks performed in HPC have a very different focus from that of the cloud providers.

“It is true that cloud computing offerings from tech giants — for example, Amazon, Google, [and] Microsoft — have made large computer and storage resources easily available to scientists, but the design and pricing of these offerings are focused on web and enterprise services.

“HPC requires not only the use of massive processing power, but also specialist networks to enable efficient parallel processing, as well as a large and diverse set of bespoke software. Thus, national HPC centres need to have dedicated systems to cater for these needs in a more cost-effective way than can currently be supplied by the cloud,” he said.
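
The parallel processing Wilson describes typically relies on message passing between many cooperating processes, with those specialist interconnects carrying the traffic between them. The sketch below, assuming the mpi4py library rather than any software ICHEC or HPE has specified, splits a simple summation across processes and combines the partial results over the network; it is a toy illustration of the pattern, not a production HPC code.

```python
# Minimal sketch of message-passing parallelism, assuming the mpi4py library.
# Run with, for example: mpirun -n 4 python sum_reduce.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's identifier
size = comm.Get_size()   # total number of processes in the job

# Each process sums its own strided slice of the numbers 0..999,999...
local_total = sum(range(rank, 1_000_000, size))

# ...and the partial results are combined across the interconnect.
total = comm.reduce(local_total, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum of 0..999,999 across {size} processes: {total}")
```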

The changing face of supercomputing

Today, the leading names in HPC are HPE, Fujitsu, IBM and Cray, and typical areas of application include medical research, oil and gas, government and defence, and climate research.

“HPE is engaged across multiple sectors within the HPC industry,” said Lee Rand, director of HPC and artificial intelligence at HPE for Europe, the Middle East and Africa.

“The primary sectors [for HPE] are government and defence, education and research, financial services, life sciences, manufacturing and climate.”

HPE’s HPC software ranges from workload managers such as PBS and Moab to HPE-specific tools, including its own MPI library, the HPE Message Passing Toolkit, and the HPE Performance Cluster Manager (HPCM).

“We also provide HPC software environments for alternative processors such as ARM, and AI workloads,” he said.

Also included in HPE’s portfolio are the assets of SGI, a long-term innovator in HPC — and one-time owner of Cray — that, despite being well known and well loved, found itself unable to compete on economies of scale.

For Rand, acquiring SGI and its proven technologies is a recognition of HPC’s role in the company’s future plans.

“HPE is committed long-term to the HPC market,” he said.

“The acquisition of SGI enhanced this commitment by adding capability from a software, support and technology perspective.”

But are supercomputing applications moving to the cloud nonetheless? And isn’t this just the latest example of commodity hardware eventually catching up with the ultra high-end, the very thing that did for SGI in the end? Partly, but the cloud concept allows for new supercomputing applications — and anyway, the high end keeps getting higher, not to mention continuing to meet needs far removed from traditional server and workstation setups.

“We are seeing some applications move to the cloud, but data and security remain the largest issues to overcome,” said Rand.

“We believe that the market will move to a hybrid model where applications reside on infrastructure most suitably aligned to the specific workflow and use case. In this instance, the workload may be best suited to being run on premise with others consumed within a cloud infrastructure whether this be on or off premise.

“The challenge will be the software that makes this transparent to the user community.”

Any move in this direction would have one obvious upshot, of course: supercomputers are notoriously expensive, to the point that even power consumption is a major consideration. Shared resources could give access to businesses — in the domestic manufacturing sector, for instance — that hitherto simply could not afford HPC.

“This is one of the key aspects of HPC provisioned within the cloud,” said Rand.

“The cloud doesn’t have to be defined as a huge migration to a tier-one service provider. The cloud can also be simply defined as ‘someone else’s computer’. We are seeing many HPC centres offer their resources in this model. This not only lowers the entry price for HPC but, more importantly, allows new users and new workflows to benefit from an HPC ecosystem without the large upfront costs.”

HPE recently announced an ARM-based supercomputer. Using this particular processor architecture, originally developed for the Acorn Archimedes computer in the 1980s and now the backbone of almost all mobile technology, suggests a very particular focus: power consumption.

ARM processors, the most successful reduced instruction set computing (RISC) CPUs on the market, are known for their internal simplicity, which means they sip electricity where other processors guzzle it.

“The launch of the new ARM-based supercomputer is the result of a development programme by the HPE Advanced Technologies Group,” said Rand.

“The aim of the ARM-based system is to explore alternative chipset technology for HPC workloads. ARM is one of these technologies. Will ARM deliver a lower-power, higher-performance supercomputer at an application level? This is what we intend to find out!”