Artificial intelligence moves to the edge

The internet of things is evolving, with a move from automating routine tasks towards active machine learning

In healthcare, the internet of things might provide insights into individuals’ lives that could help deliver personalised solutions

The internet of things (IoT) has moved very much into the mainstream of industrial applications in recent years, but the promise of making sense of data on site is being touted as an imminent step change.

IoT devices will, it seems, meet artificial intelligence (AI). Where the IoT has hitherto mostly concerned itself with collecting small amounts of sensor data, the IoT of tomorrow will feature a lot more processing and storage power at the edge of the network.

Why move intelligence to the edge? Simply because the edge is where the data is.

“With this data decade there will be new insights,” said Jeff McCann, director of IoT and 5G strategy for customer solution centres at Dell Technologies.

In the healthcare space there will be personalised solutions driven by technology that gives insights into people’s lives on an individual basis. “How the patient or citizen lives will become central,” McCann said.

Boris Cergol, head of artificial intelligence at Comtrade Digital Services, said the next step will include running neural networks on mobile phones – the neural networks running on the phone itself, not in the cloud – as well as on more typical IoT devices equipped with extra processors.

Edge processing is simply necessary. “You don’t really have a choice,” he said. “In some use cases, video analytics for example, to send multiple 4K streams to the cloud would cost a fortune.”
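
A quick back-of-envelope calculation shows why. The sketch below assumes a bitrate of roughly 25 Mbit/s per 4K stream, an illustrative figure rather than one quoted by Cergol; even at that rate, a single always-on camera produces hundreds of gigabytes a day before any analysis has taken place.

```python
# Rough back-of-envelope: daily data volume for continuous 4K camera streams.
# The ~25 Mbit/s bitrate is an assumption for illustration only.

BITRATE_MBPS = 25            # assumed bitrate of one 4K stream, in megabits per second
SECONDS_PER_DAY = 24 * 60 * 60

def daily_volume_gb(streams: int, bitrate_mbps: float = BITRATE_MBPS) -> float:
    """Gigabytes generated per day by `streams` continuous video feeds."""
    megabits = streams * bitrate_mbps * SECONDS_PER_DAY
    return megabits / 8 / 1000   # megabits -> megabytes -> gigabytes

print(f"1 camera:   {daily_volume_gb(1):,.0f} GB/day")    # ~270 GB/day
print(f"10 cameras: {daily_volume_gb(10):,.0f} GB/day")   # ~2,700 GB/day
```

Multiply that across a site with dozens of cameras and the bandwidth and storage bill quickly makes the case for doing the analytics on the spot.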

Jeff McCann, director of IoT and 5G strategy for customer solution centres at Dell Technologies: “The whole point of AI is to enable humans to make better decisions by using data sets.” Photo by True Media

Other uses for edge AI, including video, are more obviously commercial. Grant Caley, chief technologist for NetApp, said in-store retail applications are already clear.

“They’re using video cameras not just to track theft and shrinkage but also to track the movements of people for marketing statistics and just-in-time stocking,” he said.

When partnered with so-called smart shelves, this can bring clear benefits to the bottom line. “One I heard of was dropping the prices when items weren’t moving fast enough, particularly fresh items,” said Caley.

AI and IoT technology need not be the preserve of multinationals either. “I’ve actually worked with SME customers who are providing machine learning solutions for other organisations,” said Jeff McCann.

One that McCann has worked with has developed machine vision capability for road repairs. “There’s lots of really interesting companies out there,” he said.

The tech behind the tech

Chip vendors have certainly noticed a gap in the market. Arm Holdings, designer of the world’s most popular CPU architecture, expects to get its low-power designs not only into smartphones but into objects as mundane as light bulbs.

In February, Arm unveiled the Cortex-M55 and the Ethos-U55, processor cores designed specifically for edge devices. The Cortex-M55 is a general-purpose microcontroller-grade CPU, while the Ethos-U55 is intended specifically for AI workloads. Designed for vector calculations, it is in effect a micro neural processing unit (NPU).

According to Grant Caley, the key is that the cost of these devices is only going in one direction: through the floor. “Processors generally are cheaper than ever; look at Arm,” he said.

If this all sounds like it is a long way from temperature sensors, that is because it is. Nonetheless, Caley said the expansion of processing at the edge does not mean that the data centre will become redundant.

“Modelling, the very intensive stuff, is still done at the data centre, but there is what is called ‘inference’ done at the edge,” he said.
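
As a rough illustration of that division of labour, the sketch below trains a small model “in the data centre” and then converts it for inference-only use on a constrained device. It uses TensorFlow and its Lite converter as one plausible toolchain, and the data and model are synthetic stand-ins rather than anything Caley or NetApp describe.

```python
# Minimal sketch of the split described above: train a model centrally, then ship a
# compact version to an edge device, where only inference runs.
# TensorFlow / TensorFlow Lite is used purely as an example toolchain; data is synthetic.
import numpy as np
import tensorflow as tf

# --- In the data centre: train on historical sensor data (synthetic here) ---
x_train = np.random.rand(1000, 4).astype(np.float32)        # e.g. four sensor channels
y_train = (x_train.sum(axis=1) > 2.0).astype(np.float32)    # toy "fault / no fault" label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# --- Convert to a small, flat model suitable for constrained edge hardware ---
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# --- At the edge: load the converted model and run inference on a live reading ---
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

reading = np.array([[0.9, 0.7, 0.4, 0.6]], dtype=np.float32)  # one fresh sensor sample
interpreter.set_tensor(inp["index"], reading)
interpreter.invoke()
print("fault probability:", float(interpreter.get_tensor(out["index"])[0, 0]))
```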

According to Boris Cergol, many of the potential use cases are not yet really imaginable. “They will become real next year – and next year is not that far off,” he said.

Processing could be performed in objects that we do not yet think of as devices at all.

“We could see walking sticks that analyse video in real time to alert about obstacles,” Cergol said. “I think AI will be adopted in all kinds of devices that we’re not yet considering, precisely because it will get so cheap to do this,” he said.

The 5G future

One technology we are still waiting for when it comes to edge AI is the connectivity infrastructure: 5G. This promises a step change because it will link edge devices back to the data centre or cloud.

“If you start to look at 5G, you will see how things will open up,” said Grant Caley.

The result will be pervasive computing, he said: “There was a concept a few years ago of ‘fog computing’. That capability is now much more practical.”

Grant Caley of NetApp: “Ethics, security and compliance are all challenges.”

But if the edge is where it is at, so to speak, why is pervasive connectivity necessary? Simply because the edge will not replace central data storage so much as augment it.

“Previously data was just sent to a data warehouse, but now it is processed and only the relevant part is sent elsewhere,” said Boris Cergol.

There will also be a requirement for AI training and this will largely be done in the cloud. “The training of all these algorithms will still happen in the cloud,” he said.

That said, there is the possibility of federated learning: the idea is that you train very small parts of the overall AI on the device itself. “In this way machine learning can happen without individual data going to a central store,” said Cergol.
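
A stripped-down sketch of the federated idea, using a toy linear model and plain NumPy rather than any production framework, might look like this: each device runs a few training steps on data that never leaves it, and the central server only ever averages the resulting parameters.

```python
# Minimal federated-averaging sketch: each device fits a small model on its own data
# and only the model parameters leave the device; raw readings never reach the centre.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_update(w, x, y, lr=0.1, steps=20):
    """A few steps of gradient descent on one device's private data."""
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three devices, each holding data that stays local.
devices = []
for _ in range(3):
    x = rng.normal(size=(50, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((x, y))

w_global = np.zeros(2)
for round_ in range(10):
    # Each device improves the shared model on its own data...
    local_weights = [local_update(w_global, x, y) for x, y in devices]
    # ...and the server only ever sees (and averages) the weights.
    w_global = np.mean(local_weights, axis=0)

print("learned weights:", w_global)   # approaches [2.0, -1.0]
```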

Dell recently released its first 5G-enabled device, the 9510 laptop, but the company is also working on complex industrial solutions. One such area is around digital oceans, a concept that covers shipping and ports and uses 5G to deliver environmental and cost benefits.

“We’re working with smart maritime networks to see what a smart shipment would look like in the future,” said Jeff McCann. “Think about fully autonomous ports using 5G networks to manage the trains and loading onto the trucks.”

Caley said the point was not the infrastructure itself but the new possibilities it would create. Data should be kept in mind at all times, he said: “Our focus is on the data that drives all this.”

The technology facilitates the use cases, he said, but the true focus is the data. “That’s where we see the need: creating a data fabric, a seamless data environment across the edge, the data centre and the cloud.”

Where there is a need, however, there are also risks, and AI opens up the possibility of truly dystopian scenarios. The technology may be unstoppable, but how we use it is a choice.

Boris Cergol said he is concerned about the unethical use of AI, and posited the possibility that potentially terrifying applications could be developed, infringing on civil liberties and radically altering social life.

“It will technically be possible to censor in advance,” he said. “I think we need to limit such cases, to ensure we do not stop people expressing themselves even when such an opinion might be considered undesirable.”

Doing this will require a debate that goes well beyond techno-joy and, indeed, beyond industry and government to bring in ethicists and the public at large. “What we need is an ongoing focus on ethics. The technology has reached a level that has become very, very powerful,” he said.

For Jeff McCann, the key is to keep humans at the centre of the process. In his vision, AI is about augmenting human capabilities, not replacing them or running roughshod over them.

“The whole point of AI is to enable humans to make better decisions by using data sets,” he said.

What will we be doing at the edge?

Artificial intelligence (AI) applications at the edge are all about processing the data where it is, whether that is in order to reduce latency or to save on transmission time and cost. Proposed applications include industrial and smart city uses.

On the industrial side, predictive maintenance in factories can be augmented by capturing vibration, temperature and noise information to make quick decisions on imminent problems. Longer-term analysis will likely remain in the cloud or data centre.
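
In practice the “quick decision” can be as simple as a rolling statistical check on the sensor stream. The sketch below is a generic illustration of that pattern, with made-up thresholds and readings, in which only flagged anomalies are forwarded for the longer-term analysis that stays in the cloud.

```python
# Edge-side anomaly check on a sensor stream: only flagged events (not the full stream)
# are forwarded for deeper analysis. Thresholds and data are illustrative only.
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Flags readings that drift far from the recent rolling baseline."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, reading: float) -> bool:
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(reading)
        return anomalous

detector = EdgeAnomalyDetector()
vibration_stream = [1.0, 1.1, 0.9, 1.05, 0.95] * 10 + [4.2]   # a sudden spike at the end
for t, reading in enumerate(vibration_stream):
    if detector.check(reading):
        # In a real deployment this is the only message that leaves the device.
        print(f"t={t}: anomaly detected, reading={reading}, forwarding alert to cloud")
```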

In a smart city context, image, audio and even video processing will demand edge processing, in order to remove lag and make data useful on a live basis.

Boris Cergol, head of AI at Comtrade Digital Services: “I think AI will be adopted in all kinds of devices that we’re not yet considering.” Photo by Iain White

Dell’s Jeff McCann can envisage a situation where personal medical devices will collect and process data that will then be used to deliver personalised healthcare. “How many steps do I take is a simple example,” he said.
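
Even that simple example maps neatly onto the edge model: the counting happens on the wearable itself, and only the total need ever leave the device. The toy sketch below illustrates the idea with a crude threshold-crossing detector over simulated accelerometer readings; the signal and threshold are invented for demonstration.

```python
# Toy step counter: a wearable counts steps locally from accelerometer magnitudes,
# so only the daily total needs to be shared. Signal and threshold are made up.
import math

def count_steps(samples, threshold=11.5):
    """Count upward crossings of a magnitude threshold (a crude step detector)."""
    steps, above = 0, False
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and not above:
            steps += 1
            above = True
        elif magnitude <= threshold:
            above = False
    return steps

# Simulated accelerometer trace: gravity (~9.8 m/s^2) plus three distinct spikes.
quiet = [(0.0, 0.0, 9.8)] * 5
spike = [(0.0, 0.0, 13.0)]
trace = quiet + spike + quiet + spike + quiet + spike + quiet
print("steps counted:", count_steps(trace))   # 3
```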

Boris Cergol of Comtrade said edge processing will help with security and compliance. “A whole other area that is important is privacy. Here, edge computing really does have an edge,” he said.

For autonomous vehicles, of course, edge AI is a sine qua non. No matter how fast and pervasive our networks become, safety will always demand that machine vision and the other data driving transport be processed in the vehicle.

Face-to-face with surveillance

Controversially, facial recognition is one of the clearest uses of edge AI. Live facial recognition technology has matured rapidly and is now in use in the real world.

Some have interpreted the EU’s general data protection regulation (GDPR) as outright banning face recognition, though law enforcement agencies are among those likely to take a different view.

NetApp’s Grant Caley said that the ethical implications have not gone unnoticed in the industry. “Ethics, security and compliance are all challenges,” he said.

They haven’t gone unnoticed in government, either. In February the European Commission published a white paper on artificial intelligence, which noted that processing that uniquely identified a person required significant and specific justification.

“Specifically, under the GDPR, such processing can only take place on a limited number of grounds, the main one being for reasons of substantial public interest,” it said.

For now, across the world, different attitudes prevail. In Britain, London’s Metropolitan Police force is already using face recognition. “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London,” assistant commissioner Nick Ephgrave said in a statement to the press.

Ephgrave said the technology was already widely used across the country by the private sector and claimed the public supported its use.

In the undisputed world capital of tech, San Francisco, however, face recognition was banned in 2019. A mooted EU-wide, five-year ban was dropped from the Commission’s report, though it recognised the technology “carries specific risks for fundamental rights”, and warned of potential racial, sex and other biases in processing.

Bizarrely, this led to the report suggesting that AI systems at work in the EU should not be controlled in the name of freedom, but instead given access to data from the EU, “retraining the system in the EU in such a way as to ensure that all applicable requirements are met”.