Sponsored

Developing a deep understanding of business data

The power of AI goes well beyond chatbots and content creation. Businesses today are on the cusp of unlocking a new level of predictive insight from their data

Muhammad Zeeshan Khan, chief technology officer of the Microsoft services division at TEKenable

Today, we can clearly see that AI is destined to change not only IT operations but business processes themselves. However, despite the hype – and genuine excitement – one major question remains: how can businesses get started?

Soon, and one step at a time, said Muhammad Zeeshan Khan, chief technology officer of the Microsoft services division at TEKenable.

This is because using AI, even for straightforward tasks, is a skill, and like all skills it requires practice.

“I think now is the time to start looking at it. Even with generative AI. I have early access to the Copilots from Microsoft and I can say that it takes some getting used to in order to get the best result out of them. It’s like a skill you have to learn, so it’s not a level playing field if there is a six-month gap between two people starting to use AI,” he said.

TEKenable

Year founded: 2002

Number of staff: 200

Why it is in the news: Recent moves by Microsoft will drive AI-based Copilot into the mainstream of business

It is important to get started, though, because the direction of travel is already clear: AI will radically change business.

“It’s not just about automating tasks, but also the predictive capabilities of AI that help business decisions,” Khan said.

The predictive ability of AI is not new. However, thanks to new forms of AI, these capabilities now extend beyond the realm of developers and data scientists.

“What has changed is that access to AI technology is more widely available. Machine learning [ML] and data science require specialist training and toolsets, whereas the thing about generative AI is that in order to work with it you really only need the language skills,” he said.

The rise of generative AI does not spell the end for ML and other forms of AI, however.

“They’re not mutually exclusive, they actually complement each other. Think of it this way: generative AI can create new data, whereas ML can predict based on data.”

Given this, he said, one interesting application for generative AI is actually using it to create synthetic data for training AI models.

“ML produces a number or forecast. Generative AI produces something much richer from our [human] perspective”.
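As a rough illustration of that pairing, and not a reflection of TEKenable's actual tooling, the short Python sketch below hard-codes a few rows standing in for data a generative model might produce, then fits a simple ML model that turns them into a forecast.

```python
# Rough sketch of the pairing described above: rows standing in for
# synthetic data from a generative model, fed to a classic ML model
# that turns them into a forecast. Illustrative only.
from sklearn.linear_model import LinearRegression

# Pretend a generative model was asked to invent plausible
# (ad_spend, revenue) pairs to train on.
synthetic_rows = [(1.0, 12.0), (2.0, 21.0), (3.0, 33.0), (4.0, 40.0)]

X = [[spend] for spend, _ in synthetic_rows]
y = [revenue for _, revenue in synthetic_rows]

model = LinearRegression().fit(X, y)   # ML learns from the data
print(model.predict([[5.0]]))          # and produces a number: a forecast
```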

The key advantage of generative AI for business is that it can make sense of unstructured data, which is to say: most of the data that businesses actually run on.

“It can make sense of unstructured data really quite quickly, and a significant amount of business data is unstructured: emails, social media posts, documentation, all sorts of things.

“A lot of that information will be in a [natural] language, be that English or German or French, and extracting from that, reformatting it and summarising it, is what generative AI excels at. That is the first major use case for Microsoft Copilots,” Khan said.

One of the significant advantages of generative AI is that it can make sense of unstructured data really quite quickly

However, the implications of this go far beyond the chatbots that we have all become accustomed to.

“We have set up an LLM [large language model] in such a way that it can access an SQL database. Right now, if you use an SQL database you are likely a database programmer, but in this case a business user can now query that database.

“It turns the natural language query into an SQL query and then provides the answer,” he said.

“That data is there and you can now unlock it. It’s something that was quite cumbersome before, but is now easy”.
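To make that pattern concrete, here is a minimal sketch in Python. The generate_sql helper is a hypothetical stand-in for the LLM call, and an in-memory SQLite table stands in for the business database; TEKenable's actual implementation is not shown here.

```python
# Minimal sketch of the natural-language-to-SQL pattern described above.
# generate_sql() is a hypothetical placeholder for the LLM call; the
# in-memory SQLite table stands in for a real business database.
import sqlite3


def generate_sql(question: str, schema: str) -> str:
    """Placeholder for an LLM call that turns a plain-language question
    into SQL. A real system would send the question and schema to a
    model; here we return a canned query for illustration."""
    return "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"


# Toy database standing in for the company's SQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 200.0)])

question = "What were total sales by region?"
sql = generate_sql(question, "sales(region TEXT, amount REAL)")

for row in conn.execute(sql):   # the database answers as usual
    print(row)
```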

One thing we all know about AI, though, is that the computational power required to run it is so vast that we will not be running it ourselves, instead relying on cloud providers.

In fact, this is only half true, said Khan. In reality, local AI does exist – and for good reason.

“There is a concept of AI at the edge, and while it is true that you do need a lot of data and computational power to create a useful model, it is also true that you can run it at the edge, enabling real-time processing.”

In other words, while training an AI demands a lot of computing power, inference can run on far more modest systems. This matters in latency-sensitive applications that cannot wait for a round trip to the cloud to make a decision, such as self-driving cars.

“In the example of self-driving cars, you can see that the AI was trained on a lot of data and trained in the cloud, but it runs locally,” Khan said.
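That training-versus-inference split can be sketched in a few lines of Python. The libraries below are illustrative stand-ins, assuming a model trained centrally and then shipped to a device that only runs local inference.

```python
# Toy sketch of the split Khan describes: train once on a capable
# machine ("the cloud"), ship the fitted model to the edge device,
# which only runs inference. Libraries here are stand-ins, not the
# stack any particular vendor uses.
import pickle
from sklearn.linear_model import LogisticRegression

# --- "cloud" side: training on historical sensor data (toy data here) ---
X_train = [[0.1], [0.2], [0.8], [0.9]]
y_train = [0, 0, 1, 1]                      # 0 = normal, 1 = anomaly
model = LogisticRegression().fit(X_train, y_train)
artefact = pickle.dumps(model)              # the artefact shipped to the edge

# --- "edge" side: lightweight local inference, no round trip to the cloud ---
edge_model = pickle.loads(artefact)
live_reading = [[0.85]]                     # a sensor value arriving in real time
print(edge_model.predict(live_reading))     # immediate local decision
```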

While cars are certainly a dramatic example, they are not the only application for edge AI: many more quotidian examples exist, and new use cases are beginning to reveal themselves, too.

The manufacturing sector, for example, Khan said, can build on existing predictive maintenance techniques and extend them to the product itself.

“In a smart factory, edge AI can detect anomalies in the production process and immediately take action.

“Or, for instance, the new phone from Samsung, the S24, has a live [audio] translator right there on the phone.

“This is all being done locally, though of course the training was done in the cloud,” he said.