Artificial Intelligence is predicted to boost Ireland’s GDP by 11.6 per cent by 2030, but much of that future remains unmapped. How will we create ‘AI for good’, addressing its ethical quandaries while taking advantage of its benefits? The AI Summit convened at Croke Park last Thursday to discuss this question and others. The second of its kind after last year’s inaugural event, it was aimed at CEOs, innovation leaders, developers, engineers and others working in this rapidly developing field.
This year’s theme was ‘The Roadmap to Creating an AI Ireland’. Delivering an opening keynote, David Hegarty, Assistant Secretary of the Department of Business, Enterprise and Innovation, discussed updates made to our national AI strategy.
“The key objective is to make sure we have the frameworks in place to navigate our way to a low carbon and digital future,” Hegarty said. “We know many jobs will be created by AI, but others will be transformed or displaced.”
A national strategy on AI is currently in the works – provided a new government is put in place relatively soon, it should be finalised within the next few months. Under the title AI: Here for Good, the strategy will address ethical issues, governance, regulation, the negative impacts of AI, and what support should be put in place to encourage AI developments in the SME sector.
Speaking on How AI Will Shape Ireland’s Future, Vincent McCarthy, co-founder of the Festival of Curiosity and chief executive of Curiosity Studio, addressed questions of change; how can AI maximise the work we do, and how can it minimise risk? Citing the Edelman Trust Barometer of 2020, he noted that 80 per cent of employees in Ireland right now are worried about losing their jobs, particularly as a result of AI.
“Trust in technology companies is decreasing in general,” McCarthy said. “We can see that the public’s view of tech companies has lowered significantly lately.” He stressed the need for both the public and private sector to listen to public concerns: “Ireland needs to become a test-bed of publicly-trusted AI”.
Speaking as part of a panel discussion on how Ireland can develop a thriving AI system, Professor Noel O’Connor, CEO of the Insight Centre for Data Analytics, highlighted Ireland’s AI-related Masters and PhD programmes, as well as the large number of practical test beds generating data – “the lifeblood of AI research,” as O’Connor put it.
“We’ve noticed that our industry partners are interested in longer-term strategic engagements, which creates a roadmap, both long-term and short-term.” Echoing McCarthy’s talk earlier, he stressed the importance of earning the public’s trust: “What we need is more engagement with the public. Not just telling people what we’re doing, but engaging them as stakeholders.”
In a talk on ModelOps: Operationalising AI, Dr Iain Brown, Head of Data Science at SAS UK and Ireland, and Adjunct Professor of Marketing Analytics at the University of Southampton, argued for bringing data science together into a framework which can be easily implemented in business applications.
The challenges with deploying AI models – only 60 per cent of which, according to Gartner, are actually put to use – include integrating analytic solutions into workflows and serving models into the infrastructure where decisions are made. Resistance to change, a lack of KPIs and a failure to address AI’s ethical quandaries are also holding organisations back. “Deployment through decision-making should be a crucial step,” said Brown.
In a panel on Ethics and Standards for AI: Concerns and Best Practices, former Google engineer Laura Nolan discussed tech ethics as a whole.
“Self-regulation doesn’t necessarily get us to where we want to be,” Nolan said, making reference to Cambridge Analytica as one example.
GDPR triggered a wave of industry concern; might similar regulations, like auditing and ethics assessments, be implemented to regulate AI? Professor Barry O’Sullivan, Director of the Insight Centre for Data Analytics at UCC and the SFI Centre for Research Training in Artificial Intelligence, raised the issue of definitions: the meaning of ‘AI’ itself, as a term, remains surprisingly vague, and this poses a problem. The likelihood of ‘killer robots’ was also raised: “Firstly, close your door, because robots aren’t able to open doors,” said O’Sullivan. “Then make yourself a cup of tea. Take your time, and eventually the robot’s battery will run down...”
Barry Lowry, government CIO of the Department of Public Expenditure and Reform, spoke on AI in the public sector. One of several speakers critical of the EU’s recent whitepaper on AI (issued in February), Lowry made the case for AI making work more enjoyable and intellectually engaging by eliminating repetitive tasks, and cited use of AI by the governments of Singapore and Estonia as good examples for the Irish government to follow. “Government itself isn’t exploiting its own data, and that’s something we have to learn to do… We need to work out how to use data safely, to deliver better policies as well as better services,” Lowry said.
Venkatesh Kannan, Technical Manager at the Irish Centre for High-End Computing, discussed the “new frontier” of AI-led Earth Observation. Disaster management, road mapping, land management, infrastructure assessment, urban planning and many other fields stand to benefit from AI.
Ireland’s skills shortage in AI has long been of concern within our tech industry, and was the subject of a talk by Dr Pepijn van de Ven, Senior Lecturer in the Department of Electronic and Computer Engineering within the Faculty of Science and Engineering at the University of Limerick. “Clearly there is some work to be done here,” said van de Ven. “Part of that is that we need to demystify AI and the agenda around it.”
Speaking on AI momentum, maturity and models for success, Kieran Towey, Managing Director, Data Scientist and KPMG Analytics Lead, warned that imbalances in compute power and data ownership will create a similarly imbalanced AI landscape, one facilitating state surveillance, ‘fake news’, propaganda and even AI-fuelled warfare. As an example, Towey cited a Guardian report from 2017, showing that bots had created one quarter of the climate change denial posts on Twitter. Towey outlined a ‘trust equation’: trustworthiness equals the sum of credibility, reliability and intimacy, divided by self-orientation.
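The ‘trust equation’ Towey described is commonly attributed to The Trusted Advisor by Maister, Green and Galford; in its usual form it reads:

```latex
% The trust equation in its commonly cited form:
% trust rises with credibility, reliability and intimacy,
% and falls as self-orientation (focus on one's own interests) grows.
\[
  \text{Trustworthiness} = \frac{\text{Credibility} + \text{Reliability} + \text{Intimacy}}{\text{Self-orientation}}
\]
```

In this framing, an organisation deploying AI can be perfectly credible and reliable and still forfeit public trust if it is seen to act primarily in its own interest.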
Blockchain is often listed alongside AI as one of the technologies set to dramatically change everyday life in the near future. Dr Oisín Boydell, Principal Data Scientist and Head of the Applied Research Group at CeADAR, spoke on how AI can support blockchain through a shared and trusted data model. Data accessibility, data quantity and data quality all pose a challenge to AI adoption. “Poor quality data leads to poor quality AI,” said Boydell, citing Microsoft’s infamous ‘Tay’ chatbot, which had to be taken offline after tweeting hate speech, as an example.
Throughout the AI Summit the theme of trust surfaced again and again, speaking to the time of transition we’re in – one where we’re slowly learning to share our information, and our jobs, with artificial intelligence. Trust, it’s worth noting, is earned; for the public and private sector alike, the time is now to develop an ethical framework for how AI will be put to use.