Malcolm Byrne: Algorithms can do more harm than good if not checked for bias

The design of the predicted grade system for the Leaving Cert must be fair, transparent and explained in detail – a rule the state should apply to all algorithmic decision-making

Leaving Cert students will have the choice this year of sitting traditional exams or having their predicted grades calculated

Leaving Certificate students are to be given the choice this year between having their knowledge of a subject assessed by predicted grades and having it assessed in the traditional format of three-hour exams, which are blind-marked afterwards.

A major factor that needs to be considered in this hybrid model is which algorithm might be used to produce the final results and how it will be audited.

The idea of the algorithm is to ensure, based on a variety of factors, that the overall grades awarded nationally will be roughly in line with previous years and so not result in grade inflation. As we saw last year, however, even when a significant number of marks originally awarded by teachers were adjusted by the algorithm, there was still significant grade inflation.
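
To illustrate the kind of mechanism involved, the sketch below shows, in simplified Python, one way a national standardisation step can work: teacher-estimated marks are ranked and mapped onto the grade distribution of earlier years, so the overall share of each grade stays roughly stable. It is purely illustrative; the grade shares are invented and this is not the Department of Education's actual model.

```python
# Illustrative sketch only: map teacher-estimated marks onto a historical
# national grade distribution so the share of each grade stays roughly stable.
# The grade shares below are invented; this is not the real standardisation model.

import numpy as np

# Hypothetical share of candidates awarded each grade band in prior years
HISTORICAL_SHARES = {"H1": 0.06, "H2": 0.14, "H3": 0.20, "H4": 0.25,
                     "H5": 0.20, "H6": 0.10, "H7": 0.05}

def standardise(estimated_marks):
    """Assign grades so the national distribution matches the historical shares."""
    marks = np.asarray(estimated_marks, dtype=float)
    order = marks.argsort()[::-1]  # rank candidates nationally, best mark first
    # Cumulative number of candidates allowed into each successive grade band
    cutoffs = np.cumsum([round(s * len(marks)) for s in HISTORICAL_SHARES.values()])
    bands = list(HISTORICAL_SHARES.keys())
    grades = [None] * len(marks)
    band_idx = 0
    for position, student in enumerate(order):
        # Move to the next (lower) grade band once this band's quota is filled
        while band_idx < len(cutoffs) - 1 and position >= cutoffs[band_idx]:
            band_idx += 1
        grades[student] = bands[band_idx]
    return grades

print(standardise([88, 72, 95, 61, 70, 83, 55, 77, 90, 66]))
```

The point of the sketch is that the final grade depends not only on a student's own estimated mark but on where that mark sits relative to everyone else's, which is exactly why the choice of ranking factors matters so much.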

Leaving Cert grades were on average 4.4 per cent higher last year compared to the class of 2019. While the government significantly increased the number of higher education places, the big losers last year were those who had completed their exams in 2019 or before and who had to compete with those from 2020 with higher grades.

When predictive grading was used for A Levels in the UK, one of the factors included in the algorithm was a school's past performance in those exams. This led to talented students in disadvantaged schools being particularly short-changed.

The government here learned from the mistakes of the UK system, and school past performance was excluded from the Leaving Cert predicted grades algorithm, but that decision is now the subject of a court challenge.

What factors will be used to standardise grades this year, now that predicted grading is back on the agenda? A student’s Junior Cert results? An order of ranking by teachers? Internal school exams? The past performance of the school?

The Irish Second Level Students’ Union has already noted that there was an increase in continuous assessment from teachers when schools reopened in September, perhaps in anticipation that some form of grading based on these results would be considered.

The design and audit of the algorithm will be of crucial importance, and the model used must be clearly explained, particularly to students.

Algorithmic decision-making is playing a part in more and more aspects of our lives, from recommending what we might next view on Netflix based on past choices to suggesting what we might purchase online given our browsing history.

Internationally, it is also playing a greater role in how governments make decisions. Decisions on targeting resources in health, transport, education and policing based on data can be very helpful in policy making. Hiring processes based on algorithms are becoming more common in the public sector as well as the private sector.

But there are already many documented risks to individuals – and to the reputation of state agencies – from such decision-making when the algorithm is not audited to guard against possible bias. Where algorithms have been used to inform bail decisions in some jurisdictions, for example, black defendants were more likely than white defendants to be incorrectly classified as being at high risk of re-offending because the algorithms were not checked for possible bias.
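
By way of illustration, here is a minimal sketch of the sort of check an auditor could run on such a risk-scoring tool: comparing the false positive rate – the share of people wrongly flagged as high-risk – across demographic groups. The records below are invented for the example; a real audit would use the tool's actual predictions and outcomes.

```python
# Illustrative bias check: compare false positive rates across groups.
# A false positive is a person flagged as high-risk who did not re-offend.
# The records below are invented purely for demonstration.

from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", True, False),
]

def false_positive_rates(records):
    """False positive rate per group: flagged high-risk among those who did not re-offend."""
    flagged = defaultdict(int)
    non_reoffenders = defaultdict(int)
    for group, predicted_high, reoffended in records:
        if not reoffended:
            non_reoffenders[group] += 1
            if predicted_high:
                flagged[group] += 1
    return {g: flagged[g] / non_reoffenders[g] for g in non_reoffenders}

print(false_positive_rates(records))
# A large gap between groups is a signal the tool needs re-examination.
```

A check this simple would have surfaced the disparity described above before the tool was ever used on real defendants, which is the case for routine auditing.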

The Australian government last year scrapped its Robodebt scheme, an automated process for pursuing individuals over welfare debts that failed primarily for lack of auditing. It must now repay A$721 million in wrongly issued debts. The Dutch government fell recently partly because of botched algorithmic profiling in social welfare payments.

The state cannot confine its concerns about algorithmic decision-making to government departments or state agencies; it must also protect and educate citizens who interact with these processes.

We need to monitor how companies in a dominant market position, or those operating in oligopolistic markets, could use algorithms to manage markets or even to fix prices. The efforts of other states or private interests to use algorithms to influence or undermine our democratic structures (as when Cambridge Analytica used algorithms to influence the Brexit vote in the UK, for example) should also shape state thinking on regulating this area. Those of us in democracies also need to be aware of how non-democratic governments may use algorithms to monitor dissent.

In regulating algorithmic decision-making, the state needs to bring our values and standards as a society to the process, and we need to ensure that these standards can be enforced and that those who breach them will be sanctioned.

University of Pennsylvania Professor Kartik Hosanagar has made the case in the US for an Algorithmic Bill of Rights. He argues that citizens should have a right to know why algorithms decide what they decide, and that individuals should be able to request and receive such an explanation. Firms should also be required to fully audit their data.

His argument echoes a Council of Europe study that also sets out the case for human rights impact assessments to be carried out before algorithmic decision-making is deployed in any area of public administration. It makes the argument for certification and auditing mechanisms, and points out the need to be particularly alert to algorithmic processing in the context of elections (and I believe this should also apply to referenda).

The level of knowledge of this area in government, I would argue, is limited. As politicians, we are certainly not experts. But our responsibility is to protect the public interest. If the state is to regulate algorithmic decision-making, it should do so in partnership with tech companies, and ensure that legal and ethical principles inform algorithmic design. In developing the mathematical rules of an algorithm, regard must be given to the public interest, particularly the concepts of equality, fairness and privacy.

While some might argue that ethics are not a designer’s responsibility, it should be remembered that Google’s original code of conduct, indeed its motto, was “Don’t be evil”. In the state’s interaction with those engaged in algorithmic decision-making, it could be a useful starting point.

When it comes to the Leaving Cert, the student interest should be to the fore, and any algorithm used should be checked for bias, made transparent and explained in detail.

Malcolm Byrne is a Fianna Fáil Senator