Last week, the Law Commission of Ontario hosted a roundtable in Toronto on emerging digital rights as it explores follow-up projects to its forthcoming report on Defamation Law in the Internet Age. I had the opportunity to present some thoughts on frameworks of procedural fairness in a digital age, and identified five digital and artificial intelligence (AI) contexts which present both challenges and opportunities for the further elaboration of fairness in administrative law and state regulation.
a. Fairness in digital dispute-resolution
First, existing templates of fairness will need to be applied to digital spaces. It is already clear what disclosure and the right to be heard consist of in dispute resolution in digital ecospheres such as Amazon, Google and Facebook. In fact, some of those templates now inform the user experience in online statutory tribunals such as the BC Civil Resolution Tribunal (CRT) or Ontario’s Condominium Authority Tribunal (CAT), two of Canada’s first online administrative tribunals (with many no doubt to follow).
b. Fairness in the development and deployment of AI
Second, rights of transparency and access to information may attach to how algorithms are being used to guide and/or influence public decision-making. This will arise in at least two relevant AI settings: (1) explicitly coded, closed-rule algorithms (“legal expert systems”), and (2) trained, machine-learning (ML) algorithms (“predictive analytics”) (see Jesse Beatson, “AI-Supported Adjudicators: Should Artificial Intelligence Have a Role in Tribunal Adjudication?” (2018) 31 CJALP 307) (a summary of the article is here). The exclusion of people living with disabilities from the data sets used in machine-learning algorithms already has been cited as a possible infringement of the ADA in the U.S. (see the FTC report, “Big Data: A Tool for Inclusion or Exclusion?” (2016)).
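As a loose sketch of the distinction between these two settings (the rules, names and numbers below are invented for illustration, not drawn from any real system): the rules of an expert system can be read and audited line by line, whereas a trained model’s “rule” is a set of learned weights whose effect is opaque without disclosure of the training data and features.

```python
# Hypothetical illustration of the two AI settings described above.

# (1) A "legal expert system": explicitly coded, closed rules.
# Every path through the logic can be printed and audited directly.
def expert_system_eligibility(income: int, dependants: int) -> bool:
    """Invented benefit-eligibility rule, fully inspectable."""
    if income < 30_000:
        return True
    if dependants >= 3 and income < 45_000:
        return True
    return False

# (2) "Predictive analytics": a decision rule learned from data.
# The weights below stand in for a trained model; transparency here
# requires disclosing the data, features and weights, not the code.
def trained_model_score(features: list[float], weights: list[float]) -> float:
    """Stand-in for an ML model: a learned weighted score."""
    return sum(f * w for f, w in zip(features, weights))

print(expert_system_eligibility(28_000, 0))  # True: the first rule applies
print(trained_model_score([1.0, 0.5], [0.2, 0.8]))  # 0.6, but why 0.6?
```

The point of the contrast: a fairness challenge to setting (1) can proceed by reading the rules, while a challenge to setting (2) depends on disclosure obligations reaching the model’s inputs and training.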
c. Fairness in the reliance on AI in investigations or evidence in legal proceedings
Third, the reliance on AI in human decision-making already has emerged as a key area for challenges on grounds of fairness. For example, according to a recent University of Toronto report, algorithms and artificial intelligence “are augmenting and replacing human decision-making in Canada’s immigration and refugee system, with alarming implications for the fundamental human rights of those subjected to these technologies.” The U of T report, “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System,” details how the federal government’s use of these tools threatens to create a “laboratory for high-risk experiments”. Tribunal and judicial oversight of a decision-maker or decision-making body’s reliance on machine learning, predictive analytics and big data on fairness grounds can ensure that a culture of transparency and justification grows alongside the use of AI in public decisions (for further elaboration of this idea, see the “Algorithms and Justice” project of the Berkman Klein Center for Internet and Society at Harvard University).
d. Fairness in evolving uses of AI in specific regulatory sectors
Fairness concerns also will arise in sector-specific contexts involving digital and/or AI disruptions. For example, a host of well-rehearsed fairness issues are arising in the setting of self-driving cars, beginning with agency (is the car an artificial person, like a corporation, with rights and responsibilities, or is it simply a manifestation of the actions of other actors who design, build, monitor and operate the car’s systems?). Fairness rights in the regulatory process will flow from these determinations of agency and legal responsibility, as they will in other AI-mediated settings from drones to chat-bots. Blockchain, digital currencies and other technological innovations similarly give rise to puzzles of fairness.
In each of these contexts, it will be necessary to adapt our existing understanding of the scope and dynamics of fairness. Indeed, in this sense, fairness itself can be understood as an algorithm (in the sense of a set of rules that enables problem-solving). Further, just as an algorithm can learn and adapt as it encounters new problems to solve, so too can our understanding of fairness morph to fit new circumstances. Canadian courts have recognized not just specific rules of due process which must be followed in specific settings (as the Criminal Code provides for criminal trials) but also a general and variable duty of fairness applicable to all public decision-making settings (see my recent post on the 40th anniversary of Nicholson, the Supreme Court decision setting out this rule).
The first rule of the general duty of fairness is that those affected by a decision have the right to participate in that decision. The extent of the participatory rights flows along a spectrum determined by a range of contextual factors, including the nature of the decision, the statutory context, how significantly a person is affected by the decision and the institutional experience of the decision-maker.
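Taking the algorithm analogy above literally for a moment, the contextual spectrum can be sketched as a simple rule set; the factor names, categories and outcomes below are invented for illustration and are emphatically not a statement of the legal test.

```python
# A deliberately simplified sketch of the duty of fairness as a
# rule set mapping context to the extent of participation owed.
# All categories and outcomes are hypothetical illustrations.

def participatory_rights(nature_of_decision: str, impact: str) -> str:
    """Invented mapping from contextual factors to participation owed."""
    if nature_of_decision == "adjudicative" or impact == "severe":
        return "oral hearing, full disclosure, reasons"
    if impact == "moderate":
        return "notice and written submissions"
    return "notice of the decision"

# Like a learning algorithm, the rule set can be extended as new
# contexts (such as automated decisions) arise, without abandoning
# the general principle that those affected may participate.
print(participatory_rights("administrative", "severe"))
```

The analogy holds only loosely: unlike code, the legal spectrum is weighed holistically rather than by the first matching rule, but the sketch captures how general principle and contextual factors combine.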
The second rule of the general duty of fairness is that decisions must be made by impartial and independent decision-makers. Again, the extent of the impartiality and independence required is variable, along a spectrum. Additionally, impartiality and independence are determined objectively, based on whether a reasonable observer would conclude that a decision-maker is more likely than not biased – this is referred to as the “reasonable apprehension of bias” standard. Finally, both impartiality and independence have individual and institutional dimensions, depending on whether the concern is with bias in relation to a particular decision or decision-maker, or a systemic bias which might affect all or a cluster of decision-makers and all their decisions.
Thus, if an algorithm is relied upon directly or indirectly in a decision that affects people, or if an algorithm is the product of a decision that affects people, or if an algorithm is itself recognized as an artificial person that can be affected by the decisions of other public actors, then it is necessary to adapt these two key pillars of the general duty of fairness to the context of actions undertaken by, through and with the assistance of algorithms. Mapping out these adaptations in an age of rapid technological change and disruption is the task at hand for administrative law. And, ironically perhaps, the analog common law method of contextual applications of general principle, altered periodically by statutes reflecting democratic will, and tempered by constitutional rules, may be ideally suited to allow the duty of fairness to evolve to reflect changing society and new ideas. Administrative law, in other words, is a framework and methodology well-suited to the project of developing and animating digital rights.