In a recently published report (9 March 2019), ‘Regulating in a Digital World’, the House of Lords Select Committee on Communications outlines the findings of its inquiry and its recommendations concerning digital regulation. The report is substantial in length (83 pages) and adds to the growing number of government-initiated research publications that touch on the social, political and ethical dimensions of emerging digital technologies, as well as the evolving regulatory issues around relatively ‘old’ new tech (such as the internet).

The report is divided into six chapters: an introduction, followed by Principles for Regulation, Ethical Technology, Market Concentration, Online Platforms, and the Digital Authority.

Rather than summarise and repeat the findings of the report, over a series of short blog posts I will discuss relevant points in light of broader concerns in the field of digital ethics.

Beginning with Chapter 1, the report gives a general introduction and offers a working definition of the ‘digital world’ as an environment composed of digital services, facilitated by the internet, which plays an ever-increasing role in all aspects of life (1). The speed at which technological change occurs is noted (2) and connected to a litany of emerging concerns, including ‘harmful online content, abusive and threatening behaviour, cybercrime, misuse of data, and political misinformation and polarisation’ (3). Additionally, the evolution of these technologies from largely military and academic development to private industry (that is, the public-to-private transfer of innovation) is highlighted in terms of the issues it raises for government oversight (4).

Considering this, the report identifies the need to amend, update and, if need be, create regulation in the digital realm. The framework is set such that regulation, co-regulation (government and industry) and self-regulation are all to be considered (12), while taking into account issues of enforcement (16) given the borderless nature of the digital realm (18). Indeed, the report notes the need for international cooperation, and the key role the UK has played, and should continue to play, in addressing the concerns raised in this field (18). Interestingly, the report foresees digital business opportunities in pioneering the field of digital regulation and ethics (19); the ethical auditing of algorithms or of digital business models is just one example.

In Chapter 2, the ‘Principles for Regulation’ are presented. Indeed, perhaps the central takeaway from the report is the statement of these principles. The report argues that a principles-based approach is better than a rule-centric one because principles provide guidance and regulatory flexibility that strict rules do not (28).

The following table lists these principles, provides a brief explanation of each, and then notes some pertinent points.

| Principle | Explanation | Notes |
| --- | --- | --- |
| Parity | Online and offline equivalence. | Age verification, e.g. for adult sites (37). |
| Accountability | Individuals and organisations need to be held to account. | Widespread lack of accountability (38); issues of enforcement across jurisdictions. Existing law needs to be enforced, rather than new regulation created (39). |
| Transparency | Transparency of algorithmic design (44), disclosure of the data collected (46), auditing of outcomes. | Notice these are engineering problems. |
| Openness | Free flow of information (47–49). | Very little content here. |
| Ethical design | Ethical design by default (52). | This is a potential market: a UK ‘ethical AI’ stamp (made in the UK, etc.) (53). |
| Privacy | Much regulation already exists, cf. the work of the Information Commissioner’s Office. | A gap remains between the data protection framework and what users expect (51), and public desire for more action is noted. The GDPR has helped but has not solved the problem of public understanding. |
| Recognition of childhood | The protection of children is of special concern. | Only two paragraphs (54, 55). |
| Respect for human rights and equality rights | Participation and access (are human rights equivalent to equality of access?) (56); becoming part and parcel of democratic activity (57); cf. disability and access to the internet and other digital services (60). | The ability and opportunity to access the digital realm is discussed in terms of human rights. |
| Education and awareness-raising | Digital literacy as a fourth pillar of childhood education, alongside reading, writing and mathematics (61). | What does this mean? Should a child be taught to code, or to evaluate fake news? Education may also address problems without the need for so much legislation. |
| Democratic accountability, proportionality and evidence-based approach | Regulators must be aware of the dangers of over-regulation (knee-jerk ‘outrage’ legislation), and therefore an evidence-based approach is needed (64). Regulation must not stifle competition, freedom of expression or access to information (65). There is an awareness that self-regulation is not working (66). | Delimiting the role of government; data-driven regulation. |


One of the first takeaways from the table is that the principles of accountability, transparency and openness are complementary and, in some respects, synonymous. This is particularly the case for transparency and openness. Indeed, the relationship between the principles can be explored further when it is pointed out that transparency is detailed in terms of transparency of algorithmic design and disclosure of the data collected (46), which are also central features of ethical design. Furthermore, the principles of education and respect for equality are intimately related: the equality discussed is one of access, and the education discussed is one of digital literacy, and literacy is a major factor in access.

It is important to think about the interrelation of these principles because, in the first instance, if fleshed out, the principles can perhaps be condensed into fewer and more focused statements, and, in the second instance, some of the principles can be shown to be in contradiction. The most glaring, prima facie example of contradiction is between the principle of privacy and the principles of transparency, openness and accountability. There are also good reasons to think that the principle of privacy can run counter to the principle of recognition of childhood: it is not difficult to imagine that protecting children online, for example by monitoring the communications (text and images) sent by adults to children, would involve violations of privacy.

Aside from the complementary or contradictory interrelations between the principles, there is also the concern of vagueness. For instance, the principle of education, namely digital literacy, is opaque. As noted in the table, the question arises as to how this would manifest: does digital literacy involve some technical or computer-engineering element? Is it a matter of developing discursive skills (how to ‘read’ and interpret the digital world)? Does it concern an awareness of how personal data are collected, processed, stored and sold? Should such education inform people of their legal rights and obligations? These are matters which require far more detail.

Given the above, it is noteworthy that a recent report (12 February 2019) published by the Nuffield Foundation, ‘Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research’, has as its main takeaway that AI ethics must move beyond lists of principles. To the best of my knowledge, the Nuffield report is not cited by the House of Lords. It is instructive because it does indeed move away from broad principles, towards delimiting (i) the philosophical and social issues and (ii) the technical challenges. Furthermore, interrelating (i) and (ii) is crucial, and can only be done once conceptual clarity has been achieved in both; upon this basis, real interdisciplinary work can be done. Outlining the ethical issues in terms of costs and benefits (though not in the economic sense) is to the point. To state the point more clearly, many of the principles in the House of Lords report are statements of values (privacy and transparency, for example, are both values), which are elevated or traded off depending on the context. The value placed upon the protection of children, for instance, is such that the value of privacy is readily traded for it.

Although broad statements of principles can be found in industry (e.g. Google’s Responsible AI Practices and Accenture’s The Future of AI), the central role of government cannot be supplanted. Indeed, notwithstanding the need to ensure that competition and freedom of expression are not stifled, and that knee-jerk legislation (off the back of a ‘moral outrage’ event) is resisted, the report is clear in acknowledging that self-regulation is not the solution.

In the following blog post, I will discuss issues raised in the rest of the report, principally the chapter on ‘Ethical Technology’.