In this blog, Dr. Emre Kazim offers the second part of his commentary on the House of Lords ‘Regulating in a Digital World’ report. The first blog can be found here.

Building upon a set of principles for digital regulation, the House of Lords report then turns to the practical concerns of how these principles are to be addressed and acted upon. Concretely, ‘Ethical Technology’ (Chapter 3) is highlighted as central to this. What is being referred to by this term is a ‘design’ consciousness and approach to digital ethics and regulation. As such, rather than thinking of regulation as reactive, or as something that imposes itself on the results of technological activity, the regulative/ethical principles are to be kept in mind and indeed built into the technologies themselves.

This is a crucial point. For one, it is preferable that harm is prevented rather than addressed post facto (by things like prosecution). This is particularly acute in the digital realm, where, in many cases, the harm is irreversible or impossible to trace with respect to legal culpability. Secondly, by building in ethical principles, computer engineers and digital companies at large will be forced to consider how and in what ways their technology has a moral, social and ethical impact. Thirdly, simply because ethical considerations are being approached from an engineering perspective, the ethical/legal demands will themselves have to be re-evaluated. In the previous post, I noted that principles such as ‘transparency’ and ‘privacy’ are all very well in themselves; however, once taken together, there are good reasons to think that they will contradict one another. Ethical design will inevitably lead to a dialectic between engineers, ethicists and legislators, which will necessarily mature the field.

The report notes that:

‘different user groups may need specific design ethics applied to them. The internet should also cater for adults with specific needs, older people and children of different ages’ (70).

It is noteworthy that in this context the ‘digital’ that is being pointed to is the ‘internet’, which I read in broad terms. This is a common thread throughout the report, i.e. digital technology is treated as synonymous with the internet, and it reveals something of a lack of understanding of the emergence, function and nuanced impact of various technologies. Understandably, as will be noted below, ‘internet’ is an encompassing term, and in this report it is implicitly taken to also refer to social media platforms (e.g. Twitter, Facebook) and service providers (e.g. Amazon). However, from a technical point of view, ‘internet’ is unlikely to be a useful category.

For example, when noting that specific design solutions will have to be considered for particular demographics, e.g. children and those with disabilities, the design solution will itself differ between children on Facebook and children on Amazon. With respect to some social media platforms, such as Facebook, a series of concerns, such as cyberbullying, exposure to adult content and grooming, will have to be addressed in ways that require different design curation when compared to ‘market’ platforms, such as Amazon.

Indeed, these are the types of issues that the report fails to address. Instead, the report asserts such things as the need to enforce legislation that already exists (73), by provisions that are largely to do with the ‘settings’ of a particular technology. For example, ‘high privacy’ is spoken of in terms of ‘geolocation off by default, the upholding of published age-restrictions, content and behaviour rules by online services, preventing auto-recommendation of content detrimental to a child’s health and wellbeing, and restrictions on addictive features, data-sharing, commercial targeting and other forms of profiling’ (74). In all of these cases, ‘opt-out’ does not touch on design, because the technology requires particular types of interaction that necessarily harness particular kinds of data. Opting out of geolocation is no real possibility for someone carrying a mobile phone: for the phone to work, its signalling requires functions that render the phone geolocatable. Design-focused anonymity is likely to be dramatically distinct from opt-out solutions.

Data collection is vital to the business models of ‘big tech’ (75), and it is important to consider the relationship between the free use of many of these platforms or internet services and concerns relating to privacy. One simple example is that of Google, which provides an extremely powerful free-to-use search engine: the utility of this service is such that it almost constitutes a ‘right’, read in terms of openness and access to information. By limiting data collection, or demanding that significant limits be placed on the use of personal data, it becomes an open question whether such services will remain ‘free’ (free in monetary terms, that is; the payment is now made in personal data).

Recommendations are made that users should have ‘the right to receive a processing transparency report on request’ (80) and that data controllers/processors:

‘should be required to publish an annual data transparency statement detailing which forms of behavioural data they generate or purchase from third parties, how they are stored and for how long, and how they are used and transferred’ (81).

Again, this type of suggestion will require consideration of the viability of such requests, the infringement on sensitive business models and, perhaps most crucially, an understanding of what is meant by ‘processing transparency’: What data was used? What parameters structured the algorithm? What were the outcomes of the processing? What action was taken as a result of this processing?
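To make concrete how underspecified ‘processing transparency’ currently is, the sketch below shows one possible shape of a per-decision transparency record. The field names and structure are purely illustrative assumptions of mine; they are not prescribed by the report or by existing regulation, and simply encode the questions raised above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class ProcessingTransparencyRecord:
    """Hypothetical sketch only: the report does not specify what a
    'processing transparency report' should contain."""
    subject_id: str                     # whose data was processed
    data_used: List[str]                # what data was used?
    model_parameters: Dict[str, str]    # what parameters structured the algorithm?
    outcome: str                        # what were the outcomes of the processing?
    action_taken: str                   # what action was taken as a result?
    processed_at: datetime = field(default_factory=datetime.utcnow)


# The kind of record a controller might be asked to produce on request:
record = ProcessingTransparencyRecord(
    subject_id="user-123",
    data_used=["browsing history", "purchase history"],
    model_parameters={"objective": "click-through rate"},
    outcome="user placed in 'frequent buyer' segment",
    action_taken="targeted advert shown",
)
print(record)
```

Even a minimal structure such as this forces decisions, e.g. whether ‘parameters’ means the optimisation objective, the model weights, or something else entirely, which is precisely the ambiguity the report leaves open.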

Similar arguments can be made regarding ‘capturing attention’, a concern thought of in terms of ‘digital addiction’ (82). By employing behavioural scientists and machine learning to reinforce and manipulate user activity so that as much time as possible is spent on the platform, digital operators enhance their market position and data collection. The report recommends:

‘Digital service providers should […] keep a record of time spent using their service which may be easily accessed and reviewed by users, with periodic reminders of prolonged or extended use through pop-up notices or similar. An industry standard on reasonable use should be developed to inform an understanding of what constitutes prolonged use.’ (87).

Again, it is difficult to understand how this translates into ‘design’ terms: in the case of social media platforms, the success of the business model is premised on the use of, and time spent on, the platforms themselves. ‘Algorithmic curation’ is raised (88), and the report pertinently points out that it is not clear what the algorithms are being optimised for (89), i.e. time spent on the platform, the wellbeing of the user, prompting the user to make purchases, etc. (90). This touches on whether it is democratic, or in another sense appropriate, for the state to direct a private company’s activity. For companies, transparency issues of ‘commercial sensitivity’ (92) are of real concern: this is of course mitigated by general public concern (or indeed, concern for the public).
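Returning briefly to the time-spent recommendation: mechanically, keeping such a record is trivial, as the sketch below suggests. The one-hour threshold is a placeholder of my own, since the report defers ‘reasonable use’ to a yet-to-be-developed industry standard; the difficulty is not implementation but that the business model pushes in the opposite direction.

```python
import time
from typing import Optional

# Placeholder threshold: 'reasonable use' is left by the report to a future
# industry standard, so one hour is an assumption of mine.
REMINDER_INTERVAL_SECONDS = 60 * 60


class UsageTracker:
    """Hypothetical sketch of a per-session usage record with prolonged-use reminders."""

    def __init__(self) -> None:
        self.session_start = time.monotonic()
        self.reminders_sent = 0

    def seconds_used(self) -> float:
        return time.monotonic() - self.session_start

    def check_reminder(self) -> Optional[str]:
        """Return a reminder message once per elapsed interval, otherwise None."""
        intervals_elapsed = int(self.seconds_used() // REMINDER_INTERVAL_SECONDS)
        if intervals_elapsed > self.reminders_sent:
            self.reminders_sent = intervals_elapsed
            minutes = int(self.seconds_used() // 60)
            return f"You have been using this service for {minutes} minutes."
        return None
```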

The plausibility of regulation in the digital world depends upon a reasonable judgement of what is to be expected of digital operators and what we, as a society, desire. Indeed, the report contains highly ambiguous statements such as ‘We recommend that regulation should follow the precautionary principle to ensure ethical design while also recognising the importance of innovation and entrepreneurship’ (119). Regarding the recommendation of ‘impact audits’ (99), it must be asked whether such things are even possible, given the pervasiveness of these technologies and how short-term the analysis would be. Teams of social scientists spend years studying such impacts (through social science, peer review, etc.): to expect a company to report on such things year on year, and for such reporting to be trusted, is unreasonable. Additionally, there is the issue of what is considered a reasonable level of digital literacy on the part of the public. When presented with various analytical conclusions, or legal texts (in the form of consent options), is it reasonable to expect that the public can understand these and act in a genuinely ‘informed’ manner? The report recommends a ‘plain English’ approach; however, given the very nature of the subject, what this constitutes is likely to be highly subjective (which will have knock-on regulative/enforcement consequences).