Translating Algorithm Ethics into Engineering Practice (UCL Digital Ethics Forum).
The Digital Ethics Forum held its fifth workshop at the UCL Centre for Artificial Intelligence (4th February). Titled Translating Algorithm Ethics into Engineering Practice, the workshop considered the growing body of foundational AI research around topics such as privacy, fairness and transparency, along with its engineering applications and cross-sector implications.
The workshop was organised in collaboration with the UCL Centre for Artificial Intelligence and supported by the EPSRC Impact Acceleration Account and the Cisco Research Centre grants. In addition to UCL faculty, participants from industry (Cisco, Barclays, HireVue, etc.) and the civil service (ONS, CDEI, HMRC, etc.) were present. Reflecting a culture of openness and dialogue, it was stressed that the discussion cannot be confined within university borders, but rather requires branching out to regulators, industry practitioners and the wider public.
Below is a summary of the day:
Introductions were made by Dr. Zeynep Engin (UCL CS, Digital Ethics Forum Founder) and Prof. David Barber (UCL). Zeynep introduced the forum and David the AI Centre. In particular, Prof. Barber emphasised that computer science is currently undergoing a ‘golden age’ and that this is a great time for interdisciplinary work between computer science and the humanities.
The presentations were then delivered:
- Algorithmic Fairness (Dr. Luca Oneto – University of Genoa)
Luca spoke on learning under fairness constraints, giving examples from facial recognition and recidivism prediction. He then explored approaches to fairness from the perspective of computer science, highlighting the need for a formal definition of fairness, an analysis of the ‘unfairness process’, and the question of whether fairness can be imposed during model creation. In closing, he noted that the field is at an early stage and that there is much to debate and question.
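To illustrate what a formal definition of fairness can look like, one widely used criterion is demographic parity: a classifier’s positive-prediction rate should not differ across protected groups. The sketch below is illustrative only (the function name and data are hypothetical, not taken from Luca’s talk):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means the classifier satisfies demographic parity exactly;
    larger gaps indicate one group receives positive predictions more often.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical binary predictions for members of two groups (0 and 1).
y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.5: group 0 rate 0.75 vs group 1 rate 0.25
```

A fairness-constrained learner would, roughly speaking, penalise or bound this gap during training rather than merely measure it afterwards.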
- AI Transparency / Explainability (Prof. Yvonne Rogers – UCL)
Yvonne presented a talk titled ‘Human-Centred Explainable Artificial Intelligence’, in which she challenged the ‘black box’ metaphor – highlighting that transparency is a delicate balance of appropriate kinds of explanation that are sensitive to context, i.e. how, to whom, and for what an explanation is being given. This leads to questions such as what an explanation should look like, and what to do when a system is not explainable. Yvonne then fleshed out the case study of emotional AI, where recognition of emotional states is used in settings such as job interviews, advertising and security. Prof. Rogers closed the presentation by highlighting a workshop held in 2019 on XAI that concluded: “There is not one single good explanation, there are many good explanations for each individual involved, for each algorithm used. Understanding these requires a concerted effort from us to study the people, algorithms and the environment they are in.” – i.e. a shift from XAI to YAI.
- Privacy in AI systems (Prof. David Barber – UCL)
David presented a talk titled ‘Private Machine Learning using Randomised Response’, in which he challenged and provided an alternative to the popular approach of ‘Differential Privacy’. Centrally, Prof. Barber developed a strategy for machine learning driven by the requirements that private data should be shared as little as possible and that no-one can be trusted with an individual’s data, neither a data collector/aggregator, nor the machine learner that tries to fit a model.
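Prof. Barber’s method builds on the classical randomised response mechanism, in which each individual adds noise to their own answer before sharing it, so no collector ever holds a trustworthy record of any single person. The sketch below shows only that underlying idea, not his specific algorithm; the parameter values and function names are hypothetical:

```python
import random

def randomised_response(true_value, p_truth=0.75, rng=random):
    """Report the true binary value with probability p_truth;
    otherwise report a uniformly random bit. The collector cannot
    be sure of any individual's true value."""
    if rng.random() < p_truth:
        return true_value
    return rng.randint(0, 1)

def estimate_true_rate(reports, p_truth=0.75):
    """Invert the noise in aggregate: E[report] = p_truth * q + (1 - p_truth) * 0.5,
    where q is the true population rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

rng = random.Random(0)
true_values = [1] * 300 + [0] * 700  # true population rate q = 0.3
reports = [randomised_response(v, rng=rng) for v in true_values]
# Prints an estimate of q; with enough respondents it concentrates near 0.3,
# even though each individual report is deliberately unreliable.
print(round(estimate_true_rate(reports), 2))
```

The design point is that accuracy is recovered only in aggregate: population-level statistics (and hence model fitting) remain possible while individual records stay noisy.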
- Integrated AI/ML Systems (Prof. Jon Crowcroft – UCL, Cambridge and The Alan Turing Institute)
Jon presented on ‘Engineering Privacy Property’. He began with a conceptual note on the philosophical notion of privacy, observing that the concept is not an invention of recent centuries. Discussing ‘Human Data Interaction (HDI)’, he raised the following points: i. Legibility – aka transparency: who knows what? ii. Agency – aka power: who has rights to what? iii. Negotiability – how do we change permissions and ownership? Pointing out that data is not ‘property’ in any conventional sense of the term, Jon argued that the ‘default should afford equal power – to data subject, source, originators’. He returned to these themes as questions: ‘Agency – can I share or delete data?’; ‘Negotiability – can I charge, refuse, revoke access?’; ‘Legibility – do I know and understand all this?’; and ‘where is data?’. Prof. Crowcroft closed by raising concerns of political economy, with the notion of ‘Data as Payment’.
- Industry Perspective (Pete Rai – Cisco)
Pete’s presentation was titled ‘The Rocky Road of Digital Ethics’, in which he discussed an industry perspective. He began by asking how ‘digital ethics’ came to be seen as a problem, and noted that success, in the context of industry, would mean:
- Clear rules and structures which enable safe and ethical use
- A balance of the best individual outcomes with wider societal needs
- Working in partnership with citizens, governments and businesses
- Full regulatory and legal compliance
- The ability to make mistakes and iterate to find the best solutions
Pete then turned to what ‘bad outcomes’ would look like, noting such things as compromised reputation and values, burdensome regulation, unclear guidelines, and jurisdictional variance in those guidelines. Turning to Cisco’s focus on human agency, robustness, fairness and explainability, Pete fleshed out points concerning the standard by which Cisco judges itself (not just legal compliance, but moral values), whether an ethical-by-design approach is even possible, and the broader question of the diversity of the teams building products and conducting research. This led to wider questions about the extent to which a private company can be responsible for how its technology is used and sold on by second and third parties.
- Translating Algorithm Ethics into Engineering Practice (Adriano Koshiyama – UCL)
Adriano presented his forthcoming post-doctoral work on Translating Algorithm Ethics into Engineering Practice. The main idea behind his work is that algorithmic decisions are not ethical by default, just as no product is secure by default. His team plans to create an AI-based toolkit to ‘police’ algorithms by assessing bias, legality, fairness, performance, etc. His work underpins the interdisciplinary UCL Digital Ethics Forum, addressing the emerging ‘digital ethics’ field from both engineering and policy viewpoints through a toolkit and an associated framework.
The seminar then closed with discussions and final remarks.
This article was written by Dr. Emre Kazim; for any enquiries please contact firstname.lastname@example.org