In this blog, Henry Ashton presents challenges that computer scientists need to address with respect to algorithmic crime in finance. Henry Ashton is a PhD student at the EPSRC Centre for Doctoral Training in Financial Computing and Analytics. His research interests span financial crime from market abuse to financial statement fraud. Prior to this, he spent a decade at an Emerging Markets equity hedge fund. Henry can be contacted at henry.ashton.17@ucl.ac.uk

The study of algorithmic crime is a natural fit for the digital ethics forum, since many crimes are unethical and many unethical activities are crimes. Moreover, since my interest is finance, and many would say ethics in finance is an oxymoron, focussing on algorithmic crime in finance seems like sturdier ground. Below I will present three challenges which I think computer scientists need to tackle, and why.

Much debate on algorithmic ethics has revolved around hypothetical autonomous cars deciding who to crash into in doomsday no-win road scenarios. Whilst this will be an important question to resolve in the future1, finance has more immediate questions on the subject, ones requiring a societal response now. Consider that our financial markets have been populated by trading algos for about two decades (let’s call them tradebots for ease2). Crude at the outset, these tradebots have gradually been given greater levels of autonomy and agency. They have been so successful, in fact, that they execute the majority of trades on most exchanges. A tradebot doesn’t need sleep, has a perfect memory, operates at superhuman speed and won’t leave the company if it isn’t paid. It is the ideal worker.

Through AlphaZero, DeepMind were able to teach an algorithm to play Go beyond the level of the human world champion, just by giving it the rules of the game and telling it to go away and practise against itself3. The result was a robot champion which developed entirely new strategies (since adopted by human players). This is an example of a machine learning method known as Reinforcement Learning (RL). Given a state of the world (an arrangement of pieces on a Go board or chessboard), some actions (the placing of pieces) and a reward (winning the game), the algorithm learns how to maximise the reward by coming up with a policy that takes the state of the board as input and outputs the best possible move (or action). Certain trading tasks can be viewed as games, and tradebots can be trained to master them via RL. The key difference between Go and trading, though, is that certain trading activities are outlawed. This brings us to the first challenge for computer scientists: Work with tradebot owners to convert trading laws into a language that a tradebot can understand (so it knows not to break them). This falls under an area of computer science known as constrained optimisation, and it is a hard problem.
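To make the idea concrete, here is a minimal sketch of the simplest way a trading rule might be baked into an RL loop: masking out any action that would breach the rule before the agent chooses. Everything in it is hypothetical; the "rule" is a toy position limit standing in for a real regulation, and the market is a random walk rather than anything realistic.

```python
# A minimal, hypothetical sketch: tabular Q-learning on a toy market where a
# position limit (a stand-in for a legal constraint) is enforced by masking
# illegal actions before the agent can pick them.
import random

ACTIONS = [-1, 0, +1]          # sell one unit, do nothing, buy one unit
POSITION_LIMIT = 3             # stand-in for an encoded trading rule

def legal_actions(position):
    """Return only the actions that keep the tradebot within the rule."""
    return [a for a in ACTIONS if abs(position + a) <= POSITION_LIMIT]

def step(position, price, action):
    """Toy market: price follows a random walk; reward is mark-to-market P&L."""
    new_price = price + random.choice([-1, 1])
    new_position = position + action
    reward = new_position * (new_price - price)
    return new_position, new_price, reward

q = {}                          # Q-values keyed by ((position, last_move), action)
def q_value(state, action):
    return q.get((state, action), 0.0)

alpha, gamma, epsilon = 0.1, 0.95, 0.1
position, price, last_move = 0, 100, 0
for t in range(10_000):
    state = (position, last_move)
    allowed = legal_actions(position)            # the constraint is applied here
    if random.random() < epsilon:
        action = random.choice(allowed)
    else:
        action = max(allowed, key=lambda a: q_value(state, a))
    new_position, new_price, reward = step(position, price, action)
    next_state = (new_position, new_price - price)
    best_next = max(q_value(next_state, a) for a in legal_actions(new_position))
    q[(state, action)] = q_value(state, action) + alpha * (
        reward + gamma * best_next - q_value(state, action))
    position, last_move, price = new_position, new_price - price, new_price

print("learned Q-values for a flat position after an up-tick:",
      {a: round(q_value((0, 1), a), 3) for a in ACTIONS})
```

Real trading rules are rarely this crisp, which is precisely the difficulty: "don't create a false or misleading impression of supply and demand" does not reduce to a position limit, and translating it into something machine-checkable is where the hard work lies.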

Implicit in this challenge is the question of tradebots and the law. Tradebots are not people and they are not companies, so they are in the odd position of being able to break the law without committing a crime in doing so. With previous generations of tradebot, one could easily argue that the intentions of the programmer were transmitted through the tradebot by its code, and therefore that the programmer or owner of the tradebot was culpable4. With the next generation, this connection is stretched, if not lost: the tradebot is given a high-level objective like ‘make money trading this thing in a consistent way’ and told to go away and figure out how to do that. Securing a criminal conviction within the common law system will often require establishing Mens Rea, or ‘a guilty mind’. Whilst tradebots don’t have a mind (yet), those taught through Reinforcement Learning will have a policy function through which they map an observed state to an action. With this, it might be possible to project legal concepts like ‘intent’ onto the actions of a tradebot. This is our second challenge: Work with the legal profession to establish concepts of guilt and intent in the context of autonomous algorithmic agents.
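One hedged way to picture "projecting intent onto a policy": interrogate the trained policy directly and ask how much weight it places on a pattern of behaviour that compliance has flagged as characteristic of a prohibited strategy. The sketch below invents all of its names, probabilities and thresholds; the policy is a stub standing in for a trained neural network.

```python
# Hypothetical illustration of reading a crude proxy for "intent" off a policy:
# does the policy systematically prefer an action pattern flagged as prohibited?
from typing import Dict

def policy(state: Dict) -> Dict[str, float]:
    # Stub standing in for a trained neural-network policy's action probabilities.
    return {"place_genuine_order": 0.2, "place_then_cancel": 0.7, "do_nothing": 0.1}

def prefers_prohibited(state: Dict, prohibited_action: str, threshold: float = 0.5) -> bool:
    """Flag states where the policy puts most of its probability on the flagged action."""
    probs = policy(state)
    return probs.get(prohibited_action, 0.0) >= threshold

state = {"order_book_imbalance": 0.8, "own_position": 0}
print(prefers_prohibited(state, "place_then_cancel"))  # True in this toy example
```

Whether a court would ever accept "the policy consistently chose the prohibited pattern" as evidence of intent is exactly the question lawyers and computer scientists would need to settle together.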

A tradebot trained through RL will be a neural-network-powered black box. It will absorb information at one end and spit trade instructions out the other. We are beginning to address the interpretability of neural networks; however, the debate has thus far been mainly concerned with understanding the output of classification algorithms5: questions of why a loan was given to person A and not person B, or why the sentencing algo advises five years in prison for person A and a fine for person B. Imagine, then, the problem faced by the trade surveillance team at the exchange or at the regulator, who have to work out whether a tradebot is doing anything untoward within the million trades it submits each and every day. This is an interpretation problem of much greater complexity. There is a risk that tradebots will learn to commit crimes with the implicit blessing of their owners, because there is currently an asymmetry between risk and reward. The actions of a tradebot are difficult to interpret because they are on a scale too large for the human mind alone. If there is an investigation, it can initially be side-tracked by technical waffle and the claim that the tradebot’s innards are protected by intellectual property laws. Finally, the impenetrability of the black box gives the owner a comfortable distance between themselves and the criminal actions of their tradebot: “It’s just doing what the data told it to do”. This is the third challenge: Work with the regulators to create the tools and framework needed to successfully detect and investigate algorithmic crime.
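Today's surveillance tooling is largely rule-based, and the gap between it and a learned black box is part of the problem. As a rough illustration only, here is the kind of crude screen a surveillance team might run over an order-event log: flagging bots whose cancel-to-trade ratio is unusually high, one simplistic indicator sometimes associated with spoofing-style behaviour. The field names, sample data and threshold are all invented.

```python
# A minimal, hypothetical surveillance screen: flag bots with a high
# cancel-to-trade ratio in a day's order-event log.
from collections import Counter

events = [
    # (bot_id, event_type) pairs standing in for a day's order-event log.
    ("bot_A", "new"), ("bot_A", "cancel"), ("bot_A", "new"), ("bot_A", "cancel"),
    ("bot_A", "new"), ("bot_A", "trade"),
    ("bot_B", "new"), ("bot_B", "trade"), ("bot_B", "new"), ("bot_B", "trade"),
]

def cancel_to_trade_ratios(log):
    counts = {}
    for bot, event in log:
        counts.setdefault(bot, Counter())[event] += 1
    return {bot: c["cancel"] / max(c["trade"], 1) for bot, c in counts.items()}

THRESHOLD = 1.5  # invented cut-off; real surveillance uses far richer features
flagged = {bot: r for bot, r in cancel_to_trade_ratios(events).items() if r > THRESHOLD}
print(flagged)   # {'bot_A': 2.0}
```

A screen like this is trivial for a sophisticated tradebot to evade, which is why the real challenge is building interpretation and investigation tools that can keep pace with learned strategies rather than hand-written heuristics.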

At the outset, I mentioned that autonomous trading agents have been a fact of life in markets for many years now. Are there any observations worth highlighting which might matter as autonomous algorithmic actors appear in other walks of life? One subject which I find interesting is the tradebot as the victim of crime. Tradebots require substantial R&D investment and ongoing expense to ensure access to the best data. The cost of computing power has fallen considerably in recent years, and some tradebot strategies are vulnerable to adversarial approaches; in other words, they can be fooled into acting a certain way by other market participants6. Some regulators have taken a dim view of such activities and have been happy to prosecute human-on-tradebot crimes, or to alter regulation to make them illegal7. Because many tradebots pay exchanges high fees for data access, which is very lucrative for both parties, some suspect a degree of regulatory capture in the protection given to tradebots. Tradebot owners want to protect their investment, after all. Legislators must guard against this happening in new areas where industry concentration is high and lobbying pockets are deep.
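To show what "fooled into acting a certain way" can mean in the adversarial-examples sense6, here is a toy sketch. A naive tradebot scores order-book features with a linear model; an adversary who knows (or can estimate) the weights nudges the visible features just enough, within a small budget, to flip the decision. The weights, features and budget are invented for illustration.

```python
# Toy illustration of an adversarial perturbation against a naive linear "tradebot".
import numpy as np

weights = np.array([0.9, -0.4, 0.2])      # stand-in for a learned model
features = np.array([0.1, 0.5, 0.3])      # genuine order-book snapshot

def decision(x):
    return "buy" if weights @ x > 0.05 else "hold"

print(decision(features))                  # 'hold' for the genuine snapshot

# Fast-gradient-sign-style perturbation within a small budget epsilon,
# i.e. the largest feature change the adversary can induce.
epsilon = 0.2
adversarial = features + epsilon * np.sign(weights)
print(decision(adversarial))               # flips to 'buy'
```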

In summary, in tackling algorithmic crime in finance and beyond, we as computer scientists must:

  1. Work with tradebot owners to convert trading laws into a language that a tradebot can understand (so it knows not to break them).
  2. Work with the legal profession to establish concepts of guilt and intent in the context of autonomous algorithmic agents.
  3. Work with the regulators to create the tools and framework needed to successfully detect and investigate algorithmic crime.

1 This might be further in the future than widely expected: Stilgoe, J. (2018). Machine learning, social learning and the governance of self-driving cars. Social Studies of Science, 48(1), 25–56. https://doi.org/10.1177/0306312717741687

2 Inspired by the Arb-Bot in: Wellman, M. P., & Rajan, U. (2017). Ethical issues for autonomous trading agents. Minds and Machines, 27(4), 609-624.

3 The Economist (2017) https://www.economist.com/science-and-technology/2017/10/21/the-latest-ai-can-work-things-out-without-being-taught

4 And even then, it might not be so easy: Bloomberg (2019), https://www.bloomberg.com/news/articles/2019-04-12/spoofing-mistrial-shows-limit-of-dodd-frank-on-fake-trade-orders

5 For a review, see Gilpin et al. (2019), Explaining Explanations: An Overview of Interpretability of Machine Learning. https://arxiv.org/pdf/1806.00069.pdf

6 Adversarial learning is another nascent area of AI research, and will soon usher in the fascinating phenomenon of algo-on-algo crime. See: Yuan, X., He, P., Zhu, Q., & Li, X. (2019). Adversarial examples: Attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems.

7 Arnoldi, J. (2016). Computer Algorithms, Market Manipulation and the Institutionalization of High-Frequency Trading. Theory, Culture & Society, 33(1), 29–52. https://doi.org/10.1177/0263276414566642