The Future of AML: Old Ethics VS New Technology - P2.5
- Codex Compliance
- Sep 11, 2019
- 5 min read
Updated: Sep 25, 2019
I will continue where I left off, with a conflict in the realms of AI implementation, but I'll dip into a slightly morose real-world perspective and deathly probability. There is a possibility in the future of tech that there will be no intervention or oversight from humans at all. Take a financial crime investigation: it may soon be researched and investigated by AI, and another instance of AI could then handle the legal process as well; if the case is straightforward, those two systems might be all that is required. So the overarching question is: is it OK for a computer to decide the fate of a person? We are already hearing that AI is well suited to analysing the credit-worthiness of borrowers and deciding, in a flash, whether to lend or not, without human oversight (1). AI is also being applied in the legal sector to peruse case law documents and cite past suits, potentially helping to determine a person's fate in a court of law. What if it's a US state with the death penalty, then what? Quite the ethical dilemma.
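To make the "decide in a flash" point concrete, here is a deliberately simplistic sketch of an automated lend/no-lend rule with no human in the loop. Every feature, weight and threshold below is invented purely for illustration; real credit models are vastly more complex.

```python
# Toy automated lending decision: a weighted score against a threshold.
# All features, weights and the cutoff are hypothetical, for illustration only.

def lend(income: float, debt: float, missed_payments: int) -> bool:
    """Approve the loan if a simple weighted score clears a fixed threshold."""
    score = (
        0.5 * (income / 50_000)      # reward higher income
        - 0.3 * (debt / income)      # penalise a high debt-to-income ratio
        - 0.2 * missed_payments      # penalise payment history problems
    )
    return score > 0.2

print(lend(income=60_000, debt=10_000, missed_payments=0))   # True
print(lend(income=30_000, debt=25_000, missed_payments=2))   # False
```

The unsettling part is not the arithmetic but the absence of appeal: the applicant on the wrong side of the threshold gets no human to argue with.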

A common example of this so-called ethical dilemma involves a trolley and a problem: the trolley problem. And it's not the wonky wheel on your supermarket trolley we are talking about here!
You see a runaway trolley on a train track moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single child lying on the other side track who will be hit.
You have two options:
- Do nothing and allow the trolley to hit the five people on the main track.
- Pull the lever, diverting the trolley onto the other track where it will hit the child.
Which is the more ethical option? What would you do? What would a machine do?
That’s old hat, but it’s an uncomfortable one. OK, moving into the future now: what if the trolley is a self-driving car with five people in it and there is one person crossing the road? The AI has to make a decision as to the evasive manoeuvre it will execute (intentionally chosen word). This is a huge problem that can’t really be solved in a satisfactory manner; either outcome could be deemed wrong. Moreover, by putting the AI in control of that situation, you stick a computer directly into the realm of ethics, into an ethical dilemma. What rules or logic would it follow? Is it a question of probability, or of simple right and wrong? Say we decide it should kill one person rather than five: is it OK for a machine to make that decision? If you’re a family member of that one person, you might have a problem with the fact that a car decided to kill a member of your family. It’s just not morally acceptable at this point in time, and quite frankly it’s difficult to process.
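The crude utilitarian rule ("kill one rather than five") is disturbingly easy to write down, which is part of what makes it uncomfortable. A minimal sketch, with all names hypothetical:

```python
# Toy utilitarian "trolley" rule: pick the action with the fewest casualties.
# This illustrates the bare logic only; it is not a real autonomous-vehicle system.

def choose_action(outcomes: dict) -> str:
    """Return the action whose outcome kills the fewest people."""
    return min(outcomes, key=outcomes.get)

# Five people on the main track, one child on the side track.
trolley = {"do_nothing": 5, "pull_lever": 1}
print(choose_action(trolley))  # the rule always says "pull_lever"
```

Three lines of logic settle the dilemma that philosophers have argued over for decades, and that is precisely the problem: the code encodes one ethical stance (pure casualty-counting) while silently discarding every other consideration.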

Of course we hope that the technology is far superior to us, that its vision and reaction speed will be exponentially greater than our own, limiting the potential occurrence of these kinds of situations; but it’s certainly something to consider, and it is being considered. On the other side, there is the argument that if all cars are connected on the same network and communicating with each other, accidents are much less likely to occur, as there will be an element of cooperation and synchronisation. I think that has to be a condition of roads occupied by driverless cars: there isn’t an acceptable scenario in which some cars are on true autopilot and others are manned, as there is too much room for error and blame.
What if the AI car is connected to all the databases (criminal records, social media, credit, spending, markets, personal data) and is programmed to assess the worth of those about to be involved in the imminent crash? Who is most worthy of staying alive? Who has the most utility, or value? Even, who has been the better person up to this moment, or has the most potential? If logical analysis of an individual’s worth is the premise, then an AI might have to venture into these realms to reach a conclusion. I’ve gone a bit Blade Runner, but you get what I’m getting at. And what if it’s me, with a penchant for dabbling, no Facebook account, and a problem with civilisation and authority? Would I be seen as valuable, or would I be lined up to be hit by the proverbial AI-controlled bus, even ahead of the chicken who’s crossing the road?
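A deliberately dystopian sketch of what such "worth" scoring might look like if an AI ranked people using data pulled from criminal, credit and social records. Every field, weight and value below is invented for illustration; no real system or dataset is being described.

```python
# Hypothetical "worth" scoring over personal data. All fields and weights
# are invented for illustration; this is the dystopia, not a recommendation.

from dataclasses import dataclass

@dataclass
class Person:
    criminal_record: bool
    credit_score: int       # e.g. 300-850
    social_following: int   # follower count; zero if no account

def worth(p: Person) -> float:
    score = p.credit_score / 850                       # reward good credit
    score += min(p.social_following, 10_000) / 10_000  # reward social presence
    if p.criminal_record:
        score -= 1.0                                   # heavy penalty
    return score

# The author's self-description: no Facebook account, middling profile.
author = Person(criminal_record=False, credit_score=600, social_following=0)
model_citizen = Person(criminal_record=False, credit_score=800, social_following=5_000)
print(worth(author) < worth(model_citizen))  # True: the bus picks the author
```

Note how arbitrary the weights are; whoever chooses them is quietly legislating whose life counts for more, which is exactly the ethical territory the paragraph above warns about.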
So if a car could decide who lives and dies, and an AI can decide if you get a loan, could an AI system investigate a (financial) crime, prosecute and sentence a defendant? I’d say the judge has to be human, or a cyborg at least. I feel some anger building inside me when I think about the lack of ethics in law at times, when defence lawyers use loopholes and the nuances of the rule of law to successfully defend a criminal who is in fact guilty, getting them off on a technicality. It’s another instance of a Leviathan at work: a system that functions but doesn’t really work, that isn’t quite right, corrupted by the human condition. There are various shades of grey here, but often the situation is crystal clear and things still go wrong. Sometimes it’s a disgrace.

Could an AI lawyer prevent such situations by conducting a more thorough investigation, free of bias and money as motivators, and by searching case law more effectively? In that case, would the machine be more ethical than a human? Would it be the right thing to do, again, to let a machine determine the fate of a person? Or should the machine just do the research and leave the rest to the (un)trusty human?
It boggles my mind to think that one day an intelligent machine will make a decision based on its understanding of ethical facts and language (prescriptivism), and consider the realms of subjectivity and emotivism, before deciding whether a news article should be deemed adverse or whether someone should go to jail. It’s certainly not unthinkable if progress continues to be made. I think the ethical and human conflicts in these spaces (that is, getting everyone to agree on the right and wrong of things when creating AI programs) may cause a drawn-out darkness before the real dawn of the machines.
Al
#moneylaundering #aml #antimoneylaundering #kyc #AI #financialcrime #luxurycars #crime #narcos #organisedcrime #dirtymoney #thedirtstring #artificialintelligence #machinelearning #robots #ethics #deeplearning #trueAI #terrorism #taxevasion #FATF #future
References, Links and Related Articles



