‘The AI body snatchers have already taken over’

Until rules and guidelines are written that govern how artificial intelligence software makes decisions, there will be grave risks to using it, including utter ineffectiveness, warns Nicolas Economou of The Future Society.

The society is a nonprofit that began at the Harvard Kennedy School, and Economou is the founder of the society's Science, Law and Society Initiative, an international forum that works on AI governance and policy to ensure that humanity reaps the benefits of AI while mitigating its risks.

Economou, who is also the CEO of the legal tech company H5, participated last week in a Global Governance of AI Roundtable in Dubai, at which policymakers crafted recommendations and began creating a road map for global cooperation on setting standards. Participants included executives from global tech companies such as Microsoft and Facebook, as well as representatives from government and academia.

Economou discussed the kinds of issues he and others raised at the roundtable. He urges that careful work be done to vet AI technology for accuracy and fairness and that a methodical, multidisciplinary approach be taken to establish a legal and moral framework for the powerful technology.

A transcript of the interview is below; it was edited for length and clarity.

What do you see as the biggest risks or ethical concerns of AI?

NICOLAS ECONOMOU: For me, the biggest one is the fact that we’re surrendering human agency to machines without really thinking about it.

Something that’s often missed in the discussion of AI is that artificial intelligence is not really artificial humanity. When humans make decisions, they draw not just on rational thinking but also on empathy, a moral sense of right and wrong, and professional ethics. And they can be held accountable.

If you have no agency — the freedom to make choices and act — then you can’t be held accountable. When you surrender this human agency to nonethical agents — machines that have no capacity for empathy, no sense of right and wrong — it has serious implications that must be considered and generally are not. This is not to say that there isn’t huge value in AI or that it cannot be harnessed for the betterment of humanity. It just means it needs to be governed adequately.


Can you think of any examples where machines have displayed a lack of ethics and human decency in their decisions?

Let’s recognize that there are very few areas where machines make decisions entirely on their own. AI tends to be a combination of machines and humans, a socio-technical system that combines technology with a human at the wheel. But I can give you an example from outside the finance world that I think is germane to it.

In the Wisconsin v. Loomis case, a judge in Wisconsin relied on an automated black box — a secret, undisclosed algorithm — to sentence a man who was assessed by that black box as a high risk to society. We as a society have no way of knowing what that algorithm was, how it operated, whether the judge was competent to understand it, or how the decision was reached. This example illustrates that decisions are being made today by machines we don’t understand, and that affects the liberty, the rights and the freedom of people.

In that way, the AI body snatchers, so to speak, have already taken over, absent a conversation. That is an incredibly significant risk because this progressive surrender of decisions to machines is happening in every aspect of life, it’s irreversible, and it’s being made without a consensus of norms.

In the financial services world, you could say the same applies to credit scores. Increasingly, an ever more fine-grained set of assessments will be made of you by algorithms you don’t understand. Those credit scores serve as a proxy for your identity for all the institutions of society, yet you have no way to challenge them and no control over them, and they involve very little human agency and few norms as to how they conduct their assessments. Maybe they’re all correct, maybe they’re not. Maybe they have biases, maybe they don’t.

Fundamentally, society never had this dialogue about how do we govern artificial intelligence. We had 3,000 years to decide how we try to govern human intelligence; we have laws, norms and governance around that. But we don’t have the same around AI.

With the use of AI in credit scoring, are you suggesting the biggest risk is that we just don’t know what the consequences are?

It’s the question of, how do we know that the scoring is actually effective? That it doesn’t replicate biases that exist in the underlying data, so that certain individuals in the population suffer discrimination? What are the auditing mechanisms? Why should society trust that these secret algorithms are an accurate reflection of our person, and that they don’t prevent access to opportunity, to mortgages, to whatever credit we need? How do we know that credit practices are fair?
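For readers wondering what such an auditing mechanism could look like in practice, here is a minimal, hypothetical sketch of one common check, the "four-fifths" disparate-impact test, which compares a model's approval rates across demographic groups. The data, group labels and threshold below are illustrative assumptions, not drawn from any real lender or from Economou's remarks.

```python
# Hypothetical audit of credit-approval decisions for disparate impact.
# Data, group labels and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

# Each record: (demographic_group, approved?) from a model's past decisions.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, approved in decisions:
    counts[group]["total"] += 1
    counts[group]["approved"] += int(approved)

rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # "four-fifths rule" heuristic
    print(f"{group}: approval rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```

A ratio below 0.8 does not prove discrimination, but it is the kind of signal an auditing mechanism would surface for human review.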

Are there any other uses of AI in financial services that you worry about?

One can think back to the 2008 mortgage crisis. This is the quintessential example of a socio-technical system, where the combination of highly technical algorithms with human decision-making was a complete black box that no one could understand, and that behaved on its own until it was too late. This was truly catastrophic, as we all know. If you look to the future, AI will be far more pervasive — in trading systems, in how the financial markets work. I’m not sure the norms of adoption [and] the consequences are well understood. It’s not my impression that legislation has kept pace with the advance of technology.

AI is being used in banks for anti-money-laundering and know-your-customer work, filtering through millions of small clues to find egregious behavior: looking at networks of people, looking at people’s backgrounds, finding patterns that indicate something untoward is going on. Do you worry that innocent people could get caught up in a dragnet?

How do we avoid innocent people, or people who shouldn’t be affected negatively by automated systems, being affected by them? Today no one has good answers.

Most people will deploy AI with good intent. But most deployments of AI don’t involve very clear measurements of efficacy. How do we know, at the corporate or individual level, that AI works as it should when it’s applied? And most importantly for the financial system, why should we believe that the surrender of decisions to machines can actually be trusted and works? That’s a very difficult question.

Another thing we’ve written about is that there are companies that openly go after elite groups, such as the HENRYs (high earners, not rich yet). They cherry-pick young people with great grades from good schools who have terrific jobs. Are you concerned that AI-driven decisions could exclude struggling people, such as the underbanked, from credit?

What you’re talking about, in that example and many others, is the risk of creating two classes of citizens: a techno-savvy elite who understand AI, benefit from its transparency because they can understand the algorithms, and can use AI to serve their purposes; and everyone else, who may not understand AI, may not have access to it, [and] if they do, it’s only to check a box that says you’re surrendering all your privacy rights to a technology giant.

Every company that does AI will always say they take into account all the factors they can, that their systems work. But the question is not why this company should believe their AI systems work fairly — the question is why should society.

I truly believe that in the financial services world AI can be a force for good in democratizing access to credit for, say, the 600 million people in India who today don’t have it and can’t borrow enough for anyone to care about them. If you had a truly effective, automated system, that could open up a world of financial credit to billions of people who today don’t have it. That’s a powerful incentive to try to create normative, governance approaches that allow the world to reap those benefits.

What do you think is the right way to develop standards? Is that people in an industry coming together and agreeing, we’ll OK this and block these other things?

One has to be extremely cautious before jumping to conclusions around what the right policy recommendations would be. But to arrive at the proper framework, you need a certain process. It starts with: What values do we believe in and aspire to as a society?

From those values, which ethical principles really matter in the norms of adoption of AI? The ones most commonly spoken of are accountability, transparency and honorable intent — are you going to do the right thing? One I think is really important is measurement of efficacy in the real world. Let’s have a simple measurement that allows you and me, when we go to the doctor or to court and AI is being used to assess our case, to ask: does this thing work or not? How can we know that? Another ethical principle is competence. Who truly is competent to use AI today?
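To make the idea of a "simple measurement" of efficacy concrete, one widely used approach is to compare a system's outputs against a human-reviewed audit sample and report precision and recall. The sketch below is hypothetical; the sample data and names are assumptions for illustration, not a description of any real deployment or of a method Economou prescribes.

```python
# Hypothetical efficacy check: compare an AI system's flags against a
# human-adjudicated audit sample. All data here is illustrative.

# (system_flagged, human_says_relevant) for a random sample of cases
audit_sample = [
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (False, False), (True, True), (False, False),
]

tp = sum(1 for s, h in audit_sample if s and h)       # correctly flagged
fp = sum(1 for s, h in audit_sample if s and not h)   # flagged but irrelevant
fn = sum(1 for s, h in audit_sample if not s and h)   # missed by the system

precision = tp / (tp + fp) if (tp + fp) else 0.0  # of what it flagged, how much was right
recall = tp / (tp + fn) if (tp + fn) else 0.0     # of what mattered, how much it found

print(f"precision: {precision:.0%}  recall: {recall:.0%}")
```

A check like this is only meaningful if the audit sample is drawn and labeled independently of the system being evaluated.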

It has to go well beyond industry, because industry is an interested party in this. For decisions of such magnitude, you need every element of government and civil society to be represented. You need psychologists, scientists, philosophers, ordinary citizens. You need a way to conduct this dialogue, to make sure you can ask the right questions, collect the right inputs from people and engage in a dialogue that is purposeful over time.

Editor at Large Penny Crosman welcomes feedback at penny.crosman@sourcemedia.com.
