Two robots holding hands

Could AI have the potential to show us the way to a better world?  Some computer scientists think it could; they are just not sure whether it will.  In our last blog post, Timothy and I warned of the dangers AI could pose to humanity, from normalising bigotry to sidelining human thought.  This time, I would like to explore the potential AI has for reducing inequality and making the world a fairer and more efficient place, free from unhelpful conflicts. 

 

The problem with people

Psychology shows that people in general are fundamentally racist, tribal, and catastrophically bad at making good decisions about how to share limited resources fairly.  Most resource-sharing situations involve two groups – from individuals to nation states – that would pose a serious threat to one another if they came into conflict.  Game theory therefore suggests the optimal strategy is to avoid the conflict and competition that humans instinctively turn towards, and instead to take turns and share resources rather than have each group grab as much as it can and sit on it.  From debates to traffic lights, sophisticated turn-taking models maximise efficiency and minimise mutually harmful conflict.  AI also challenges the hierarchical management models of organisations, where power and pay are concentrated at the highest levels and creative, disruptive original thinking is routinely discouraged; it has been argued that such models persist not because they are helpful but because they are simple, and people struggle to manage complexity.  
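The turn-taking logic above can be made concrete with a Hawk–Dove style payoff sketch. This is a minimal illustration with payoff values I have chosen myself (they are not from the post or any particular study): when the cost of a fight exceeds the value of the resource, mutual conflict leaves both sides worse off than sharing.

```python
# Illustrative Hawk-Dove payoffs (values are assumptions, not from the post):
# V = value of the contested resource, C = cost of a fight, with C > V,
# modelling two groups that pose a serious threat to one another.
V, C = 10, 30

def payoff(me, other):
    """Expected payoff to `me` in one encounter over a single resource."""
    if me == "fight" and other == "fight":
        return (V - C) / 2   # both fight: split the value, share the damage
    if me == "fight" and other == "share":
        return V             # the aggressor takes everything
    if me == "share" and other == "fight":
        return 0             # the yielder walks away empty-handed
    return V / 2             # both share / take turns: half the value each

print(payoff("fight", "fight"))  # -10.0: mutual conflict hurts both sides
print(payoff("share", "share"))  #   5.0: turn-taking beats fighting for both
```

With these numbers, each side individually gains most by fighting a sharer, which is exactly the instinct the post describes; but when both follow it, both end up with a negative payoff.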

 

Comparing people and computers through serious play

One powerful way to see how AI and people differ in how they share limited resources is to have both play a game of musical chairs in which competing for the same chair means both players lose.  This model elegantly combines many earlier games from game theory and highlights how much more complex – and unerringly fair – AI can be in resource allocation.  
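The lose-lose musical chairs set-up is an anti-coordination game: both players win only if they pick different chairs. A minimal simulation (my own illustration, not the actual model referred to above) shows why a turn-taking convention beats instinctive grabbing:

```python
import random

def play_round(pick_a, pick_b, chairs=2):
    """One round: if both pick the same chair, both lose; otherwise both win."""
    a, b = pick_a(chairs), pick_b(chairs)
    return (0, 0) if a == b else (1, 1)

random.seed(0)
grab = lambda chairs: random.randrange(chairs)  # instinctively grab at random

rounds = 10_000
# Random grabbing: with 2 chairs the players collide about half the time.
grab_wins = sum(play_round(grab, grab)[0] for _ in range(rounds))
# A simple convention ("A takes chair 0, B takes chair 1") never collides.
rule_wins = sum(play_round(lambda c: 0, lambda c: 1)[0] for _ in range(rounds))

print(grab_wins, rule_wins)  # roughly 5000 vs exactly 10000
```

The point is not the specific numbers but the shape of the result: any agreed allocation rule, however arbitrary, outperforms uncoordinated competition for the same resource.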

There are challenges, though – what constitutes a resource, what constitutes a group of people, and many other variables are unclear to the AI, and are likely to be decided either by fallible or potentially corrupt humans, or shaped to prop up the status quo.  The challenge is less whether this limited form of AI will try to doom us all and much more whether humans will seek to shape the questions put to it so that they get the answers they want.   

This is exacerbated by AI offering solutions so complex that humans will struggle to understand them.  Becoming obedient servants of a benevolent AI despot is one thing, but if the assumptions the AI is told to work from ensure it increases inequalities rather than eliminating them, AI could (and this is a recurring theme when discussing the perils of AI) end up lending a veneer of objectivity to what has been dubbed “algorithmic totalitarianism” – making people slaves to the judgements of an AI.  Worse, countries tend to try to do the same thing as one another rather than encouraging a diversity of responses to a challenge.  The resulting confirmation bias – being reassured that every country in a region is converging on the same approach, rather than asking what is truly fair – may lead AI models astray.  If AI models are manipulated to appear fair while consolidating inequalities, they risk amplifying and validating discrimination.  In this way, simple groupthink might lead us away from the best and fairest solutions, and then silence criticism of the result because the computer is presumed to be unbiased and to know best. 

 

tl;dr -  

I’ve rambled on for a bit, so in summary, what are the risks? 

  • Humans will stop thinking creatively and disruptively in ways that drive progress, innovation, and change, leading to further consolidation of existing inequalities 

  • Tools are only as good as the people who control them – AI could lead to greater fairness and efficiency or make a bad situation much worse 

  • AI is not impartial – it only does what it is told to do, so a diverse range of experts with different cognitive approaches still have to look carefully at how it has been set up and look for problems in how it works 

  • AI works in mysterious ways – it's possible to see what goes into AI and look for flaws and biases; harder to judge the output because the thinking in between is both very different from human thought and too complicated to understand 

  • Open and frank debate on the best approaches, with diverse and genuine worldwide scrutiny, will be necessary to keep the vested financial and political interests developing AI from corrupting a promising technology 

Contributed by

David E Bennett, Assistant Librarian (Promotions)

 

All views expressed are the author’s own and may differ from those of the University.