"Geisha robot" - robot in a dress set against a traditional printed Japanese landscape

You may have read a lot recently about tools such as ChatGPT for text, Midjourney for art, or many others, and be horrified, excited or resigned to the impending wave of what has been dubbed the latest industrial revolution. AI is sweeping out of the lab and into the mainstream, offering ever more sophisticated tools to support everything from writing to self-driving cars.

Because it has been fed vast amounts of data, AI can be useful for prompting you towards arguments or ideas you might not otherwise have thought of, and has long been used to help improve the clarity of what you write.  It can show you standard templates and even attempt to summarise research on the web.  As with any major technological innovation, what AI can do is less of a concern than how it could be used.

 

Normalising discrimination

AI intensifies and even appears to give an objective justification for existing biases in the world.  The myth that technology is somehow neutral and objective means the biases of the past could normalise discrimination in the future.  AI cannot think for itself, and so simply regurgitates the discriminatory litanies it has been fed.  Little care was taken, certainly in the early days, to ensure balance, let alone mutual respect and inclusion, as famously illustrated by the incident in which a photo-labelling AI created by Google labelled a Black couple as "gorillas".  Three years later, Google appeared to have made only a single tweak to its AI engine to prevent it doing the same thing again: it stopped the system from labelling anything as a gorilla at all.

AI generated image of a white girl kissing a gorilla

 

Training companies to manipulate you

As AI learns how you think, it learns how to manipulate and control you.  First, it reflects back to you how you think, and then it gradually shades this towards how those who shape the AI's output want you to think.  From indoctrination to encouraging and facilitating group delusions, including racism and the urge to exclude and blame minorities for problems, AI has the potential to make the bad things in our society much worse.

Image of an android holding his own detached face in his hand, wires hanging from his hollow head

 

Trained through copyright theft?

AI produces images and text that draw on others' work, often without acknowledging it, so great care needs to be taken when using AI content in your assignments to avoid plagiarism.  The major AI engines of today were trained on artists' work without their consent, and now seek to profit from creating derivative works that emulate art styles that took half a lifetime of dedication to develop.  Artists are now launching a class action alleging that the use of their work to create generative AI images without consent amounts to copyright theft.

An undiscerning, unaware audience delighting in free sources of artist-quality images has only just begun to question whether AI threatens the very continuity of human art and expression.  Since art is the food of individual thought, handing control of art over to AI threatens to undermine individual expression, the ability to think independently, and freedom itself.

Cute white robot with large, round, blue eyes holding a paintbrush apparently painting an abstract colour wash

 

Fabricated results

AI models learn which words most often go together and use these predictable patterns to write new text.  The problem is that they cannot think, and so they routinely make things up, including references pointing to sources that simply do not exist.  When challenged, ChatGPT habitually apologises, then does the same thing again, offering more fake references.  ChatGPT does not know they are fake; it does not know what a reference is, and it does not think.  It is simply stringing sentences together in a way that follows patterns it has seen.  Searching the internet using Google points you to sources mostly written by people, so you can make your own mind up about how reliable they are.  Relying on an AI summary of what others write introduces another layer of potential distortion and bias.
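To make the "words that most often go together" idea concrete, here is a deliberately simplified toy sketch, not a description of how ChatGPT actually works internally (real systems use neural networks over vast datasets, not simple word counts): it counts which word most often follows each word in a sample sentence, then "predicts" the next word purely from those counts, with no understanding of meaning.

```python
from collections import Counter, defaultdict

# Toy illustration: count which word follows each word in a tiny sample,
# then predict the next word purely from frequency.
sample = "the cat sat on the mat and the cat saw the dog on the mat"
words = sample.split()

# Bigram counts: for each word, how often each other word follows it.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# In this sample, "the" is most often followed by "cat", so the model
# predicts "cat" -- without any idea what a cat is, or whether it exists.
print(predict_next("the"))
```

The point of the sketch is that prediction here is pure pattern-matching: the "model" would just as happily continue a sentence with a plausible-sounding but non-existent reference, because plausibility is all it measures.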

Crane carrying a letter "C".  Caption reads "under construction".

 

A tool or a crutch?

I remember, when graphical calculators were new, being told by my Maths teacher that this new technology could be a great help to good students, but that there was a risk weaker students would use it to do their work for them, without understanding what they were doing.  The same is true for generative AI.  If you already know what you are trying to create and are using AI to do some of the spadework, this might be a good thing: it might help you do more, faster.  On the other hand, if you come to rely on AI to do your work for you, it could harm your ability to think for yourself and develop your own solutions.

Casio graphical calculator

 

tl;dr -

Technology is only as good and as wise as the people who use it.  AI is just another tool, and despite the name, is in no way actually 'intelligent'.  It is up to us to use AI with discernment and to be watchful for its biases, particularly where they feed into our own.  So if you use generative AI - use it wisely, use it well.

 

Finally, here are some general tips for using AI: 

  • Think what a good response might look like before you type a prompt – this will help you formulate your own ideas and help spot errors, inconsistencies and incomplete answers. 

  • Use it as a tool, not a substitute for doing the work. 

  • Don’t assume you can write a good prompt without some practice or training. 

  • Fact check everything, particularly references. 

  • Ensure that you reference any output correctly and acknowledge your use of AI, following the University's guidance.