AI Literacy

Why do I need to learn this?

With the rapid development of generative artificial intelligence (generative AI) since the end of 2022, tools built on large language models or generative image models have become freely available, offering text and images which you may want to use in your study and your assignments.  You may also find that these tools are increasingly used in the workplace and need to be prepared for that.

It is important to grasp how these AI tools work in order to avoid any possibility of plagiarism.  It is also important, for a variety of reasons, to consider whether it is ethical to use them at all.

These pages will help you:

  • recognize AI in the world around you and have an appreciation of its impact;
  • recognize different kinds of AIs;
  • understand how AI works at a basic level and how much intelligence AI has;
  • understand AIs’ strengths and weaknesses;
  • develop your prompt engineering skills;
  • consider various ethical issues such as hallucinations, bias, accountability, transparency, and reasonable academic use.

See also our Library Guide on Using Generative AI in coursework or research.

There are, however, significant ethical considerations to think about before using generative AI:

  • what text or images has the AI been trained on?  Can that material legally be used in this way, or are artists, for example, being plagiarised by image-generating AI?  Are copyrighted texts included in the training datasets, and have the authors given permission for their work to be used in this way?
  • has the AI you are using been trained or reviewed by human workers who are often poorly paid and expected to screen objectionable material, with ongoing mental health impacts as a result? [1]
  • are you aware that the information you provide to the AI in the form of prompts may also be used to train it and may not be kept private?  You might consider this unpaid labour and/or a breach of privacy.
  • the computing power LLMs require is considerable, with significant implications for energy use and a potential contribution to global warming.
  • how are the algorithms built?  Do they privilege the English language?  Do they replicate the biases of their creators and disadvantage minority viewpoints?  Companies offering LLMs do not usually release information about this.
  • is payment required or access limited?  Does this further disenfranchise marginalised groups?
  • are generative AI models likely to lead to job losses? (artists are already finding this to be the case)
  • as deepfakes become easier and more prevalent, how will that affect society, trust and our perception of reality?
  • are they genuinely producing anything new or simply regurgitating a compilation of what is already on the internet?
  • as generative AI increasingly uses AI output in its training data, will it affect the quality of output?
  • are they reinforcing societal structures or prejudices that already exist?
  • and, particularly in an academic context, are you using generative AI in a way that meets the University’s requirement that research and assignments be your own work?

Consider your relationship with generative AI, and your use of it, in the light of the above.  You might look for AI tools which attempt to be more ethical, such as Claude.  If you are developing AI for others to use, consider adopting responsible AI practices rather than simply reinforcing a harmful status quo.  If you are near graduation, think about how AI may be affecting the workplace you plan to move into and how it is used, or will be used, in many jobs.

Search the Library Catalogue for the many books and ebooks on the subject; three titles to explore are:

Coeckelbergh, M. (2020). AI ethics. MIT Press.

Crawford, K. (2021). Atlas of AI: power, politics, and the planetary cost of artificial intelligence. Yale University Press.

McQuillan, D. (2022). Resisting AI: an anti-fascist approach to artificial intelligence. Bristol University Press.

This series of short videos on the ethics of generative AI may also prove interesting.