AI Literacy

Why do I need to learn this?

Since the end of 2022, generative artificial intelligence (generative AI) has developed rapidly. Tools built on Large Language Models and image generators are now freely available and produce text and images that students may want to use in their study and assignments.  You may also find that these tools are increasingly used in the workplace, and you will need to be prepared for that.

It is important to understand how these AI tools work in order to avoid any possibility of plagiarism.  It is also important to consider, for a variety of reasons, whether it is ethical to use them at all.

These pages will help you:

  • recognize AI in the world around you and appreciate its impact;
  • recognize different kinds of AI;
  • understand how AI works at a basic level and how much 'intelligence' it really has;
  • understand AI's strengths and weaknesses;
  • develop your prompt-engineering skills;
  • consider ethical issues such as hallucinations, bias, accountability, transparency, and reasonable academic use.

See also our Library Guide on Using Generative AI in coursework or research:

AI depends on data.  Lots of it.  Amazon is good at recommending books because so many people have read and/or reviewed books and also viewed or bought similar titles.  Large Language Models such as ChatGPT have simply been fed a vast amount of text from the internet and are very good at predicting the next word or phrase using statistical algorithms.  No actual 'intelligence' is involved, however clever or friendly they may appear.  This raises ethical questions if the AI has been trained on data acquired without permission; one example is image generators that have been trained on copyrighted pictures and artwork.
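To see what 'predicting the next word with statistics' can mean at its very simplest, here is a minimal sketch in Python.  It is not how ChatGPT actually works (real models use neural networks trained on billions of words and look at much longer stretches of context), and the sample sentence is invented for illustration; it simply counts which word tends to follow which and picks the most common continuation.

    from collections import defaultdict, Counter

    # Toy 'training text' -- stands in for the vast web text real models are fed.
    text = "the cat sat on the mat and the cat slept on the sofa"
    words = text.split()

    # Count which word follows which (a simple word-pair, or 'bigram', model).
    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after 'word' in the training text."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # 'cat' -- it followed 'the' most often
    print(predict_next("sat"))   # 'on'

A real Large Language Model rests on the same underlying idea of learning which words tend to follow which, but at an enormous scale, which is why its output can appear so fluent without any understanding behind it.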

Another problem with AI is that the companies that produce these systems may not be very transparent about how their algorithms make decisions; they may not even be sure themselves exactly how the AI reaches its decisions.  For academic research, this can mean that results are difficult or impossible to reproduce.  It can also mean that there is no way of checking whether the algorithms are biased in their decision making, or whether the underlying data they base their decisions on is biased.  In this context you might see the expression 'Explainable AI' (XAI), which describes attempts to get inside the 'black box'.
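As a loose illustration of what 'explainable' means in practice, the hypothetical sketch below contrasts a decision that can report the reasons behind it with one that only returns an answer.  The loan scenario, thresholds, and function names are all invented for illustration, not taken from any real system.

    # Hypothetical example: an 'explainable' decision reports its reasons,
    # while a 'black box' decision only reports its answer.

    def black_box_decision(income, credit_score):
        # Imagine this stands in for a complex model nobody can easily inspect.
        return "approve" if (income * 0.001 + credit_score * 0.01) > 40 else "reject"

    def explainable_decision(income, credit_score):
        reasons = []
        if income < 20000:
            reasons.append("income below 20,000")
        if credit_score < 600:
            reasons.append("credit score below 600")
        decision = "reject" if reasons else "approve"
        return decision, reasons

    print(black_box_decision(18000, 550))    # 'reject' -- but why?
    print(explainable_decision(18000, 550))  # ('reject', ['income below 20,000', 'credit score below 600'])

Explainable AI research aims to give the second kind of answer for far more complicated models, so that a biased or mistaken decision can at least be questioned.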