Why do I need to learn this?
With the rapid development of generative artificial intelligence (generative AI) since the end of 2022, tools built on Large Language Models or image generation have become freely available and offer text and images which students may want to use in their study and their assignments. You may also find they are increasingly used in the workplace and need to be prepared for that.
It is important to grasp how these AI tools work in order to avoid any possibility of plagiarism. It is also important to understand, for a variety of reasons, whether it is ethical to use them at all.
These pages will help you:
- recognize AI in the world around you and have an appreciation of its impact;
- recognize different kinds of AI;
- understand how AI works at a basic level and how much intelligence AI has;
- understand AIs’ strengths and weaknesses;
- better develop your prompt engineering;
- consider various ethical issues such as hallucinations, bias, accountability, transparency, and reasonable academic use.
See also our Library Guide on Using Generative AI in coursework or research.
You may see references to Narrow AI, General AI and Super AI.
Narrow AI (sometimes called ‘weak AI’) is aimed at solving one particular problem. The chatbot offering help with Information Services queries is an example; Amazon uses AI to recommend books (and other items) to you; and your medical professional may be using it to help diagnose certain diseases.
AI is not as new as you might think. In the 1950s Alan Turing proposed a test of machine intelligence called The Imitation Game (you may have seen a film with that title which looks at some of Turing’s life). Arthur Samuel, a computer scientist, created a computer program to play draughts (or checkers) which learned from its own games. The phrase ‘artificial intelligence’ was first used in the 1955 proposal for the first workshop on the subject, held at Dartmouth College in 1956.
The 1960s and 1970s saw developments in the field, with programs created for specific purposes and AI’s use in robotics (the first industrial robot started work on an assembly line in 1961). An AI boom occurred in the 1980s, which saw deep learning and expert systems become more popular: computers learning from mistakes and making independent decisions.
In 1997, Deep Blue beat Garry Kasparov, then the world chess champion, becoming the first computer to do so. In 2002 the first Roomba (a robotic vacuum cleaner) appeared, and a year later NASA landed two rovers on Mars which explored without direct human oversight. In 2011 Apple released Siri, the first popular virtual assistant.
The idea of General AI has become more mainstream recently; such an AI would be able to respond to any question or problem put to it. Self-driving cars, virtual assistants such as Siri, and Large Language Models such as ChatGPT or Bard are sometimes cited as steps in this direction, but arguably we are not there yet and there are considerable limitations to such technology.
Super AI would be able to surpass human intelligence and as yet only exists in fiction such as HAL from 2001: A Space Odyssey, Skynet from the Terminator films, or Isaac Asimov’s Bicentennial Man.
For more on the subject, search the Library catalogue or Discovery for artificial intelligence or machine learning or generative AI.
Or view some short videos introducing the subject.
AI depends on data. Lots of it. Amazon is good at recommending books because so many people have read and/or reviewed books and also viewed or bought similar titles. Large Language Models such as ChatGPT have simply been fed a lot of text from the internet and are very good at predicting the next word or phrase using statistical algorithms. No actual ‘intelligence’ is involved, however clever or friendly they may appear. This raises ethical questions if the AI has been trained on data that has been acquired without permission; image generators trained on copyrighted pictures and artwork are one example.
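The idea of predicting the next word statistically can be illustrated with a deliberately tiny sketch. This toy ‘bigram’ model simply counts which word follows which in a small sample text (invented here for illustration) and predicts the most frequent follower. Real LLMs use neural networks trained on vastly more data, but the underlying principle — prediction from statistics, not understanding — is the same.

```python
from collections import Counter, defaultdict

# A tiny, invented training text.
text = ("the cat sat on the mat the cat ate the fish "
        "the dog sat on the rug").split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' follows 'the' most often in this sample
print(predict_next("sat"))  # 'on' always follows 'sat' here
```

The model has no idea what a cat is; it only knows that, in its data, ‘cat’ often follows ‘the’.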
Another problem with AI is that the companies that produce them may not be very transparent about how their algorithms make decisions. They may not even be sure exactly how the AI makes its decisions. For research purposes in academia, this can mean that it is difficult or impossible to reproduce results, but it can also mean that there is no way of checking whether the algorithms are biased in their decision making or whether the underlying data they are basing their decisions on is biased. In this context you might see the expression ‘Explainable AI’ or XAI, which tries to get inside the black box.
There are three basic machine learning approaches: supervised learning, reinforcement learning and unsupervised learning.
Supervised learning - the AI is given labelled training data to learn the relationship between the inputs it receives and what kind of output it should give
Reinforcement learning - the AI software is trained to make the most optimal decisions mimicking the trial and error that humans use
Unsupervised learning - the AI is given data to discover patterns and insights without any human oversight or instruction
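As a concrete illustration of the first of these, here is a minimal supervised-learning sketch (with invented example data): a nearest-neighbour classifier that labels a new input by finding the closest labelled training example it was given.

```python
# Labelled training data: (input measurements) -> label.
# The numbers and labels are invented purely for illustration.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(point):
    """Label a new input with the label of the closest training example."""
    def distance(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(training, key=distance)[1]

print(classify((1.1, 0.9)))  # near the "cat" cluster
print(classify((5.1, 4.9)))  # near the "dog" cluster
```

Because the training data is labelled, the program learns the relationship between inputs and outputs directly — the defining feature of supervised learning.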
You probably encounter versions of ‘artificial intelligence’ every day in all walks of life – from chatbots to email assistants, from Google Maps monitoring traffic to banking fraud detection, from Amazon or Netflix recommendations to new features on smartphones offering to help. Two of the best-known examples which have been around for a while are the predictive text features on smartphones and Google Translate, which offers the ability to read (or generate) text with varying degrees of accuracy in a large number of languages.
What is meant in these pages by ‘AI’ is the more limited application of ‘large language models’ or ‘generative AI’ (sometimes generative image AI or text gen AI) which produce text or images in response to user inputs, or prompts. Some will accept speech input as well as typed text. These hit the headlines and gained millions of users when ChatGPT 3.5 was released in November 2022. Others followed, for example Bard and Gemini. These generative AIs can have the appearance of chatting with you or writing fully fledged answers or essays, but it should always be remembered that LLMs in reality have simply processed a lot of text and have good statistical models of which words are likely to go with other words. Or, in the case of images, they have been trained on billions of images in order to generate pictures which can look photorealistic.
Such generative AI is developing quickly and has the potential to change how we work, create and even think. It’s likely that a digital Jeeves (a clever and resourceful butler in P.G. Wodehouse novels) is coming to a device near you very soon if it’s not already there. Learning how to prompt such AIs to get useful results will become increasingly important.
It should always be remembered that generative AI is in no way being genuinely creative. Ted Chiang reminds us that such tools have no actual intention to communicate. In an excellent essay on the subject which is worth reading [1] he points out "it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do".
However, there are considerable ethical considerations to think about before using generative AI.
- what text or images has the AI been trained on? Can it legally be used in this way or are, for example, artists being plagiarised by image AI? Are copyrighted texts included in the training datasets? Have the authors given permission for their work to be used in this way?
- has the AI you are using been trained/reviewed using humans who are often poorly paid and also expected to identify objectionable material (with ongoing mental health impacts because of that)? [1]
- are you aware that the information you provide the AI in the form of prompts is also being used to train the AI and may not be kept private? You may consider this unpaid labour and/or a breach of privacy.
- the computing power LLMs require is considerable and has dramatic implications for energy use, potentially contributing to global warming.
- how are the algorithms built? Do they privilege the English language? Do they replicate the biases of their creators and disadvantage minority viewpoints? Companies offering LLMs do not usually release information about this.
- is payment required or access limited? Does this further disenfranchise marginalised groups?
- are generative AI models likely to lead to job losses? (artists are already finding this to be the case)
- as deepfakes become easier and more prevalent, how will that affect society, trust and our perception of reality?
- are they genuinely producing anything new or simply regurgitating a compilation of what is already on the internet?
- as generative AI increasingly uses AI output in its training data, will it affect the quality of output?
- are they reinforcing societal structures or prejudices that already exist?
- and in an academic context particularly, are you using generative AI in a way that meets the University’s standards for research or assignments being your own work?
Consider your relationship with generative AI and your use of it in the light of the above. Consider looking for AIs which attempt to be more ethical, such as Claude. If you’re developing AIs for others to use, you might want to consider adopting responsible AI practices rather than simply reinforcing any harmful status quo. If you’re near graduation, think about how AI might be affecting the workplace you are considering moving into and how it is, or will be, used in many jobs.
Search the Library Catalogue for many books and ebooks on the subject; here are three titles to explore:
Coeckelbergh, M. (2020). AI ethics. MIT Press
Crawford, K. (2021). Atlas of AI: power, politics, and the planetary cost of artificial intelligence. Yale University Press.
McQuillan, D. (2022). Resisting AI: an anti-fascist approach to artificial intelligence. Bristol University Press.
This series of short videos on the ethics of generative AI may also prove interesting.
If you’ve decided to use AI in your studies or research (see AI in Practice), plan your usage:
Familiarise yourself with our Information Literacy pages as much of that will be applicable here.
Start by scanning for what generative AI you can access. Some are free, some you have to pay for, and some expect payment for advanced features or extended use. Some limit how many prompts you can give them in a certain timeframe. You may already be using one or two different models, but be aware that others may give different results. Understanding how they work will enable you to choose the right one for your purposes.
You will probably have heard of several: ChatGPT and Gemini, for example, but there are others and some are built into software that you may already be using (e.g. Copilot in Microsoft Office). There are specialised models which produce images (e.g. DALL-E, Midjourney and Stable Diffusion), computer code (such as OpenAI Codex), video (e.g. Sora), music, molecules (e.g. AlphaFold) and so on.
If you’re using more than one AI, know which models come from the same family to avoid getting similar results. The three main generative AIs are ChatGPT (by OpenAI), Llama (by MetaAI) and Gemini (by Google). There are many others such as Grok (by xAI) and Claude (by Anthropic).
Consider your prompts (the text you type or speak into an AI) and think about the level of detail and the kind of output you want. You can get entirely different results by changing the language you use and the focus of your requests. Be careful with what information you put into the AI: it is unwise to provide any sensitive data, and you may also be giving away your own intellectual property. Consider asking the AI to respond in a certain manner or at a certain level of scholarship. Some example prompts can be seen on this page.
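One way to keep your prompts consistent is to compose them from the elements just mentioned – a persona, a task, an audience level, and any constraints. The sketch below is purely illustrative (the function name and structure are invented, not any particular tool’s API); it simply shows how those elements can be assembled into a single prompt string.

```python
def build_prompt(role, task, level, constraints=()):
    """Compose a structured prompt from persona, task, audience and constraints."""
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        f"Write for this audience: {level}.",
    ]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a university lecturer in biology",
    task="explain photosynthesis in three short paragraphs",
    level="first-year undergraduates",
    constraints=["avoid jargon", "do not include references"],
)
print(prompt)
```

Changing any one element – the persona, the level, or a constraint – can change the output considerably, which is why it helps to treat these parts of the prompt deliberately rather than ad hoc.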
Analyse the results carefully:
- does it make a general kind of sense? Such models are now quite good at being grammatically correct and sounding as if they know what they’re talking about.
- does it contain factual inaccuracies (sometimes called ‘hallucinations’)? If your knowledge of the subject is limited, be very careful with this, including with any data it might offer.
- does it offer references and are they real? AIs can invent extremely convincing reading lists of completely non-existent resources.
- might you do better by searching paid-for Library resources of academic quality?
- can you identify the strengths and weaknesses of the results or are you too reliant on them?
And in the realm of art:
- does the image ‘make sense’ if it needs to?
- look at the hands in particular as AI can be particularly poor at these. Are they what you want?
- does the AI give away that it has plagiarised others’ work by including signatures or logos? (Even if it doesn’t, that does not mean the image hasn’t been plagiarised.)
Consider the originality of the results. Remember that the AI is offering a statistical ‘best fit’ with words which algorithms suggest go together. It is unlikely to be offering you anything that is remotely novel or creative.
Consider again the biases mentioned in the section on ethics to see if there is anything evident in the results you’ve been offered that gives cause for concern.
Prompt the AI further to either revise its suggestions or to go more deeply into a particular aspect. The first response is not always the best and can often be rather bland; prompt it to do better.
Don’t forget to reference your usage. See this page for help on this.
If you are using outside sources in your work, you should be referencing them. This includes generative AI, which must be referenced if you use it. You may also wish to describe your use of such tools in any sections of your work explaining methodology.
See here for help with referencing generative AI usage.
You will also find links there to the University of Portsmouth guidance on the use of AI.
BUT
Check that your lecturers will accept such usage. They are marking your work and deciding what counts as plagiarism, not the Library! Also note that there are tools which may detect the use of generative AI.
Be very careful about using AI to suggest references to read or to cite. Be aware that ChatGPT and other AI services currently produce incorrect or made-up references that cannot be sourced, although there are signs that they are becoming better at this.
Some individual parts of an AI-generated reference may be accurate (such as the journal name, article or book title or an author) but the whole reference does not always exist and so cannot be found by our Library team. These are sometimes called ‘hallucinations’, but it is not true to say that LLMs are misrepresenting the world as they see it; it is more that they are not attempting to provide truth but to provide something that looks correct. See Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics & Information Technology, 26(2), 1–10. https://doi.org/10.1007/s10676-024-09775-5 for more on this.
If you have a reference that you cannot find, our team may ask where it came from so that we can ensure it is legitimate before attempting to locate it.
1. What does AI stand for?
a) Artificial Integration
b) Automated Intelligence
c) Artificial Intelligence
d) Advanced Intelligence
2. True or False: AI systems always make decisions that are unbiased and fair?
3. Which of the following is a common application of AI in everyday life?
a) Predictive text in smartphones
b) Microwave ovens
c) Bicycle gears
d) Mechanical clocks
4. True or False: AI is a new field of study that only began in the 21st Century?
5. What is the Turing Test designed to measure?
a) A machine’s ability to perform calculations quickly
b) A machine’s ability to exhibit intelligent behaviour indistinguishable from a human
c) Speed of data processing
d) The power consumption of an AI system
6. True or False: The Library holds no books on generative AI in case students use them to plagiarise their contents?
7. What is one of the ethical concerns associated with AI?
a) AI can be used to enhance video game graphics
b) AI might reduce the need for physical textbooks
c) AI could potentially be biased or used for malicious purposes
d) AI will eliminate all manual labour jobs
8. True or False: AI can be used in healthcare to help diagnose diseases?
9. True or False: Reinforcement learning involves training algorithms using rewards and punishments?
10. What is ‘big data’?
a) A large collection of small data sets
b) Data sets that are so large and complex that traditional data-processing software cannot handle them
c) Data stored in large physical locations
d) A type of computer storage device
ANSWERS: 1c. 2F. 3a. 4F. 5b. 6F. 7c. 8T. 9T. 10b.
Example prompts for using AI to help plan your work:
Can you describe the research processes I must go through to submit an assignment?
Suggest a schedule for a week of study arranged around [this class time] and these [part time job hours].
Can you mindmap an undergraduate overview of topic X? (It will not produce an actual mindmap but a text-based version of one.)
I have a deadline in X weeks, can you recommend the steps I need to take to get the necessary work done in good time, perhaps with achievable milestones along the way?
How can I effectively plan and coordinate a group project?
I often procrastinate when I have to start working on my assignments. What are some strategies I can use to start working on them earlier and stay focused?
Example prompts for using AI to understand a subject:
Imagine you are a university lecturer for an undergraduate course. Can you simply explain topic X which is causing some difficulty? (Further prompts may be needed on particular detail)
I’m writing an assignment in favour of topic Y, can you suggest opposing viewpoints which I can counter?
Can you provide some example problems or case studies related to topic Z for me to solve or study?
Example prompts for using AI for info retrieval:
How can I efficiently use the resources available to me, like the library and online databases, for my assignments?
Can you recommend some key resources for understanding…? (be very careful these are real and not invented)
Can you recommend some key journals for research on…? (be very careful these are real and not invented)
Can you suggest some search terms for finding scholarly information on…?
Can you suggest synonyms for this keyword or phrase…?
Can you create a search string for those terms? (Be aware that you may get better results from fully understanding your search strings and being able to tweak them in the way you want.)
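The structure of such a search string follows a common library-database convention: synonyms are combined with OR inside brackets, and the bracketed groups are combined with AND (exact syntax varies between databases, so check yours). The small Python sketch below – with invented example terms – shows the pattern, which is worth understanding even if an AI drafts the string for you.

```python
def search_string(*synonym_groups):
    """Combine synonym groups: OR within a group, AND between groups.
    Multi-word terms are quoted so databases treat them as phrases."""
    def term(t):
        return f'"{t}"' if " " in t else t
    clauses = ["(" + " OR ".join(term(t) for t in group) + ")"
               for group in synonym_groups]
    return " AND ".join(clauses)

query = search_string(
    ["generative AI", "large language model", "LLM"],
    ["higher education", "university"],
)
print(query)
# ("generative AI" OR "large language model" OR LLM) AND ("higher education" OR university)
```

Knowing this structure lets you tweak an AI-suggested string yourself – adding a synonym, dropping a group, or quoting a phrase – rather than taking it on trust.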
Whatever your views on using generative AI in education or the ethics of using it at all, it is clear that such tools have arrived and are not going to go away. They are only going to become more common in the workplace, in education and in the day-to-day technology we use and carry with us. They are likely to become more powerful and ever more convincing. Authenticity will become critical. Assessing how trustworthy results are will be paramount - this is particularly true in an education environment where your marks may depend on it.
Just as we’ve become more and more dependent on having ‘always on’ internet connections, it is likely we will become more and more dependent on AI ‘assistants’ of one kind or another. This risks degrading our trust in authority and exposes us to malicious attacks from bad actors. Consider the issues around deepfakes, or the problems that the British Library and various schools and universities have had in the face of cyberattacks taking down their infrastructure – from payroll databases to door locks. However, the positive uses in, for example, healthcare (drug modelling, simulating biological processes, personalised treatment plans) or as personal tutors (advising and guiding, or making education more accessible globally) could lead to significant advancements in enhancing life and learning.
Given the above, it becomes important for everyone to become AI literate and to know what to use, when to use it and how to use it wisely. It is also important for us to be aware of what policy makers are allowing or restricting with regard to AI and to lobby for better practice. Understanding the future implications of AI models is crucial. These pages are merely a start and we would encourage Library users to stay curious and stay engaged in AI developments in a rapidly evolving landscape.
Resources and further learning
Basics of Generative Artificial Intelligence by Paula García Esteban (1 hour 32 mins)
Generative AI: Introduction to Large Language Models by Frederick Nwanganga. (1 hour 36 mins)
Introduction to Prompt Engineering for Generative AI by Ronnie Sheer (44 mins)
For a good starting point on the subject of generative AI:
Wikipedia - Generative Artificial Intelligence
GOV.UK have made a series of online courses freely available.
An Introduction to the use of generative AI tools in teaching
Alto, V. (2023). Modern generative AI with ChatGPT and OpenAI models: leverage the capabilities of OpenAI's LLM for productivity and innovation with GPT3 and GPT4. Packt Publishing. https://prism.librarymanagementcloud.co.uk/port/items/1532985
Cremer, D. et al. (2024). Generative AI: The insights you need from Harvard Business Review. Harvard Business Review Press. https://prism.librarymanagementcloud.co.uk/port/items/1529168
Hiran, K.K. (2023). Handbook of research on AI and knowledge engineering for real-time business intelligence. IGI Global. https://prism.librarymanagementcloud.co.uk/port/items/1500412
Kanbar, V. (2024). The AI revolution in project management: elevating productivity with generative AI. Sams. https://prism.librarymanagementcloud.co.uk/port/items/1531544
McQuillan, D. (2022). Resisting AI: an anti-fascist approach to artificial intelligence. Bristol University Press. https://prism.librarymanagementcloud.co.uk/port/items/1523575
Miller, D.J. (2024). Adversarial learning and secure AI. Cambridge University Press. https://prism.librarymanagementcloud.co.uk/port/items/1520197
Schmarzo, B. (2024). AI & data literacy: empowering citizens of data science. Packt Publishing. https://prism.librarymanagementcloud.co.uk/port/items/1533049
Shrier, D. (2024). Welcome to AI: a human guide to artificial intelligence. Harvard Business Review Press. https://prism.librarymanagementcloud.co.uk/port/items/1531127
Sloot, B. (2024). Regulating the synthetic society: generative AI, legal questions and societal changes. https://prism.librarymanagementcloud.co.uk/port/items/1534130
Yanev, M. (2023). Building AI applications with ChatGPT APIs: master ChatGPT, Whisper, and DALL-E APIs by building ten innovative AI projects. Packt Publishing. https://prism.librarymanagementcloud.co.uk/port/items/1533095
Zinke-Wehlmann, C. (2023). First Working Conference on Artificial Intelligence Development for a Resilient and Sustainable Tomorrow: AI Tomorrow 2023. Springer Vieweg. https://prism.librarymanagementcloud.co.uk/port/items/1533806
Abbas, M., Jam, F.A. & Khan, T.I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21. https://doi.org/10.1186/s41239-024-00444-7
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20. https://doi.org/10.1186/s41239-023-00411-8
Damiano, A.D., Lauría, E.J.M., Sarmiento, C., & Zhao, N. (2024). Early perceptions of teaching and learning using generative AI in higher education. Journal of Educational Technology Systems, 52(3), 346–375. https://doi.org/10.1177/00472395241233290
Duah, J. E., & McGivern, P. (2024). How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. The International Journal of Information and Learning Technology, 41(2), 180–193. https://doi.org/10.1108/IJILT-11-2023-0213
Johnston, H. et al. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20. https://doi.org/10.1007/s40979-024-00149-4
Kelly, A., Sullivan, M. & Strampel, K. (2023). Generative artificial intelligence: university student awareness, experience, and confidence in use across disciplines. Journal of University Teaching and Learning Practice, 20(6).
Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: navigating challenges and leveraging opportunities. Education Sciences, 14(5), 503. https://doi.org/10.3390/educsci14050503
Maphoto, K.B. et al. (2024). Advancing students’ academic excellence in distance education: exploring the potential of generative AI integration to improve academic writing skills. Open Praxis, 16(2), 142–159.
Pavlenko, O., & Syzenko, A. (2024). Using ChatGPT as a learning tool: a study of Ukrainian students’ perceptions. Arab World English Journal, 252–264. https://doi.org/10.24093/awej/ChatGPT.17
Walczak, K., & Cellary, W. (2023). Challenges for higher education in the era of widespread access to Generative AI. Economics & Business Review, 9(2), 71–100. https://doi.org/10.18559/ebr.2023.2.743
Yusuf, A., Pervin, N., & Román-González, M. (2024). Generative AI and the future of higher education: a threat to academic integrity or reformation? Evidence from multicultural perspectives. International Journal of Educational Technology in Higher Education, 21(1), 1–29. https://doi.org/10.1186/s41239-024-00453-6
Zhu, W. et al. (2024). Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447318.2024.2323277