Generative AI is everywhere - you'll no doubt see it in search results, phone apps and even in some of our University databases such as Ebook Central, where the 'Research Assistant' will offer to summarise, explain and much more. We encourage you, of course, to use generative AI wisely and within the University's guidelines for its use in academic work. Relying on AI to do your thinking for you has also been shown to stunt the ability to think and create.1,2
It's particularly important to note that generative AI engines still 'hallucinate' and may give incomplete, incorrect or misguided answers. Not that AI can actually think or hallucinate as you or I might: the output of generative AI is simply words and images compiled by brute-force statistical analysis, which takes what you asked for and predicts what you will find to be an acceptable output, regardless of whether it is right, true, fair, representative, or even based in reality. The outputs are often useful and, within the limits of the AI's training, may be accurate, but there is increasing concern from within the AI industry and elsewhere that we no longer understand how AI reaches its conclusions: the more complex the decisions entrusted to AI, the less clarity we have over precisely how it arrived at them. Even if we asked an AI to explain its 'reasoning', it could not, because the statistical models that power it are so different from human thought as to be unintelligible to us.3
It's also worth considering where AI responses are coming from: whose data the models are trained on, and whose voices and perspectives they reflect.
To help you explore these kinds of questions, we've created a new reading list on Decolonising Generative AI. This small collection of books, articles, and web pages highlights critical and global perspectives on AI. We hope this offers you a richer and more inclusive take on how these technologies are built and used. We certainly hope that it helps show how generative AI is neither neutral nor created in a political vacuum: it is created by people, shaped by power, and the current direction of travel simultaneously threatens to intensify and normalise the inequalities and discrimination found in the societies from which its training data is drawn.
Hopefully, seeing the limitations and dangers inherent in the indiscriminate use of AI will inform how we understand and use it, and how it subsequently impacts people differently around the world. This does not mean rejecting new technology, but it does mean challenging the unthinking acceptance of all that is new as good, and finding ways to make the AI we develop inclusive, transparent and accountable.
If you have any suggestions for additional books, articles or web pages, please let us know.
1. Georgescu, R. I., Bodislav, D. A., & Andrei, I. V. (2025). The acceleration of education: professors, AI platforms and the transformation of learning and human growth. Theoretical & Applied Economics, 32(4), 143–154. https://research.ebsco.com/linkprocessor/plink?id=ffb6f8a8-49ad-3db6-a473-a3b088b5e59d
2. Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 1–37. https://doi.org/10.1186/s40561-024-00316-7
3. McQuillan, D. (2022). Resisting AI: An anti-fascist approach to artificial intelligence. Bristol University Press. https://ebookcentral.proquest.com/lib/portsmouth-ebooks/detail.action?docID=7042128
Timothy Collinson - Faculty Librarian (Technology)
David E Bennett - Assistant Librarian (Promotions)
The views expressed in this post are the independent professional thought of its authors and not necessarily the same as those of the University.