Generative AI tools such as ChatGPT and Copilot are already having a significant impact on how teaching is delivered across the university sector.
These tools raise questions about how Imperial assesses students and ensures academic integrity, but they also present opportunities: students can enhance the way they study, while staff may be able to spend more time on teaching and less on administrative tasks.
This introduction is intended as a short primer on generative AI, exploring the main points and areas relevant to using AI in an educational setting. You do not need a detailed technical understanding of this technology to make use of it, but some familiarity helps you appreciate its strengths and weaknesses, and the issues to consider when using it in your teaching.
This is a fast-changing topic. We aim to update this webpage regularly to take into account significant developments. Let's start by looking at various AI text generators, also known as Large Language Models (LLMs), and conclude with details regarding the responsible use of generative AI.
ChatGPT
ChatGPT has grabbed most of the headlines since its launch in November 2022. It was created by a company called OpenAI, which started as a not-for-profit research organisation (hence the name) but is now a fully commercial company with heavy investment from Microsoft. It is available in a free version and a premium version at £20 a month, which provides faster, more reliable access as well as the latest language models and features, including plugins, which change its behaviour significantly.
ChatGPT is based on a machine learning architecture called the ‘Transformer’, first proposed in 2017, and is pre-trained on large chunks of the internet, which gives it the ability to generate text in response to user prompts, hence the name ‘Generative Pre-trained Transformer’. Whilst OpenAI provided some information on the approach used to train ChatGPT, they have not so far released any such information about GPT-4, the latest model, released in early 2023.
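The central operation in a Transformer is ‘scaled dot-product attention’, in which every position in a piece of text weighs up every other position when building its representation. The sketch below is only a minimal illustration of that one operation using NumPy (the matrices here are random stand-ins, not anything from a real trained model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query row attends to every key row; the output is a
    similarity-weighted mix of the value rows."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # Softmax over each row so the attention weights sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted combination of values

# Three token positions, each represented by a 4-dimensional vector
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one updated 4-dimensional vector per position
```

In a real model this operation is repeated across many layers and attention ‘heads’, with the matrices learned during pre-training rather than sampled at random.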
In its standard mode, without plugins, ChatGPT works by predicting the next word given a sequence of words. This is important to understand: it does not in any sense understand your question and then search for a result, and it has no concept of whether the text it produces is correct. This makes it prone to producing plausible untruths, often known as ‘hallucinations’.
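In spirit, the model repeatedly picks a likely next word and appends it, then repeats. The toy sketch below shows that loop; the hand-made frequency table is entirely invented for illustration and stands in for the billions of learned parameters in a real model:

```python
# Toy stand-in for a language model: for each word, the probability
# of each word that might follow it (invented numbers).
FOLLOWERS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
}

def generate(prompt_word, max_words=4):
    """Greedily append the most probable next word until no
    continuation is known. Note the model never checks facts;
    it only follows the probabilities in its table."""
    words = [prompt_word]
    while len(words) < max_words and words[-1] in FOLLOWERS:
        options = FOLLOWERS[words[-1]]
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # → "the cat sat down"
```

Real models choose among tens of thousands of tokens using context far longer than one word, and usually sample rather than always taking the single most likely option, but the generation loop is the same: predict, append, repeat.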
As it stands today, the free version of ChatGPT does not have access to the internet, so cannot answer questions beyond its training data cut-off date of September 2021. Users paying for the ChatGPT Plus service have access to a version that can access the internet.
ChatGPT Plus customers also have access to plugins which extend ChatGPT’s functionality. For example, a Wolfram plugin allows users to ask questions which are answered by Wolfram Alpha, which excels at mathematical and scientific information. Initial testing suggests this might resolve the issue of ‘hallucination’ in these domains. Many other plugins are available, and more are being developed.
OpenAI makes its service available to other developers, so many other applications make use of it, including many writing tools such as Jasper and Writesonic, as well as chatbots in popular applications such as Snapchat.
Microsoft’s Copilot, Google’s Gemini, Meta’s Llama and Anthropic’s Claude
Although ChatGPT has received most of the attention, there are other developers in this space, and this number is likely to increase. The developers of major AI services such as OpenAI make their services available to other developers. One of the most significant announcements has been from Microsoft, who are incorporating generative AI across Microsoft 365 tools under the name ‘Copilot’. All staff and students already have access to the platform. You can access additional guidance by visiting ICT's walkthrough webpage.
Google has made similar announcements about their office tools, and a number of their Google Workspace generative AI tools are available to try. Gemini is Google’s ChatGPT equivalent and is available for testing. Like Copilot, it can access the internet, but unlike Copilot, it does not provide references for the sites it has used to give its answers.
Claude, produced by Anthropic, is similar to ChatGPT and is likely to be built into many applications going forward.
Meta’s Llama is slightly different in that it has been made available as an open-source model, meaning that you can run it yourself. Open-source AI models often differ from open-source software, however: it is not possible to fully understand how the Llama model works, or to modify it, from this release alone.
A summary of key capabilities, limitations, and concerns around Large Language Models
In considering generative AI, it is important to understand not only its capabilities but also its limitations. The key themes fall into three areas: capabilities, limitations, and concerns.
Image Generation
It is not all about text – image generation tools have made huge progress too, particularly with Midjourney, DALL-E and Stable Diffusion.
These work in a similar way to text generators – the user gives a prompt and one or more variations of images are produced. Image generation capabilities are being incorporated into general AI services, so Copilot, for example, can also generate images, using OpenAI’s DALL-E.
Approach to misuse of AI and plagiarism detection tools
Academic integrity is at the heart of all we do at Imperial. Students submitting work and assessments created by someone or something else, as if it were their own, is plagiarism and a form of cheating; this includes AI-generated content. Please refer to the university’s Academic Misconduct Procedures for further information. To ensure quality assurance is maintained, departments may choose to invite a random selection of students to an ‘authenticity interview’ on their submitted assessments. This means asking students to attend an oral examination on their submitted work to ensure its authenticity, by asking them about the subject or how they approached their assignment. It should be made clear to students that being invited to an authenticity interview does not mean there is any specific concern that they have submitted work that is not their own.
At this time, we do not intend to deploy any additional AI detection functionality, due to concerns about the maturity of these products and their ability to accurately identify students using AI outside the parameters agreed for their programme.
Our current approach, in line with many other universities in the UK, is to train staff to understand AI, identify its various uses, set parameters for those uses within students’ programmes, and be alert to the common features of AI-generated work. In turn, students should expect to receive support from Imperial to help them proactively stay informed about the latest capabilities of AI platforms.
This approach does not preclude Imperial from reviewing this decision in future, should we and the wider university sector have greater confidence in any technological solutions which may become available to detect the misuse of AI.
Sign Up
If you are interested in receiving the Learning and Teaching Newsletter, please email ltstrategy@ic.ac.uk.
You can also view the issue archive online.