
Sustaining Innovation in Research: Innovations and Issues around Generative AI

Written by
Clare Sansom

The conference of the Centre for Online and Distance Education (CODE; formerly CDE) has taken place annually since 2006. This seventeenth conference in the RIDE series was the first to be held in a fully hybrid format: the whole meeting took place in its pre-COVID venue of Senate House, University of London, with all keynotes and many of the parallel sessions also available online. The topic chosen was ‘Sustaining Innovation in Research and Practice’, with the first day focusing on research and the second on practice, although, as CODE director Linda Amrane-Cooper noted in her welcome, the border between the two areas is inevitably blurred.

The first, research-focused day of the meeting began with a welcome from Linda Amrane-Cooper to both in-person and online delegates. She explained that CODE has now grown into a ‘community of practice’ with a core of 42 Fellows, experts across a wide range of areas in distance and technology-enhanced education. The Centre is expanding its international links, and Linda gave a particular welcome to three visiting fellows from the Open University in China and a group of Nigerian colleagues who had stayed on after a symposium in London. She also mentioned CODE’s 20 Student Fellows, many of whom would be presenting at the conference, and she passed on a welcome and apology from the Vice-Chancellor of the University of London, Professor Wendy Thomson, who would join the meeting later.

Turning to the meeting’s theme, Linda characterised the few years since the start of the pandemic as ‘innovative… but very stressful’ for academic staff, students and technologists alike. The conference programme had been designed to explore how innovation in education can be made sustainable for ‘people, organisations and the planet’. And when the conference committee had planned the programme at the end of 2022, they could not have guessed the speed of innovation in, or the depth of controversy over, the topic chosen for the first keynote presentation: generative artificial intelligence and its uses in academic teaching and research.

Generative AI for Academic Writing and Assessment: Innovations and Issues

Linda then introduced the keynote speaker: Mike Sharples, emeritus professor of educational technology at the Open University in the UK. Mike has had a stellar career in the technology of teaching and learning; he was the academic lead for the OU’s FutureLearn platform and led the design of its social learning approach, founded the Innovating Pedagogy report series, and is the author of over 300 publications.

Mike began his presentation by emphasising that same speed of innovation: he has given this talk in outline many times, but has had to change it on every occasion. He then asked delegates to study a student essay for a Master’s in Education and decide on a mark and some suitable feedback. As delegates will have guessed, the essay was written not by an actual student but by the artificial intelligence program ChatGPT. He had asked it for a ‘high-quality 500-word essay on a critique of learning styles, with academic references’, and that is what it had produced. The current, and very recently released, underlying model, GPT-4, has been trained on almost the whole of the textual Internet, can write in any requested style and in multiple languages, and can produce up to 25,000 words: that is, a whole dissertation. It is best described as a general-purpose language tool, and one that can now interpret images as well as text.
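
To make the demonstration concrete: a request like Mike’s can also be issued programmatically. The minimal Python sketch below uses the OpenAI chat API (the pre-1.0 openai client current at the time of the conference); the model name and exact prompt wording are illustrative assumptions, not details from the talk.

    import openai

    # Assumes an API key is available; in practice it would be read from
    # an environment variable rather than hard-coded.
    openai.api_key = "YOUR_API_KEY"

    # The prompt mirrors the request Mike reported giving to ChatGPT.
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model name; "gpt-3.5-turbo" also works
        messages=[{
            "role": "user",
            "content": "Write a high-quality 500-word essay giving a "
                       "critique of learning styles, with academic "
                       "references.",
        }],
    )

    print(response.choices[0].message.content)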

Text produced by ChatGPT and its competitors presents an unprecedented challenge in academic practice because it is generated, not copied, and so evades plagiarism checking services. Some AI detection tools have been developed, including one by Turnitin, but these are unreliable, with high or unknown false positive and false negative rates.
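
Why unknown false positive rates matter can be shown with a little base-rate arithmetic. The figures in the sketch below are illustrative assumptions, not numbers from the talk: even a detector that sounds accurate can produce mostly false accusations when genuine essays greatly outnumber AI-generated ones.

    # Base-rate arithmetic with illustrative (assumed) numbers.
    tpr = 0.90       # detector catches 90% of AI-written essays
    fpr = 0.05       # but wrongly flags 5% of genuine essays
    ai_share = 0.10  # suppose 10% of submissions are AI-generated

    flagged_ai = tpr * ai_share           # 0.090 of all submissions
    flagged_human = fpr * (1 - ai_share)  # 0.045 of all submissions

    precision = flagged_ai / (flagged_ai + flagged_human)
    print(f"Chance a flagged essay is really AI-written: {precision:.0%}")
    # Prints 67%: a third of the accusations would hit honest students.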

Mike returned to the ‘student essay’ demonstrated at the start of the talk and pointed out that it includes a completely invented reference. In a way, this is not surprising: ChatGPT was doing exactly what it has been programmed to do, which is to generate plausible language. It has been described as ‘hallucinating’: in human terms it is amoral, with no inbuilt knowledge of how the world works. Furthermore, as the version used was trained on ‘the Internet’ as it stood in September 2021, it knows nothing written after that date; and that training material, of course, includes plenty of incorrect and even toxic content.

Despite these disadvantages, however, it is still an exceptionally powerful tool, and one that we can increasingly expect students to use. It is not yet clear how academic staff can best respond. Should we ban it, in which case confident students are bound to challenge our decisions; evade it, by retreating to expensive invigilated exams; or adapt our assessment methods, policies and guidelines to work with it?

This last option offers by far the most constructive solution. Mike suggested that generative AI, used appropriately, can be ‘an empowering, joyful tool for creativity’, and gave three examples of how this can be achieved in a higher education context:

  1. An educator or student uses AI to generate several possible answers to an open question. The students then critique these and synthesise some or all of them into their own answers (a sketch of how such answers might be generated follows this list).
  2. An educator sets a class of students a project topic. Each student generates their own report with the help of AI and correctly acknowledges the contribution it has made.
  3. Students are asked to engage with a program like ChatGPT in a conversation or ‘Socratic dialogue’, and each student then writes an argumentative essay based on that dialogue.
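
The first example lends itself to light automation. The hedged Python sketch below asks the same chat API for several independent answers in one call, using the n parameter to sample multiple completions; the question and settings are assumptions for illustration, and an API key is assumed to be configured as in the earlier sketch.

    import openai

    question = "Is assessment by unseen examination still fit for purpose?"

    # n=3 returns three independently sampled answers in one request;
    # a higher temperature encourages the answers to differ.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        n=3,
        temperature=1.0,
    )

    # Hand these draft answers to students to critique and synthesise.
    for i, choice in enumerate(response.choices, start=1):
        print(f"--- Candidate answer {i} ---")
        print(choice.message.content)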

There are clearly many other possibilities, but it is also clear that generative AI must be used with care. Mike suggested a set of principles that teaching staff could adopt to ensure that they do this and that, as far as possible, they encourage their students to do the same:

  • Rethink written assessments and exam questions, to make the answers harder for AI to generate.
  • Beware of AI for strictly factual writing, as the programs’ vast ‘training sets’ contain wrong as well as out-of-date information.
  • Explore the use of AI in creativity, argument and research (as in the examples above).
  • Develop AI literacy in students and staff.
  • Introduce and negotiate guidelines for the use of AI throughout the institution.

These can be complemented by a similar set of guidelines for students on the use of AI in their work, some of which come under the umbrella of ‘AI literacy’:

  • Understand the limitations of AI in general, and of the systems that you use.
  • Use the AI system as a supplement to your own work, not to do it for you.
  • Use multiple sources of information, not only generative AI.
  • Cite those sources correctly (students will need to be taught how to do this, just as they are taught general referencing).
  • Be aware of ethical concerns around generative AI. 
  • Proof-read and edit anything produced by an AI program carefully.
  • Embrace the opportunity to learn.

Mike ended his presentation by looking ahead, as far as that is possible in such a fast-moving field. Current developments include Microsoft’s embedding of generative AI into its ubiquitous Office suite via ‘Microsoft Copilot’, which will make it even harder to avoid. Generative AI is also being incorporated into guides, courses, videos and even reports on current events. He recommended a book he recently co-authored with Rafael Pérez y Pérez: ‘Story Machines: How Computers Have Become Creative Writers’ [Sharples, M. & Pérez y Pérez, R. (2022). Story Machines. Routledge, UK. ISBN 9780367751975]. This covers the past, present and future of machines as authors of fiction, including the development of generative AI up to GPT-3. It also explores what it means to be a creative writer, and whether humans are, themselves, ‘story machines’.

World Café

Mike’s presentation was followed by a ‘world café’ session in which delegates both in the room and on Zoom moved between tables, or breakout rooms, to discuss four ‘provocations’ in small groups. After about ten minutes, a bell was rung to prompt everyone to move to a new group. There were three cycles, so each delegate was able to explore three of the four provocations. These were:

  1. How do we use generative AI creatively in distance education: for student assignments, creative work, academic research or…?
  2. What are the possible futures for generative AI, and how should we prepare for them?
  3. Do we need ‘AI literacy’ and if so, what would it look like?
  4. What policies, practices and sanctions should we adopt for students who employ generative AI to write their assignments?

Linda then invited Mike back to lead a brief discussion of the points raised in the world café. Several delegates remarked that we have been here before: education is facing another ‘calculator moment’, recalling the controversy over the introduction of calculators into maths lessons. AI, used properly, can help students to think more creatively; taught well, they will learn to ‘bounce ideas off’ AI chatbots to improve their writing. The workplaces of the future will need graduates who prioritise critical thinking, evaluation and ethical understanding, and our students will need to work with AI to develop these skills. It is our task to find the best ways to help them do so.