Theory into Practice
Where educational theories meet educational practices
Introduction

In 2020, the world, Higher Education (HE) and English for Academic Purposes (EAP) went through the shock of the Covid-19 pandemic, which saw a mass turn to online teaching and learning. We all had to adapt to a new modality during ‘the pandemic pivot’, demonstrating resilience in the face of unprecedented challenges (Bolster & Levrai, 2022). That resilience is being tested again by Generative Artificial Intelligence (GenAI), as exemplified by the release of ChatGPT by OpenAI in November 2022. To adapt to this new circumstance, HE and EAP will have to fully engage with GenAI and support students towards ethical, developmental usage, which we propose could be aided by the AI Quality of Engagement Matrix (AIQEM).

AI is not a new concept, but rather a technology anticipated by Alan Turing, with the term first coined by McCarthy in 1956 (Crompton & Burke, 2023). A general definition describes AI as “computing systems that are able to engage in human-like processes such as learning, adapting, synthesizing, self-correction and the use of data for complex processing tasks” (Popenici et al., 2017, p. 2), but the focus of this post is GenAI, which is more specifically “technology that (i) leverages deep learning models to (ii) generate human-like content (e.g., images, words) in response to (iii) complex and varied prompts (e.g., languages, instructions, questions)” (Lim et al., 2023). The ability to produce human-like content means GenAI algorithms can write whole essays and articles, engage in debate and discussion and, potentially, replace swathes of human endeavour. Within HE, GenAI also opens opportunities for teaching materials to become more personalised and adaptable to the needs of the student. Students can also engage with a GenAI to brainstorm ideas or get feedback on their work, and it can additionally operate as a tutoring system (Michel-Villarreal et al., 2023). However, concerns abound regarding problematic, plagiaristic use and the negative impact GenAI may have on student learning (Michel-Villarreal et al., 2023; Neumann et al., 2023).

This has led to considerable discussion within education about the implications of GenAI and, at a recent training day, we considered with colleagues how we wanted students to engage with AI when it comes to assessment. One of the resources that guided the discussion was the AI Assessment Scale (AIAS), which provides a framework for ethical AI use in HE and EAP assessment (Perkins et al., 2024). Rather than a binary allowed/not allowed distinction, the AIAS (Figure 1) is a scale of ethical AI usage, from fully human-generated work with no AI input at all to full use of AI, where AI can be used throughout the assessment process without limitation.

After the discussion with colleagues, we (the authors) reflected further on this scale, on our own practices, and on what we tell our students. We began to question whether quantity was the best metric, or the one we were most concerned about, not least because GenAI is becoming integrated into systems and tools where it is near impossible to mandate the quantity of use in extended assignments that do not take place under exam conditions. When you open your phone, a search engine or a word-processing document, GenAI is increasingly there. Further to this, the capability of GenAI is a moving target, with improvements being made at dizzying speed.
Indicative of this rate of change is the fact that when we returned to the discussion of AI in assessment at a training session a month after the first, the AIAS had already been updated (Furze, 2024), with the revised version shown in Figure 2. We realised, through our practice, that we were more interested in why students are using AI and how they are using it, rather than how much they were using it, particularly given the difficulties of knowing and/or controlling AI use. In this sense, motivation and criticality seem to be the key to identifying the quality of student engagement with AI.

Motivation

Drawing on classic work by Gardner and Dörnyei, Goldfrad et al. (2023) discuss motivation as encapsulating desire and “what encourages a learner to take a course of action” (p. 20). AI is a rich resource that can be used for different purposes, or different courses of action. If we think about developing an essay, a student could use AI to brainstorm ideas, to debate a topic, or to offer critiques of a perspective. It could be used to proofread and edit. The motivation for using the AI tool here is positive and developmental, meaning the student is going to the AI to help them make their work better. Alternatively, a student may wake up in a cold sweat realising they have an essay due the next morning and have AI write the essay in full, or the student may be uninterested in the topic and save time by getting AI to fill in the gaps. The motivation here is negative, with the student taking a shortcut and circumventing the process of doing the work and doing the thinking themselves, which can have a deleterious impact on their learning.

Criticality

Criticality is defined in a Cambridge University Press paper as involving “analytical skills and critical thinking” (Li, 2019, p. 2). Critical thinking is an oft-used term with myriad definitions, and having a clear conception of it can be challenging. Mulnix (2012) explores the various definitions of critical thinking and contends, “critical thinking is a process, a skilled activity of thought. It includes a commitment to using reason in the formulation of our beliefs” (p. 471). In this sense, criticality is the thought, evaluation and reasoning students engage in when determining what to do with GenAI output. If they accept it unquestioningly, that is problematic. Even if they have a positive motivation for using AI, by accepting the argument of an AI and amending work based on its recommendations without question, the student is not engaging critically with the AI output; they are simply following and deferring to the AI. Alternatively, a student can treat AI output critically, considering what it suggests, doing further research and coming to a reasoned decision. Rather than blindly following the AI, the student is using it as another resource to be analysed and incorporated into their work as they see fit.

The AI Quality of Engagement Matrix

These two metrics, motivation and criticality, can be put on axes to form a matrix indicating why and how students engage with AI: the AIQEM (see Figure 3). Rather than absolute values, each axis should be seen as a continuum, where students may have differing motivations and levels of criticality at different times and for different tasks. In this conception, the most desirable AI use is when students utilise AI for developmental purposes and engage with the output critically.
This can be transformative, as the student’s knowledge and thought processes are strengthened through the use of AI, and they can develop their evaluative judgement, meaning they better understand the quality of their own work and that of others (Tai et al., 2018). Alternatively, students can engage with AI with the best of intentions, but if they do not engage with the AI output critically, this could be deformative, distorting the development and learning that could take place. The least desirable use is when students use AI as a shortcut and accept the AI output without question. We describe this as ‘malformative’, as it is the most negative outcome: the student is totally reliant on the AI, circumventing the learning process while producing an end-product of sorts. A student who is also using AI as a shortcut but is more critical with the AI output makes more productive use of AI, although it could be considered performative. Seen positively, such use could be a potentially developmental step during the preparation of the assignment, where the student does not do the initial research themselves but engages critically with the GenAI output. However, it holds more negative connotations, as it could also encapsulate a student doing just enough to pass off GenAI work as their own.

What this means in practice

This means that rather than telling students how much they can use AI, we need to be asking students to consider their motivation for using AI and help them understand the ways in which AI use can be beneficial and developmental. We need to help them see AI not as a shortcut or a means of outsourcing the work, which limits their learning and development, but rather as a tool that can sharpen their work and help them make their own work better. Rather than asking a GenAI for information, students can use a persona prompt, asking the GenAI to engage them in Socratic debate and interrogate their ideas, or to act as a tutor and provide guidance and feedback on their work (Cioban, 2024). Students could also take on the role of teacher, giving the GenAI the persona of a student, thereby developing their knowledge and understanding through teaching the GenAI about a topic.
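To make the idea concrete, the sketch below shows what such a persona prompt might look like. The persona wording, the example student position and the model name are purely illustrative, and the code assumes the OpenAI Python SDK with an API key available in the environment; the same prompt text could just as easily be pasted straight into a chat interface such as ChatGPT.

```python
# A minimal sketch of a Socratic-debate persona prompt, assuming the OpenAI
# Python SDK (openai >= 1.0) and an API key in the OPENAI_API_KEY environment
# variable. The persona text and model name are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a Socratic debate partner for a university student. "
    "Do not provide information or write text for them. Instead, challenge "
    "their claims, ask one probing question at a time, and point out gaps "
    "in their reasoning so that they have to defend and refine their ideas."
)

student_position = (
    "My essay argues that remote work increases productivity for all employees."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model could be substituted here
    messages=[
        {"role": "system", "content": persona},        # the persona prompt
        {"role": "user", "content": student_position},  # the student's idea
    ],
)

print(response.choices[0].message.content)
```

The point is not the code itself but the shape of the prompt: the GenAI is set up to question the student’s ideas rather than to supply content for them.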
It is also essential to help students critically engage with GenAI output and apply an evaluative eye. The limitations of GenAI, that “they do not think, reason or understand” (Floridi, 2023, p. 15) and that they fabricate falsehoods with confidence (Emsley, 2023), need to be made clear to students. This can be done by asking for a summary of a recent TED Talk or news event based on the title alone and comparing the summary to the reality, or even asking how many ‘r’s there are in the word ‘strawberry’ (Eaton, 2024). Students tend to be surprised by the certainty ChatGPT has when claiming there are only two. Figure 4 depicts such an enlightening interaction with ChatGPT (OpenAI, 2024). We could also ask students to prompt a GenAI to generate an essay and then evaluate it in terms of academic credibility and argumentation.

We must also recognise that the AIQEM has broader implications beyond students. It is as relevant to us, as teachers and HE professionals, as it is to our students. We need to explore GenAI to see how it can aid our own professional development and to understand when it is and is not of benefit. Better understanding our own motivations and engagements with AI helps us understand the issues students are facing.

Conclusion

It is undeniable that GenAI does pose challenges for HE and EAP. It is a fast-developing technology that is outpacing research and professional development training. Nonetheless, we need to ensure our students are engaging effectively with GenAI, meaning they understand that it provides the opportunity for dialogue to enrich their knowledge and thinking rather than a font of all knowledge that they turn to for the answer. Whatever developments come, the AIQEM can potentially encourage more beneficial, educational and developmental AI use, equipping our students to navigate an AI-rich future. With that in mind, we would encourage you to use the AIQEM, discuss it with colleagues and students, and share your experiences with it. At the moment, it is a tool born of our personal practices that can only be enriched by the experiences and expertise of fellow professionals.

References
About the contributors

Peter Levrai works at the University of Turku as a University Teacher of English and is pursuing his PhD through the University of the Basque Country. The title of his dissertation is Exploring Practitioner Collaborative Assessment Identity to Develop a Principled Multi-lens Approach to Assessing Collaborative Assignments in English for Academic Purposes, reflecting his interest in collaborative learning and his work towards finding a fair means of assessing group assignments. He is also a keen materials developer and co-author of the British Council ELTons-winning Develop EAP: A sustainable academic English skills course, which engages students with the UN’s Sustainable Development Goals.
LinkedIn profile: https://www.linkedin.com/in/peterlevrai/
Research and materials: https://developeap.weebly.com/

Averil Bolster works full-time as a University Teacher of English at the University of Turku in Finland while studying part-time for her PhD with the University of the Basque Country. Her dissertation title is A Qualitative Grounded Theory Study of EAP Practitioners and Collaborative Learning: Identity and Beliefs, which incorporates her key areas of interest, namely collaborative learning, teaching and learning in EAP, and the belief that research and practice should be mutually supportive.
LinkedIn: https://www.linkedin.com/in/averilbolster/
Research: https://developeap.weebly.com/research.html
Recent publications include a book chapter co-authored with Peter Levrai, Fast-forwarding toward the future of EAP teaching in “the happiest country in the world”, about their experience as EAP practitioners in Finland during the 2020 pandemic, and a book chapter, The Journey of Develop EAP: From a Single Step to a More Sustainable and Shared Practice, a reflection on her co-development of the award-winning materials, Develop EAP: A sustainable academic English skills course, and on making them freely available to other practitioners.

Bolster, A. (2024). The Journey of Develop EAP: From a Single Step to a More Sustainable and Shared Practice. In P. Breen & M. le Roux (Eds.), Social Justice in EAP and ELT Contexts: Global Higher Education Perspectives. Bloomsbury.
Bolster, A. & Levrai, P. (2022). Fast-forwarding toward the future of EAP teaching in “the happiest country in the world”. In J. Fenton, J. Giminez, K. Mansfield, M. Percy & M. Spinillo (Eds.), International Perspectives on Teaching and Learning Academic English in Turbulent Times. Routledge.
4 Comments
Evangelia Tsimpoukli, UCL IOE associate lecturer
12/16/2024 03:12:48 am
[as long as]"...they understand that it provides the opportunity for dialogue to enrich their knowledge and thinking rather than a font of all knowledge that they turn to for the answer". I agree and this is the way I have been guiding my students to see and use AI, starting with demonstrating and having them explore its limitations in interactive tasks first. It is rewarding to say they can see the value of it all in the way they end up approaching both their studies and lives/lifestyles, changing their critical perspective towards AI. This is why I'm all for AI. It saves time on "administrative" staff when writing an essay (to give a simple example): students can use an AI tool to highlight language inconsistanceis or inaccuracies in their text in comparison to that used in a sample or model while exploring the suitability of suggested or other versions themselves and repeat the process as many times as needed. This saves so much time from marking on our part as well as tutorial time for both Ss and teachers as, then, this tutorial time can be devoted to eliciting from the student the justification and support for their discourse choices in communicating content succinctly and precisely. They learn so much more, so much faster. So, criticality and the development of all critical skills is the answer to AI, too!
12/17/2024 12:02:47 am
Criticality is essential, although I do have the concern that we'll be pushing students to do higher-order thinking tasks sooner and more often than before the advent of AI. If we think about it in terms of Bloom's taxonomy, we've already outsourced a decent amount of remembering to Google, and GenAI can arguably do the lower-order skills of remembering, explaining and applying. This means rather than asking students to summarise (which AI can do), we'll be asking them to analyse and evaluate. Yet without the lower-order thinking, it's going to be harder to operate at higher levels. I think that's why the motivation aspect of the matrix is also essential, in that students are looking for opportunities to enrich their knowledge or thinking.
I think these two dimensions of criticality and motivation apply well to a lot of degree subjects. It seems to me, though, that the problem for EAP in university settings is a different one:
1/20/2025 05:13:08 am
We might be coming at it from a different angle, working in the Finnish context where our students typically have a higher language level and therefore should be able to make judgements about text that comes out of an AI.