Parting thoughts: some reflections on the proliferation, use and misuse of (generative) AI and – in overview – what HE needs to think about with regard to student learning and assessment.
Date 28 August 2025
This guest post from Robin Crockett, who's about to retire after 30 years at the University, reflects on the proliferation of AI and possible ways forward for the (higher) education sector to maintain quality and student attainment – and confidence – whilst not losing sight of student experience.

This blog started out some months back as reflections on the increasing unreliability and ultimate pointlessness of AI-text detection tools as a means of supporting academic integrity but, as ever, events intervened and things have continued to move on apace over the period.
However, in that context, I’ll heavily paraphrase an earlier draft and observe that AI-text detectors – which are combinations of elegant mathematics and programming – effectively depend on systematic differences in short-range phrase-structures between AI-generated text on the one hand and typical/aggregate human-written text on the other. Whether these intrinsic short-range phrasing-patterns are a by-product of the underlying AI programming or actively programmed in as so-called watermarking, they are essentially what AI-text detectors key on relative to human-written text. Typically, AI-generated text is more uniform (less mixed-up, with lower entropy and perplexity) than human-written text, but those differences are getting smaller with every development in generative AI. The logical endpoint is when AI-generated text becomes fully equivalent to typical/aggregate human-written text, with human-equivalent variation in short-range phrase-structures – human-equivalent mixed-upness.
At that point, deliberate watermarking aside, AI-text detectors will have nothing systematic or reliable to key on.
At that point, in the academic integrity context, a misused generative AI will have become just another assignment ghost-writer… and that point is probably not far into the future.
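For the technically curious, here is a minimal, purely illustrative Python sketch of the kind of statistic involved – the mean and variance of per-word surprisal under a toy unigram reference model, crude stand-ins for the perplexity and ‘mixed-upness’ described above. It is emphatically not how any real detector works, and the reference corpus and function names are invented for illustration.

```python
import math
from collections import Counter

def surprisal_stats(text, ref_counts, ref_total):
    """Mean and variance of per-word surprisal, -log2 p(word), under a
    simple unigram reference model. A lower mean is a crude proxy for lower
    perplexity; a lower variance for more uniform, less 'mixed-up' phrasing.
    Illustrative only."""
    words = text.lower().split()
    vocab = len(ref_counts) + 1  # +1 so unseen words still get a small probability
    surprisals = [
        -math.log2((ref_counts.get(w, 0) + 1) / (ref_total + vocab))
        for w in words
    ]
    mean = sum(surprisals) / len(surprisals)
    var = sum((s - mean) ** 2 for s in surprisals) / len(surprisals)
    return mean, var

# Toy 'typical human-written' reference corpus; a real system would use a
# very large corpus, or a full language model rather than unigram counts.
reference_text = "open a book and the phrasing wanders, doubles back, surprises"
ref_counts = Counter(reference_text.lower().split())
ref_total = sum(ref_counts.values())

mean, var = surprisal_stats("A sample passage to score goes here", ref_counts, ref_total)
print(f"mean surprisal: {mean:.2f} bits, variance: {var:.2f}")
```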
A misconception that I’ve frequently encountered – and not always successfully managed to dispel – is that AI-text detection is deterministic: it isn’t, it’s probabilistic, and that’s the case for watermarked as well as ordinary AI-generated text. Trying to explain the distinction isn’t helped by the resemblance of many AI-text detector reports to the long-familiar text-matching/similarity-checking reports – which are deterministic. The probabilistic nature of AI-text detection has always been an issue and, as AI-text continues to become increasingly human-like, the underlying probability of misclassification necessarily continues to increase. That cuts both ways: misclassification of human-text as AI (false positives), potentially leading to false accusations, and misclassification of AI-text as human (false negatives), potentially allowing (serious, major) academic misconduct to go undetected. AI-text detectors can be configured to minimise false-positive or false-negative rates – not both, since minimising one necessarily increases the other – but that doesn’t alter the underlying probability of misclassification.
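The trade-off is easy to see in the abstract. The following toy simulation (assumed, entirely invented score distributions – not data from any real detector) shows how moving the decision threshold drives one error rate down only by driving the other up:

```python
import random

random.seed(1)

# Invented detector scores: human-written and AI-generated text produce
# overlapping score distributions, and the overlap grows as AI text becomes
# more human-like. The numbers here are purely illustrative.
human_scores = [random.gauss(0.35, 0.15) for _ in range(100_000)]
ai_scores = [random.gauss(0.65, 0.15) for _ in range(100_000)]

def error_rates(threshold):
    """Flag a document as 'AI' when its score >= threshold; return the
    (false-positive rate, false-negative rate) under the simulated scores."""
    fp = sum(s >= threshold for s in human_scores) / len(human_scores)
    fn = sum(s < threshold for s in ai_scores) / len(ai_scores)
    return fp, fn

for t in (0.3, 0.5, 0.7, 0.9):
    fp, fn = error_rates(t)
    print(f"threshold {t:.1f}: false positives {fp:6.1%}, false negatives {fn:6.1%}")
```

Raising the threshold suppresses false positives at the cost of more false negatives, and vice versa; no choice of threshold removes the underlying overlap between the two score distributions.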
As well as the problems associated with AI-text quality becoming ever more human-like, there are numerous AI-text rewriters – so-called humanisers – themselves software, and also improving. These algorithmically alter the original AI-text markers but, being software, also leave their own markers. Such markers are probabilistically detectable, subject to the same provisos as AI-text detection (see above). Also, there are humans who advertise that they rewrite students’ AI-generated assignments – for money, of course – in effect, potentially undetectable human ‘humanisers’.
Another problem with AI-text detection, and one that’s always been present and which will never go away, is that some humans write in what we could now term AI-like styles (e.g. neutral impersonal ‘tone’, precise invariant syntax and grammar, repetitive, verbose, algorithmically consistent sentence and paragraph structures…). Put differently: there are humans who inadvertently but very effectively AI-watermark their text.
A more recent and developing problem is that of AI-generated content being present in sources but not tagged as AI-generated, and such sources being accessed by students who assume them to be human-written. Depending on a student’s editing and paraphrasing skills/style and whether they’re quoting or not, such content can be correctly detected as AI-text but incorrectly implicate the student in misuse of AI.
For all such reasons, AI-text detection can’t be regarded as a realistic option in the academic integrity/misconduct context. There’s too much uncertainty around false positives – or true positives arising from student use of untagged AI-text in sources – for decisions that can lead to major sanctions, including expulsion, and too much uncertainty around false negatives allowing AI-compromised assignments to slip under the radar.
So, where does (higher) education go, almost three years into the AI era? We’ve had two-to-three decades of resource-efficient electronically submitted word-processed assignments, with tools to help with the detection of copy-and-paste plagiarism and contract cheating. Now we have to ask ourselves whether such assignments continue to be a robust means of assessment in the AI-era.
In broad terms, what we have to factor-in with regard to (generative) AI includes the following (not an exclusive list, and we need to anticipate future developments):
- Vast, powerful standalone AIs such as OpenAI’s, Anthropic’s, Google’s and DeepSeek’s online offerings, with some options/access free of charge (your information, including anything personal/confidential you enter, funds your use?), others paid-for.
- Accessible free and paid-for AI-enabled online services, including both a variety of above-board sites to help with e.g. information search, retrieval and collation on the one hand and a variety of somewhat less above-board ‘cheat’ sites aimed at students (and others) on the other.
- Standalone desktop AIs that users can install on their own PCs/laptops: smaller and less general-purpose than the online AIs, but increasingly capable (and free). And I mean typical users, not just ‘geeks’ or ‘nerds’ – all that’s needed is sufficiently up-to-date hardware.
- Embedded AI in operating systems and standard office/productivity software, e.g. Microsoft Windows & Office, Apple macOS & iOS, Grammarly, the SoftMaker, OnlyOffice & WPS Office suites (alternatives to Microsoft Office) and Google Chrome, as well as AI embedded in specialist software for design, coding etc. Web search-engines also have embedded generative AI: e.g. DuckDuckGo has a user-controllable ‘AI Assist’ feature but Google’s ‘AI Overview’ feature cannot be turned off.
- AI front-ends such as apps on phones, which send information/data back to AIs for analysis and recording – raising questions as to where that information goes and who actually controls or has access to it.
- AI front-ends such as wearable AI recorder pendants (e.g. Limitless, Bee, Omi), which send information/data back to AIs for analysis and recording – essentially everything the wearer says, everything that’s said to them by others during conversations, meetings, phone-calls etc., plus ambient conversations in the background: basically always on and listening. Again, this raises questions as to where that information goes and who actually controls or has access to it, plus questions around privacy and consent.
Just as with other (commercial) apps/services: if the product is free, you – your data – are the product?
That’s essentially where we find ourselves, but think how rapidly this transition has happened over the past few years. While the pace of future developments is uncertain – we’ll know only when developments have happened – what is certain is that we have entered an AI-era and AI will continue to develop and proliferate even if the current phase turns out to be a bubble that bursts.
In the education and academic integrity contexts, there are signs that this is leading to changes in the nature of ‘routine’ plagiarism. Old-style copy-and-paste plagiarism, which escalated as sources started to go online and Internet access became the norm, entails finding sources from which material is copied and pasted, with whatever (perfunctory) editing to (try to) evade similarity-checking. That takes time and some degree of understanding on the part of the plagiarist. For disengaged students motivated by grades but not by learning, there’s no advantage in going to that degree of effort when it’s increasingly easy to ask an AI to summarise a specified article, or summarise key points from selected sources on a given subject, and paste the (unique) AI-generated output into an assignment, possibly via a humaniser. Equally worrying is the fact that it’s not always necessary to log in to and prompt a specific AI: web search-engine AI functionality can provide chunks of text, easily copied and pasted into documents. More worrying still is that it’s increasingly easy and reliable to prompt an AI to generate an entire assignment, with references, as a downloadable document ready for submission, then download and submit that completed assignment, i.e. commission an AI to ghost-write an assignment.
Then there’s embedded AI, another aspect of the proliferation and increasing ubiquity of AI. How many students – and staff – are using AI without realising that they are doing so? Operating systems… Word-processors… Web-browsers… Search-engines… All such routinely-used software increasingly has embedded AI functionality. How clear is that to a user? How much control, if any, does a user have? It’s possible to police institutionally-controlled environments, although only with planning and interventions, and with no absolute guarantees. However, it’s simply not possible to effectively police the wider learning environment with its myriad hardware-software combinations and hugely varying AI capabilities/functionalities.
Lamenting the proliferation of AI and its encroachment into hitherto robust learning and assessment practices isn’t going to make the issues disappear: AI is not going to go away and we need to adapt, just as we adapted to word-processors and spelling and grammar checkers. (I’m a gentleman of advancing years: I can remember the very real and sometimes acrimonious debates about whether to ban the use of spell-checkers because, for example, they allowed students to avoid learning how to spell correctly. We didn’t ban them – and it would have been a ban we wouldn’t have been able to police… By the time grammar-checkers came along, we’d largely adapted…)
Viewed objectively, spell-checkers and grammar-checkers are assistive technologies available to everyone. Whether everyone makes good use of such technologies is another matter, and whether we view their use as fair or unfair is another matter again. Those technologies exist, their use – outside e.g. exam halls – is not policeable, and the general expectation in the wider world is that people will use them. We have to view AI in the same light, not least because our students are increasingly graduating into workplaces which routinely use AI and expect graduates to be AI-literate – to know how to use AI with insight and integrity. Thus, ignoring AI and deluding ourselves that we can ban students from using it doesn’t serve those students – or us – well. In short, now is not the time to try to muddle through: institutions need to properly update their regulations and governance rather than persist in patching pre-AI versions, and to innovate forms of learning and assessment that factor-in AI rather than continue with out-of-date forms that increasingly disserve students and institutions alike. Failing to do so risks, among other things, being hit with informed FOI requests as the media and other interested parties home in on the real issues rather than the superficial ‘how many students use AI to cheat’.
Rather than rueing developments in AI, a more constructive mindset is to view AI as a new and incoming assistive technology – or, more correctly, a collection of technologies with assistive capabilities. Just as we adapted to the proliferation and ubiquity of spelling and grammar checkers we need to adapt to AI, but do so fully aware of the scale of the difference between AI and the assistive technologies that have existed hitherto. In stark terms: spelling and grammar checkers can improve and enhance what a student writes for assessment, whereas (generative) AI can replace the student as the writer, at least for some types of assessment.
There is a huge opportunity for us to realise the potential afforded by AI: e.g. to teach our students how to use AI to assist and improve their own learning and critical thinking/evaluation skills rather than displace/replace those with reliance on AI for critical capabilities. Employers need graduates who understand what they’re doing rather than simply button-click a software option ‘because it worked before’. With regard to AI, we have to shift our emphasis from ‘policing’ to ‘enabling’ – but not lose sight of the need to ‘police’. The longer we – the sector – leave such matters unaddressed, the longer the disservice to our students will persist.
Thus, we need to examine what it is we are actually designing an assignment to assess: what is the actual purpose of any given assignment and does it address appropriate learning outcomes? Whilst the familiar word-processed assignments are losing validity as a general ‘default’ means of assessment, that doesn’t mean they’re becoming redundant. Such assignments have potential to assess students’ skills and abilities in achieving outcomes – aims and objectives – in, to use a phrase, tech-savvy ways rather than assessing their unaugmented knowledge and understanding. Achieving outcomes is valuable to employers, with the proviso that we need to teach students how to do this with insight and integrity, including how to check that AIs haven’t hallucinated. That checking can be tedious but it is necessary: I recently (late August 2025) asked a few different AIs to search for and list recent published articles on a somewhat niche aspect of my current research, and all provided reference lists containing a small number of relevant articles (many on preprint servers, so easily accessible but not peer-reviewed), rather more peripheral articles with only keyword associations and, worryingly, rather too many hallucinated fake references.
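One small part of that checking can be automated. Here is a hedged sketch, assuming Python, the third-party requests library and the public Crossref API: it only confirms that a DOI is registered, not that the registered record matches the cited title and authors, so it catches some but not all hallucinations.

```python
import requests

def doi_registered(doi: str) -> bool:
    """Return True if the DOI is registered, by querying the public Crossref
    API. A 404 strongly suggests a fabricated reference (or a mangled DOI);
    a 200 only shows the DOI exists, so title and authors still need checking."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Example DOIs: the first is real (the NumPy paper in Nature), the second is
# deliberately made up to mimic a hallucinated reference.
for doi in ("10.1038/s41586-020-2649-2", "10.1234/made.up.reference.2025"):
    print(doi, "-> registered" if doi_registered(doi) else "-> NOT FOUND")
```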
That noted, students have always been taught, encouraged and expected to check their work, so teaching, encouraging and expecting them to check their AI-generated content isn’t introducing a new skill; rather, it’s a longstanding skill that needs to evolve for the new academic environment.
That leaves the old-style assignments that are now vulnerable to AI misuse: in essence, off-campus, unsupervised study-time assignments and variants of online tests/exams intended to assess students’ unaugmented knowledge and understanding. Such assignments will have to be redesigned and reconfigured – and properly resourced, noting that at least some redesigned/reconfigured assessments will almost certainly be more resource-intensive than the assignments they replace. As well as managing the risk of some students actively misusing AI to obtain high(er) grades by unfair/invalid means, we need to manage the risk of students inadvertently using embedded AI or untagged AI source materials in assignments where AI is not to be used. We also need to mitigate the risk of students with AI-like writing styles being incorrectly intercepted for misuse of AI. In essence: if we don’t want students to have access to AI for a specific assignment, or want to constrain or supervise/invigilate their access to AI, then we have to conduct that assignment in an institutionally-controlled and supervised environment.
And… Let’s not forget contract cheating and essay mills: these might have dropped off the media’s agenda but they have emphatically not gone away. Indeed, assignment providers continue to adapt and vary their offerings and, in addition to the human-rewrite services noted above, the latest ‘service’ appears to be the provision of ‘handwritten’ documents, no doubt in response to some demand somewhere. Are some institutions reverting to handwritten assignments in the (naive) hope that handwriting is a silver bullet for AI misuse? Just to be clear: supervised/invigilated, in-class/in-person handwritten assignments might (help) prevent AI misuse and, to be equally clear, ‘handwritten’ alone doesn’t and won’t make any difference. The key is ‘supervised… invigilated… in-class… in-person…’, not ‘handwritten’. Remember that contract cheating existed long before word-processors and electronic submission, when cheating students often had to hand-transcribe commissioned work.
Robin Crockett is Academic Integrity Lead, University of Northampton