
Universities struggle to keep pace with AI integrity challenges


 

During a university tutorial this year, I showed students a newspaper report on the best places to have coffee in Melbourne and a similar story which listed the same cafés. I asked them which article they thought was more engaging. The students decided that the first was better because they liked the writer’s turn of phrase.

I revealed that a journalist had written the first article and AI generated the second. ‘Oh,’ they said, ‘you can tell. The first is so much better.’ The second, they said, was bereft of a personal touch and humour. When I pushed them further, they said humans can generate emotion in their writing by using their unique experiences. This helps them to connect with their readers, something generative AI can’t do. I would add that when people read a book or article without looking at the author’s name, they can often identify the writer if it is one of their favourites. The thoughtful placement of words in sentences and caring for their audience characterises what human writers do. Blandness underpins generative AI.

But here’s the thing: many tertiary students use the popular AI tool ChatGPT in their assignments despite knowing that it will give them only dowdy writing, and perhaps incorrect facts or just a shallow report. One detail illustrates the point: the AI article I gave my students did not list the locations of the cafés, whereas the first did because the journalist had thought, ‘What would my reader want to know?’

Eighteen months ago OpenAI, an American company, launched ChatGPT, an AI tool that people can use to generate human-like text and images. Five days after its launch, it had more than a million users. Now it’s estimated that the ChatGPT website receives about 600 million visitors a month. It’s also available in 188 countries.

Unsurprisingly, the tool — Chat Generative Pre-Trained Transformer in full — has become prevalent among tertiary students because it can write essays for them, proofread their work and come up with ideas for projects. Surveys undertaken last year by two Australian academics show the increasing popularity of generative AI tools.

Jemma Skeat from Deakin University and Natasha Ziebell from the University of Melbourne surveyed the use of AI among students and staff in the first and second halves of 2023. When they compared the results, they found that over half the students surveyed in the first half of 2023 had tried or used generative AI; by the second semester, the figure had risen to 82 per cent.

The rapid advancement of generative AI technologies has produced headaches for universities. A Queensland University of Technology and Griffith University survey, currently being conducted among university staff, points to the issues confronting universities. In particular, the survey seeks to understand whether staff think AI in higher education can lead to academic misconduct, whether it poses a threat to critical thinking and whether its use should be acknowledged.

Universities have already had to deal with students using contract cheating sites. These websites use persuasive strategies and messages that seduce vulnerable students into paying ghost writers to do their assignments for them. A 2018 University of Queensland study, Just Turn to Us: The Persuasive Features of Contract Cheating Websites, found these sites were ‘fronts for sophisticated commercial operations’ that try to lure students into thinking their academic problems will be solved if they pay for custom-written assignments. Similarly, students can approach AI thinking their academic problems will be solved if they use it.

 


 

How, then, are universities dealing with concerns about academic integrity and AI?

In September last year the Group of Eight research-intensive universities released a statement on generative AI tools and acknowledged that the tools could also help enhance teaching practices. However, most of the statement’s principles relate to upholding academic integrity, and the statement asks individual universities to devise their own policies. An examination of the eight universities’ websites shows they each have individual policies, but some universities are more strident in spelling out how using AI can be considered cheating. The University of Melbourne policy states:

 

If a student uses artificial intelligence software such as ChatGPT or QuillBot to generate material for assessment that they represent as their own ideas, research and/or analysis, they are NOT submitting their own work. Knowingly having a third party, including artificial intelligence technologies, write or produce any work (paid or unpaid) that a student submits as their own work for assessment is deliberate cheating and is academic misconduct.

 

The University of Adelaide, a Go8 university, provides examples based on real experiences of students who have breached academic integrity guidelines using AI-generated material. Of the four examples, two concern international students, which is unsurprising because a significant number of students caught up in contract cheating have been from overseas.

This adds another dimension to how universities grapple with AI-generated materials when international students are involved. Some students use ChatGPT to generate sentences because they are not confident using English. Some international students write in their own language and then get an AI tool to translate it into English. One University of Adelaide example points out that translating a piece of work several times can change its ideas, which is considered cheating.

Most universities have now put together AI statements as part of their academic integrity policies. However, it’s all very well having these policies, but students will still use AI. There are also questions about the subjects in which it is appropriate to use AI and the courses in which it should be banned because original work is a must. But first of all, universities need to be able to detect if students are using AI when they have been told not to.

Universities now use software to help staff detect if students are plagiarising work. Turnitin is a popular tool, and it can now also identify AI-generated work. According to the company, the AI detection tool ‘looks for highly predictable language patterns to identify passages potentially generated by AI’. But university staff are warned that this feature can produce false positive results when a submission contains less than 20 per cent AI-generated material. Staff are also advised that the AI detection tool cannot be used on its own to allege that a student has cheated. They are urged to examine how the student’s writing differs from their in-class work, an exercise similar to the one I gave my tutorial group.

University websites that provide information about AI also suggest tutors can devise assessments that lower the risk of students using AI and thereby reduce academic integrity issues. Universities also suggest that teaching staff discuss with their students the implications of using generative AI.

In a bid to outsmart AI, more emphasis could be placed on examining what makes effective academic writing. When I challenged AI to write an academic article of 1000 words on ‘What is culture’s relationship to power and identity?’, I got jargon word after jargon word, which made the piece almost impenetrable. But it was indistinguishable from many other journal pieces written by humans on cultural studies. If an academic produces an article so mired in dense phrases, how on earth is anyone to work out if there is any substance? Or if it’s AI?

Research areas have their particular vernacular, but these words can be arranged alongside clear language within a stylish and human-sounding piece of writing — something that AI can’t do at the moment. The Australian text How To Fix Your Academic Writing Trouble is a start.

Whatever universities are doing now, they will have to keep updating their AI policies because future students will be better versed in tools like ChatGPT. This is partly a result of the new Australian Framework for Generative Artificial Intelligence in Schools, which contains six principles and 25 guiding statements that seek to guide the ethical and responsible use of AI. The framework suggests that AI can enhance teaching and learning. Some states had previously banned ChatGPT but now schools across the country will introduce it and devise their own policies on its use.

In May, OpenAI launched ChatGPT-4o, which now supports more than 50 languages and has made text more conversational. The company says it will also be rolling out ‘more intelligence and advanced tools to ChatGPT Free users over the coming weeks’. Universities may have to update their AI policies again.

 

 


Dr Erica Cervini is a freelance journalist and sessional academic.

Topic tags: Erica Cervini, AI, University, Academia, ChatGPT, Integrity

 

 


Existing comments

'In May, OpenAI launched ChatGPT-4o, which now supports more than 50 languages and has made text more conversational.'

Given that equipping every floor of the Tower of Babel with ChatGPT-4o will remove the problem that the Tower of Babel created, perhaps a cleric for his (or her) next sermon might wish to prompt ChatGPT-4o to suggest what to do when technology advances to such a stage that Man can ostensibly dispose of what God proposes.


roy chen yee | 04 September 2024  