
Open engagement with students about AI would be more beneficial than policing their use of it – a senior Rhodes University scholar advises


Sioux McKenna, Professor of Higher Education Research at Rhodes University, recently led an online workshop titled Future-Proofing Postgraduate Education in the Age of AI.

She was the first to admit that she is not an expert on using artificial intelligence (AI). “As someone who loves technology … I just click and figure it out, and if I get stuck, I’ll ask YouTube. I generally don’t watch the tutorials that come with all of these AI [tools]; people with less ADHD [attention deficit hyperactivity disorder] than I would probably do it the right way,” she said.

The workshop, integrated into the meeting of Universities South Africa’s (USAf’s) Community of Practice on Postgraduate Education and Scholarship on 8 May, unpacked practical examples of AI usage in an academic context. Professor McKenna shared her insights into the value of AI in that setting.

“I’m not here to convince you to use AI,” said Professor McKenna. “I’m here to say that we must talk to our students about AI. We need to expose them to conversations that go far beyond ‘we will catch you if you use AI’ and, instead, engage with why you should bother to have an intellectual engagement with knowledge in a world where AI can do it for you.”

She first asked the 159 delegates what the purpose of postgraduate education is, adding that this question was essential to interrogating AI and its implications, trends and fluctuations.

McKenna referred to two roles of higher education raised in the White Paper of 1997: a) to develop people who are critical citizens and so can speak truth to power, and b) for universities to serve the public good. “To what extent are we ensuring that when we supervise our postgraduate students, when we curriculate postgraduate programmes, we develop this criticality, this scepticism, and readiness to challenge, to question, and not take things for granted? And are we helping them to understand the responsibility of creating knowledge that should serve a good?” she said.

She elaborated: “At master’s, they need to master the methods for producing that knowledge, and at the doctoral level, they are meant to contribute to the frontiers of the field. That’s what the Higher Education Qualifications Sub-Framework (HEQSF) tells us. To what extent are we managing to do this? And where does AI fit into this? How can it help students contribute to knowledge, to the frontiers of the field efficiently and more powerfully? Or conversely, to what extent might the use of generative AI prevent them from contributing to the frontiers of the field?”

This would apply both to the body of knowledge – the thesis, publications and creative output – and to what McKenna referred to as “the knower”, that is, the postgraduate students becoming researchers.

“If students tick yes to using AI to create new knowledge, then it’s achieved that goal. I see no problem with that. But if AI has prevented them from becoming these masters of methodologies, from contributing to their fields in ways that cannot yet be imagined, and if the AI use enables the production of good knowledge but does not enable the nurturing of the knower, then it’s a problem,” she said.

AI tools for higher education

McKenna recommended Zotero as her software tool of choice for citations; it is free and works well with AI.

She cautioned delegates about the difference between the paid and free versions of their chosen AI tools. South African students are likely to use the free versions, and because the paid versions allow for more sophisticated searches, this only widens the digital divide between the Global North and the Global South.

She primarily demonstrated three AI tools:

  • Claude – a large language model that McKenna prefers to the better-known ChatGPT because it is more sophisticated, “especially for academic conversations”, she said.
  • Elicit – a literature tool ideal for reviews, searches, summaries and comparisons. It is trained on peer-reviewed published material and draws its conclusions from those publications.
  • Julius – a tool for qualitative and quantitative data analysis. It can also visualise data and present it as infographics, mind maps and graphs. Its advantage for those who can code is that it shows the underlying code, which can be tweaked directly rather than through English words and prompts.

She also looked at Jenni AI and mentioned thesisai.io, which does not have a free version but can write an entire thesis in about 20 minutes, with references. “You need to become familiar with this simply because your students are familiar with it. If your students are doing this, they are missing out on the whole purpose of higher education,” said Professor McKenna.

How best to use Elicit with students

Professor McKenna recommended using Elicit on a reading the class has already discussed and is very familiar with, because that enables students to compare their own understanding with the summary the AI produces. “99% of the time, they’re going to go ‘wow, this is perfect’. The 1%, or more, of the time when they go ‘actually, you know what, they missed something’ is important.”

She said it was important for students to start seeing that when they read an article and engage with it using human intellect, they are often in the process not just of understanding it – which is what is expected of undergraduates – but of making connections between that article and others, and “very importantly, making connections between that article and your viewpoints, and that article and your study. That’s where the AI can fall short. It’s going to offer a student a summary, but if all the student reads is that summary, they may have missed the very gem in that paper they can draw on to build a claim of their own,” she said.

McKenna said telling students about this aspect of AI is not as effective as demonstrating it: everyone uses a different AI – DeepSeek, ChatGPT, Jenni, Elicit, Claude – uploads the same article they are already familiar with, prompts for a summary, and compares the results. She said her real worry was that “handing over that intellectual work, that cognitive load to the technology, actually prevents students from making the personal relationship with the text, with knowledge”.

How protected is data uploaded to AI tools?

In response to a few delegates’ questions about the format and security of the data being uploaded to AI tools, McKenna responded:

  • Data can be uploaded in various formats, such as spreadsheets and Word documents.
  • One of the biggest problems with AI is that it was trained on copyrighted materials. Julius states that what is uploaded is used only for the purposes of that specific query, whereas ChatGPT uses whatever is uploaded as part of its training material.
  • If universities’ ethics clearance policies state that, for qualitative studies, only the student and supervisor will see the data, but the data is then uploaded to various AI agents, is that a blatant breach of the signed ethics contract? The data is being shared with a machine as a piece of software, not with a third party, but if it becomes part of that software’s database, it might become accessible to others. “If students are going to use AI, they need to be able to say that in the ethics application and specify which AI, and the extent. They can quote from the Julius platform to say, ‘this is how it’s secured, this is how I can establish it cannot be accessed by anyone else’,” said McKenna.

AI needs to be part of the curriculum

“We need to have critical AI literacy development as part of our curricula, but I don’t think it’s useful to take the position of a policeman and say, ‘I will hunt you down and catch you and you will fail’, which seems to be the dominant discourse in our universities,” said Professor McKenna.

She said she believes future graduates, particularly doctoral graduates, will be expected to be able to use discipline-specific AI in critical ways. “My view is that I need to make spaces to help students learn how to use it. And when I say, ‘learn how to use it’, I’m not talking about the technicalities of how to write a good prompt. They can learn that from YouTube. I’m talking about learning how to use it as researchers, in terms of critical conversations about the purpose of higher education. What is the purpose of knowledge? What is the purpose of research? And what might be the down-the-line implications if you’ve used AI in ways that have robbed you of that transformative relationship with knowledge?”

She concluded that advocacy for the power of knowledge is even more important now because of generative AI. “And that can’t be an add-on conversation the librarians or the academic development office offer. It has got to be embedded in our courses. It can’t be ad hoc. This is an ongoing conversation,” said Professor McKenna.

Gillian Anstey is a contract writer for Universities South Africa.


