
Cutting through the hype – where does AI fit?

23 May 2023  
Mark Carrigan: Beyond the AI hype


The launch of OpenAI’s ChatGPT has put AI in the public consciousness like never before. But for professionals working with knowledge and information, Generative AI is something of an enigma – bringing both opportunity and concern. Here, Mark Carrigan, researcher in Digital Education at the University of Manchester, speaks to Rob Green about what we should be thinking about.

ACCORDING to many tech evangelists in Silicon Valley, AI has the potential to transform lives – not just by changing how we work and live, but by creating something fundamentally different. It is easy to be seduced by the hype surrounding OpenAI’s ChatGPT and Google’s rival Bard, but the University of Manchester’s Mark Carrigan says there are no guarantees about what Generative AI will deliver.

While the interface and the computing power are new, the technology that underpins these systems is relatively well established. Many knowledge- and information-reliant professions already use some form of AI in their practice, so the rise of ChatGPT may not be the revolution some imagine.

Mark, who also leads an MA programme in Digital Communication Technology for Education at the University of Manchester, says: “I’m interested in how we develop a purposeful, deliberate relationship with new technologies – how do we do that socially and organisationally?

Cultural impact

“Rather than leave adaptation to individuals experimenting, I want to know how organisations like universities can support the generation of cultures that enable us to talk reflectively about how we want to use these new technologies. Not just getting stuck into it and getting caught in the ‘shock of the new’, where we are so absorbed by the dazzle of the emerging technology that we don’t stop to ask ‘is this useful to us, is it a threat, what are we using it for?’ I’m interested in how technologies like Generative AI are received culturally.”

Specific Generative AI tools already exist that help to sift and summarise large texts. Legal firms use them, academics and researchers use them – and for those familiar with them, they hold no mystical quality; they are simply one useful tool among many.

What is new about ChatGPT and similar Large Language Model-based AIs is that they can now draw on huge computing power, which essentially allows them to learn from a snapshot of the entire internet. Couple this new-found processing power with a chatbot interface, and Generative AI suddenly becomes a much broader tool – and one that has captured the public’s attention.
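To make the chatbot-interface point concrete, the sketch below shows the kind of call that sits behind tools like ChatGPT. It is a minimal illustration only, assuming the official OpenAI Python client (openai>=1.0), an API key in the OPENAI_API_KEY environment variable, and an invented prompt:

```python
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

# One request: a system message setting the role, a user message with the task.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a concise assistant for library staff."},
        {"role": "user", "content": "Summarise the duties of a systems librarian in three bullet points."},
    ],
)

print(response.choices[0].message.content)
```

The same handful of lines can be pointed at summarising, drafting, translating or critiquing text, which is precisely the breadth of functionality Mark goes on to discuss.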

Mark explains that his own interest came about professionally, but like many other users who ‘played’ with it, he soon found his personal interest piqued as well, saying: “I was very concerned that our students might be using paraphrasing software – and the question of how, if at all, we could know they were using it. Through that process of investigating paraphrasing software, I started to get interested in Generative AI, and it was around this time that OpenAI launched ChatGPT. It felt like I was just starting to get to grips with one big problem when another, much bigger problem started to hit the scene.

“As well as that, I was playing around with image generation software. In my non-academic career, I worked in social media for quite a long time as a strategist and consultant, so I had this interest in content generation and it was clear that it was going to have a big impact on how social media would work.

“When ChatGPT came out, as soon as I started playing with it I was hooked.”

Organisational approach

And Mark recommends that everyone starts ‘playing with’ Generative AI tools as soon as possible, saying: “‘Playing’ is not the most professional-sounding term, but I think it is important to play or tinker with new technology. It is open-ended experimentation, and it gives you a chance to explore its capabilities and reflect on what they might mean for your sphere of activity.

“One of the things that fascinates me about ChatGPT and comparable systems is how broad the functionality is. The range of ways they could have an impact on existing practice is very difficult to articulate at the moment.

“So, when you are ‘playing’, you start to get a sense of what it can do – where it is strong and where it is less convincing. For reflective practice with any digital tool, we have to have a sense of its capabilities, and ideally we come to that sense on our own terms. The kinds of things you are inclined to ask as an individual are very important in learning to use it, and I’m a big advocate of just getting on with it and trying to find out what it can do for you.”

Having said that, the potential impact of these systems means that there needs to be an organisational approach, as well as a broader collaborative one. Mark says: “It’s worth having formalised discussions about how we use them. The multi-purpose character of ChatGPT means it could touch on almost any aspect of existing practice that involves text. We need a way of negotiating the range of impacts that emerge from that, so we can start to have more fruitful discussions.

“There is an analogy with how the role of learning technologists changed during the pandemic, becoming one of much greater collaboration in teams. I wonder if we need more proximity between academics, learning technologists, learning designers and information professionals in universities. Those collaborations already happen, but we need a new depth of collaboration to emerge.”

Training needs

Mark also sees value in “collating the resources that are emerging in these debates, helping everyone make sense of the issues and allowing them to formulate responses”. He has some sympathy for calls for a hiatus on the technology, but admits it is not practical, and he questions the motives of some of those who recently signed an open letter calling for a pause: many have a vested interest in slowing development down as they look to play ‘catch-up’. For users, though, time to get to grips with the technology would be immensely valuable.

“The problem is everything is moving so fast, and this is why a six-month hiatus could be important, if we could stop and reflect on where we are – but that is probably not achievable in practice,” Mark says. “There is an issue with the speed of implementation. In a university setting it means the process of formulating policy is being left behind by the technology. Universities are still trying to formulate policy for GPT-3.5 when GPT-4 has already arrived with expanded functionality. That is a wider societal problem: technology is developing more quickly than our capacity to formulate policy or ethical frameworks around it.”

The key to getting to grips with new technology goes beyond the question of “what can it do?” There needs to be an understanding of what it can do for the organisation, what it can do for individuals within it, and what steps need to be taken to ensure everyone understands how to use it practically and ethically. Training will be a crucial part of implementing AI successfully, but before that can happen organisations need to look at their own needs.

Mark, who is a sociologist and philosopher by training, says that it is this intersection between technology and wider organisational, cultural and societal needs that is often hardest to pin down.

“My core interest is how we work purposefully with new technology,” Mark says. “New technologies often leave us confused as to how we might use them. We start adapting what we do to meet the requirements that we perceive in the technology itself.”

The danger is that technology leads the user and so there needs to be a concerted effort to share learning and understanding. Mark says: “That conversation is urgent – the different professional groups that relate to knowledge are all going to have the nature of their work change because of these technologies. We need to have a collaborative and logical way of making sense of these changes and what it means for our work. If existing silos are entrenched because of this, rather than being broken down, that could be really bad. We wouldn’t get to grips with it collectively.”

Hype and technology often go hand in hand, and Mark points to the over-inflated promises surrounding Crypto and Web 3.0 that have failed to materialise. He questions whether broad AI tools will deliver on the hype, saying: “I’ve yet to find many practical uses that I can make of it in my actual day job – and I found that quite telling.

“As a working academic, I’m keen to incorporate it into my assessments and train my students on the digital literacy aspects of it, but in terms of my day-to-day work and practice it is not that useful for me. If other people are having that same experience, then that is something of a check on the hype surrounding it.”


CILIP Conference

Join us at CILIP Conference 2023 to hear Mark Carrigan.

CILIP Conference 2023 takes place on 12 and 13 July in Birmingham.

Mitigation

He feels that there are areas where it will be useful – particularly in content generation and creative industries. He is also quick to point out that “I’m inherently sceptical of anyone who confidently forecasts where this will be in the next five years. Technologists might be able to make forecasts about the technology, but it is the interplay between the social, political, economic factors and the technology that is fiendishly complex.”

This moves the conversation on from “how can we use the technology?” to “how do we mitigate its effects?” As with any new technology, training and understanding are key pillars. Mark looks at digital literacy and the need for teaching curricula to encompass AI – not just how it is used, but also how it works.

He says: “The field of AI Literacy that has developed over the last few years is critical. There is lots of conceptual, educational and practical work being done on what it means to have an AI system and how you use it – how it can be taught, measured and rolled out. These are the conversations taking place in universities: what is the baseline level of understanding we need for staff and students?

“Universities have an important role to play there because students leave university and then exist in wider society. Ensuring that baseline level of AI literacy is important. We also need research to add to that AI Literacy, because the interaction between the social context and the technology is changing so fast. We will need to continually update things, and part of this is understanding the limitations of these systems – things like hallucinations – and understanding how the systems work.”

AI literate

There have been well-documented cases of ChatGPT and other systems not working quite as expected, from erratic responses akin to anger to so-called hallucinations, and Mark says that understanding that these behaviours exist is part of the learning process.

“It will offer internally coherent but factually inaccurate responses under certain conditions. From a university perspective and thinking about the integrity of assessments, this could be a huge problem in terms of AI literacy. It’s frequently wrong and it is very difficult, without existing factual knowledge, to be sure when it is wrong and when it is not.”

Mark’s own experience with ChatGPT threw up some intriguing hallucinations about his own biography – claiming that he was the author of several books that do not exist. And the hallucinations go beyond mere inaccuracy: the system makes them sound plausible.

Mark explains: “When I ask it to describe these books it gives detailed summaries that make perfect sense to me. In one case I’ve had the idea that I want to write this book – it is the perfect continuation of my work that brings together multiple strands of my research.”
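Episodes like this are why any factual claim from a chatbot needs checking against an authoritative source. Purely as an illustration (this is not a tool Mark mentions), here is a minimal Python sketch that asks Crossref’s public REST API whether anything closely matching a claimed title actually exists; the title used is hypothetical:

```python
import requests

def title_exists_in_crossref(title: str) -> bool:
    """Rough check: does anything closely matching this title appear in Crossref?"""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Naive containment match; a real checker would use fuzzy matching
    # and also compare authors, year and publisher.
    return any(
        title.lower() in " ".join(item.get("title", [])).lower()
        for item in items
    )

# A hypothetical title of the kind a chatbot might invent for an author.
claimed = "The Reflexive Academic: Social Media and the University"
status = "found" if title_exists_in_crossref(claimed) else "no close match"
print(f"{claimed!r}: {status}")
```

A “no close match” result does not prove a hallucination on its own, but it is exactly the kind of external check that, as Mark notes, a reader without existing factual knowledge would otherwise struggle to perform.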

The other major concern comes from image and video creation using AI tools, and the assumptions people hold about content they view or read. Mark says: “The assumption is that if we see a text it is written by a human being, but that is breaking down.

“And there is much more to this technology than just text. We have text-to-image and the development of text-to-video, which could have a huge impact on society. The image of the Pope in the puffer jacket – that was possibly the first Generative AI image that went viral, because it was on the cusp of being believable. These types of images are being spread and the impact is going to be huge. If you look at social media companies laying off trust and safety staff, we are potentially going to see an avalanche of misinformation and fake images at exactly the moment when society is least equipped for it.

“Politically, with elections coming up that could have a huge impact. You can intellectually know that images and videos can be fabricated, but whether people act on that knowledge remains to be seen.”

There are clearly questions still to be answered, but it is also clear that individuals and organisations need to be engaging with AI technology. There needs to be dialogue and collaboration, as well as self-reflective practice. Institutions need to be thinking about potential impacts as well as potential uses and look at how they develop staff and other potential users.

Join Mark and keynote speakers including Masud Khokhar, Keeper of the Brotherton Library, University of Leeds; Dr Navina Evans, Chief Workforce and Training Officer, NHS England; Sathnam Sanghera, author and journalist; and Rebecka Isaksson, Partner and Director Content and Collaboration AI at AI Lab Sweden AB.

Mark will be looking at educational trends and their impact on university libraries, exploring how technology is re-shaping academia and what this means for librarians and information specialists in Higher Education.

CILIP Conference 2023 takes place at the Hilton Birmingham Metropole Hotel on 12 and 13 July. Book your place now.

 

