CILIP CEO Nick Poole gave this speech at the IFLA World Library and Information Congress in Rotterdam on 22nd August 2023.
Thank you to my colleagues in the IFLA Government and Parliamentary Libraries Section for your kind invitation to speak here at the World Library and Information Congress.
I have been asked to address the theme of ‘AI: Partner or Rival’. This is a timely question – we have all, to some extent, been caught off guard by the rapid arrival of the new wave of generative AI, and it is vitally important that we take time to explore the implications together.
However, my aim this morning is to try to reframe the question – away from how we respond to the first wave of generative AI tools such as ChatGPT and towards how we might take a leading role in guiding the next wave.
The first wave, the one we have really been in since 2021, is characterised by technologies that are big, expensive, noisy, environmentally costly and prone to making stupid mistakes. They are also hugely impressive in their scope and capabilities.
I believe that our professional community could hold the key to a second wave that is more person- and community-centred, smaller, more elegant, more democratic and more sustainable.
I want to explore this potential role through four lenses:
- Where we are today
- How we should respond as librarians and information professionals
- What the implications are for Government; and
- Where we might be going next
The state of the art
When I first studied AI and Natural Language Processing in the early 1990s, our work was guided by Alan Turing’s original vision of a learning machine – one that could not be programmed exhaustively in advance and would therefore need to learn through input and feedback.
Back then, the corpora on which AI systems were trained amounted to tens, perhaps hundreds of thousands of words. Systems were essentially competing for the ability to hold a ‘human-like’ conversation via chat interfaces over a period of a few minutes.
Fast-forward to today and AI has undoubtedly moved on immensely, but really only in one direction – brute force. Instead of more elegant models based on the distillation of knowledge, today’s Large Language Models (LLMs) depend on ‘extreme’ computation, datasets of trillions of words and huge server farms.
We are, in reality, no closer to Artificial General Intelligence than we were in the 1990s. Instead, we are using brute-force computation to create convincing patterns.
The Anatomy of an AI System project by Kate Crawford and Vladan Joler has done an incredible job of showing the real environmental, social and economic cost of this brute-force approach. Your smart-home device is just a dumb client that sends voice queries to distant server farms for processing and response.
As Crawford and Joler write in their long-form essay, “Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fuelled by the extraction of non-renewable materials, labour, and data.”
As ethical information professionals, we must not just learn to accommodate AI in our work. We must also push for responsible AI that minimises these damaging impacts.
A student with flawed teachers
This generation of AI tools operates in two main modes. On the one hand, they use gigantic training datasets to learn to predict the next word in a sentence. On the other, they throw everything at solving a problem and, through feedback over many generations, gradually home in on a solution.
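To make the first of those modes concrete, here is a deliberately tiny sketch in Python – a toy word-pair counter over a handful of words, nothing like a production Large Language Model – of what ‘predicting the next word’ means in principle:

```python
# Toy illustration of next-word prediction: count which word follows which
# in a small corpus, then pick the most likely continuation. Real LLMs do
# this over trillions of words with neural networks, not simple counts.
from collections import Counter, defaultdict

corpus = "the library lends books the library answers questions".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("library"))  # -> 'lends' (ties resolve to the first word seen)
```

Scale that idea up to trillions of words and billions of learned parameters and you have the brute-force pattern-making described above.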
Generative AI such as ChatGPT can only really obtain its training data from two sources: paid-for, curated training datasets, or the open Web – open-access, messy data that may be unstructured, semi-structured or highly structured.
Once a model has been developed, the next step is to train it through iterative feedback. This is the step we often fail to consider. We are not the first generation of users of generative AI; we are its first teachers. Our job is not to use Large Language Models as though they were a market-ready utility or service, but to train them for future generations, in full awareness of their current limitations.
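One hedged illustration of what that ‘teaching’ might look like in practice – the record format and rating scale below are my own assumptions, not any vendor’s actual feedback mechanism – is simply capturing professional judgements about generated answers so that only trustworthy ones are fed back into future training:

```python
# Sketch of the "first teachers" idea: librarians rate generated answers,
# and only well-rated answers are kept as preference data for later training.
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str
    answer: str
    rating: int  # 1 = unhelpful or wrong, 5 = accurate and useful (illustrative scale)

feedback_log = []

def record_feedback(prompt, answer, rating):
    """Store a librarian's judgement of a generated answer."""
    feedback_log.append(FeedbackRecord(prompt, answer, rating))

def preference_dataset(minimum_rating=4):
    """Keep only answers judged good enough to teach from."""
    return [r for r in feedback_log if r.rating >= minimum_rating]

record_feedback("Who wrote Middlemarch?", "George Eliot.", rating=5)
record_feedback("Who wrote Middlemarch?", "Charles Dickens.", rating=1)
print(len(preference_dataset()))  # 1 – only the accurate answer is kept for training
```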
Generative AI is not a destination we have reached; it is a journey on which we have only just embarked. The real question isn’t ‘partner or rival’ – it is whether we will help guide that journey in a positive direction.
Our response as librarians
This generation of AI is not really a radical departure. It is part of a continuum of digital change that has developed over the past 200 years.
Our role as librarians and information professionals is not to problematise new technologies but to engage with them, understand them and help improve them in order to maximise their value to information users.
Our response to generative AI, therefore, should derive fundamentally from our professional ethics and values. Critically, these values focus on the empowerment
of the information user, on equity and on the integrity of the information source – all vital design principles for responsible AI.
A manifesto for responsible AI
I would like to propose a manifesto of sorts – not just for how we respond to AI, but for how we might take a leading role in improving and harnessing it.
Firstly, I would like to see us de-mythologise AI, both for our users and for ourselves. Generative AI is not human-like intelligence, and we ought to arm ourselves with knowledge and understanding in order to assess its capabilities critically.
I think there is a central role for us in helping to train better AI with Open Data. We can help to ensure that there is a mass corpus of reliable, structured and semi-structured data on which to train Large Language Models so that they are less susceptible to bias.
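As a hedged sketch of what that contribution could look like – the catalogue fields below are illustrative, and real work would draw on open standards such as Dublin Core or MARC and on clearly licensed data – structured records can be serialised into clean, factual training text:

```python
# Illustrative catalogue records; real work would use open standards such as
# Dublin Core or MARC, and data released under a clear open licence.
records = [
    {"title": "A Room of One's Own", "creator": "Virginia Woolf", "date": "1929"},
    {"title": "Things Fall Apart", "creator": "Chinua Achebe", "date": "1958"},
]

def to_training_sentence(record):
    """Serialise one structured record as a factual, well-formed sentence."""
    return (f"{record['title']} was written by {record['creator']} "
            f"and first published in {record['date']}.")

corpus_lines = [to_training_sentence(r) for r in records]
print("\n".join(corpus_lines))
```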
We can and we must teach our users algorithmic literacy and computational sense. We are undoubtedly in the middle of a ‘second great literacy project’, akin to the first (reading literacy) in scale and urgency. Any vision of a just and equal society in future depends on empowering all users with these critical literacies.
We must help to teach the next generation of AI common sense, norms and morals so that it is better equipped to engage with the real world. This will require us not to abandon tasks to AI, but to work side by side with it to develop augmented services with a feedback loop that keeps them accountable.
As librarians and information professionals, we can help guide society toward informed AI regulation based on a real understanding
of its capabilities and limitations.
We can and must help our institutions leverage AI for the good of our users, understanding where and how to situate AI,
data and machine learning within the existing technology base of our libraries.
One of the key problems with ‘extreme computation’ is that it is so resource-intensive that it concentrates control of AI within a very small group of corporate entities. As ethical information professionals, we must support the movement to democratise AI and computation as public goods – helping to decentralise AI in the same way that we have the open Web.
And finally, we must help our users and institutions to resist the urge to outsource accountability. We can outsource
computation, the brute-force organisation of data and algorithms, but we must ensure that there is always a
‘person in the loop’ who remains accountable for the outcomes and impacts.
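As a final illustrative sketch – the workflow and names here are assumptions, not a prescribed system – a ‘person in the loop’ can be made explicit in software by refusing to release any generated output until a named member of staff has reviewed it and accepted accountability for it:

```python
# Sketch of a human-in-the-loop gate: generated text cannot be published
# until a named, accountable person has approved it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftAnswer:
    question: str
    generated_text: str
    approved_by: Optional[str] = None  # no accountable person yet

    def approve(self, reviewer: str) -> None:
        """A named person signs off the output and remains accountable for it."""
        self.approved_by = reviewer

    def publish(self) -> str:
        if self.approved_by is None:
            raise PermissionError("No accountable person has reviewed this answer.")
        return f"{self.generated_text}\n(Reviewed by {self.approved_by})"

draft = DraftAnswer(
    question="What are the library's opening hours?",
    generated_text="We are open from 9am to 5pm on weekdays.",
)
draft.approve(reviewer="Duty Librarian")
print(draft.publish())
```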
A better future is possible
I believe that the potential for AI to be a force for good is immense. As librarians we should aspire to be part of the movement to create better AI that is more elegant, less resource-intensive, more open and democratic, and fundamentally more accountable to the societies we serve.
Our approach to AI should first and foremost be positive, optimistic and professional, guided by our
ethics and commitment to empowering our users.
We can and must take a lead in defining a benign and beneficial future role for AI in the lives
of the communities we serve.