
Why every librarian needs to be involved in shaping your AI Policy

01 February 2024  

This article is featured on the AI hub.

As the initial hype around ChatGPT and large language models (LLMs) subsides, many organisations and regulators find themselves wrestling with the same question – “now that we better understand the risks and capabilities, how do we harness the potential of AI safely and successfully for our users?”

The answer – according to the BBC and the UK Information Commissioner’s Office (ICO) at least – is that we need to use this brief hiatus in disruptive announcements to develop and share effective policies for the use of AI by staff.

GDPR and AI-assisted decision-making

In their guidance on AI policies and procedures, the ICO sets out the key reasons why every organisation should be developing their AI Policy as a matter of priority:

“Your policies and procedures are important for several reasons, they:

  • help ensure consistency and standardisation
  • clearly set out rules and responsibilities
  • support the creation/adoption of your organisational culture.”

The idea of ‘explainability’ – being able to explain to people how and why an AI-assisted decision about them was made – is central to the ICO approach, and to the legal requirements on organisations. Specifically, the ICO states that the UK General Data Protection Regulation (GDPR) creates specific requirements around the provision of information about, and the explanation of, AI-assisted decisions.

We are currently in a ‘Wild West’ era of AI adoption in organisations, with individual staff frequently trying out AI-assisted tools in their workflows outside their organisation’s IT policies. A February 2023 study by Fishbowl found that as many as 68% of employees in its sample had not disclosed their use of AI to their managers.

As a result, many organisations are carrying a significant degree of risk around the impact of AI on their compliance with GDPR. This is a primary reason why every organisation should develop and communicate its AI Policy before the ICO begins to deploy a stricter regime of controls and, potentially, fines.

Transparency, trust and creativity – Generative AI at the BBC

In October 2023, the BBC published three top-level principles that will shape and drive their use of Generative AI across the whole of their output. The three principles are (in summary):

  • We will always act in the best interests of the public
  • We will always prioritise talent and creativity
  • We will be open and transparent

These principles are important because they lay a foundation which will enable the BBC to develop and share AI Policies that are compatible with their public purpose as defined under their Charter.


At the same time, it puts an important marker down that the BBC’s own use of AI should protect and promote the rights of creatives – absolutely critical at a time when several LLMs have been revealed to have been trained on large volumes of other people’s copyrighted material.


CILIP Employers' Forum

This year's Employers' Forum brings together library and information professionals to hear from industry speakers about how they developed their artificial intelligence policies, and to take part in a workshop to develop your own AI policy.

The librarian as the bridge to ethical AI

CILIP believes that there is a central role for ethical library, information and knowledge management professionals to ‘be the bridge’ between the capabilities of new technology and the needs and rights of information users.

Our professional values mean that we are ideally placed to help organisations harness AI in ways that minimise the risk of exploitation and bias. Looking down the list of potential benefits of adopting an AI Policy, it is clear that supporting policy development should rapidly become a key part of the skillset of our professional community:

An AI Policy will help your organisation (whether your library or information service, or your wider parent institution) to:

  • Ensure GDPR compliance and good practice in the use of data
  • Improve your awareness of the impact of AI on data privacy and security
  • Enable your organisation to adopt ethical approaches to harnessing AI
  • Strengthen trust and confidence for your service users
  • Deliver real improvements in performance while reducing risk
  • Ensure that you can innovate and adapt without creating new categories of risk

Not only this, but the process of developing an AI Policy will help your organisation both to learn more about its own systems and data – its values and ‘red lines’ – and to begin to develop training and support for the staff who will be putting the Policy into practice in their daily work.

In an age in which many library, information and knowledge professionals are being called upon to demonstrate the currency and relevance of our professional skillset, being ready to help our organisations develop a robust and effective AI Policy could be just the way into the hearts and minds of senior managers that we have been looking for.

If you have found this article useful, why not take it further by attending the CILIP Employers’ Forum event ‘Developing your AI Policy’? This practical event will take place in Worcester and online on 28 February, featuring contributions from leading organisations that have been working on AI Policies of their own.


Develop your AI policy

Register now to join the discussion and set yourself and your library up to develop an AI policy at the CILIP Employers’ Forum.


Published: February 2024

