New research maps the extent of web filtering in public libraries

Icons illustrating web being filtered

Following on from Jacqueline Mays’s post on Internet Service Providers (ISPs) and default web filtering, a recently published dataset reveals the extent of content filtering in public libraries in the UK. This builds on the work of the MAIPLE project (Managing Access to the Internet in Public Libraries) and prompts some important questions about the use of content filtering in libraries.

How was the dataset gathered?

Many institutions providing public access to the internet use content filtering software. Along with the ability to block specific URLs (e.g. facebook.com), the software will typically offer the institution a list of categories that can be blocked (e.g. “pornography”, “gambling”, “nudity”). The institution then selects which categories users should be blocked from accessing, and this list can vary between user profiles (e.g. children typically have more categories blocked than adult users).
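The selection process described above can be sketched roughly as follows. This is a minimal illustration only: the category names, profile policies, and site-to-category mapping are invented for the example and do not reflect any real filtering product or vendor database.

```python
# Hypothetical sketch of category-based content filtering with per-profile
# policies. All names and mappings below are illustrative assumptions.

# Categories blocked for each user profile (children typically get more).
BLOCKED_CATEGORIES = {
    "adult": {"pornography", "gambling", "nudity"},
    "child": {"pornography", "gambling", "nudity", "social-media", "chat"},
}

# Site-to-category mapping; in practice this comes from the vendor's database.
SITE_CATEGORIES = {
    "facebook.com": "social-media",
    "example-casino.com": "gambling",
}

# Institutions can also block specific URLs outright, regardless of category.
URL_BLOCKLIST = {"facebook.com"}

def is_blocked(domain: str, profile: str) -> bool:
    """Return True if the domain is blocked for the given user profile."""
    if domain in URL_BLOCKLIST:
        return True
    category = SITE_CATEGORIES.get(domain)
    return category in BLOCKED_CATEGORIES[profile]
```

Under this sketch, `is_blocked("example-casino.com", "adult")` is blocked by category, `is_blocked("facebook.com", "adult")` by the explicit URL blocklist, and an uncategorised site falls through to being allowed.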

Using the Freedom of Information (FOI) Act, volunteers from the Radical Librarians Collective (RLC) contacted over 200 councils in the UK to ascertain which of these categories were blocked in their public libraries. Other information was also gathered in the process, such as the name and cost of the content filtering software used. The requests were made using the website WhatDoTheyKnow (see some examples of the requests made here) and the results were collated and made available as an open dataset.

What are some of the results?

The results are still being analysed in preparation for a journal article. Some of the initial findings are:

  • At least 98% of public libraries filter web content by category.
  • The list of blocked categories differs between councils, and includes categories such as “Abortion”, “LGBT”, “alternative lifestyles”, “questionable”, “tasteless”, “payday loans”, “discrimination”, “self-help” and “sex education”.
  • 56% block specific URLs in addition to categories.
  • Some councils have privatised their IT services, which means they were under no obligation to provide this information (the FOI Act only applies to public authorities) and, indeed, did not.

What are some of the issues this research raises?

As is so often the case, this research raises more questions than it answers. Significant work remains to determine who makes these filtering decisions, and on what basis. Who decides to block abortion websites, and why? Furthermore, work needs to be done to measure the impact these filtering decisions have on users of the network.

During the course of this research it was also revealed that many libraries have no way of anonymously reporting a blocked website or requesting access to it. This means users may have to identify themselves as somebody who wishes to access a certain website, with no clear policy or guidance on how to do so. The imperfect nature of content filtering software, set against the dynamic and ephemeral web, means that such filters can only ever over-block or under-block. Anecdotally, LGBT websites and information websites about sexuality can be erroneously categorised as pornography, and therefore blocked. What does a user do in this situation?

Indeed, should anything be blocked at all? Beyond security-based categories such as malware and phishing, the author of this post questions whether the internet in a library should be controlled in this way. Whilst one can imagine many filtering decisions are made out of a desire to protect children, some of these decisions can also cause their own kind of harm. Rather than give a controlled and filtered world to the public, shouldn’t the world be given to them as it is, along with the necessary skills to navigate and understand it? If there was ever an appropriate place for that to happen, it must surely be the library…

 
