Where is library technology heading in the next few years? What are the emerging tools and technologies we should be paying attention to, so that we are ready when the time is right to adopt them? Those are the questions contributors to The Top Technologies Every Librarian Needs to Know were asked to address.
Read on for a description of some of the technologies included in the book and what they mean for libraries.
Augmented Reality, or AR, is technology that provides digital overlays to reality that add information. Google’s Glass eyewear is perhaps the most commonly known example of this technology, but AR applications exist for smart phones as well.
There are a number of inexpensive tools that libraries can provide their patrons to help them in their research and use of the library’s physical resources. They might help guide a user to the right section of the stacks, or provide additional information to the individual as they conduct their research.
At the simplest end of the spectrum, public libraries could place QR Codes – graphical symbols that, when photographed with an appropriate application on a smart phone, open a specified link in a web browser – around the building to provide additional information about physical spaces in the library: for example, descriptions of the kinds of items in a range of shelves, or details about an artwork on the wall. Smart phone applications like Layar (available for iPhone and Android) take a photograph of a physical object and return “layers” of information about it. If you take a photo of the U.S. edition of the book with the Layar application, you receive additional information about the book.
Other AR tools can aid research in libraries. An example of this is the SCARLET Project, a JISC-funded initiative developed at the University of Manchester. When users of this tool read digitized materials, the SCARLET tool provides additional information about the document (text, images, audio, etc.) to enrich the experience.
Augmented Reality has additional uses for libraries with local history or other special collections. Using applications like Layar, a history buff could take a picture of a building and see, superimposed over it, links to documents about historical events or people connected with the building. Or, see photographs of that same location (using GPS information) as it looked decades before.
One of the most essential tools libraries offer to researchers is the research database -- the many products created to amass the publications a researcher might want to consult, each with its own search interface.
Discovery has evolved from primarily independent “ponds” of data -- separate databases, each individually maintained and with its own unique interface -- to oceans of bibliographic records and full-text articles. We started the “ocean” phase with federated search (often called metasearch), in which multiple independent databases were searched at the same time and a combined set of results returned.
We have recently seen the emergence of web-scale discovery systems, vast single indexes of the content from myriad smaller database tools. The trend we are seeing now is the move to streams of information, tailored dynamically, in a context-aware way, to the information need of the researcher.
For example, tools like Summon offer RSS feeds for any search conducted. A “null search” – click the search button without typing anything – brings up everything the library is entitled to. From there, you can use facets to create a subject and date search (say, everything from three months ago to the present) for peer-reviewed materials. You can then use the RSS feed to find new materials, or simply bookmark the page.
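To make the RSS workflow concrete, here is a minimal sketch of how a script could watch such a feed for new materials. The feed content and fields below are invented for illustration and do not reflect Summon's actual output; any RSS 2.0 feed parses the same way with Python's standard library.

```python
import xml.etree.ElementTree as ET

# A tiny sample of the kind of RSS a discovery tool might return
# for a saved search (this sample is illustrative, not real output).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Saved search: peer-reviewed, last 3 months</title>
    <item>
      <title>New article on metadata quality</title>
      <link>https://example.org/articles/1</link>
    </item>
    <item>
      <title>Discovery layers in academic libraries</title>
      <link>https://example.org/articles/2</link>
    </item>
  </channel>
</rss>"""

def feed_items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = feed_items(SAMPLE_FEED)
```

In practice the script would fetch the feed URL on a schedule and compare the links against those it has already seen, surfacing only the new arrivals.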
More advanced searches might use ISSNs of journals to focus in on very specific subject areas and keywords, making searches exceptionally precise, while still fostering a bit of serendipity through the catchall approach discovery engines take.
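The ISSN-plus-keyword approach can be sketched as a simple filter over bibliographic records. The record structure, titles, and ISSN values below are hypothetical stand-ins; a real discovery API would return richer metadata, but the filtering logic is the same.

```python
# Hypothetical bibliographic records; real discovery APIs return
# richer metadata, but the filtering idea is identical.
records = [
    {"title": "Linked data for libraries", "issn": "1111-2222", "keywords": ["linked data"]},
    {"title": "Shelf-reading automation",  "issn": "3333-4444", "keywords": ["automation"]},
    {"title": "Metadata crosswalks",       "issn": "1111-2222", "keywords": ["metadata"]},
]

# ISSNs of journals covering the subject area we care about.
TARGET_ISSNS = {"1111-2222"}

def focused_search(records, issns, keyword=None):
    """Keep records from the target journals, optionally narrowed by keyword."""
    hits = [r for r in records if r["issn"] in issns]
    if keyword:
        hits = [r for r in hits if keyword in r["keywords"]]
    return hits

journal_hits = focused_search(records, TARGET_ISSNS)
narrow_hits = focused_search(records, TARGET_ISSNS, keyword="metadata")
```

The ISSN set does the precision work, while leaving the keyword off (or broad) preserves some of the serendipity the catchall index provides.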
A handful of projects over the past decade have involved the mass digitization of books. Google’s project is perhaps the best known, but others have been undertaken by Microsoft, the Internet Archive, and, at smaller scales, by library consortia and individual libraries. The recent availability of large collections of scanned, digitized, and OCR-processed books has led to several interesting and groundbreaking changes.
The first is the largest collection of digitized books, the HathiTrust, which now holds almost 12.5 million volumes total, 4.5 million of them in the public domain. Now that a significant number of open-access and public domain books exists, libraries can begin to reassess how much immediate access to their physical collections is really needed. In many cases, a digital copy serves researchers’ needs. This means that libraries can coordinate storage of single copies of many titles, for long-term preservation and access to the original, while providing digital access to the text through the HathiTrust. Farther down the road, improved search engines will be able to find books that match abstract criteria, as they become more adept at discerning characteristics of text rather than just identifying words on the page.
In parallel with the rise of large-scale corpora of digitized text is the development of tools to analyze, sift, and sort them. These new, open-source text-mining tools are opening up new avenues for scholarly inquiry, particularly in the humanities. When a scholar can search across large numbers of works by contemporary authors and analyze dozens, hundreds, or thousands of books for similar phrasings, word selections, or any other characteristic, what can be learned?
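One common text-mining technique behind questions like "which authors share phrasings?" is comparing n-grams (short runs of consecutive words) across works. Here is a minimal sketch using a toy two-book corpus; the book texts are invented, and real projects would use dedicated toolkits and much larger collections, but the core idea is the same.

```python
from collections import Counter
import re

# A toy corpus standing in for digitized, OCR-processed volumes.
corpus = {
    "Book A": "It was a dark and stormy night. The night was long.",
    "Book B": "A dark and stormy night began the tale.",
}

def ngram_counts(text, n=3):
    """Count each run of n consecutive words (a word n-gram) in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

# Which trigrams appear in both books? Shared phrasings like these are
# one starting point for studying influence or common style.
shared = set(ngram_counts(corpus["Book A"])) & set(ngram_counts(corpus["Book B"]))
```

Scaled up across thousands of volumes, the same comparison surfaces recurring phrases, borrowings, and stylistic fingerprints that no single reader could track.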
The theme of “open” runs through these technologies. Perhaps the most intriguing is the rise of open hardware – that is, commodity-priced computer chips that can be easily programmed and networked together to bring low-cost computing power into the library.
Parallel to open-source software (software that is freely available for modification and adaptation), open-source hardware is on the verge of changing computing in general.
Rather than purchasing expensive, vendor-provided hardware for counting traffic through the library’s front door, for less than $100 a library could build a small sensor that did the same thing. A network of such sensors, using cheap hardware and software downloaded from sharing sites, could provide detailed information about what parts of the library are used during what time of the day, without using library staff to patrol and count heads.
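Once a DIY sensor is logging door crossings, the analysis side is straightforward. The sketch below assumes a hypothetical sensor that records one ISO-format timestamp per crossing (the event format and values here are invented) and tallies traffic by hour of the day with only the Python standard library.

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamped events from a DIY door sensor
# (e.g. a cheap microcontroller with a break-beam sensor).
events = [
    "2014-06-02T09:15:00",
    "2014-06-02T09:47:00",
    "2014-06-02T13:05:00",
    "2014-06-02T13:30:00",
    "2014-06-02T13:59:00",
]

def traffic_by_hour(events):
    """Tally door crossings per hour of the day."""
    return Counter(datetime.fromisoformat(ts).hour for ts in events)

counts = traffic_by_hour(events)
```

With one counter per entrance or wing, the same tally grouped by location would show which parts of the library are busiest at which times, with no staff walking the floor.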
What do you think are the key trends in library technology? Let us know in the comments below.