
As the needs and expectations of library users change in the digital environment, libraries are trying to find the best ways to define their user communities, understand what they value, and evolve digital library collections and services to meet their demands. In part, this effort requires a closer, more formal look at how library patrons use and respond to online collections and services.

To synthesize and learn from the experiences of leading digital libraries in assessing use and usability of online collections and services, the Digital Library Federation (DLF) undertook a survey of its members. From November 2000 through February 2001, the author conducted interviews with 71 individuals at 24 of the 26 DLF member institutions (representing an 86 percent response rate at the 24 institutions). Participants were asked a standard set of open-ended questions about the kinds of assessments they were conducting; what they did with the results; and what worked well or not so well. Follow-up questions varied, based on the work being done at the institution; in effect, the interviews tracked the efforts and experiences of those being interviewed.

The results of the survey reveal the assessment practices and concerns of leading digital libraries. They are not representative of all library efforts; however, they do show trends that are likely to inform library practice. The study offers a qualitative, rather than quantitative, assessment of issues and practices in usage and usability data gathering, analysis, interpretation, and application.

1.1. Report Structure

The survey indicates significant challenges to assessing use and usability of digital collections and services. The rest of Section 1 summarizes these challenges. Subsequent sections elaborate on these challenges and draw on examples from the assessment efforts of DLF libraries. Sections 2 and 3 describe libraries’ experiences using popular methods to conduct user studies, such as surveys, focus groups, user protocols, and transaction log analysis. The report explains what each of these methods entails, its advantages and disadvantages, why and how libraries use it, the problems encountered, and the lessons libraries have learned from experience. Section 4 covers general issues and challenges in conducting research, including sampling and recruiting representative research subjects, getting Institutional Review Board (IRB) approval to conduct research with human subjects, and preserving user privacy. Section 5 summarizes the conclusions of the study and suggests an agenda for future discussion and research. Appendix A provides a selected bibliography. A list of institutions participating in the survey appears in Appendix B, while Appendix C lists the interview questions. An overview of more traditional library input, output, and outcome assessment efforts, and the impact of digital libraries on these efforts, is provided in Appendix D; this information is designed to help the reader position the information in this report within the context of library assessment practices more generally.

To preserve the anonymity of DLF survey respondents and respect the sensitivity of the research findings, the report does not associate institution names with particular research projects, incidents, or results. The word “faculty” is used to refer to teachers and professors of for-credit academic courses. The word “librarian” is used, regardless of whether librarians have faculty status in their institutions, or, indeed, whether they hold an MLS degree.

1.2. Summary of Challenges in Assessment

DLF respondents shared the following concerns about the efficiency and efficacy of their assessment efforts:

  • Focusing efforts to collect only meaningful, purposeful data
  • Developing the skills to gather, analyze, interpret, present, and use data
  • Developing comprehensive assessment plans
  • Organizing assessment as a core activity
  • Compiling and managing assessment data
  • Acquiring sufficient information about the environment to understand trends in library use

Collecting only meaningful, purposeful data. Libraries are struggling to find the right measures on which to base their decisions. DLF respondents expressed concern that data are being gathered for historical reasons or because they are easy to gather, rather than because they serve useful, articulated purposes. They questioned whether the sheer volume of data being gathered prohibits careful analysis and whether data are being used to their full advantage. Working with data is essential, time-consuming, and costly; so costly that libraries are beginning to question, and in some cases even measure, the costs and benefits of gathering and analyzing different data. Respondents know that they need new measures and composite measures to capture the extent of their activities in both the digital and traditional realms. Adding new measures is prompting many DLF sites to review their data-gathering practices. The libraries are considering, beginning, or completing needs assessments of the data they currently gather, or think they should gather, for internal and external purposes. If such information is not needed for national surveys or not useful for strategic purposes, chances are it will no longer be gathered, or at least not gathered routinely. However, deciding what data should be gathered is fraught with difficulties. Trying to define and measure use of services and collections that are rapidly changing is a challenge. The fact that assessment methods evolve at a much slower rate than do the activities or processes they are intended to assess compounds the problem. How can libraries measure what they do, how much they do, or how well they do it, when the boundaries keep changing?

Developing skills to gather, analyze, interpret, present, and use data. Several DLF respondents commented that they spend a great deal of time gathering data but do not have the time or talent to do anything with this information. Even if libraries gather the right measures for their purposes, developing the requisite skills to analyze, interpret, present, and use the data is a separate challenge. For example, how do you intelligibly present monthly usage reports on 8,000 electronic journals? The answer is that you don’t. Instead, you present the statistics on the top 10 journals, even though this severely limits the dissemination and application of data painstakingly gathered and compiled. Though DLF respondents indicated that they are learning slowly from experience how to make each research method work better for their purposes, many said they need methodological guidance. They need to know what sampling and research methods are available to recruit research subjects and assess use and usability of the digital library, which methods are best suited for which purposes, and how to analyze, interpret, present, and use the quantitative and qualitative data they gather to make effective decisions and strategic plans.
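
Such a top-10 report is straightforward to generate once raw counts are in hand. The sketch below is purely illustrative rather than any DLF site's actual practice; it assumes monthly per-journal counts have been exported to a CSV file with hypothetical "journal" and "uses" columns.

```python
# Illustrative sketch: reduce a month of per-journal usage counts to a top-10 report.
# Assumes a CSV with hypothetical columns "journal" and "uses"; adjust to the
# actual export format of the local e-journal statistics.
import pandas as pd

usage = pd.read_csv("ejournal_usage_2001_01.csv")   # one row per journal per month

top10 = (usage.groupby("journal")["uses"]
              .sum()
              .nlargest(10))                          # ten most-used titles

total = usage["uses"].sum()
print(top10.to_string())
print(f"Top 10 titles account for {top10.sum() / total:.1%} of recorded use")
```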

Developing comprehensive assessment plans. Planning assessment from conception through follow-up also presents challenges. Ideally, the research process should flow seamlessly, from deciding to gather data to developing and implementing plans to use the data. In reality, however, DLF respondents reported frequent breakdowns in this process. Breakdowns occur for a number of reasons. It may be that something went awry in the planning or scheduling of the study. People assigned responsibility for certain steps in the process may lack the requisite skills. Staff turnover or competing priorities may intervene. Respondents also made it clear that the more people involved in the research process, the longer it takes. The longer the process takes, the more likely it is that the results will be out of date, momentum will be lost, or other phenomena will intrude before the results are implemented. Finally, if the study findings go unused, there will be less enthusiasm for the next study, and participation is likely to decrease. This applies both to the people conducting the study and to the research subjects. Conducting a study creates expectations that something will be done with the results. When the results are not applied, morale suffers and human and financial resources are wasted. Participants lose confidence, and the study planners lose credibility.

Organizing assessment as a core activity. DLF respondents well understood that in an environment of rapid change and limited resources, libraries cannot afford these outcomes from their assessment efforts. They also seemed to understand that the way in which an assessment is organized affects the outcome. At some institutions, user studies are centralized and performed by recently hired experts in the field. At others, user studies are decentralized and performed systemwide; they involve efforts to teach librarians and staff throughout the organization how to conduct research using different methods. Still other institutions, sparked by the interests of different personnel, take an ad hoc approach to user studies. A few libraries have established usability testing programs and laboratories. If the goal is a culture of assessment, then making assessment a core activity and allocating human and financial resources to it is essential. The key is not how a study is organized, but that it is organized and supported by commitment from administrators and librarians. Comments from DLF respondents suggested that, given sufficient human and financial resources, requisite skills could be acquired, guidelines and best practices developed, and assessments conducted routinely, efficiently, and effectively enough to keep pace with change.

Compiling and managing assessment data. Many DLF respondents expressed concern about the effort required to compile and manage data collected by different people and different assessments. Libraries need a simple way to record and analyze quantitative and qualitative data and to generate statistical reports and trend lines. Several DLF sites have developed or are developing a management information system (MIS) to compile and manage statistical data. They are wrestling with questions about how long data should be kept, how data should be archived, and whether one system can or should manage data from different kinds of assessments. Existing systems typically have a limited scope. For example, one site has a homegrown desktop reporting tool that enables library staff to generate ad hoc reports from data extracted, and regularly updated, from the integrated library system. Users can query the data and run cross-tabulations. The tool is used for a variety of purposes, including analysis of collection development, materials expenditures, and the productivity of the cataloging department. Reports can be printed, saved, or imported into spreadsheets or other applications for further analysis or manipulation. New systems being developed appear to be more comprehensive; for example, they attempt to assemble statistical data from all library departments. Important features of the new systems include the ability to cross-tabulate data from different departments and to generate graphics and multiyear trend lines easily.
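
The kind of cross-tabulation and trend reporting respondents described can be prototyped with standard data-analysis tools. The following is a minimal sketch, not a description of any site's actual MIS; it assumes departmental statistics have been compiled into a single CSV file with hypothetical "year", "department", "measure", and "count" columns.

```python
# Illustrative sketch: cross-tabulate statistics compiled from different library
# departments and save a multiyear trend line for one measure.
# Assumes a CSV with hypothetical columns: year, department, measure, count.
# Plotting requires matplotlib to be installed alongside pandas.
import pandas as pd

stats = pd.read_csv("library_stats.csv")

# Cross-tabulation: one row per department, one column per measure.
crosstab = pd.pivot_table(stats, index="department", columns="measure",
                          values="count", aggfunc="sum", fill_value=0)
print(crosstab)

# Multiyear trend line for a single (hypothetical) measure.
trend = (stats[stats["measure"] == "reference_questions"]
         .groupby("year")["count"]
         .sum())
ax = trend.plot(kind="line", title="Reference questions per year")
ax.figure.savefig("reference_trend.png")
```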

Acquiring sufficient information about the environment to understand trends in library use. Several DLF respondents noted that emerging new measures will assess how library use is changing in the networked environment, but these measures will not explain why library use is changing. Academic libraries need to know how students and faculty find information, what resources they use that the libraries do not provide, why they use these resources, and what they do with the information after they find it. This knowledge would provide a context for interpreting existing data on shifting patterns of library use and facilitate the development of collections, services, and tools that better meet user needs and expectations. Library user studies naturally focus on the use and usability of library collections, services, and Web sites. The larger environment remains unexplored.

