
5. Conclusions and Future Directions


Libraries face five key challenges related to assessment:

  1. Gathering meaningful, purposeful, comparable data
  2. Acquiring methodological guidance and the requisite skills to plan and conduct assessments
  3. Managing assessment data
  4. Organizing assessment as a core activity
  5. Interpreting library trend data in the larger environmental context of user behaviors and constraints

Libraries urgently need statistics and performance measures appropriate to assessing traditional and digital collections and services. They need a way to identify unauthenticated visits to Web sites and digital collections, as well as clear definitions and instructions for compiling composite input and output measures for the hybrid library. They need guidelines for conducting cost-effectiveness and cost-benefit analyses and benchmarks for making decisions. They need instruments to assess whether students are really learning by using the resources libraries provide. They need reliable, comparative, quantitative baseline data across disciplines and institutions as a context for interpreting qualitative and quantitative data indicative of what is happening locally. They need assessments of significant environmental factors that may be influencing library use in order to interpret trend data. To facilitate comparative assessments of resources provided by the library, by commercial vendors, and by other information service providers, DLF respondents commented that they need a central reporting mechanism, standard definitions, and national guidelines that have been developed and tested by librarians, not by university administrators or representatives of accreditation or other outside agencies.
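
The need named above to identify unauthenticated visits to Web sites and digital collections is, in practice, a log-analysis problem: individual requests recorded by the Web server must be grouped into visits before they can be counted. The sketch below illustrates one common heuristic, grouping requests by client address and treating a gap of more than 30 minutes as a new visit; the sample records, field layout, and 30-minute threshold are illustrative assumptions, not a standard definition of a virtual visit.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified access-log records: (client address, request time).
# Real logs would be parsed from the server's own log format; authenticated
# sessions could be keyed on user ID instead of client address.
requests = [
    ("192.0.2.10", datetime(2001, 10, 1, 9, 0)),
    ("192.0.2.10", datetime(2001, 10, 1, 9, 12)),
    ("192.0.2.10", datetime(2001, 10, 1, 11, 5)),   # gap > 30 minutes: a new visit
    ("192.0.2.44", datetime(2001, 10, 1, 9, 3)),
]

SESSION_GAP = timedelta(minutes=30)  # assumed cutoff between visits

def count_virtual_visits(reqs):
    """Group requests by client and count gaps longer than SESSION_GAP as new visits."""
    last_seen, visits = {}, 0
    for client, timestamp in sorted(reqs):
        if client not in last_seen or timestamp - last_seen[client] > SESSION_GAP:
            visits += 1
        last_seen[client] = timestamp
    return visits

print(count_virtual_visits(requests))  # prints 3 for the sample records above
```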

Aggressive efforts are under way to satisfy all of these needs. For example, the International Coalition of Library Consortia’s (ICOLC) work to standardize vendor-supplied data is making headway. The Association of Research Libraries’ (ARL) E-metrics and LIBQUAL+ efforts are standardizing new statistics, performance measures, and research instruments. Collaboration with other national organizations, including the National Center for Education Statistics (NCES) and the National Information Standards Organization (NISO), shows promise for coordinating standardized measures across all types of libraries. ARL’s foray into assessing costs and learning and research outcomes could provide standards, tools, and guidelines for these much-needed activities as well. Their plans to expand LIBQUAL+ to assess digital library service quality and to link digital library measures to institutional goals and objectives are likely to further enhance standardization, instrumentation, and understanding of library performance in relation to institutional outcomes. ARL serves as the central reporting mechanism and generator of publicly available trend data for large research libraries. A similar mechanism is needed to compile new measures and disseminate trend data for other library cohort groups.

Meanwhile, libraries have diverse assessment practices and sometimes experience failure or only partial success in their assessment efforts. Some DLF respondents expressed dismay at the pace of progress in the development of new measures. The pace is slower than libraries might like, in the context of the urgency of their need, because developing and standardizing assessment of current library resources, resource use, and performance is very difficult. Libraries are in transition. It is hard to define, let alone standardize, what libraries do, or to measure how much they do or how well they do it, because what they do is constantly changing. Deciding what data to collect and how to collect them is difficult because library collections and services are evolving rapidly. New media and methods of delivery evolve at the pace of technological change, which, according to Raymond Kurzweil (2000), doubles every decade.8 The methods for assessing new resource delivery evolve at a slower rate than do the resources themselves. This is the essential challenge and rationale for the efforts of ARL, ICOLC, and other organizations to design and standardize appropriate new measures for digital libraries. It also explains the difficulties involved in developing good trend data and comparative measures. Even if all libraries adopted new measures as soon as they became available, comparing the data would be difficult because libraries evolve on different paths and at different rates, and offer different services or venues for service. Given the context of rapid, constant change and diversity, the new measures initiatives are essential and commendable. Without efforts on a national scale to develop and field test new measures and build a consensus, libraries would hesitate to invest in new measures. Just as the absence of community agreement about digitization and metadata standards is an impediment to libraries that would otherwise digitize some of their collections, lack of community agreement about appropriate new measures is an impediment to investing in assessment.

Despite the difficulties, substantial progress is being made. Consensus is being achieved. Libraries are slowly adopting composite measures, such as those developed by John Carlo Bertot, Charles McClure, and Joe Ryan, to capture traditional and digital library inputs, outputs, and performance.9 Examples include the following (a brief computational sketch follows the list):

  • Total library visits = total gate counts + total virtual visits
  • Percentage of total library visits that are virtual
  • Total library materials use = total circulation + total in-house use of materials + total full-text electronic resources viewed or downloaded
  • Percentage of total library materials used in electronic format
  • Total reference activity = total in-person transactions + total telephone transactions + total virtual (for example, e-mail, chat) transactions
  • Percentage of total reference activity conducted in virtual format
  • Total serials collection = total print journal titles + total e-journal titles
  • Percentage of total serials collection available in electronic format
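
As a minimal illustration of how these composite measures combine traditional and digital counts, the sketch below computes each total and percentage from a set of hypothetical local counts. The field names and figures are invented for illustration and are not part of the Bertot, McClure, and Ryan specification.

```python
# Hypothetical annual counts; every value below is illustrative only.
counts = {
    "gate_count": 412_000,           # physical visits (gate or turnstile counts)
    "virtual_visits": 268_000,       # visits to the Web site and digital collections
    "circulation": 155_000,
    "in_house_use": 48_000,
    "full_text_views": 310_000,      # electronic full-text items viewed or downloaded
    "in_person_reference": 21_000,
    "telephone_reference": 6_500,
    "virtual_reference": 4_200,      # e-mail and chat transactions
    "print_journal_titles": 14_800,
    "e_journal_titles": 9_600,
}

total_visits = counts["gate_count"] + counts["virtual_visits"]
pct_virtual_visits = counts["virtual_visits"] / total_visits * 100

total_materials_use = (counts["circulation"] + counts["in_house_use"]
                       + counts["full_text_views"])
pct_electronic_use = counts["full_text_views"] / total_materials_use * 100

total_reference = (counts["in_person_reference"] + counts["telephone_reference"]
                   + counts["virtual_reference"])
pct_virtual_reference = counts["virtual_reference"] / total_reference * 100

total_serials = counts["print_journal_titles"] + counts["e_journal_titles"]
pct_e_serials = counts["e_journal_titles"] / total_serials * 100

print(f"Total library visits: {total_visits:,} ({pct_virtual_visits:.1f}% virtual)")
print(f"Total materials use: {total_materials_use:,} ({pct_electronic_use:.1f}% electronic)")
print(f"Total reference activity: {total_reference:,} ({pct_virtual_reference:.1f}% virtual)")
print(f"Total serials collection: {total_serials:,} ({pct_e_serials:.1f}% electronic)")
```

Repeating the same calculations each year yields the trend lines discussed below.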

Analysis of composite measures over time will provide a more comprehensive picture of what is happening in libraries and will enable libraries to present more persuasive cases to university administrators and other funders to support libraries and their digital initiatives. Perhaps a lesson learned in system development applies here. Interoperability is possible when a limited subset of metadata tags and service offerings is supported. In the context of assessment, a limited subset of statistics and performance measures could facilitate comparison yet also allow for local variations and investments. ARL is taking this approach in its effort to develop a small set of core statistics for vendor products.

Reaching a consensus on even a minimum common denominator set of new statistics and performance measures would be a big step forward, but libraries also need methodological guidance and training in the requisite skills. Practical manuals and workshops, developed by libraries for libraries, that describe how to gather, analyze, interpret, present, and apply data to decision making and strategic planning would facilitate assessment and increase return on the investment in assessment. ARL is producing such a manual for E-metrics. The manual will provide the definition of each measure, its rationale, and instructions for how to collect the data. ARL also offers workshops, Systems and Procedures Exchange Center (SPEC) kits, and publications that facilitate skill development and provide models for gathering, analyzing, and interpreting data. However, even if libraries take advantage of ARL’s current and forthcoming offerings, comments from DLF respondents indicate that gaps remain in several areas.

“How-to” manuals and workshops are greatly needed in the area of user studies. Although DLF libraries are conducting a number of user studies, many respondents asked for assistance. Manuals and workshops developed by libraries for libraries that cover the popular assessment methods (surveys, focus groups, and user protocols) and the less well-known but powerful and cost-effective discount usability testing methods (heuristic evaluations and paper prototypes and scenarios) would go a long way toward providing such guidance. A helpful manual or workshop would

  • Define the method
  • Describe its advantages and disadvantages
  • Provide instruction in how to develop the research instruments and gather and analyze the data
  • Include sample research instruments proven successful in field testing
  • Include sample quantitative and qualitative results, along with how they were interpreted, presented, and applied to realistic library concerns
  • Include sample budgets, time lines, and workflows

Standard, field-tested research instruments for such things as OPAC user protocols or focus groups to determine priority features and functionality for digital image collections would enable comparisons across libraries and avoid the cost of duplicated efforts in developing and testing the instruments. Similarly, budgets, time lines, and workflows derived from real experience would reduce the cost of trial-and-error efforts replicated at each institution.

The results of the DLF study also indicate that libraries would benefit from manuals and workshops that provide instruction in the entire research process, from conception through implementation of the results, particularly if attention were drawn to key decision points, potential pitfalls, and the skills needed at each step of the process. Recommended procedures and tools for analyzing, interpreting, and presenting quantitative and qualitative data would be helpful, as would guidance in how to turn research findings into action plans. Many libraries have already learned a great deal through trial and error and through investments in training and professional development. Synthesizing and packaging their knowledge and expertise in the form of guidelines or best practices and disseminating it to the broader library community could go a long way toward removing impediments to conducting user studies and would increase the yield of studies conducted.

Transaction log analysis (TLA) presents a slightly different set of issues because the data are not all under the control of the library. Through the efforts of ICOLC and ARL, progress is being made on standardizing the data points to be delivered by vendors of database resources. ARL’s forthcoming instruction manual on E-metrics will address procedures for handling these vendor statistics. Similar work remains to be done with OPAC and ILS vendors and vendors of full-text digital collections. Usage statistics that libraries manage themselves for their Web sites, local databases, and digital collections present a third source of TLA data. The use of different TLA software, uncertainty or inconsistency in how data points are defined and counted, and needed analyses that some of the software does not support all complicate data gathering and comparative analysis of the use of these different resources. Work must be done to coordinate efforts on all these fronts to facilitate comparative assessments of resources provided by the library, commercial vendors, and other information service providers.
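
Because each vendor labels and defines its usage data differently, a common first step in compiling TLA data is to map every vendor export onto a shared set of field names before any comparison is attempted. The sketch below illustrates that normalization step; the file names, column headings, and field mappings are hypothetical and do not reflect any actual vendor format or any ICOLC or ARL specification.

```python
import csv
from pathlib import Path

# Hypothetical mappings from each vendor's column names to a shared schema.
# The vendor files and field names are illustrative only.
FIELD_MAPS = {
    "vendor_a.csv": {"Database": "database", "Searches": "queries", "FT Downloads": "full_text"},
    "vendor_b.csv": {"Resource": "database", "Queries Run": "queries", "Articles Viewed": "full_text"},
}

def normalize(path, field_map):
    """Read one vendor export and rename its columns to the shared schema."""
    rows = []
    with path.open(newline="") as f:
        for raw in csv.DictReader(f):
            rows.append({common: raw.get(vendor_col, "")
                         for vendor_col, common in field_map.items()})
    return rows

combined = []
for filename, field_map in FIELD_MAPS.items():
    path = Path("vendor_stats") / filename
    if path.exists():                      # skip reports not yet delivered
        combined.extend(normalize(path, field_map))

# Once normalized, per-database query totals can be compared across vendors.
totals = {}
for row in combined:
    totals[row["database"]] = totals.get(row["database"], 0) + int(row["queries"] or 0)
print(totals)
```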

In the meantime, libraries could benefit from guidance on how to compile, interpret, present, and use the TLA data they do have. For example, DLF libraries have taken different approaches to compiling and presenting vendor data. A study of these approaches and the costs and benefits of each approach would be instructive. Case studies of additional research conducted to provide a context for interpreting and using TLA data would likewise be informative. For example, what does the increasing or decreasing number of queries of licensed databases mean? Is an increase necessarily a good thing and a decrease necessarily a bad thing? Does a decrease indicate a poor financial investment? Could a decrease in the number of queries simply mean that users have become better searchers? What do low-use or no-use Web pages mean? Poor Web site design? Or wasted resources producing pages of information that no one needs? Libraries would benefit if those who have gathered data to help answer these questions would share what they have learned.

The issue of compiling assessment data is related to managing the data and generating trend lines over time. Libraries need a simplified way to record and analyze input and output data on traditional and digital collections and services, as well as an easy way to generate statistical reports and trend lines. Several DLF libraries reported conducting needs assessments for library statistics in their institutions, eliminating data-gathering practices that did not address strategic concerns or were not required for internal or external audiences. They also mentioned plans to develop a homegrown management information system (MIS) that supports the data manipulations they want to perform and provides the tools to generate the graphics they want to present. Designing and developing an MIS could take years, not counting the effort required to train staff to use the system and secure their commitment to using it. Only time will tell whether the benefits to individual libraries will exceed the cost of creating these homegrown systems.
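
At its simplest, the kind of MIS described above is a table of counts per measure per year and a routine that reports year-over-year change. The sketch below illustrates that minimal core using an in-memory SQLite table; the table layout, measure names, and figures are hypothetical and are not a proposed MIS design.

```python
import sqlite3

# Minimal sketch of a statistics store for trend reporting. The schema and
# sample data are illustrative assumptions, not an MIS specification.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE library_stats (
        fiscal_year INTEGER,
        measure     TEXT,     -- e.g., 'virtual_visits', 'gate_count'
        value       INTEGER
    )
""")
conn.executemany(
    "INSERT INTO library_stats VALUES (?, ?, ?)",
    [(1999, "virtual_visits", 180_000), (2000, "virtual_visits", 232_000),
     (2001, "virtual_visits", 268_000), (1999, "gate_count", 455_000),
     (2000, "gate_count", 430_000), (2001, "gate_count", 412_000)],
)

def trend(measure):
    """Return (year, value, percent change from the prior year) for one measure."""
    rows = conn.execute(
        "SELECT fiscal_year, value FROM library_stats "
        "WHERE measure = ? ORDER BY fiscal_year", (measure,)).fetchall()
    report, previous = [], None
    for year, value in rows:
        change = None if previous is None else (value - previous) / previous * 100
        report.append((year, value, change))
        previous = value
    return report

for year, value, change in trend("virtual_visits"):
    note = "" if change is None else f" ({change:+.1f}% from prior year)"
    print(f"{year}: {value:,}{note}")
```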

The fact that multiple libraries are engaged in this activity suggests a serious common need. One wonders why a commercial library automation vendor has not yet marketed a product that manages, analyzes, and graphically presents library data. The local costs of gathering, compiling, analyzing, managing, and presenting quantitative data in effective ways, not to mention the cost of training and professional development required to accomplish these tasks, could exceed the cost of purchasing a commercial library data management system, were such a system available. The market for such a system would probably be large enough that a vendor savvy enough to make it affordable could also make it profitable. Such a system would reduce the effort librarians must spend compiling, managing, and presenting data, and the resulting savings could offset the cost of purchasing it. The specifications and experiences of libraries engaged in creating their own MIS could be used to develop specifications for the design of a commercial MIS. Building a consensus within the profession for the specification and marketing it to library automation vendors could yield collaborative development of a useful, affordable system. Admittedly, the success of such a system depends in part on the entry and verification of correct data, but this issue could begin to resolve itself, given standard data points and a system, designed by libraries for libraries, that saves resources and contributes to strategic planning.

The results of the DLF study suggest that, in many cases, individual libraries are collecting data without really having the will, organizational capacity, or interest to interpret and use the data effectively in library planning. Libraries have been slow to standardize definitions and assessment methods, develop guidelines and best practices, and provide the benchmarks necessary to compare the results of assessments across institutions. These problems are no doubt related to the fact that library use and library roles are in continuous transition. The development of skills and methods cannot keep pace with the changing environment. The problems may also be related to the internal organization of libraries. Comments from DLF respondents indicate that the internal organization of many libraries does not facilitate the gathering, analysis, management, and strategic use of assessment data. The result is a kind of purposeless data collection that has little hope of serving as a foundation for the development of guidelines, best practices, or benchmarks. The profession could benefit from case studies of those libraries that have conducted research efficiently and applied the results effectively. Understanding how these institutions created a program of assessment (how they integrated assessment into daily library operations, how they organized the effort, how they secured commitment of human and financial resources, and what human and financial resources they committed) would be helpful to the many libraries currently taking an ad hoc approach to assessment and struggling to organize their effort. Including budgets and workflows for the assessment program would enhance the utility of such case studies.

Efforts to enhance research skills, to conduct and use the results of assessments, to compile and manage assessment data, and to organize assessment as a core library activity all shed light on how libraries and library use are changing. What remains to be known is why libraries and library use are changing. To date, speculation and intuition have been employed to interpret known trends; however, careful interpretation of the data requires knowledge of the larger context within which libraries operate. Many DLF respondents expressed a need to know what information students and faculty use, why they use this information, and what they do or want to do when they need information or when they find information. Respondents acknowledged that these behaviors, including use of the library, are constrained by changes on and beyond the campus, including the following:

  • Changes in the habits, needs, and preferences of users; for example, undergraduate students now turn to a Web search engine instead of the library when they need information
  • Changes in the curriculum; for example, elimination of research papers or other assignments that require library use, distance education courses, or the use of course packs and course management software that bundle materials that might otherwise have been found in the library
  • Changes in the technological infrastructure; for example, penetration and ownership of personal networked computers, network bandwidth, or wireless capabilities on university and college campuses that enable users to enter the networked world of information without going through pathways established by the library
  • Use of competing information service providers; for example, Ask-A services, Questia, Web sites such as LibrarySpot, or the Web in general

In response to this widespread need to know, the Digital Library Federation, selected library directors, and Outsell, Inc., have designed a study to examine the information-seeking and usage behaviors of academic users. The study will survey several thousand students and faculty in different disciplines and different types of institutions to begin to understand how they perceive and use the broader information landscape. The study will provide a framework for understanding how academics find and use information (regardless of whether the information is provided by libraries), examine changing patterns of use in relation to changing environmental factors, identify gaps where user needs are not being met, and develop baseline and trend data to help libraries with strategic planning and resource allocation. The findings will help libraries focus their efforts on current and emerging needs and expectations of academic users, evaluate their current position in the information landscape, and plan their future collections, services, and roles on campus on the basis of an informed, rather than speculative, understanding of academic users and uses of information.10

Based on the results of the DLF study, the recommended next steps are the collaborative production and dissemination of the following:

  • E-metrics lite: a limited subset of digital library statistics and performance measures to facilitate gathering baseline data and enable comparisons
  • How-to manuals and workshops for
    • conducting research in general, with special emphasis on planning and commitment of resources
    • conducting and using the results of surveys, focus groups, user protocols, and discount usability studies, with special emphasis on field-tested instruments, time lines, budgets, workflows, and requisite skills
  • Case studies of
    • the costs and benefits of different approaches to compiling, presenting, interpreting, and using vendor TLA data in strategic planning
    • how institutions successfully organized assessment as a core library activity
  • A specification for the design and functionality of an MIS to capture traditional and digital library data and generate composite measures, trend data, and effective graphical presentations

Libraries today are clearly needy. In the face of rampant need and rapid change, their ingenuity and diligence are remarkable. Where no path has been charted, they carve a course. Where no light shines, they strike a match. They articulate what they need to serve users and their institutional mission, and if no one provides what they need, they provide it themselves, ad hoc perhaps, but for the most part functional. In search of high quality, they know when to settle for good enough: good-enough data, good-enough research and sampling methods, good enough to be cost-effective, good enough to be beneficial to users. In the absence of standards, guidelines, benchmarks, and adequate budgets, libraries work to uphold the core values of personal service and equitable access in the digital environment. Collaboration and dissemination may be the keys to current and future success.


FOOTNOTES

8 Kurzweil is founder and chief technology officer, Kurzweil Applied Intelligence, and founder and chief executive officer, Kurzweil Educational Systems.

9 The measures were developed for public library network services, but are equally suited to academic libraries. See Statistics and Performance Measures for Public Library Network Services. 2000. Chicago: American Library Association.

10 The research proposal and plans are available at http://www.diglib.org/use/grantpub.pdf.

