
APPENDIX D Traditional Input, Output, and Outcome Measures


The body of this report focuses on studies of users and electronic resource usage because these were the areas that the Digital Library Federation (DLF) survey respondents spent most of their time discussing during the interviews. Putting these issues in the foreground, however, is somewhat misleading, because libraries have traditionally gathered and continue to gather statistics related to the size, use, and impact of all of their collections and services. These traditional measures are being expanded to embrace digital library activities in order to capture the full scope of library performance. This expansion is problematic for reasons already acknowledged; for example, because libraries are in transition and standard definitions and reporting mechanisms are not yet fully established. Nevertheless, substantial progress is being made through the efforts of groups such as the Association of Research Libraries (ARL), which are undertaking large projects to field-test and refine new measures.

This appendix describes what DLF respondents reported about their input, output, and outcome measures to indicate the full scope of their assessment practices and to provide a context in which to interpret both the design and the results of the user and usage studies presented in the body of this report. The treatment is uneven in detail because the responses were uneven. Many respondents talked at great length about some topics, such as the use of reference services. In other cases, respondents mentioned a measure and brushed over it in a sentence. The unevenness of the discussion suggests where major difficulties or significant activity exists. As much as possible, the approach follows that used in the body of this report: What is the measure? Why is it gathered? How are the data used? What challenges do libraries face with it?

1. Input and Output Measures

Traditional measures quantify a library’s raw materials or potential to meet user needs (inputs) and the actual use of library collections and services (outputs). Input and output statistics reveal changes in what libraries do over time. For example, they provide a longitudinal look at the number of books purchased and circulated per year. Traditional approaches to measuring inputs and outputs focus on physical library resources. Libraries are slowly building a consensus on what to measure and how to measure inputs and outputs in the digital environment. The goal is standard definitions that facilitate gathering digital library data that can be compared with traditional library data from a library’s own institution and from other institutions. Developing such standards is difficult for many reasons, not the least of which is the basic fact of digital library life addressed in the transaction log analysis section of this report: much of the data are provided by vendor systems or software packages that capture and count transactions differently and do not always provide the statistics that libraries prefer. Though the form of the problem is new in the sense that the data are provided by units not controlled by the library, the problem itself is not. Even in the traditional library environment, definitions were not uniform. Comparison and interpretation were complicated by contextual factors such as the length of circulation loan periods and institutional missions that shaped library statistics and performance.

1.1. Input Measures: Collection, Staff, and Budget Sizes

Libraries have traditionally gathered statistics and monitored trends in the size of their collections, staff, and budgets. Collection data are gathered in an excruciating level of detail; for example, the number of monographs, current serials, videos and films, microforms, CDs, software, maps, musical scores, and even the number of linear feet of archival materials. The data are used to track the total size of collections and collection growth per year. Typically, the integrated library management system (ILS) generates reports that provide collection data. Staff sizes are traditionally tracked in two categories: professionals (librarians) and support staff. The library’s business manager or human resources officer provides these data. The business manager tracks budgets for salaries, materials, and general operation of the library. DLF respondents indicated that collection, staff, and budget data are used primarily to meet reporting obligations to national organizations such as ARL and the Association of College and Research Libraries (ACRL), which monitor library trends. Ratios are compiled to assess such things as the number of new volumes added per student or full-time faculty member, which reveals the impact of the economic crisis in scholarly communication on library collections.

New measures are being developed to capture the size of the digital library as an indication of the library’s potential to meet user needs for electronic resources. DLF respondents reported using the following digital library input measures:

  • Number of links on the library Web site
  • Number of pages in the library Web site
  • Number of licensed and locally maintained databases
  • Number of licensed and locally maintained e-journals
  • Number of licensed and locally maintained e-books
  • Number of locally maintained digital collections
  • Number of images in locally maintained digital collections
  • Total file size of locally maintained databases and digital collections

Whether libraries also count the number of e-journals, e-books, or digital collections that they link to for free is unclear. Some of these measures can be combined with traditional collection statistics to reveal the libraries’ total collection size (for example, the number of physical monographs plus the number of e-books) and trends in electronic collection growth. DLF respondents indicated that they were beginning to capture the following composite performance measures:

  • Percentage of book collection available electronically
  • Percentage of journal collection available electronically
  • Percentage of reserves collection available electronically
  • Percentage of the materials budget spent on e-resources
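
These composite measures reduce to simple ratios once the underlying counts are assembled. The following sketch is purely illustrative: the counts and budget figures are invented, and in practice each would come from the ILS, vendor title lists, and the library's financial records.

```python
# Hypothetical counts; real figures would come from the ILS, vendor title lists,
# and locally maintained collection inventories.
physical_monographs = 1_200_000
e_books = 150_000
print_journal_titles = 18_000
e_journal_titles = 12_000
materials_budget = 8_000_000.00
e_resource_spending = 2_400_000.00

# Composite performance measures expressed as percentages.
pct_books_electronic = 100 * e_books / (physical_monographs + e_books)
pct_journals_electronic = 100 * e_journal_titles / (print_journal_titles + e_journal_titles)
pct_budget_electronic = 100 * e_resource_spending / materials_budget

print(f"Books available electronically:        {pct_books_electronic:.1f}%")    # 11.1%
print(f"Journals available electronically:     {pct_journals_electronic:.1f}%")  # 40.0%
print(f"Materials budget spent on e-resources: {pct_budget_electronic:.1f}%")    # 30.0%
```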

In many cases, baseline data are being gathered. Little historical data are available to assess trends within an institution. Even if multiyear data are available, libraries have had no way to compare their efforts with those of their peer institutions, because there is no central reporting mechanism for digital library input measures. ARL will soon begin gathering such e-metrics, but other reporting organizations appear to be further behind in this regard.

DLF respondents talked about the difficulty of compiling these data. The data reside in different units within the library, and the systems that these units use do not support this kind of data gathering and reporting. The upshot is a labor-intensive effort to collect, consolidate, and manage the statistics. ARL’s E-Metrics Phase II Report, Measures and Statistics for Research Library Networked Services, describes the related issue of “the organizational structure needed to manage electronic resources and services, particularly the configuration of personnel and workflow to support the collection of statistics and measures.”1 Interpreting these data is also an issue. For example, what does it mean if the number of pages on the library Web site shrinks following a major redesign of the site? Just as traditional input measures seemed to assume that more books were better than fewer books, should libraries assume that more Web pages are necessarily better than fewer Web pages? DLF respondents did not think so. User studies and an interpretive framework based on a study of key factors in the larger environment are needed to interpret the data.

Some DLF respondents commented on trends in staff and budget sizes. They talked about hiring more technical staff (technicians, system managers, programmers) and other personnel (interface designers, human factors researchers) needed to support digital library initiatives. These positions are funded primarily by eliminating open positions because personnel budgets do not accommodate adding positions. At the time the DLF interviews were conducted, there was a crisis in hiring information technology (IT) personnel in higher education because salaries were not competitive with those in the corporate sector.2 The situation was even more urgent for academic libraries, which often could not compete with IT salaries even within their institution. The recent folding of many dot-coms might make higher education salaries more competitive and facilitate filling these positions, but unless the inequity in IT salaries within an institution is addressed, libraries could continue to have problems in this area. DLF respondents commented that materials budgets did not keep pace with the rising cost of scholarly communications, and that operating or capital budgets were often inadequate to fund systematic replacement cycles for equipment, not to mention the purchase of new technologies.

1.2. Output Measures

Libraries have traditionally gathered statistics and monitored trends in the use of their collections and services. They often compare traditional usage measurements across institutions, although these comparisons are problematic because libraries, like vendors, count different things and count the same things in different ways. Though settling for “good-enough” data seems to be the mantra of new measures initiatives and conferences on creating a “culture of assessment,” libraries have apparently been settling for good-enough data since the inception of their data gathering. Reference service data are a case in point, described in section 1.2.4. of this appendix. The following discussion of output measures reflects the expansion of traditional measures to capture the impact of digital initiatives on library use and the issues and concerns entailed in this expansion.

1.2.1. Gate Counts

Gate counts indicate the number of people who visit the physical library. Students often use an academic library as a place for quiet study, group study, or even social gatherings. Capturing gate counts is a way to quantify use of the library building apart from use of library collections and services. Libraries employ a variety of technological devices to gather gate counts. The data are often gathered at the point of exit from the library and compiled at different time periods throughout the day. Depending on the device capabilities, staff might manually record gate count data on a paper form at specified times of the day and later enter it into a spreadsheet to track trends.

Libraries include gate count data in annual reports. They use gate counts to adjust staffing and operating hours, particularly around holidays and during semester breaks. Sites capturing the data with card-swipe devices can use the data to track usage patterns of different user communities.3 One DLF respondent reported that regression analysis of exit data can explain fluctuations in reference activity and in-house use of library materials. If one of these variables is known, the other two can be statistically estimated. However, no library participating in the DLF survey reported using gate counts to predict reference service or in-house use of library materials. Adjustments to staffing and operating hours appear to be made based on gross gate counts at different time periods of the day and on the academic and holiday calendar. Gate count data, like data from many user studies, appear to be gathered in some cases even though libraries do not have the will, organizational capacity, skill, or interest to mine, interpret, and use them effectively in strategic planning.
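
The regression relationship mentioned by that respondent can be illustrated with a minimal sketch. The weekly figures below are invented, and the model is an ordinary least-squares line fitted with NumPy; a library pursuing this seriously would want more data and diagnostic checks.

```python
import numpy as np

# Invented weekly totals for illustration only.
gate_counts = np.array([8200, 9100, 7600, 10400, 9800, 6900, 11200, 8800])
reference_questions = np.array([410, 455, 380, 520, 490, 345, 560, 440])

# Fit reference activity as a linear function of exits (ordinary least squares).
slope, intercept = np.polyfit(gate_counts, reference_questions, 1)

# Estimate reference activity for a week with a known gate count.
predicted = slope * 9500 + intercept
print(f"Estimated reference questions for 9,500 exits: {predicted:.0f}")
```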

Digital library initiatives introduce a new dimension to visiting the library. The notion of a “virtual” visit raises issues of definition, guidelines for how to gather the data, and how or whether to compile traditional gate counts and virtual visits as a composite measure of library use. Is a virtual visit a measure of use of the library Web site, the OPAC, or an electronic resource or service? All of the above? Surely it is not a matter of counting every transaction or page fetched, in which case a definition is needed for what constitutes a “session” in a stateless, sessionless environment such as unauthenticated use of Web resources. The recommendation in the ARL E-Metrics Phase II Report and the default in some Web transaction analysis software define a session based on a 30-minute gap of inactivity between transactions from a particular IP address.4 Compiling a composite measure of traditional gate counts and virtual visits introduces a further complication, because virtual visits from IP addresses within the library must be removed from the total count of virtual visits to avoid double counting patrons who enter the physical library and use library computers to access digital resources.

Libraries are struggling with how to adjudicate these issues and determine what their practice will be. Their decisions are constrained by what data it is possible and cost-effective to gather. One DLF site has decided to define virtual visits strictly in terms of use of the library Web site, to delimit sessions by a 30-minute gap of inactivity from an IP address, and to aggregate virtual visits made from inside and outside the libraries. Given its equipment replacement cycle and the number of new machines and hence new IP addresses deployed each year in the library, this library decided that the benefits of calculating the number of virtual visits from machines inside the library did not warrant the costs.
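
A minimal sketch of the 30-minute rule is given below. It assumes the Web server log has already been parsed into (IP address, timestamp) pairs; the address prefix treated as “inside the library” is a placeholder, and a real implementation would also have to contend with proxies, shared workstations, and dynamically assigned addresses.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)
LIBRARY_IP_PREFIXES = ("192.168.10.",)  # placeholder for in-building workstations

def count_virtual_visits(log_entries, exclude_in_library=True):
    """Count sessions using the 30-minute-gap rule.

    log_entries: iterable of (ip_address, datetime) pairs parsed from the Web log.
    A new session begins whenever an address has been idle for 30 minutes or more.
    """
    last_seen = {}
    visits = 0
    for ip, timestamp in sorted(log_entries, key=lambda entry: entry[1]):
        if exclude_in_library and ip.startswith(LIBRARY_IP_PREFIXES):
            continue  # avoid double counting patrons already in the gate count
        previous = last_seen.get(ip)
        if previous is None or timestamp - previous >= SESSION_GAP:
            visits += 1
        last_seen[ip] = timestamp
    return visits

# Three transactions from one address: the hour-long gap starts a second session.
sample = [
    ("10.5.1.20", datetime(2002, 3, 4, 9, 0)),
    ("10.5.1.20", datetime(2002, 3, 4, 9, 10)),
    ("10.5.1.20", datetime(2002, 3, 4, 10, 15)),
]
print(count_virtual_visits(sample))  # 2
```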

1.2.2. Circulation and In-House Use

Circulation statistics traditionally indicate how many items were checked out to users or used within the library. Circulation data reports are generated routinely from the ILS. Initial checkouts and renewals are tracked separately because national surveys require it. Reshelving data, gathered manually or through the ILS, are used to assess in-house use of library materials. Items that circulate through other venues, for example, analog or digital slides, might not be included in circulation statistics.

Libraries include circulation data in annual reports and national library surveys. The data are used to:

  • Identify items that have never circulated and inform retention and cancellation decisions
  • Assess or predict book use to help decide what to move to off-site storage5
  • Decide whether the appropriate materials are in off-site storage
  • Determine staffing at the circulation desk by examining patterns of circulation activity per hour, day, and academic quarter

In addition, one DLF respondent mentioned conducting a demographic analysis of circulation data to determine circulation per school, user status, library, and subject classification. The results were used to inform collection development decisions. Other DLF respondents simply commented that they know that humanists use books and scientists use journals.

Libraries also generate financial reports of fines and replacement costs for overdue and lost books. The data are tracked as a source of important revenue and are frequently used to help fund underbudgeted student employee wages. Collection developers determine whether lost books will be replaced, presumably based on a cost-benefit analysis of the book’s circulation and replacement cost. Some DLF respondents also reported tracking recalls and holds, but did not explain how these data are used. If the data are used to track user demand for particular items and inform decisions about whether to purchase additional copies, they serve a purpose. If the data are not used, data collection is purposeless.

The digital environment also introduces a new dimension to circulation data gathering, analysis, and use. For example, a comprehensive picture of library resource use requires compiling data on use of traditional (physical) and digital monographs and journals. Usage data on electronic books and journals are not easily gathered and compiled because they are not checked out or re-shelved in the traditional sense and because the data are for the most part provided by vendors in different formats and time periods, and based on different definitions. Ideally, use of all physical and digital resources would be compiled, including use of physical and digital archival materials, maps, and audio and video resources. The discussions of transaction log analysis and virtual visits earlier in this report describe many of the difficulties inherent in tracking “circulation” or “in-house use” of electronic resources. A few DLF respondents mentioned efforts to compile book and journal data as their foray into this area, but a comprehensive picture of use of library collections appears to be a long way off.
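
One common first step toward such a picture is to normalize the various vendor reports into a single schema before combining them with local circulation counts. The sketch below is illustrative only: the two vendor formats and their column names are invented, and each real report would need its own parser.

```python
import csv
from collections import defaultdict

def load_vendor_a(path):
    """Hypothetical vendor A format: columns 'journal_title', 'month', 'fulltext_downloads'."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["journal_title"], row["month"], int(row["fulltext_downloads"])

def load_vendor_b(path):
    """Hypothetical vendor B format: columns 'Title', 'Period', 'Article Requests'."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["Title"], row["Period"], int(row["Article Requests"])

def combine(*sources):
    """Roll all sources up into a common (title, month) -> uses table."""
    usage = defaultdict(int)
    for source in sources:
        for title, month, uses in source:
            usage[(title.strip().lower(), month)] += uses
    return usage

# usage = combine(load_vendor_a("vendor_a.csv"), load_vendor_b("vendor_b.csv"))
```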

1.2.3. Reserves

Faculty place items on reserve in the library when they want students to use materials that they do not distribute in class or require students to purchase. Libraries track reserve materials in great detail. Reserves are tracked as both input and output measures. Both dimensions are treated here to facilitate an understanding of the complexity of the issues. Libraries place items on course reserves in traditional paper and electronic formats. Some DLF sites operate dual systems, offering both print and e-reserves for the same items. DLF respondents reported tracking the following:

  • The number of items on reserve in traditional and digital format
  • The use of traditional and e-reserve items
  • The percentage of reserve items available electronically
  • The percentage of reserve use that is electronic

The number of traditional and digital reserve items in some cases is tracked manually because the ILS cannot generate the data. Depending on how reserves are implemented, use of traditional reserves (for example, books and photocopies) might be tracked by the circulation system. Tracking use of e-reserves requires analysis of Web server logs (for example, the number of PDF files downloaded or pages viewed). The data are used to track trends over time, including changes in the percentage of total reserve items available electronically and the percentage of total reserve use that is electronic. Data on reserve use may be included in annual reports.

One DLF site reported analyzing Web logs to prepare daily and hourly summaries of e-reserves use, including what documents users viewed, the number of visits to the e-reserves Web site, how users navigated to the e-reserves Web site (from what referring page), and what Web browser they used. This library did not explain how these data are used. Another site reported tracking the number of reserve items per format using the following format categories: book, photocopy, personal copy, and e-reserves. Their e-reserve collection does not include books, so to avoid comparing apples with oranges, they calculate their composite performance measures without including books in the count of traditional reserve items or use. Several sites provide or plan to provide audio or video e-reserves. Only time will tell if they begin to track formats within e-reserves and how this will affect data gathering and analysis.

DLF respondents also mentioned tracking the following information manually:

  • The number of reserve items per academic department, faculty member, and course number
  • The number of requests received per day to put items on reserve
  • The number of items per request
  • The number of items made available on reserves per day
  • The number of work days between when a request is submitted and when the items are made available on reserve
  • The number of pages in e-reserve items

Data about the number of requests per day, the number of items per request, and the amount of time that passes between when a request is placed and when the item becomes available on reserve are used to estimate workload, plan staffing, and assess service quality. The number of pages in e-reserve items is a measure of scanning activity or digital collection development. It is also used as the basis for calculating e-resource use in systems where e-reserves are delivered page by page. (The total number of e-reserve page hits is divided by the average number of pages per e-reserve item to arrive at a measure comparable to checkout of a traditional reserve item.) No indication was given for how the data on reserve items per department, faculty, and course were used. If converted to percentages, for example, the percentage of faculty or departments requesting reserves, the data would provide an indication of market penetration. If, however, the data are not used, data collection is purposeless.
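
The page-hit conversion amounts to a single division. A worked example follows, using invented figures, simply to show how e-reserve use can be put on the same footing as a checkout of a paper reserve item.

```python
# Invented figures for illustration.
total_ereserve_page_hits = 48_600   # page views recorded by the e-reserves Web server
total_pages_in_ereserves = 36_450   # page counts summed when items were scanned
number_of_ereserve_items = 2_430

average_pages_per_item = total_pages_in_ereserves / number_of_ereserve_items  # 15.0
item_equivalent_uses = total_ereserve_page_hits / average_pages_per_item      # 3,240

print(f"Average pages per e-reserve item:   {average_pages_per_item:.1f}")
print(f"Checkout-equivalent e-reserve uses: {item_equivalent_uses:.0f}")
```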

1.2.4. Reference

Reference data are difficult to collect because reference service is difficult to define, evolving rapidly, and being offered in new and different ways. The problem is compounded because the methods for assessing new forms of service delivery naturally evolve more slowly than the services themselves. DLF respondents reported offering reference service in the following ways, many of which are online attempts to reach remote users:

  • Face-to-face at the reference desk
  • Telephone at the reference desk
  • Telephone to librarian offices
  • E-mail, using a service e-mail address or Web-based form on the library’s Web site
  • E-mail directly to librarians
  • U.S. Postal Service
  • Chat software
  • Virtual Reference Desk software
  • Teleconferencing software

Libraries are also collaborating to provide online or digital reference service. For example, some DLF sites are participating in the Collaborative Digital Reference Service,6 which is a library-to-library service to researchers available any time, anywhere, through a Web-based, international network of libraries and other institutions organized by the Library of Congress. Other collaborative digital reference services include the 24/7 Reference Project and the Virtual Reference Desk Network.7 The DLF, OCLC, and other organizations are supporting a study of online reference services being conducted by Charles McClure and David Lankes. Findings from the study so far reveal a wide range of concerns and need for new measures. For example, there are concerns about competitive reference services in the commercial sector, concerns about decreasing traditional reference statistics and the potential volume of digital reference questions, and a need for instruments to measure the effectiveness, efficiency, costs, and outcomes of digital reference.8

Most DLF libraries track reference data, but they define different categories of questions to count, and they count at different frequencies. At bare minimum, libraries count questions asked at the reference desk and distinguish “reference” questions from “directional” questions. Some libraries distinguish “quick reference” questions from “real reference” questions. Some libraries explicitly count and categorize “technical” questions about computers, printers, or the network. Some include technical questions under the rubric of “reference” questions. Some do not count technical questions at all. Some have a category for “referrals” to other subject specialists. Some have an “Other” category that is undefined. Some libraries track the time of day and day of week questions are asked at the reference desk. Some track the school and status of the user and the reference desk location. Some libraries gather reference desk data routinely. Others sample, for example, two randomly selected days per month, two weeks per year, or two weeks per quarter. Some libraries include in their reference statistics questions that go directly to the librarian’s desk via telephone or personal e-mail. Others make no effort to gather such data. Two apparently new initiatives are to track the length of reference transactions and the number of reference questions that are answered using electronic resources.

Compiling data from different venues of reference service is time-consuming because the data gathering is dispersed. Reference desk questions are tracked manually at each desk. Librarians manually track telephone and e-mail questions that come directly to them. Such manual tracking is prone to human error. E-mail questions to a reference service e-mail address are tracked on an electronic bulletin board or mailbox. Chat reference questions are counted through transaction log analysis. Often efforts to assemble these data are not well organized.

Despite these difficulties and anomalies, reference data are included in annual reports and national library surveys. The data are used to determine

  • Performance trends over time, including the percentage of reference questions submitted electronically and the percentage of reference questions answered using electronic resources
  • Appropriate hours of reference service
  • Appropriate staffing at the reference desk during specific hours of the day
  • Instruction to be provided for different constituencies (for example, database training for a particular college or user group)

In addition, some librarians track their reference data separately and include it in their self-evaluation during annual performance reviews as a measure of their contribution and productivity.

Though reference data are tracked and in many cases examined, comments from DLF respondents suggest that strategic planning is based on experience, anecdotes, and beliefs about future trends rather than on data. Several factors could account for this phenomenon. First, the data collected or compiled about reference service are, and will continue to be, incomplete. As one respondent observed, “Users ask anyone they see, so reference statistics will always be incomplete.” Second, even if libraries have multiyear trend data on reference service, the data are difficult to interpret. Changes in institutional mission, the consolidation of reference points, the opening or renovation of library facilities, or the availability of competing “Ask-a” services could change either the use of reference service or its definition, service hours, or staffing. Decisions about what to count or not to count (for example, to begin including questions that go directly to librarians) make it difficult to compare statistics and interpret reference trends within an institution, let alone across institutions. Third, the technological environment blurs the distinction between reference, instruction, and outreach, which raises questions of what to count in which category and how to compile and interpret the data. Furthermore, libraries are creating frequently asked questions (FAQ) databases on the basis of their history of reference questions. What kind of service is this? Should usage statistics be categorized as reference or database use? Given the strenuous effort required to gather and compile reference data and the minimal use made of it, one wonders why so many libraries invest in the activity. One DLF site reported discontinuing gathering reference data based on a cost-benefit analysis.

1.2.5. Instruction

Librarians have traditionally offered instruction in how to use library resources. The instruction was provided in person: a librarian either visited a classroom or offered classes in the library. Often the instruction was discipline specific, for example, teaching students in a history class how to use the relevant collections in the library. Digital library initiatives and the appearance of the Web have expanded both the content and format of library instruction. In addition to teaching users how to use traditional library resources, librarians now teach patrons how to use many different bibliographic and full-text electronic resources. Given concerns about undergraduate student use of the surface Web and the quality of materials they find there, library instruction has expanded to include teaching critical thinking and evaluation (“information literacy”) skills. Remote access to the library has precipitated efforts to provide library instruction online as well as in person. The competencies required to provide instruction in the digital environment are significantly different from those required to teach users how to use traditional resources that have already been critically evaluated and selected by peer reviewers and librarians.

Libraries manually track their instruction efforts as a measure of another valuable service they provide to their constituencies. DLF respondents reported tracking the number of instruction sessions and the number of participants in these sessions. Sites with online courses or quizzes track the number of students who complete them. Libraries include instruction data in annual reports and national surveys. The data are used to monitor trends and to plan future library instruction. Some librarians track their instruction data separately and include this information in their self-evaluation during annual performance reviews as a measure of their contribution and productivity.

Though a substantial amount of work and national discussion is under way in the area of Web tutorials, national reporting mechanisms do not yet have a separate category for online instruction and no effort appears to have surfaced to measure the percentage of instruction offered online. Perhaps this is because the percentage is still too small to warrant measuring. Perhaps it is because online and in-person instruction are difficult to compare, since the online environment collapses session and participant data into one number.

1.2.6. Interlibrary Loan

Interlibrary loan (ILL) service provides access to resources not owned by the library. Libraries borrow materials from other libraries and loan materials to other libraries. The importance of ILL service to users and the expense of this service for libraries, many if not most of which absorb the costs rather than passing them on to users, lead to a great deal of data gathering and analysis about ILL. Changes precipitated by technology (for example, the ability to submit, track, and fill ILL requests electronically) expand data gathering and analysis.

Libraries routinely track the number of items loaned and borrowed, and the institutions to and from which they loan and borrow materials. They annually calculate the fill rate for ILL requests and the average turn-around time between when requests are submitted and the items are delivered. If items are received or sent electronically, the number of electronically filled requests (loaned or borrowed) and turn-around times are tracked separately. Some libraries also track the format of the items, distinguishing returnable items like books from non-returnable photocopies. Libraries that subscribe to OCLC Management Statistics receive detailed monthly reports of ILL transactions conducted through OCLC, including citations, whether requests were re-submitted, and turn-around times. They might have similar detail on ILL transactions conducted through other venues. Libraries with consortium resource-sharing arrangements track these transactions separately.
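
The fill rate and turn-around calculations themselves are simple once the transaction records are in one place. The sketch below assumes each borrowing request is recorded with a submission date and, if filled, a delivery date; the record layout is an assumption for illustration.

```python
from datetime import date

# Hypothetical borrowing records: (date submitted, date delivered or None if unfilled).
requests = [
    (date(2001, 10, 1), date(2001, 10, 9)),
    (date(2001, 10, 2), date(2001, 10, 6)),
    (date(2001, 10, 3), None),              # unfilled request
    (date(2001, 10, 5), date(2001, 10, 19)),
]

filled = [(submitted, delivered) for submitted, delivered in requests if delivered is not None]
fill_rate = 100 * len(filled) / len(requests)
average_turnaround = sum((delivered - submitted).days for submitted, delivered in filled) / len(filled)

print(f"Fill rate: {fill_rate:.0f}%")                              # 75%
print(f"Average turn-around time: {average_turnaround:.1f} days")  # 8.7 days
```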

Some libraries track ILL requests for items in their own collections. Resource-sharing units that photocopy materials in their own collection and deliver them to campus users also track these transactions and, if a fee is charged, the revenue from these transactions. Libraries in multi-library systems track ILL activity at each library separately. If they operate a courier service among the libraries, they might also track these transactions.

Traditionally, much of this information has been tracked manually and later recorded in spreadsheets. The dual data entry is time-consuming and prone to human error. Implementing the ILLiad software enables automatic, detailed tracking of ILL transactions, saving staff time and providing a more complete and accurate picture of ILL activity.

ILL data are included in annual reports and national surveys. The data are used to

  • Track usage and performance trends over time, including the percentage of ILL requests filled electronically
  • Assess service quality on the basis of the success (fill) rate and average turn-around times
  • Determine staffing on the basis of the volume of ILL or courier transactions throughout the year
  • Distribute the ILL workload among libraries in a multilibrary system
  • Inform requests for purchasing additional equipment to support electronic receipt and transmission of ILL items
  • Target publicity to campus constituencies by informing liaison librarians about ILL requests for items in the local collection

One DLF respondent is considering analyzing data on ILL requests to assess whether requests in some academic disciplines are more difficult to fill than others, though she did not explain how these data would be used. This respondent also wants to cross-correlate ILL data with acquisitions and circulation data to determine the number of items purchased on the basis of repeated ILL requests and whether these items circulated. Presumably this would enable a cost analysis of whether purchasing and circulating the items was less expensive than continuing to borrow them via ILL.

Cost data on ILL are important for copyright and budget reasons, but gathering the data to construct a complete picture of the cost of ILL transactions is complex and labor-intensive. Apparently many libraries have only a partial picture of the cost of ILL. Libraries have to pay a fee if they borrow more than five articles from the same journal in a single year. Collecting the data to monitor this is difficult and time-consuming, and the data are often incomplete. Libraries that subscribe to OCLC Fee Management can download a monthly report of the cost of their ILL transactions through OCLC. Cost data for ILL transactions through other venues are tracked separately, and often not by the resource-sharing unit. For example, invoices for ILL transactions might be handled through the library’s acquisitions unit; accounting for ILL transactions with institutions with which the libraries have deposit accounts might be handled through the administrative office. Often the cost data from these different sources are not compiled.
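
Monitoring the five-article threshold reduces to counting filled borrowing requests per journal per calendar year. The sketch below is a simplified illustration with invented journal titles; the actual copyright guideline has additional nuances (for example, it applies to articles from recent volumes) that are omitted here.

```python
from collections import Counter

# Hypothetical (journal title, year borrowed) pairs for filled borrowing requests.
borrowed_articles = [
    ("Journal of Library Metrics", 2001),
    ("Journal of Library Metrics", 2001),
    ("Journal of Library Metrics", 2001),
    ("Journal of Library Metrics", 2001),
    ("Journal of Library Metrics", 2001),
    ("Journal of Library Metrics", 2001),
    ("Serials Review Quarterly", 2001),
]

counts = Counter(borrowed_articles)
over_threshold = {key: n for key, n in counts.items() if n > 5}

for (journal, year), n in over_threshold.items():
    print(f"{journal} ({year}): {n} articles borrowed; fees apply beyond the fifth")
```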

1.2.7. Printing and Photocopying

Printing and photocopying are important services provided by the library. Some libraries outsource these services, in which case they might not get statistics. If these services are under the library’s control, they are closely monitored, particularly if the library does not recover costs. Printers and photocopiers have counters that provide the number of pages printed or copied. The data are typically entered into a spreadsheet monthly. Some libraries also track the cost of paper and toner for printers and photocopiers. At least one DLF site even monitors the labor costs to put paper and toner in the machines. In some cases, use of these services by library staff and library users are tracked separately. The data are used to track usage trends and make projections about future use, equipment needs, expenditures, and revenue (cost recovery).
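
Where the library runs these services itself, the monthly arithmetic is straightforward. The figures in the sketch below are invented; they simply show the kind of usage, cost, and cost-recovery numbers respondents described entering into spreadsheets.

```python
# Invented monthly figures for a single public printer.
pages_printed = 62_400        # taken from the device counter
charge_per_page = 0.10        # what users pay
paper_cost_per_page = 0.012
toner_cost_per_page = 0.018
resupply_hours = 6            # staff time spent loading paper and toner
staff_hourly_rate = 12.50

revenue = pages_printed * charge_per_page
supply_cost = pages_printed * (paper_cost_per_page + toner_cost_per_page)
labor_cost = resupply_hours * staff_hourly_rate
net_recovery = revenue - supply_cost - labor_cost

print(f"Revenue:        ${revenue:,.2f}")       # $6,240.00
print(f"Supplies:       ${supply_cost:,.2f}")   # $1,872.00
print(f"Resupply labor: ${labor_cost:,.2f}")    # $75.00
print(f"Net recovery:   ${net_recovery:,.2f}")  # $4,293.00
```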

2. Outcome Measures

In the parlance of traditional library performance measures, the purpose of all inputs and outputs is to achieve outcomes. Outcomes are measures of the impact or effect that using library collections and services has on users. Good outcome measures are tied to specific library objectives and indicate whether these objectives have been achieved.9 Outcomes assessments can indicate how well user needs are being met, the quality of library collections and services, the benefits or effectiveness of library expenditures, or whether the library is accomplishing its mission. Such assessments can be difficult and expensive to conduct. For example, how do you articulate, develop, and standardize performance measures to assess the library’s impact on student learning and faculty research? Substantial work is underway in the area of outcomes assessment, but with the exception of ARL’s LIBQUAL+, libraries currently have no standard definitions or instruments with which to make such assessments; likewise, they have no source of aggregate or contextual data to facilitate comparing and interpreting their performance. Given the difficulty and expense of measuring outcomes, if university administrators do not require outcomes assessments, many libraries do not pursue them.

2.1. Learning and Research Outcomes

No DLF respondent reported gathering, analyzing, or using learning and research outcomes data. Instead, they talked about the difficulty and politics of measuring such outcomes. Assessing learning and research outcomes is very difficult because libraries have no graduates to track (for example, no employment rate or income levels to monitor), no clear definitions of what to assess, and no methods to perform the assessments. The consensus among DLF respondents was that desirable outcomes or proficiencies aligned with the institutional mission and instruments to measure success should be developed through the collaboration of librarians and faculty, but the level of collaboration and commitment required to accomplish these two tasks does not exist.

In the absence of definitions and instruments for measuring learning and research outcomes, libraries are using assessments of user satisfaction and service quality as outcomes measurements. In the worst-case scenario, outputs appear to substitute for outcomes, but as one DLF respondent commented, “It’s not enough to be able to demonstrate that students can find appropriate resources and are satisfied with library collections. Libraries need to pursue whether students are really learning using these resources.” The only practical solution seems to be to target desired proficiencies for a particular purpose, identify a set of variables within that sphere that define impact or effectiveness, and develop a method to examine these variables. For example, a library might conduct citation analysis of faculty publications to identify effective use of library resources.

2.2. Service Quality and User Satisfaction

Years ago, the ACRL Task Force on Academic Library Outcomes Assessment called user satisfaction a “facile outcome” because it provides little if any insight into what contributes to user dissatisfaction.10 Nevertheless, assessing user satisfaction remains the most popular library outcomes measurement because assessing satisfaction is easier than assessing quality. Assessments of user satisfaction capture the individual user’s perception of library resources, the competence and demeanor of library staff, and the physical appearance and ambience of library facilities. In contrast, assessments of service quality measure the collective experience of many users and the gaps between their expectations of excellent service and their perceptions of the service delivered. By identifying where gaps exist (in effect, quantifying quality), service quality studies provide sufficient insight into what users consider quality service for libraries to take steps to reduce the gaps and improve service. Repeating service quality assessments periodically over time can reveal trends and indicate whether steps taken to improve service have been successful. If the gaps between user perceptions of excellence and library service delivery are small, the results of service quality assessments could serve as best practices for libraries.
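
The gap calculation at the heart of such assessments can be sketched in a few lines. The dimensions and mean scores below are invented and assume expectations and perceptions were collected on the same rating scale; negative gaps mark areas where delivered service falls short of expectations.

```python
# Invented mean survey scores on a 1-9 scale.
expectations = {"speed": 7.8, "accuracy": 8.2, "courtesy": 7.5, "reliability": 8.0}
perceptions = {"speed": 6.9, "accuracy": 7.9, "courtesy": 7.7, "reliability": 7.1}

gaps = {dimension: perceptions[dimension] - expectations[dimension] for dimension in expectations}

# Rank dimensions from largest shortfall to smallest to prioritize improvements.
for dimension, gap in sorted(gaps.items(), key=lambda item: item[1]):
    status = "below expectations" if gap < 0 else "meets or exceeds expectations"
    print(f"{dimension:12s} gap = {gap:+.1f}  ({status})")
```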

Though service quality instruments have been developed and published for several library services, the measure has had limited penetration. Few DLF sites reported conducting service quality assessments of particular services, though many are participating in ARL’s LIBQUAL+ assessment of librarywide service provision. DLF libraries reported conducting service quality studies of reference, interlibrary loan, course reserves, and document delivery services to assess user perceptions of their speed, accuracy, usefulness, reliability, and courteousness. The results were used to plan service improvements based on identified gaps. In some cases, the results were not systematically analyzed, an additional example of a breakdown in the research process that leads to purposeless data collection. One DLF respondent suggested that the best approach to measuring service quality using the gap model is to select which service to evaluate on the basis of a genuine commitment to improve service in that area, and then define quality in that area in a way that can be measured (for example, a two-day turn-around time). The keys are commitment and a clearly articulated measurable outcome.

DLF respondents raised thought-provoking philosophical questions about assessments of service quality:

  • Should service quality assessments strictly be used as diagnostic tools to identify gaps, or should they also be used as tools for normative comparison across institutions?
  • Do service quality assessments, designed to evaluate human-to-human transactions, apply to human-computer interactions in the digital environment? If so, how?
  • Are human expectations or perceptions of quality based on facts, marketing, or problems encountered? How do libraries discover the answer to this question, and what are the implications of the answer?
  • If quality is a measure of exceeding user expectations, is it ethical to manage user expectations to be low, then exceed them?

2.3. Cost-Effectiveness and Cost Benefits

Libraries have traditionally tracked costs in broad categories, for example, salaries, materials, or operating costs. ARL’s E-Metrics initiative creates new categories for the costs of e-journals, e-reference works, e-books, bibliographic utilities, and networks and consortia, and even for the costs of constructing and managing local digital collections. Measuring the effectiveness and benefits of these costs or expenditures, however, is somewhat elusive.

“Cost-effectiveness” is a quantitative measure of the library’s ability to deliver user-centered outputs and outcomes efficiently. Comments from DLF respondents suggest that the only motivation for analyzing the cost-effectiveness of library operations comes from university administrators, which is striking, given the budgetary concerns expressed by many of the respondents. Some libraries reported no impetus from university administrators to demonstrate their cost-effectiveness. Others are charged with demonstrating that they operate cost-effectively. The scope of library operations to be assessed and the range of data to be gathered to assess any single operation are daunting. Defining the boundaries of what costs to include and determining how to calculate them are difficult. Published studies that try to calculate the total cost of a library operation reveal the complexity of the task and substantial investment of time and talent required to assemble and analyze a dizzying array of costs for materials, staffing, staff training, hardware, software, networking, and system maintenance.11 Libraries charged with demonstrating their cost-effectiveness are struggling to figure out what to measure (where to begin), and how to conduct these assessments in a cost-effective manner.

Even if all of the costs of different library operations can be assembled, how are libraries to know whether the total cost indicates efficient delivery of user-centered outputs and outcomes? In the absence of standards, guidelines, or benchmarks for assessing cost-effectiveness, and in many cases a lack of motivation from university administrators, an ad hoc approach to assessing costs, rather than cost-effectiveness, is under way. DLF respondents reported practices such as the following:

  • Analyzing the cost per session of e-resource use
  • Determining cost per use of traditional materials (based on circulation and in-house usage statistics)
  • Examining what it costs to staff library services areas
  • Examining what it costs to collect and analyze data
  • Examining the cost of productivity (for example, what it costs to put a book on the shelf or some information on the Web)
  • Examining the total cost of selected library operations

The goals of these attempts to assess costs appear to be to establish baseline data and define what it means to be cost-effective. For example, comparing the cost per session of different e-resources can facilitate an understanding of what a cost-effective e-resource is and perhaps enable libraries to judge vendor pricing levels.
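
The comparison itself is a single division per resource, as the sketch below shows. The annual costs and session counts are invented, and the caveat from the transaction log discussion applies, since vendors may count sessions differently.

```python
# Invented annual figures for three licensed e-resources.
resources = {
    "Database A": {"annual_cost": 24_000, "sessions": 60_000},
    "Database B": {"annual_cost": 15_000, "sessions": 12_500},
    "Database C": {"annual_cost": 40_000, "sessions": 160_000},
}

# Rank resources from lowest to highest cost per session.
for name, figures in sorted(resources.items(),
                            key=lambda item: item[1]["annual_cost"] / item[1]["sessions"]):
    cost_per_session = figures["annual_cost"] / figures["sessions"]
    print(f"{name}: ${cost_per_session:.2f} per session")
```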

Cost-benefit analysis is a different task entirely because it takes into account the qualitative value of library collections and services to users. Even if libraries had a clear definition of what it means to be cost-effective or a benchmark against which to measure their cost-effectiveness, additional work is required to determine whether the benefits of an activity warrant the costs. If the cost of an activity is high and the payback is low, the activity may be revised or abandoned. For example, one DLF respondent explained that his library stopped collecting reference statistics in 1993, when it determined that the data seldom changed and it cost 40 hours of staff time per month to collect. Quantifying the payback is not always so straightforward, however. User studies are required to assess the value to users of seldom-used services and collections. Knowing the value may only raise the question of how high the value must be, and to how many users, to offset what level of costs.

The terrain for conducting cost-benefit analyses is just as broad and daunting as is the terrain for assessing cost-effectiveness of library operations. One DLF institution is analyzing the costs and benefits of staff turnover, examining the trade-offs between loss of productivity and the gains in salary savings to fund special projects or pursue the opportunity to create new positions. As with analyses of cost-effectiveness, libraries need guidelines for conducting cost-benefit analyses and benchmarks for making decisions. The effort requires some campuswide consensus about what the campus values in library services and what is worth paying for.


FOOTNOTES

1 http://www.arl.org/stats/newmeas/emetrics/phasetwo.pdf. October 2001, p. 41.

2 Recruiting and Retaining Information Technology Staff in Higher Education. Available at: http://www.educause.edu/pub/eb/eb1.html. August 2000.

3 Card-swipe exit data capture user IDs, which library the user is in, and the date and time. IDs can be mapped to demographic data in the library patron database to determine the users’ status and school (e.g., graduate student, business school).
