
4. General Issues and Challenges


4.1. Issues in Planning a Research Project

When a decision to conduct research has been made, a multifaceted process begins. Each step of that process requires different knowledge and skills. Whatever the research method, all research has certain similarities. These relate to focusing the research purpose, marshalling the needed resources, and scheduling and assigning responsibilities. Conducting user studies also requires selecting a sampling method, recruiting subjects, and getting approval from the IRB to conduct research with human subjects.

The experiences reported by DLF respondents underscore the importance of careful planning and a comprehensive understanding of the full scope of the research process. Textbooks outline the planning process. It begins with articulating the research purpose. The second step is assessing the human and financial resources available to conduct the research and clearly assigning responsibility for each stage of the process: designing the research instruments; preparing the schedule; gathering, analyzing, and interpreting the data; presenting the findings; and developing and implementing plans to use them. The third step is selecting the research method (Chadwick, Bahr, and Albrecht 1984).

The frequent breakdowns that DLF libraries experience in the research process suggest problems in planning, particularly in marshalling the resources needed to complete the project. Perhaps those responsible for planning a study do not have enough power or authority to assemble the requisite human and financial resources. Perhaps they do not have the time, resources, or understanding of the research process to develop a comprehensive plan. Whatever the case, the resources assigned to research projects are often insufficient. The breakdown often occurs at the point of developing and implementing plans to use the research results. Developing a plan can bog down when the results are difficult to interpret. Implementing a plan can bog down when it arrives on the doorstep of programmers or Web masters who had no idea the research would create work for them. Data can go unused if commitment has not been secured from every unit and person necessary to complete a project. Even if commitment is secured during the planning stage, a project that falls significantly behind schedule can be overtaken by other projects and priorities, and the human resources needed to implement the research results will not be available when they are needed.

Scheduling also influences the success or failure of research efforts. Many DLF respondents reported underestimating the time it takes to accomplish different steps in the research process. Getting IRB approval to conduct human subjects research can take months. Recruiting research subjects can be time-consuming. Analyzing and interpreting the data and documenting the research findings can take as much time as planning the project, designing the research instruments and procedures, and gathering the data. The time it takes to implement a plan depends on the plan itself and competing priorities of the implementers. An unrealistic schedule can threaten the success of the project. A carefully constructed schedule can facilitate effective allocation of resources and increase the likelihood that research results will be applied. Comments from DLF respondents suggest that the larger the number of persons involved in any step of this process, the longer the process takes. Cumbersome governance of user studies can be counter-productive.

The limitations of research results and the iterative nature of the research process also challenge DLF libraries. Additional research is often necessary to interpret survey data or to identify solutions to problems that surface in user protocols. Realizing that multiple studies might be necessary before concrete plans can be formulated and implemented can be discouraging. Conducting research can seem like an endless loop of methods and studies designed to identify problems, determine how to solve them, and verify that they have been solved. When a library’s resources are limited, it is tempting to go with intuition or preferences. Nevertheless, DLF respondents agreed that libraries must stay focused on users. Assessment must be an ongoing priority. Research must be iterative, because user needs and priorities change with time and technology. To provide quality service, the digital library must keep pace with users.

Multiple research methods and a sequence of studies are required for the digital library to evolve in a way that serves users well. DLF respondents reported the following cases, which illustrate the rich, although imperfect, benefits that derive from triangulated or iterative efforts.

  • Protocol, Transaction Log, and Systems Analysis Research. Think-aloud user protocols were conducted in a laboratory to assess the usability of the library Web site. The study focused on the home page and e-resources and databases pages. A task script was prepared in consultation with a commercial firm. Its purpose was to identify the 10 tasks most frequently performed by students, faculty, and staff on the library’s Web site. Another firm was hired to analyze the Web site architecture, transaction logs, and usability (protocol) data and to conduct additional research to capture user perceptions of the Web site. On the basis of these analyses, the firm provided an interface design specification, architectural framework, and short- and long-term goals for the Web site. The firm also recommended the staffing needed to maintain the proposed architecture. The library used the design specification to revise its Web site, but the recommendations about staffing to maintain the Web site did not fit the political environment of the library. For example, the recommendation included creating an advisory board to make decisions about the Web site, hiring a Web master, and forming a Web working group to plan Web site development. The library has a Web working group and has created a new Web coordinator position, but is having trouble filling it. Librarians believe the issue is lack of ownership of Web project management. No advisory board was created.
  • Heuristic Evaluation, Card Sorting, Protocol, and Survey Research. A library created a task force to redesign the library Web site on the basis of anecdotal evidence of significant problems and the desire for a “fresh” interface. The task force
    • Conducted a heuristic evaluation of the existing library Web site
    • Looked at other Web sites to find sites its members liked
    • Created a profile of different user types (for example, new or novice users, disabled users)
    • Created a list of what the redesigned Web site had to do, organized by priority
    • Created a content list of the current Web site that revealed content of interest only to librarians (for example, a list of library organizations)
    • Created a content list for the redesigned Web site that eliminated any content in the existing site that did not fit the user profiles
    • Conducted a card-sorting study to help group items on the content list
    • Conducted a Web-based survey to help determine the vocabulary for group and item (link) labels. (The survey did not work very well because the groups and items the participants were to label were difficult to describe.)
    • Implemented a prototype of the new library Web site home page and secondary pages
    • Conducted think-aloud protocols with the prototype Web pages. (The library recruited and screened participants to get eight subjects. The subjects signed consent forms, then did the protocol tasks. Different task scripts were provided for undergraduate students, graduate students, and faculty. The protocols were audiotaped and capture software was used to log participant keystrokes. The facilitator also took notes during the protocols. The results of the protocol study revealed that many of the problems users encountered were not user interface problems, but bibliographic instruction problems.)
    • Conducted a survey questionnaire to capture additional information about the participants’ experience and perception of the new Web site

Although these activities took a substantial amount of time, they were easy and inexpensive to do and were very revealing. The new Web sites were a significant improvement over the old sites. User studies will be conducted periodically to refine the design and functionality of the sites.

The purpose of the usability studies and many of the other user studies described in this report is to improve interface design and functionality. One experienced DLF respondent outlined the following as the ideal, iterative process to implement a user-friendly, fully functional interface:

  1. Develop a paper prototype in consultation with an interface design expert applying heuristic principles of good design.
  2. Conduct paper prototype and scenario research.
  3. Revise the paper prototype on the basis of user feedback and heuristic principles of good design.
  4. Conduct paper prototype and scenario research.
  5. Revise the design on the basis of user feedback and implement a functioning prototype.
  6. Conduct think-aloud protocols to test the functionality and navigation of the prototype.
  7. Revise the prototype on the basis of user feedback and heuristic principles of good design.
  8. Conduct think-aloud protocols to test the new design.
  9. Revise the design on the basis of user feedback.
  10. Release the product.
  11. Revise the design on the basis of user feedback and analysis of transaction logs.

Libraries would benefit greatly from sharing their experiences and developing guidelines for planning and scheduling different kinds of studies and iterations. An outline of the key decision points and pitfalls would be an ideal way to share lessons learned. Similarly, libraries would benefit from discussing and formulating a way to integrate assessment into the daily fabric of library operations, to make it routine rather than remarkable, and thereby possibly avoid generating unnecessary and unhelpful comments and participation.

4.2. Issues in Implementing a Research Project

Several issues in implementing a research project have already been described. For example

  • Selecting the appropriate research method for the research purpose
  • Developing effective and appropriate research instruments
  • Developing the requisite skills to conduct research using different methods, including how to gather, analyze, interpret, and present the data effectively, and how to develop plans
  • Developing a system or method to manage data over time
  • Organizing assessment as a core activity
  • Allocating sufficient human and financial resources to conduct and apply the results of different research methods
  • Developing comprehensive plans and realistic schedules to conduct and apply the results of different research methods (the academic calendar affects the number of participants who can be recruited and when the results can be applied)
  • Maintaining focus on users when research results challenge the operating assumptions and personal preferences of librarians
  • Recruiting representative research subjects who meet the criteria for the study (for example, subjects who can think aloud, subjects experienced or not experienced with the product or service being studied)

DLF respondents discussed two additional issues that affect user studies: sampling and getting IRB approval to conduct human subjects research. Sampling is related to the problem of recruiting representative research subjects. IRB approval relates to planning and scheduling research and preserving the anonymity of research subjects.

4.2.1. Issues in Sampling and Recruiting Research Subjects

Sampling is the targeting and selection of research subjects within a larger population. Samples are selected on the basis of the research purpose, the degree of generalization desired, and available resources. The sample ideally represents the entire target population. To be representative, the sample must have the characteristics of the target population, preferably in the proportion they are found in the larger population. To facilitate selecting representative samples, sampling units or groups are defined within a population. For example, in a university, the sampling units are often undergraduate students, graduate students, and faculty. Depending on the purpose of the study, the sampling units for a study of undergraduate students could be based on the school or college attended (for example, fine arts, engineering) or the class year (for example, freshmen/sophomore, junior/senior). Though research typically preserves the anonymity of research subjects, demographic data are captured to indicate the sampling unit and other nonidentifying characteristics of the participants considered relevant to the study (for example, faculty, School of Business).

Textbooks outline several different methods for selecting subjects from each sampling unit designated in a study:

  • Random sampling. To represent the target population accurately, a sample must be selected following a set of scientific rules. The process of selecting research subjects at random, where everyone in the target population has the same probability of being selected, is called random sampling. There are many methods for random sampling units within a larger population. Readers are advised to consult an expert or a textbook for instruction.
  • Quota sampling. Quota sampling is the process of using information about selected characteristics of the target population to select a sample. At its best, quota sampling selects a sample with the same proportion of individuals with these characteristics as exists in the population being studied. How well quota samples represent the target population and the accuracy of generalizations from quota sample studies depends on the accuracy of the information about the population used to establish the quota.
  • Convenience sampling. The process of selecting research subjects and sampling units that are conveniently available to the researcher is called convenience sampling. The results of studies conducted with convenience samples cannot be generalized to a larger population because the sample does not represent any defined population.

Two additional sampling methods might produce a representative sample, but there is no way to verify that the sample actually represents the characteristics of the target population without conducting a study of a representative (random) sample of the population and comparing its characteristics with those of the sample used in the initial study. These methods are as follows:

  • Purposive sampling. This entails using the researcher’s expertise to select research subjects and sampling units judged to be representative of the target population.
  • Snowball sampling. This process entails identifying a few research subjects who have the characteristics of the target population and asking them to name others with the relevant characteristics.
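
The contrast between random and quota sampling can be made concrete with a minimal Python sketch. The campus population figures and 60/30/10 split below are hypothetical, chosen only to illustrate the mechanics; real studies would draw from actual enrollment data.

```python
import random
from collections import Counter

# Hypothetical campus population, recorded only by sampling unit.
population = (["undergraduate"] * 6000 +
              ["graduate"] * 3000 +
              ["faculty"] * 1000)

def random_sample(pop, n, seed=0):
    """Simple random sample: every member of the target population
    has the same probability of being selected."""
    rng = random.Random(seed)
    return rng.sample(pop, n)

def quota_sample(pop, n, seed=0):
    """Quota sample: draw from each sampling unit in proportion to its
    share of the population (quotas rounded; in general, adjust so
    they sum exactly to n)."""
    rng = random.Random(seed)
    counts = Counter(pop)
    sample = []
    for unit, count in counts.items():
        quota = round(n * count / len(pop))
        members = [p for p in pop if p == unit]
        sample.extend(rng.sample(members, quota))
    return sample

print(Counter(quota_sample(population, 100)))
# → Counter({'undergraduate': 60, 'graduate': 30, 'faculty': 10})
```

A convenience sample, by contrast, would simply take whoever is available (for example, the first hundred people to walk past the reference desk), which is why its results cannot be generalized to a defined population.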

DLF libraries have used all of these sampling methods to select human subjects for user studies. For example, a library conducted a survey to assess journal collection use and need by mailing a survey to a statistically valid, random sample of faculty and graduate students. It used the characteristics of reference service users to target and select the sample for a survey about reference service. In rare cases, all the users of a service have been invited to participate in a study (for example, all the graduate students and faculty with assigned study carrels). In many cases, however, libraries conduct user studies with convenience samples that fall short of accurately representing the sampling units within the target population. Sometimes librarians provide the names of potential research subjects, which can skew the data toward experienced users.

Recruiting research subjects is so time-consuming that the emerging practice is to provide financial or other incentives to recruit enough volunteers to “take the temperature” of what is going on with users of particular library services, collections, or interfaces. Though providing incentives can bias the research results, many DLF respondents commented that some user feedback is better than none. Libraries are experimenting with providing different incentives. With surveys, the names of participants are gathered (apart from the survey data, to ensure anonymity), and one or more names are drawn to win cash or some other prize. Every student in a focus group or think-aloud protocol study might be given $10 or $20 or a gift certificate to the bookstore, library coffee shop, or local movie theatre. Often lunch is provided to recruit students or faculty to participate in focus groups. Some libraries are considering providing more substantial rewards, such as free photocopying. Recruiting faculty can be particularly difficult because the incentives that libraries can afford to offer are inadequate to get their interest. Holding a reception during which the research results are presented and discussed is one way to capture faculty participation.

DLF libraries prefer to have hundreds of people complete formal survey questionnaires, with respondents ideally distributed in close proportion to the representation of sampling units on campus. They conduct focus groups with as few as six subjects per sampling unit, but prefer eight to ten participants per group. Many DLF respondents were comfortable with Nielsen’s guideline of using four to six participants per sampling unit in think-aloud protocol studies. A few questioned the validity of Nielsen’s claims, referencing the “substantial debate” at the Computer-Human Interaction 2000 Conference about whether some information was better than none. Others questioned whether six to eight subjects are enough in a usability study in the library environment, where users come from diverse cultural backgrounds. Given the work being done on such things as how cultural attitudes toward technology and cultural perceptions of interpersonal space affect interface design and computer-mediated communication,7 how does or should diversity affect the design of digital library collections and services?
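
Nielsen’s guideline rests on the Nielsen-Landauer problem-discovery model, which estimates the fraction of usability problems found by n test users as 1 − (1 − L)^n, where L is the average per-user discovery rate (about 0.31 in their published data). The sketch below is an illustration of that model, not a figure from this report, and the per-user rate in any given library study may differ.

```python
def problems_found(n_users, per_user_rate=0.31):
    """Expected fraction of usability problems discovered by n_users,
    per the Nielsen-Landauer model: 1 - (1 - L)^n, with L ~= 0.31."""
    return 1 - (1 - per_user_rate) ** n_users

# Diminishing returns: each added participant uncovers fewer new problems.
for n in (1, 3, 5, 8, 15):
    print(f"{n:2d} users -> {problems_found(n):.1%} of problems found")
```

Under these assumptions, five users find roughly 84 percent of problems, which is the arithmetic behind “four to six participants per sampling unit.” The debate noted above turns on whether L is really that high for diverse user populations; a lower per-user rate would require more participants to reach the same coverage.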

Lack of a representative sample raises questions about the reliability and validity of data, particularly when studies are conducted with small samples and few sampling units. Using finer-grain sampling units and recruiting more subjects can increase the degree to which the sample is representative and address concerns about diversity. For example, instead of conducting one focus group with undergraduate students, a library could conduct a focus group with undergraduate students in each school or college in the university, or a focus group with undergraduates from different cultural backgrounds (for example, Asian, African-American, and Hispanic). The disadvantage of this approach is that it will increase the cost of the research.

DLF respondents indicated that they were willing to settle for a “good-enough” distribution of user groups, but were wrestling with how to determine and recruit a “good-enough” sample. There is inevitably a trade-off between the cost of recruiting additional research subjects and its benefits. Finding the appropriate balance seems to hinge on the goal of the assessment. Accuracy and the costs associated with it are essential in a rigorous experiment designed to garner precise data and predictive results, but are probably not essential when the goal is to garner data indicative and suggestive of trends. Focusing on the goal of identifying trends to help shape or improve user service could assuage much of the angst that librarians feel about the validity of their samples and the results of their research.

4.2.2. Issues in Getting Approval and Preserving Anonymity

Research must respect the dignity, privacy, rights, and welfare of human beings. Universities and other institutions that receive funding from federal agencies have IRBs that are responsible for ensuring that research will not harm human subjects, that the subjects have given informed consent, and that they know they may ask questions about the research or discontinue participating in it at any time. In providing informed consent, research subjects indicate that they understand the nature of the research and any risks to which they will be exposed by participating, and that they have decided to participate without force, fraud, deceit, or any other form of constraint or coercion.

DLF respondents were aware of IRB requirements. Some expressed frustration with their IRB’s turn-around time and rules. Others had negotiated blanket approval for the library to conduct surveys, focus groups, and protocols and therefore did not need to allow time to get IRB approval for each study.

To apply for IRB approval, libraries must provide the IRB with a copy of the consent form that participants will be required to read and sign, and a brief description of the following:

  • Research method
  • Purpose of the research
  • Potential risks and benefits to the research subjects
  • How the privacy and anonymity of research subjects will be preserved
  • How the data will be analyzed and applied
  • How, where, and for how long the data will be stored
  • Who will conduct the research
  • Who will have access to the data

On grant-funded projects, the signatures of the principal investigators are required on the application for IRB approval, regardless of whether they themselves will be conducting the human subjects research. A recent development requires completion of an online tutorial on human subjects research that culminates in certification. The certificate must be printed and submitted to the IRB.

Typically, IRB approval to conduct a particular study is granted for one year. If the year ends before the research with human subjects is completed, the researcher must follow the same procedures to apply for renewal. If the IRB does not grant blanket approval to conduct particular kinds of research, whether DLF libraries seek IRB approval for all user studies or just those funded by the federal government is a matter of local policy.

IRB guidelines, regulations, and other documents are available at the Web site of the Office for Human Research Protections, U.S. Department of Health and Human Services.

No DLF respondent addressed whether IRB approval was secured for routine transaction logging of use of its Web site, OPAC, ILS, proxy server, or local digital collections. Several respondents did indicate, however, that they are uncertain whether users know that the libraries are tracking these transactions. The issue is of some concern with authenticated access because it identifies individual users. If authentication data are logged, they can be used to reconstruct an individual’s use of the digital library. Even if the data are encrypted, the encryption algorithm can be compromised. Few libraries require users to authenticate before they can use public computers in the library, and access to remote electronic resources is typically restricted by IP address. However, authentication is required for all proxy server users and users with personalized library Web pages. Many, or most, libraries run a proxy server, and personalized Web pages are growing in popularity. Personalized Web pages enable libraries to track who has what e-resources on their Web pages and when they use these resources. Authentication data in proxy server logs can be used to reconstruct individual user behavior. Card-swipe exit data also identify individuals and can be used to reconstruct the date, time, and library they visited. The adoption of digital certificates will enable the identification and tracking of an individual’s use of any resource that employs the technology.

While library circulation systems have always tracked the identity of patrons who borrow traditional library materials, the association between the individual and the items is deleted when the materials are returned. Government subpoenas could force libraries to reveal the items that a patron currently has checked out, but the library does not retain the data that would be required to reveal a patron’s complete borrowing history. In the case of transaction logs, however, the association remains as long as the library maintains the log files, unless the library manipulates the files in some way (for example, by replacing individual user IDs with the school and status of the users). Without such manipulation, it is possible for libraries, hackers, or government agencies to track an individual’s use of digital library collections and services over whatever period of time the log files are maintained. While there could be good reason to track the usage patterns of randomly selected individuals throughout their years at the university, the possibility raises questions about informed consent and perhaps challenges the core value of privacy in librarianship. The effects of the recently passed Anti-Terrorism Act on the privacy of library use are not yet known.
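
The manipulation described above, replacing individual user IDs with the school and status of the users, can be sketched in a few lines of Python. The log records, field layout, and directory lookup here are hypothetical, intended only to show how a retained log can support aggregate analysis without allowing reconstruction of an individual’s history.

```python
# Hypothetical proxy-log records: (user_id, timestamp, resource).
log = [
    ("jdoe42", "2002-03-01T10:15", "/eresources/jstor"),
    ("asmith7", "2002-03-01T10:17", "/opac/search"),
    ("jdoe42", "2002-03-02T09:02", "/eresources/sciencedirect"),
]

# Hypothetical campus directory mapping IDs to coarse demographics.
directory = {
    "jdoe42": ("School of Business", "undergraduate"),
    "asmith7": ("School of Engineering", "faculty"),
}

def anonymize(records, directory):
    """Replace each user ID with (school, status) so the retained log
    still indicates which groups use which resources, but no longer
    links transactions to an identifiable individual."""
    anonymized = []
    for user_id, timestamp, resource in records:
        school, status = directory.get(user_id, ("unknown", "unknown"))
        anonymized.append((school, status, timestamp, resource))
    return anonymized

safe_log = anonymize(log, directory)
print(safe_log[0])
# → ('School of Business', 'undergraduate', '2002-03-01T10:15', '/eresources/jstor')
```

One caveat: coarse attributes are only as anonymous as the groups they describe. If a sampling unit is very small (for example, a one-person department), the school-and-status pair can itself identify the individual, so the granularity of the replacement categories matters.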


7 See, for example, Ess 2001.
