
1 BACKGROUND AND METHODOLOGY

Chapter 1 gives background on the MIRACLE Project, defines institutional repositories (IRs), and describes the methods MIRACLE Project staff used to conduct a census of IRs in U.S. academic institutions.

1.1       The Impetus for the MIRACLE Project’s Census of IRs in the United States

A considerable portion of the scholarly record is born digital, and some scholarship is produced in digital formats that have no physical, in-the-hand counterparts. The proliferation of digital scholarship raises serious and pressing issues about how to organize, access, and preserve it in perpetuity. The response of U.S. colleges and universities has been to build IRs to capture, preserve, and reuse the intellectual output of teaching, research, and service activities at their institutions. An IR is “a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members” (Lynch 2003) (see also Appendix F1).

The MIRACLE (Making Institutional Repositories a Collaborative Learning Environment) Project is investigating the implementation of IRs at academic institutions to identify models and best practices in IR administration, technical infrastructure, and access to digital collections. The chief objective of the project is to identify specific factors contributing to the success of IRs and effective ways of accessing and using IRs. The census is the first of several activities aimed at achieving project objectives. Other activities will study IR users, contributors, and staff through telephone interviews, case studies, personal interviews, observations, and experiments.

Originally, MIRACLE Project investigators proposed to survey operational IRs in North America; however, we were concerned that we would be duplicating the efforts of Charles Bailey and his University of Houston associates who were analyzing data from their Association of Research Libraries (ARL)-sponsored survey of member institutions at the same time we were making data-collection decisions for the MIRACLE survey (Bailey et al. 2006). Other surveys targeted specific user groups such as Coalition for Networked Information (CNI) members in the United States (Lynch and Lippincott 2005), CNI members abroad (van Westrienen and Lynch 2005), Canadian Association of Research Libraries (CARL)-member libraries (Shearer 2004), and early adopters of IR technology worldwide (Mark Ware Consulting 2004).

After examining these surveys’ results, MIRACLE Project investigators decided not to limit their efforts to a particular user group, membership, or affiliation, and not to restrict participation to institutions with an operational IR. Instead, we sought to cast our net broadly and fill a void by conducting a census of academic institutions in the United States about their involvement with IRs, including institutions that have not jumped on the IR bandwagon. Being more inclusive would increase our confidence that we could identify the wide range of practices, policies, and operations in effect at institutions where decision makers are contemplating, planning, pilot testing, or implementing IRs. At the same time, it would enable us to learn why some institutions have ruled out IRs entirely.

1.2            Obtaining a Mailing List of Academic Library Directors

The first task of MIRACLE Project staff was to obtain an electronic mailing list bearing the names and e-mail addresses of academic library directors and senior library administrators at U.S. educational institutions. A number of companies provide this information for a fee (for example, see American Library Association 2006). After examining their products and services, MIRACLE Project staff narrowed options to the following four companies or products: (1) Thomson-Peterson’s, (2) Market Data Retrieval, (3) American Library Directory Online, and (4) World Guide to Libraries Plus. After comparing these companies’ products with respect to such variables as the number of records with e-mail addresses available, scope, and price, as well as other advantages and disadvantages, we decided to purchase mailing lists from two vendors: (1) American Library Directory (ALD) and (2) Thomson-Peterson’s. Using ALD’s online database, we downloaded a comprehensive list (2,207 records) of all college and university main libraries in the United States (including U.S. protectorates). Because ALD’s online database did not provide e-mail addresses for specific individuals, we purchased a less comprehensive database from Thomson-Peterson’s that we used to add e-mail addresses to ALD data. After deleting community colleges and duplicates, we ended up with 2,147 e-mail addresses for the nationwide census.
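
The merge-and-deduplicate step described above can be sketched as follows. This is an illustrative example only, not the MIRACLE Project’s actual procedure; the file names and column names ("institution", "carnegie_class", "director_email") are hypothetical stand-ins for the proprietary ALD and Thomson-Peterson’s exports.

```python
import pandas as pd

# Hypothetical exports; the real vendor files had their own layouts.
ald = pd.read_csv("ald_main_libraries.csv")       # 2,207 college and university main libraries
peterson = pd.read_csv("thomson_petersons.csv")   # smaller list that includes director e-mail addresses

# Add e-mail addresses from the Thomson-Peterson's list to the ALD records,
# matching on institution name (a simplification; real matching needs manual review).
merged = ald.merge(peterson[["institution", "director_email"]],
                   on="institution", how="left")

# Drop community colleges and duplicate institutions, then keep rows with an e-mail address.
merged = merged[merged["carnegie_class"] != "Associate"]
merged = merged.drop_duplicates(subset="institution")
mailing_list = merged.dropna(subset=["director_email"])

print(len(mailing_list))  # the census ultimately used 2,147 addresses
```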

1.3            Conducting Comparative Analysis of Survey Software

To compare survey-software programs for administering our Web-based survey, MIRACLE Project staff signed up for free trials of 10 such software programs: SurveyMonkey, Zoomerang, Key Survey, SurveyConsole, EZQuestionnaire, iSalient, QuestionPro, Ridgecrest Surveys, SmartSurveys, and SuperSurvey. Staff also researched Flashlight Online, ScyWeb, and UM.Lessons. On the basis of pricing information, flexibility, and functionality, we narrowed the list to SurveyMonkey, Zoomerang, Key Survey, and UM.Lessons. Staff eliminated UM.Lessons and Key Survey from consideration because of the former’s limited flexibility and functionality and the latter’s cost.

MIRACLE Project staff’s decision to use SurveyMonkey instead of Zoomerang was based on the former program’s greater flexibility and functionality. Our purchase of a one-year professional subscription to SurveyMonkey would enable us to launch an unlimited number of surveys with an unlimited number of questions and to use its advanced features for the survey’s many complicated questions.

1.4            Drafting and Pretesting Survey Instruments

To draft survey instruments, MIRACLE Project investigators reviewed published and open-access literature on IRs through 2005 (see the MIRACLE Project’s bibliography for a list of relevant publications at http://miracle.si.umich.edu/bibliography.html), talked to colleagues, and asked advisory group members (see Appendix A) to review, comment on, and edit draft instruments. Because the investigators expected survey respondents to come from institutions that were at various stages of the IR effort, they could neither ask everyone the same questions nor ask questions in the same way. Advice from advisory group members resulted in these four categories of IR involvement: (1) no planning to date (NP), (2) planning only to date (PO), (3) planning and pilot testing one or more IR systems (PPT), and (4) public implementation of an IR system at the respondent’s institution (IMP). MIRACLE Project investigators drafted four different questionnaires based on these four categories of IR involvement.

Asking the same or similar questions in two or more questionnaires would enable investigators to make comparisons among institutions on the basis of the extent of their involvement with IRs. For example, here is a question about anticipated benefits of IRs that is worded a little differently depending on an institution’s involvement with IRs:

  • For NP respondents: How important do you think these anticipated benefits of IR would be to your institution?
  • For PPT and PO respondents: How important are these anticipated benefits of IR to your institution?
  • For IMP respondents: At the beginning of IR planning at your institution, how important did you think these anticipated benefits of IR would be to your institution?

Appendixes B, C, D, and E contain the MIRACLE Project questionnaires for NPs, POs, PPTs, and IMPs, respectively.

1.5            Setting Up Survey Administration Procedures and Protocol

Having so many institutions (2,147) in the census sample required MIRACLE Project staff to work out a detailed distribution plan. After pretesting a few different approaches, we decided to send an e-mail message to each institution’s academic library director or a senior administrator to tell them about the census and to ask them about the extent of their involvement with IRs. More specifically, we wrote, “Please tell me how you would characterize the current status of your institutional repository (IR).” We asked them to base their response on one of four categories: (1) no planning to date, (2) planning only to date, (3) both planning and pilot testing one or more IR systems, and (4) public implementation of an IR system at their institution.

On the basis of the person’s response, we replied with an e-mail message bearing a link to the appropriate Web-administered questionnaire (see Appendixes B, C, D, and E for NP, PO, PPT, and IMP questionnaires, respectively). We used SurveyMonkey’s list-management tool to send out initial survey links and to perform two subsequent follow-ups with individuals who had agreed to participate but who had failed to respond to our inquiries.
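
The routing logic is simple enough to sketch: a respondent’s self-reported IR status determines which of the four questionnaire links they receive. The links below are placeholders, not the MIRACLE Project’s actual SurveyMonkey URLs.

```python
# Map a respondent's self-reported IR status to the corresponding questionnaire link.
QUESTIONNAIRE_LINKS = {
    "no planning to date": "https://www.surveymonkey.com/s/NP_census",           # Appendix B
    "planning only to date": "https://www.surveymonkey.com/s/PO_census",         # Appendix C
    "planning and pilot testing": "https://www.surveymonkey.com/s/PPT_census",   # Appendix D
    "public implementation": "https://www.surveymonkey.com/s/IMP_census",        # Appendix E
}

def questionnaire_for(status: str) -> str:
    """Return the survey link matching a respondent's reported IR status."""
    return QUESTIONNAIRE_LINKS[status.strip().lower()]

print(questionnaire_for("Planning only to date"))  # link to the PO questionnaire
```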

Recruiting people to participate in the MIRACLE census in this way is the electronic version of what those in the sales world term a “cold call.” We sent prospective respondents e-mail messages with a substantive phrase in the “SUBJECT” line announcing our IR census and asked them to participate. People interested in IRs were probably more likely to open, read, and answer such a message, and ultimately to respond favorably about IRs on their completed questionnaires. Thus, MIRACLE census respondents may be more favorably inclined toward IRs than academic library directors and senior administrators generally because of how we recruited them.

1.6            Data Collection

MIRACLE Project staff conducted the nationwide IR census from April 19, 2006, through June 24, 2006. Data collection was not straightforward. When few recipients responded to our invitations and reminders, we discussed problems and brainstormed ways of solving them. For example, coprincipal investigator Elizabeth Yakel suggested replacing the original SUBJECT line in our e-mail messages, “IMLS Institutional Repositories Census,” with the catchier phrase, “Be Counted! National Institutional Repository Census.” This change did indeed result in a higher response rate.

Table 1.1 summarizes the six data collection rounds that were necessary to increase the survey’s invitational response rate to an acceptable level.

Table 1.1. Data collection rounds

| No. of invitations sent | Date | Cumulative invitation response rate* (No. / %) | Cumulative survey response rate† (No. / %) | Explanations and changes made to increase response rates |
|---|---|---|---|---|
| 2,147 | 4/19 to 4/26 | 172 / 9 | 89 / 5 | Invitations sent through Rieh’s e-mail account. Staff research 260 undeliverable messages. |
| 1,698 | 5/2 to 5/14 | 320 / 15 | 169 / 8 | Invitations sent through Markey’s e-mail account. Staff continue to research undeliverable messages. |
| 1,805 | 5/15 to 5/22 | 467 / 22 | 273 / 13 | Invitations sent through Markey’s e-mail account. Staff change SUBJECT line and invitation text. |
| 1,619 | 5/23 to 5/30 | 566 / 27 | 370 / 18 | Invitations sent through Markey’s e-mail account. |
| 1,511 | 5/31 to 6/7 | 627 / 30 | 420 / 20 | Invitations sent through Yakel’s e-mail account. |
| 1,446 | 6/8 to 6/24 | 676 / 32 | 500 / 24 | Invitations sent through Yakel’s e-mail account. Staff change SUBJECT line to announce the end of the census. Seven undeliverable messages. |
*Total number of people who responded to our invitation stating that they were willing to participate in the MIRACLE Project census.

†Total number of people who clicked on the SurveyMonkey link that MIRACLE Project staff sent to them in response to our invitation. Generally, these figures indicate how many people actually participated in the survey. Because some people who clicked on the link exited the survey without answering any questions, these percentages are inflated. After MIRACLE Project staff had removed empty and nearly empty response sets, deleted duplicates, etc., the census response rate was 20.8%.

Concurrent with sending e-mail invitations, MIRACLE Project staff e-mailed a link to the appropriate Web-administered questionnaire to respondents within three business days of their response to our invitation. When respondents failed to return the completed questionnaires, staff sent them up to two reminders. The text of these two e-mail responses (the first survey link e-mail and the reminder e-mail) remained fairly stable throughout the census. Staff took care to send e-mail correspondence from the same account (Rieh, Markey, or Yakel), matching the account to which each respondent had initially responded.

A large number of people who had agreed to participate in the census failed to follow through. To rectify this situation, MIRACLE Project staff drafted two e-mail messages—one for respondents who had not yet started filling out the questionnaire and a second for respondents who had answered some questions. The SUBJECT line of both e-mail messages was “Survey to Close 6/24 (Nationwide Census of Institutional Repositories).” In mid-June, staff sent these e-mails to selected respondents. Because these e-mail messages encouraged a number of respondents to complete questionnaires, staff sent a second message to those who had still not responded and changed the SUBJECT line to “5 Days Left: Last Chance to be Counted in Nationwide Census of Institutional Repositories.” Quite a few people filled out questionnaires after receiving the second message. When MIRACLE Project staff closed questionnaire administration in SurveyMonkey at 8 a.m. on June 25, 2006, the invitation response rate was 32%.

1.7            Data Analysis

After closing the census in SurveyMonkey, MIRACLE Project staff exported census data from SurveyMonkey into four Microsoft Excel files (one for each version of the survey—NP, PO, PPT, and IMP). Staff cleaned up census data, deleting the responses of people who did not sign the informed consent form as well as those of people who responded only to the informed consent form or to the one question about the number of IRs at their institution. Staff deleted empty questionnaires. They deleted multiple answer sets, keeping only the most comprehensive response sets from respondents. Staff deleted one entry that was submitted from a two-year college. This college had been sent an invitation because of an error in one of the mailing lists that we had purchased. After data cleanup had been completed, the census response rate was 20.8%.
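
The cleanup and response-rate calculation can be illustrated with a short script. This is a sketch under assumed column names ("respondent_id", "consent", plus answer columns), not the staff’s actual Excel-based workflow.

```python
import pandas as pd

# Combine the four SurveyMonkey exports (NP, PO, PPT, IMP); file names are hypothetical.
frames = [pd.read_excel(f) for f in ("np.xls", "po.xls", "ppt.xls", "imp.xls")]
answers = pd.concat(frames, ignore_index=True)
answer_cols = [c for c in answers.columns if c not in ("respondent_id", "consent")]

# Drop respondents who did not sign the informed consent form.
answers = answers[answers["consent"] == "yes"]
# Drop empty questionnaires (no answers beyond the consent form).
answers = answers.dropna(subset=answer_cols, how="all")
# Keep only the most complete response set per respondent.
answers["n_answered"] = answers[answer_cols].notna().sum(axis=1)
answers = (answers.sort_values("n_answered", ascending=False)
                  .drop_duplicates(subset="respondent_id"))

print(f"census response rate: {len(answers) / 2147:.1%}")  # 446 / 2,147 = 20.8%
```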

MIRACLE Project staff imported the cleaned-up census data into SPSS and calculated frequency tables for the responses to each question in each of the four survey versions. Using these SPSS calculations, staff created an Excel spreadsheet that depicted frequency tables side-by-side for each question across the four questionnaire versions. Staff also produced a Word document that shows respondents’ answers to open-ended questions.
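
A side-by-side frequency table of the kind described above can be produced in a few lines. The question and file names here are hypothetical; the point is simply that each version’s counts become one column of the combined table.

```python
import pandas as pd

versions = {"NP": "np_clean.xls", "PO": "po_clean.xls",
            "PPT": "ppt_clean.xls", "IMP": "imp_clean.xls"}

# One frequency table per questionnaire version for a shared question.
tables = {
    name: pd.read_excel(path)["q_benefits_importance"].value_counts()
    for name, path in versions.items()
}

# One column per version, one row per response category.
side_by_side = pd.DataFrame(tables).fillna(0).astype(int)
side_by_side.to_excel("frequencies_by_version.xlsx")
```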

MIRACLE Project staff used related data files to probe research questions in greater depth. For example, they downloaded a file from the Carnegie Foundation’s Web site that allowed them to determine whether census participants were representative of educational institutions in the United States (see Subchapter 2.2) (Carnegie Foundation 2006b).

1.8            Chapter 1 Summary

Institutional repositories are the response of U.S. colleges and universities to the problem of organizing, providing access to, and preserving scholarship that their learning communities produce in digital formats.

Originally, MIRACLE Project investigators proposed to survey operational IRs in North America; however, we were concerned that we would be duplicating previous surveys that targeted institutions with operational IRs. We decided to cast our net broadly and to conduct a census of American academic institutions about their involvement with IRs. Census results would fill a void—yielding data and analyses about educational institutions that are and are not involved with IRs.

MIRACLE Project staff purchased mailing lists from two vendors: (1) ALD, and (2) Thomson-Peterson’s. After deleting community colleges and duplicates, we ended with a total of 2,147 e-mail addresses for the nationwide census.

Staff pilot-tested several Web-administered software programs and chose SurveyMonkey because of its flexibility and functionality for the complex questions in MIRACLE questionnaires.

Project investigators drafted questionnaires and received feedback from advisory group members regarding questions and response categories. On the basis of their input, staff developed four separate questionnaires based on respondents’ extent of involvement with IRs: (1) no planning (NP), (2) planning only (PO), (3) planning and pilot testing (PPT), and (4) implementation (IMP). (See Appendixes B, C, D, and E for NP, PO, PPT, and IMP questionnaires, respectively.)

Data collection took place from April 19, 2006, through June 24, 2006. MIRACLE Project staff sent invitations to participate in the census via e-mail to each institution’s academic library director or a senior administrator. The e-mail explained the census and asked them about the extent of their involvement with IRs. We replied via e-mail to those who responded to our request with a link to the appropriate Web-administered questionnaire.

Low response rates to our invitation resulted in changes in the text of our reminder messages, especially the wording of the message’s SUBJECT line. After data collection ended on June 24, 2006, MIRACLE Project staff cleaned up census data, for example, deleting empty questionnaires or responses of people who did not sign the informed consent form. After data cleanup had been done, the census response rate was 20.8%. MIRACLE Project staff then proceeded with data analysis and reporting activities.

2                THE INSTITUTIONS AND THE PEOPLE INVOLVED WITH IRs

Chapter 2 examines the extent to which certain types of academic institutions are involved with institutional repositories (IRs) and the nature of people’s involvement with IRs at these institutions.

2.1            Census Respondents

[Figure 2.1. Extent of IR involvement by MIRACLE census respondents]

Of the 2,147 academic library directors and senior library administrators MIRACLE Project staff contacted, 446 participated in the census, a response rate of 20.8%. Characterizing the extent of their involvement with IRs, 236 (52.9%) respondents have done no IR planning (NP) to date, 92 (20.6%) respondents are only planning (PO) for IRs, 70 (15.7%) respondents are actively planning and pilot testing IRs (PPT), and 48 (10.8%) respondents have implemented (IMP) an operational IR. Figure 2.1 is a graphical representation of the extent of IR involvement by MIRACLE Project census respondents.

When MIRACLE Project staff contacted library directors and senior library administrators by e-mail, we asked them to pass our questionnaire to staff who were most familiar with their institution’s involvement with IRs. The questionnaires concluded by asking respondents to identify their positions at their institution. Figure 2.2 shows the titles of those who completed questionnaires.

Almost three-quarters of respondents are library directors; the second- and third-largest groups are library staff (10.2%) and assistant or associate library directors (7.9%). Library directors thus predominate among those who responded to the MIRACLE Project staff’s request to participate in the census. We deliberately chose library directors or senior library administrators as the initial contact at academic institutions because it is difficult to identify the key person(s) involved with IRs at academic institutions and to find address lists that simplify and streamline the task of contacting them. For example, we could have contacted chief information officers (CIOs) instead of librarians, but academic institutions do not apply the CIO moniker consistently, nor do all institutions have such a position. The same probably applies to archivists. Even more complicated would have been contacting middle management in academic institutions: deans, directors, chairs, and heads of academic units, research centers, and institutes. Because every academic institution is likely to employ a librarian, we contacted librarians in top management positions to participate in our census.

[Figure 2.2. Positions of the respondents who completed questionnaires]

Our decision to contact librarians may have caused us to miss academic, research, and service units that have implemented or are planning to implement an IR. To some extent, respondents’ answers to a census question about how many IRs are available at their institutions may shed light on what we missed (see Subchapter 5.1 for answers to this question). MIRACLE Project investigators readily admit that census results may be biased toward libraries because our initial contact was the college or university librarian.

Table 2.1 shows a breakdown of census respondents based on the extent of their institutions’ involvement with IRs. At NP institutions, about 90% of respondents are library directors. Percentages in other named-position categories are very small. Of the four people classed in “Other,” three are combined library directors-CIOs, and one is head of digital library programs.

Table 2.1. Respondents’ positions based on the extent of IR involvement at their institutions

| Respondent position | NP No. | NP % | PO No. | PO % | PPT No. | PPT % | IMP No. | IMP % | Total No. | Total % |
|---|---|---|---|---|---|---|---|---|---|---|
| Library director | 194 | 90.6 | 57 | 71.3 | 29 | 48.3 | 8 | 21.6 | 288 | 73.7 |
| Library staff | 5 | 2.3 | 11 | 13.8 | 8 | 13.3 | 16 | 43.3 | 40 | 10.2 |
| Assistant or associate library director | 5 | 2.3 | 0 | 0.0 | 16 | 26.7 | 10 | 27.0 | 31 | 7.9 |
| Archivist | 4 | 1.9 | 3 | 3.7 | 2 | 3.3 | 0 | 0.0 | 9 | 2.3 |
| CIO | 1 | 0.5 | 5 | 6.2 | 1 | 1.7 | 1 | 2.7 | 8 | 2.0 |
| VP or provost | 1 | 0.5 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 1 | 0.3 |
| Other | 4 | 1.9 | 4 | 5.0 | 4 | 6.7 | 2 | 5.4 | 14 | 3.6 |
| Total | 214 | 100.0 | 80 | 100.0 | 60 | 100.0 | 37 | 100.0 | 391 | 100.0 |

At PO institutions, the percentage of respondents who are library directors (71.3%) is smaller than the percentage for NPs. Contacts at POs passed questionnaires to library staff (13.8%), CIOs (6.2%), and archivists (3.7%). Of the four people classed in “Other,” two hold combined positions (i.e., library director-CIO and library director-archivist), one is assistant director of administrative services, and one is a faculty member affiliated with the library.

At PPT institutions, the percentage of respondents who are library directors (48.3%) is smaller than the percentages for POs and NPs. Contacts at PPTs mostly passed questionnaires to associate or assistant library directors (26.7%), library staff (13.3%), and archivists (3.3%). They rarely passed questionnaires to CIOs (1.7%). The four people classed in “Other” hold different positions: two are digital library directors, one is the associate dean for research, and one is the assistant director of library campus support systems.

At IMP institutions, the percentage of respondents who are library directors drops to 21.6%. Larger percentages of library staff generally (43.3%) and associate or assistant library directors (27.0%) are completing questionnaires. The two people classed in “Other” are a director of special collections and archives and a librarian working as a temporary consultant on the IR.

As shown in Table 2.1, library-director percentages decrease as IR activity increases. Most likely, directors passed on our request to complete questionnaires to staff who would be more knowledgeable about IR activity at their institution. Such activity increasingly involves library staff generally at PO institutions, and both library staff generally and assistant and associate directors at PPT and IMP institutions.

2.2            Using CCHE to Characterize Participating Institutions

Scrutinizing the types of institutions that participated in the MIRACLE census made project investigators wonder whether certain types of institutions are more or less likely to be involved with IRs. To characterize the institutions that participated in the MIRACLE Project, investigators turned to the Carnegie Classification of Institutions of Higher Education (CCHE). CCHE was derived from empirical data on colleges and universities and has been updated five times since it was originally published for use by researchers in 1973. CCHE is “the leading framework for describing institutional diversity in U.S. higher education [and] … has been widely used in the study of higher education, both as a way to represent and control for institutional differences, and also in the design of research studies to ensure adequate representation of sampled institutions, students, or faculty” (Carnegie Foundation 2006a).

Table 2.2 lists classes of CCHE institutions that MIRACLE Project investigators invited to participate in the nationwide census. Because we limited the census to institutions awarding four-year baccalaureate degrees or higher, missing from Table 2.2 is the CCHE “Associate” class for institutions awarding two-year associate degrees.

Table 2.2. Classes of CCHE institutions invited to participate in the MIRACLE census

| Class | Definition | Subclasses |
|---|---|---|
| Doctorate-granting universities* | Institutions that award at least 20 doctoral degrees per year (excluding doctoral-level degrees such as the JD, MD, PharmD, and DPT, which qualify recipients for entry into professional practice). | RU/VH: Research Univs. (very high research activity); RU/H: Research Univs. (high research activity); DRU: Doctoral/Research Univs. |
| Master’s colleges and universities* | Institutions that award at least 50 master’s degrees per year. | Master’s/L: Master’s Colleges and Univs. (larger programs); Master’s/M: Master’s Colleges and Univs. (medium programs); Master’s/S: Master’s Colleges and Univs. (smaller programs) |
| Baccalaureate colleges* | Institutions where baccalaureate degrees represent at least 10% of all undergraduate degrees and that award fewer than 50 master’s degrees or fewer than 20 doctoral degrees per year. | Bac/A&S: Baccalaureate Colleges—Arts and Sciences; Bac/Diverse: Baccalaureate Colleges—Diverse Fields; Bac/Assoc: Baccalaureate/Associate’s College |
| Special-focus institutions | Institutions awarding baccalaureate or higher-level degrees where a high concentration of degrees is in a single field or set of related fields. | Examples are theological seminaries, Bible colleges, medical schools, engineering schools, business schools, and law schools. |
| Tribal schools | Colleges and universities that are members of the American Indian Higher Education Consortium. | The majority are associate’s colleges, but there are a few baccalaureate colleges. |

* Excludes special-focus institutions and tribal colleges.

To emphasize the research-intensive nature of census institutions, MIRACLE Project staff broke up the “Doctorate-granting Universities” CCHE class into two classes: (1) “Research universities,” bearing institutions from the two Research Universities (RU/VH and RU/H) CCHE subclasses; and (2) “Doctoral universities,” bearing institutions from the “Doctoral” (DRU) CCHE subclass. Figure 2.3 shows the percentages of institutions participating in the MIRACLE Project census by CCHE classes. It also distributes the population of U.S. academic institutions into these same CCHE classes.

High percentages of CCHE institutions come from the Special Focus (32.0%), Baccalaureate (29.3%), and Master’s (27.3%) classes. The percentages of MIRACLE census respondents from master’s (37.2%) and baccalaureate (27.6%) institutions are roughly comparable to these population shares, but special-focus institutions are underrepresented among respondents (11.7%). Research universities are overrepresented: they account for 18.6% of MIRACLE census respondents but only 7.9% of CCHE institutions.

MIRACLE Project staff tallied the four types of MIRACLE census respondents according to their CCHE class. The results (see Table 2.3) reveal what types of CCHE institutions are more and less likely to implement IRs.

Research universities vastly outnumber all other CCHE classes involved with IMP and PPT. A few institutions in the other CCHE classes are implementing IRs, but most are in the PO stage or are not involved with IRs at all. Most NP and PO respondents come from master’s and baccalaureate institutions.

Table 2.3. CCHE classes reveal the extent of IR involvement by census respondents

| CCHE classes | NP No. | NP % | PO No. | PO % | PPT No. | PPT % | IMP No. | IMP % | Total No. | Total % |
|---|---|---|---|---|---|---|---|---|---|---|
| Research univs. | 13 | 5.5 | 14 | 15.2 | 26 | 37.1 | 30 | 62.5 | 83 | 18.6 |
| Doctoral univs. | 7 | 3.0 | 7 | 7.6 | 3 | 4.3 | 1 | 2.1 | 18 | 4.0 |
| Master’s | 103 | 43.6 | 32 | 34.8 | 22 | 31.5 | 9 | 18.8 | 166 | 37.2 |
| Baccalaureate | 79 | 33.5 | 29 | 31.5 | 10 | 14.3 | 5 | 10.4 | 123 | 27.6 |
| Special focus | 32 | 13.6 | 9 | 9.8 | 8 | 11.4 | 3 | 6.2 | 52 | 11.7 |
| Tribal | 1 | 0.4 | 0 | 0.0 | 0 | 0.0 | 0 | 0.0 | 1 | 0.2 |
| Unclassified* | 1 | 0.4 | 1 | 1.1 | 1 | 1.4 | 0 | 0.0 | 3 | 0.7 |
| Total | 236 | 100.0 | 92 | 100.0 | 70 | 100.0 | 48 | 100.0 | 446 | 100.0 |

*   Three institutions are unclassified because they responded with two or more institutions in partnership-like arrangements.

2.3            Positions of the People Involved with IRs

Questionnaires asked census respondents about the people involved in their institutions’ IR efforts. Specifically, respondents from PO, PPT, and IMP institutions were asked, “How active were people in the following positions in terms of leading the charge to get involved with IRs at your institution?” and respondents from NP institutions were asked, “How active do you think that the people in these positions would have to be to light the spark for IR planning at your institution?” Respondents chose from a list of 13 positions or could write in a response for “Other.”

To simplify results, MIRACLE Project staff assigned weights to response categories as follows: (+2) very active; (+1) somewhat active; (0) no opinion, don’t know, or not applicable; (-1) somewhat inactive; and (-2) very inactive. Staff then totaled the weights for each position and used the totals to rank order all the positions. Table 2.4 uses IMP ranks to order top- (1 to 4), middle- (5 to 8), and bottom-ranked (9 to 13) positions.
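
Concretely, the scoring works as in the following sketch; the response counts are made up for illustration and are not census data.

```python
# Weights assigned to the activity-level response categories.
WEIGHTS = {"very active": 2, "somewhat active": 1,
           "no opinion": 0, "don't know": 0, "not applicable": 0,
           "somewhat inactive": -1, "very inactive": -2}

def position_score(response_counts: dict) -> int:
    """Total weight for one position: sum of weight x number of respondents."""
    return sum(WEIGHTS[response] * n for response, n in response_counts.items())

# Two positions rated by a handful of hypothetical respondents.
scores = {
    "Library director": position_score({"very active": 30, "somewhat active": 10, "very inactive": 1}),
    "Graduate students": position_score({"somewhat active": 5, "somewhat inactive": 12, "very inactive": 8}),
}

# Rank order positions from highest to lowest total weight (rank 1 = most active).
for rank, position in enumerate(sorted(scores, key=scores.get, reverse=True), start=1):
    print(rank, position, scores[position])
```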

Table 2.4. Positions of people involved in the IR effort

| Top-ranked positions (1 to 4) | NP | PO | PPT | IMP |
|---|---|---|---|---|
| Library director | 1 | 1 | 1 | 1 |
| Assistant library director(s) | (11)† | 2T* | 2 | 2 |
| Library staff member(s) | (5) | 2T | 3 | 3 |
| A particular faculty member | (10) | (6) | 4 | 4 |

| Middle-ranked positions (5 to 8) | NP | PO | PPT | IMP |
|---|---|---|---|---|
| Institution’s archivist | 7 | (4) | 5 | 5 |
| Institution’s vice president or provost | (2) | 8T | (9) | 6 |
| Staff at a library network, consortium, or other affiliated group | 6 | 5 | 6 | 7 |
| Faculty members generally | (3) | 8T | 8 | 8 |

| Bottom-ranked positions (9 to 13) | NP | PO | PPT | IMP |
|---|---|---|---|---|
| Institution’s chief information officer | (4) | (7) | (7) | 9 |
| Faculty governance | (8) | 12 | 12 | 10 |
| Graduate students | 12 | 10 | 10 | 11 |
| Institution’s president or chancellor | 9 | 13 | 13 | 12 |
| Undergraduate students | 13 | 11 | 11 | 13 |

†  Parentheses indicate NP, PO, and PPT positions that deviated from IMP top, middle, or bottom ranks.

*   T indicates a ranked position that tied another position’s weight.

Generally, PO, PPT, and IMP respondents agree about top-, middle-, and bottom-ranked positions. For PO, PPT, and IMP respondents, the three top-ranked active positions are (1) library director, (2) assistant or associate library director(s), and (3) library staff member(s). Students at all levels, faculty governance, presidents, and chancellors are not necessarily active in IR efforts.

NP respondents agree with those in the other three groups that the highest level of activity comes from the library director. To light the spark for the IR effort at NP institutions, support must come not only from top positions in the library but also from other leaders at the institution (e.g., the vice president or provost, or CIO) and from faculty members generally.

The questionnaires allowed respondents to give open-ended responses to this question. Most responses are unique but a few overlap. Respondents from PO, PPT, and IMP institutions say the following people are very active:

•               director, staff, and/or advisory committee from the institution’s instructional technology unit (three responses)

•               faculty at the institution’s Office of Undergraduate Research Initiatives

•               global subject discipline research committees

•               internal and external volunteers such as library school students and visiting librarians

•               public relations staff

•               strategist from the institution’s Academic Computing

•               curriculum technology staff

•               technology-assisted learning staff

At NP institutions, people in positions that would have to be very active in an IR effort are again external to the library:

•               director of the institution’s instructional technology unit

•               academic deans

•               museum director

•               Web developers

•               systems staff

An examination of open-ended responses reveals in retrospect that we should have included a response category for “instructional technology staff” because it might have figured among the top-ranked response categories at PO, PPT, and IMP institutions. Other response categories on our list should have been “academic computing” and “academic deans.”

2.4            Number of People Involved with IRs

Questionnaires asked PO, PPT, and IMP respondents how many people were involved in their institutions’ IR efforts. The overall average is 7.2 people. At PO, PPT, and IMP institutions, averages are 6.0, 8.4, and 7.8 people, respectively.

These are merely counts of the number of people involved in the IR effort. MIRACLE Project investigators also wanted to ask census respondents about full-time equivalents (FTEs). However, respondents who pretested MIRACLE questionnaires expressed difficulty generating exact FTE numbers so we deleted questions about FTEs.

Figure 2.4 presents results in five-person ranges. At 89.8% of institutions in the PO stage, no more than 10 people are involved. On average, the number of people involved increases when institutions move to the PPT stage and then decreases in the IMP stage. At some institutions, more than 20 people are involved in the IR effort at the PPT stage. Although numbers from the ARL SPEC Kit are substantially higher (Bailey et al. 2006, 15), both the SPEC Kit numbers and our numbers show a downward trend between the PPT and IMP stages (see Appendix F4).

2.5            Positions of People Involved with IRs

The questionnaires asked who is leading IR planning, planning and pilot testing, and implementation at PO, PPT, and IMP institutions. Table 2.5 gives the results.

Table 2.5. Positions of people leading the IR effort

| Position leading the IR effort | PO No. | PO % | PPT No. | PPT % | IMP No. | IMP % |
|---|---|---|---|---|---|---|
| Library director | 46 | 54.7 | 18 | 28.6 | 13 | 31.7 |
| Library staff member | 12 | 14.3 | 15 | 23.8 | 14 | 34.2 |
| Assistant or associate library director | 3 | 3.6 | 13 | 20.6 | 11 | 26.8 |
| CIO | 4 | 4.8 | 1 | 1.6 | 1 | 2.4 |
| Archivist | 5 | 5.9 | 2 | 3.2 | 0 | 0.0 |
| Faculty member in an academic unit | 4 | 4.8 | 1 | 1.6 | 0 | 0.0 |
| No committee or committee chair has been charged | 6 | 7.1 | 1 | 1.6 | 0 | 0.0 |
| Other | 4* | 4.8 | 12† | 19.0 | 2‡ | 4.9 |
| Total | 84 | 100.0 | 63 | 100.0 | 41 | 100.0 |

* Team effort (2); consortium (1); duo effort: archivist and library director (1).

† Director of instructional technology (3); duo effort (3), for example, library director and CIO; vice president or associate dean for research (2); team effort (2); consortium (1); digital asset management committee (1).

‡ Consortium (1); director of special collection and archives (1).

Generally, people in library positions lead at all stages of the IR effort. In the PO stage, the library director is in the lead. The library director relinquishes that role when the IR effort reaches the PPT and IMP stages, in most cases to a particular library staff member or an assistant or associate library director. If archivists, CIOs, or faculty members from academic units are in the lead, the IR effort is in the planning stage. Write-in responses reveal that staff from two or more units may share the lead, especially during the PPT stage; for example, the associate librarian leads planning and the CIO leads pilot testing.

Questionnaires for PO, PPT, and IMP institutions asked respondents what positions IR committee members held. Figure 2.5 depicts the percentages of respondents choosing from 13 positions listed on their questionnaires. The figure uses lines to connect the three percentages, beginning with PO and ending with IMP. It is not a timeline because different people completed the PO, PPT, and IMP questionnaires. Presenting results in this way is helpful because it reveals the following dynamics of committee membership:

•               Generally, IR committees are more inclusive during the PPT stage and less inclusive during the PO and IMP stages.

•               The likelihood that staff from the vice president’s or provost’s office are on IR committees decreases from the PO stage to the PPT stage, while people in all other positions are more likely to be members of IR committees.

•               The likelihood that library staff and assistant or associate library directors are on IR committees increases from stage to stage, while people in all other positions are less likely to be members of IR committees as IR work continues.

•               Faculty members are more likely to be involved in the conceptual stages of planning the IR; their involvement decreases as the IR becomes operational.

Especially nonrepresentative at the IMP stage are CIO staff, faculty members, and staff from the office of the vice president or provost. Not included in Figure 2.5 are percentages for five positions—graduate students, undergraduate students, and staff from the office of the president or chancellor, from the CIO’s office, and from the institution’s legal counsel—because less than 10% of respondents observed their participation on IR committees.

Many respondents wrote in unique staff or management positions not included in the list.

At PO institutions the write-in positions are

•               three for academic computing and three for instructional technology

•               two for consortium

•               one each for alumni relations, art-slide curator, center for teaching, communications, development, enrollment services, external affairs, dean of graduate studies, staff photographer, student affairs, and university press

At PPT institutions the write-in positions are

•               four for instructional technology

•               two for consortium

•               one each for academic computing, art-collections curator, art-slide curator, and dean of graduate studies

At IMP institutions the write-in positions are

•               three for instructional technology

•               one each for academic computing, digital library program, health sciences center, media services, university press, and the college-level Web content editor

In retrospect, we should have included response categories for “instructional technology” and “academic computing” because several respondents volunteered them.

2.6            The Responsibility for the IR

Questionnaires for IMP and PPT respondents asked what percentage of the responsibility for an operational IR is given to various campus units, and the questionnaire for PO respondents asked what percentage should be given to various campus units. All three questionnaire versions listed the same units and respondents could write in units that were not listed. MIRACLE Project staff programmed SurveyMonkey to force respondents to enter percentages that added to 100%. Figure 2.6 gives the results.
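
The sum-to-100% constraint is straightforward to express. The following check is illustrative only (the unit labels and numbers are made up), not SurveyMonkey’s internal validation.

```python
def valid_allocation(percentages: dict, tolerance: float = 0.01) -> bool:
    """Accept a set of responsibility percentages only if they add up to 100."""
    return abs(sum(percentages.values()) - 100) <= tolerance

allocation = {"library": 40, "archives": 12, "central computing": 12,
              "various academic units": 12, "CIO's office": 6, "other": 18}
print(valid_allocation(allocation))  # True: 40 + 12 + 12 + 12 + 6 + 18 = 100
```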

During planning, respondents share responsibility for the IR, with the library taking about 40% of the responsibility; archives, central computing, and various academic units sharing about 12% of the responsibility; and the CIO’s office taking about 6%. During PPT, the responsibilities of the library and various academic units increase while others’ responsibilities decrease. The increase for academic units during the PPT phase probably entails early adopters who are contributing to the IR in a pilot test. During IMP, the library shoulders almost all of the responsibility for the IR. Questionnaires should have included a response category for “consortium” because several write-ins named their consortium as the unit taking most of the responsibility for the IR.

2.7            Involvement with IRs

Questionnaires asked respondents at PO, PPT, and IMP institutions how long they have been involved with IRs. PO institutions average 12 months, PPT institutions average 21.3 months, and IMP institutions average 31.5 months.

Figure 2.7 shows responses in 12-month ranges. About 70% of PO institutions have been involved with IRs for 12 months or fewer. Comparable percentages of PPT (77.6%) and IMP (70.8%) institutions have been involved with IRs for 24 or fewer months and 36 or fewer months, respectively. About 15% of IMP institutions have been involved with IRs for more than four years.

2.8            Chapter 2 Summary

Of the 2,147 academic library directors or senior library administrators MIRACLE Project staff contacted, 446 participated in the census—a response rate of 20.8%. A little more than half of respondents have done no IR planning to date, about 20% are planning for IRs, about 15% are actively planning and pilot testing IRs, and a little more than 10% have implemented an operational IR (see Figure 2.1).

MIRACLE Project investigators used the CCHE to determine the types of institutions that are more or less likely to be involved with IRs. “Research universities” vastly outnumber all other CCHE classes involved with IMP and PPT (see Table 2.3). Most NP and PO respondents come from master’s and baccalaureate institutions.

Census respondents in the PO, PPT, and IMP stages agree on the positions of people most involved with IRs at their institution. They are the library director, assistant or associate library director(s), and library staff member(s) (see Table 2.4). To light the spark for the IR effort at NP institutions, support must come not only from the library director but also from other leaders at the institution, including the vice president or provost and CIO. Faculty members generally should also be active.

The number of people involved in the IR effort averages 7.2 overall but varies a little during the IR implementation process. PO, PPT, and IMP institutions average 6.0, 8.4, and 7.8 people, respectively (see Subchapter 2.4). The PPT stage is most inclusive, involving 20 or more people at times (see Figure 2.4).

In terms of the person leading the IR effort, the library director takes the lead in the planning stage but relinquishes it, in most cases, to one particular staff member or an assistant-associate library director in the PPT and IMP stages (see Table 2.5). If archivists, CIOs, and faculty members from academic units are in the lead, the IR effort is probably in the planning stage.

IR committee membership waxes and wanes depending on the particular phase of the IR project (see Figure 2.5). IR committees are most inclusive during the PPT stage and less inclusive during the PO and IMP stages. The likelihood that library staff and assistant or associate library directors are on IR committees increases from stage to stage while people in all other positions are less likely to be members of IR committees as work proceeds.

During planning, respondents share responsibility for the IR, with the library taking about 40% of the responsibility; archives, central computing, and various academic units sharing about 12% of the responsibility; and the CIO’s office taking about 6% (see Figure 2.6). During planning and pilot testing, the responsibilities of the library and various academic units increase while others’ responsibilities decrease. The increase for academic units during the PPT phase probably entails early adopters who are contributing to the IR in a pilot test. At implementation, the library shoulders almost all of the responsibility for the IR.

Asked how long their institutions have been involved with IRs, PO institutions average 12 months, PPT institutions average 21.3 months, and IMP institutions average 31.5 months (see Figure 2.7). Of the total 48 IMP institutions in the MIRACLE census, seven (14.6%) have been involved with IRs for more than four years.
