by Anne R. Kenney, Cornell University Library
This conference on Collections, Content, and the Web brings together leaders from the museum and library communities to consider how the Web has affected the way we go about fulfilling our cultural mission. In this paper, I will address four topics that relate this technology to institutional responsibility, opportunity, and cost. My underlying argument is that cultural institutions face a point of critical transition. Over the past decade, they have come to appreciate the value of digital efforts to extend their reach. They must now appreciate that digitization is a normal part of doing business, one that is worthy of commanding its share of institutional resources.
Digital Collections Are Institutional Assets
As a normal part of doing business, institutions must create and manage their digital collections properly to ensure their long-term value and utility and to protect the investment that has been made in them. Although no universally endorsed guidelines or standards have been established for digital conversion of cultural resources, there is a growing belief in the value of creating “digital masters” that are rich enough to be useful over time in the most cost-effective manner. This position presumes that conversion requirements will be set higher than what is necessary to meet immediate needs and higher than what current technical environments can fully exploit. Michael Lesk and others have noted the economics of converting once (or, at least, only once a generation) and producing a sufficiently rich image to avoid the expense of reconverting at a later date when technological advances either require or can effectively use a richer digital file (Lesk 1990). This economic justification is particularly compelling given that the labor costs associated with identifying, preparing, inspecting, and indexing digital information far exceed scanning costs.
Institutional investments in creating high-quality digital masters are rewarded in the area of access and use. The library and museum communities are expressing a growing desire to develop cultural heritage resources that not only offer the broadest-possible use but also are comparable and interoperable across disciplines, user groups, and institutional types (NINCH 1999). Adopting a consistent approach facilitates integration between collections of images that artists and photographers are creating in digital form (the “born digital”) and the “born-again” digital files that institutions create from their retrospective holdings. Peter Galassi, chief curator of photography at the Museum of Modern Art (MoMA), suggests creating a high-end digital master that is “purpose blind” (Sullivan 1998). Once created, the archival master can then be used to create derivatives to meet a variety of current and future users’ needs. The quality, utility, and expense of various derivatives (e.g., for publication, image display, computer processing) will be directly affected by the quality of the initial scan.
In addition to the arguments for the economic advantages of converting once and for the creation of purpose-blind masters, preservation is the third main argument that is advanced for investing in rich digital masters. Digital files can be created to replace or reduce the use of deteriorating or vulnerable originals if the digital surrogates offer accurate and trusted representations.
But we do not decrease the preservation problem by relying on digital information; we only increase it. As Terry Kuny put it (1998), “Being digital means being ephemeral.” Digital files must be created in a consistent and well-documented manner to make them worthy candidates for long-term retention. Disposition decisions should be based on continuing value and functionality, not limited by technical decisions that were made at the point of conversion or anywhere else along the digitization chain. We must appreciate how decisions that are made at the point of capture can affect our ability to manage, preserve, and use our digital collections.
Some guiding principles for safeguarding institutional assets include the following:
- Invest in the selection and creation of digital resources that have a high probability of use and reuse over time.
- Address preservation concerns from the ground up, including adequate quality capture and review; requisite metadata; and the use of standard, well-supported technologies. Unless these issues are addressed at the point of creation, “There is little prospect of archiving image resources that will survive technological change.” (Ester 1996; see also Day 1998; NISO, CLIR, RLG 1999)1
- Do not risk the master files by applying short-term solutions to short-term problems (many of today’s constraints will not be tomorrow’s, and we should avoid building an approach that becomes quickly outdated or superseded).
- Establish a social security fund for digital files from institutional resources (digital assets must receive perpetual care, which requires ongoing resource commitment).
Digital Collections Increase Patron Use, Which Places New Demands on Cultural Repositories
Cultural institutions experience incredible responses to digital resources that dwarf the use of their physical counterparts. The New York Public Library reports 10 million online hits a month, as opposed to the 50,000 books served at 42nd Street, and the Library of Congress transmitted nearly 347 million files in the first eight months of 1999 (Darnton 1999). These raw figures are not indicative of the qualitative use of this material; nonetheless, the ability to exponentially extend access to resources is compelling, particularly for a museum, where a very small percentage of the total collection is ever on view at any one time.
Increased use is a double-edged sword, however, placing inordinate demands on resources of all kinds. Simply accommodating so many users requires institutions to support extremely powerful access systems. Peter Hirtle has noted the experience of the Church of Jesus Christ of Latter-day Saints, which in 1999 announced free access to many of its genealogical databases. Demand far exceeded expectations. The site had been built to handle 25 million hits a day, five times the anticipated use level. But in the first few weeks after it was opened to the public, the site recorded at least 40 million hits a day, and another estimated 60 million hits a day were turned away (Church of Jesus Christ of Latter-day Saints 1999).
A growing (and demanding) secondary clientele can tax staff resources. At Cornell, the Making of America Web site, consisting of 19th-century journals and monographs, receives 4,000 hits a day. A large share of the users is made up of non-Cornellians, who expect the digital library to act just like a regular library, replete with basic services. As the system becomes more stable, user requests have less to do with system difficulties and more to do with content inquiries, which often represent the interests of a general, rather than a scholarly, audience. Such questions as “What are my 1890s Harper’s magazines worth?” make us feel a little more like an auction site than an educational site. Cornell began its digital library a decade ago under the rubric “any time, any place,” and today must address the question of “anybody?”
The issues raised by user response to digital collections lead to the last two points I want to address: overcoming barriers to Web use and financing the enterprise.
Institutions Must Overcome Technical Barriers to Effective Use on the Web
Various user studies have concluded that all researchers expect the following things from displayed digital images:
- fast retrieval,
- acceptable quality, and
- added functionality.
Of course, they want many other things, too, such as the ability to print, to manipulate and annotate images, and to compare and contrast images. Increasingly, they want specialized services. In providing digital access, conflicts inevitably arise regarding what a user may want, what is affordable, and what the technology can deliver.
These expectations and inherent conflicts lead cultural institutions to confront a host of technical issues associated with quality, delivery, and utility that do not exist in the analog world. Unfortunately, no systematic assessment has been conducted to determine the cumulative effects of the total range of technological choices on the transmission and display of digital image material. File formats, compression processes, scripting routines, transfer protocols, Web browsers, processing capabilities, and the like combine to affect user satisfaction. This is particularly true when we consider the lag in technology adoption at the user’s end. Users may think they want the highest quality, but they may be frustrated by how long it takes to download a file or may be disappointed when a beautiful color image displays in a largely posterized form.
Speed of Delivery
Speed of delivery is perhaps the major concern to users. A one-megabyte file might be accessed in a tenth of a second on a fiber network link but will take nearly three minutes on a V.90 modem. Because network configurations cannot be controlled, cultural institutions have focused on constraining image file size to speed access. Typically, institutions have reduced file size by limiting the resolution, or bit depth, or by applying compression. Each of these choices can have a pronounced effect on image quality. New and emerging file formats and highly efficient compression schemes such as FlashPix, GridPix, and wavelet compression are gaining in popularity. They enable the delivery of large images over slow network links with little quality loss and offer the user the means to pan and zoom.2 Another option for increasing delivery speed is to bundle images together, which may not increase the initial delivery speed but can facilitate rapid “flipping” through a cache of downloaded images. The most notable example of this capability is found in the use of Adobe Systems’ PDF (portable document format) to view and print multipage documents. Other options include the use of multi-image TIFF (tagged image file format) files, CPC (Cartesian perceptual compression), and QuickTime movies.
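The arithmetic behind these delivery-time figures is simple enough to sketch. The link speeds below are illustrative assumptions (56 kbit/s for a V.90 modem, 100 Mbit/s for a fast fiber link of the era), and the calculation ignores protocol overhead and congestion:

```python
def download_seconds(file_bytes: int, link_bits_per_sec: float) -> float:
    """Ideal transfer time: bytes * 8 bits, divided by the link rate."""
    return file_bytes * 8 / link_bits_per_sec

ONE_MB = 1_000_000  # a round decimal megabyte

# V.90 modems top out at 56 kbit/s downstream (often less in practice).
modem = download_seconds(ONE_MB, 56_000)       # ~143 s, over two minutes
# An assumed 100 Mbit/s fiber link.
fiber = download_seconds(ONE_MB, 100_000_000)  # ~0.08 s

print(f"modem: {modem:.0f} s, fiber: {fiber:.2f} s")
```

Real modem throughput was usually well under the nominal 56 kbit/s, which is why the paper's "nearly three minutes" estimate is, if anything, optimistic.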
The rush to embrace these new technologies should be tempered by the need to protect digital assets from obsolescence. This concern has sparked a continuing debate within the cultural community over the use of compression in master image files or the adoption of proprietary formats. As John Price-Wilkin has noted, “The Internet is littered with ‘good ideas,’ particularly in the form of impressive plug-ins or helper applications with frighteningly short life spans.” (in press; see also Dale 1999)
The need to reduce file size to speed delivery may be a limited-term concern as broadband information pipelines and wireless high-speed data transfer capabilities are developed in the next 5-10 years to support research, electronic commerce, and entertainment. For instance, current Federal Communications Commission (FCC) rules require all analog broadcasts to be phased out by the end of 2006. The potential of digital television, in particular high-definition television (HDTV), to provide new and different kinds of information to a broad range of users, including access to digitized cultural resources, is tantalizing (FCC 1998). Beginning with Internet2, the U.S. government is funding efforts to build the Next Generation Internet (NGI), which will link research labs and universities to high-speed networks that are 100 to 1,000 times faster than the current Internet. Designed to handle high volumes of information, the NGI will make access to digital image files very easy and access to high-quality audio and moving-image transfer very practical (Cohen 1999).
Users expect digital images to offer visual quality comparable to that of the original. However, as has been noted, image quality may be reduced by the need for timely delivery. Quality can be further compromised by inadequate display technologies. Because monitor resolutions are often lower than those used to create digital image files, readers may be presented with difficult choices. They can choose a complete image, which can be delivered quickly but may be illegible; or they can examine image details but at the price of slow delivery and the ability to view only a fraction of the image at any given time. Color appearance is most problematic; it may be affected by the use of different browsers, the transfer between color spaces, or the reliance on underpowered monitors. Possible solutions include the use of sophisticated file formats such as portable network graphics (PNG), which supports both a Web-safe palette and sRGB, a color profile designed to ensure color consistency across platforms. Some institutions include gray-scale and color targets with their images to enable the end user to adjust the color when necessary. Others have created electronic targets and specified monitor settings to assist users in calibrating their monitors. Evidence suggests, however, that few users take advantage of these offerings. As Michael Ester has pointed out (1996), “The only controls that are apt to see widespread use are those that are built into applications and underlying software.” I suspect, however, that because color representation is a growing concern in electronic commerce, basic solutions will be forthcoming. As was learned in the mail-order business, no company can afford to handle too many returns and exchanges that are requested because the color of the ordered shirt does not match the color in the catalog, whether in print or on the Web.
Digital image files are “dumb” files; they convey little beyond an electronic likeness of the original document or object. Additional work, which traditionally requires time-consuming descriptive cataloging or manual indexing, is needed to bring intelligence to these files. Containing costs while keeping pace with rising user expectations will require more automated image processing. Most of us are familiar with text conversion via optical character recognition (OCR) applications. These programs have improved tremendously, with error rates declining by half in the past few years because of advances in core recognition technologies, in weighted voting, and in the use of automated error-reduction applications. But highly accurate text conversion is still an elusive goal for most handwriting, for nonstandard scripts (such as Gothic), and for many nonroman languages (Dahl in press).
Interest in computer processing extends beyond textual information to graphic and photographic images. Raster-to-vector conversion software shows growing promise to create manipulable images for some graphic materials, such as maps, satellite and aerial photographs, architectural drawings, and engineering plans, but it still performs poorly on rich, continuous-tone image files. Considerable research is under way in the area of content-based image retrieval (CBIR) to automatically extract features that characterize an image’s appearance. Today’s CBIR is based primarily on numerical measures of shape, color, and texture and is currently most effective where there is a need to retrieve information by image appearance (e.g., finding items of a particular color) rather than image semantics (e.g., pictures of children on a beach). Creative use of current capabilities can lead to retrieval either by characterizing the search in terms of proportion and color (e.g., a beach is 75 percent yellow, 25 percent blue) or by identifying a particular shape (e.g., a tiger), which retrieves similar shapes and patterns, turning up tigers but also fur coats. Because CBIR is actively being investigated, improvements could be rapid, but the capability to automatically retrieve images by a particular artist or photographs from a particular decade remains an elusive goal (Wu in press; Eakins in press; Lesk 1998).
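The proportion-and-color search described above can be illustrated with a toy sketch. The color bins here are crude, hypothetical thresholds; a real CBIR system uses much finer histograms and more sophisticated similarity measures:

```python
from collections import Counter

def color_bin(r, g, b):
    """Assign a pixel to one of a few coarse, hypothetical color bins."""
    if b > 150 and b > r and b > g:
        return "blue"
    if r > 150 and g > 150 and b < 100:
        return "yellow"
    return "other"

def proportions(pixels):
    """Fraction of pixels falling in each color bin."""
    counts = Counter(color_bin(*p) for p in pixels)
    return {k: v / len(pixels) for k, v in counts.items()}

def distance(p, q):
    """L1 distance between two bin-proportion profiles."""
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))

# The "beach" query from the text: 75 percent yellow, 25 percent blue.
query = {"yellow": 0.75, "blue": 0.25}
image = [(220, 200, 60)] * 3 + [(30, 60, 200)]  # 3 sand pixels, 1 sea pixel
print(distance(query, proportions(image)))  # 0.0 -> a strong match
```

Note what this sketch cannot do: it matches anything with the right color proportions, beach or not, which is exactly the appearance-versus-semantics gap the text describes.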
In addition to providing added functionality, we can offer auxiliary features that facilitate more effective use of our collections. Consider, for instance, the success of Amazon.com, which is due in part to the added capabilities to facilitate access, selection, and ordering. Our digitized resources will be more accessible to a broader community if we provide simple online tools that extend the capabilities of their analog counterparts, such as the following:3
- automated perpetual calendar, enabling a reader to key in month and day information (e.g., October 6) and receive a listing of all years in which that date falls on a particular day of the week (e.g., Tuesday),
- timelines to place historical items in the context of certain events,
- currency conversion tools that not only translate pesos into pounds but also peg value to their relative worth for any date in history,
- metric-to-English conversion tool,
- listing of scientific, medical, business, and cultural signs and symbols,
- multilingual dictionary and translation programs for text-searchable material,
- dimension tools not only to facilitate the use of digitized maps but also to enable the viewer to appreciate that a Dürer and a Dali may be of completely different scales (Handel 1995), and4
- lists of “sightings” in museum and auction catalogs.
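As one illustration, the perpetual-calendar tool in the first bullet above is straightforward to sketch. The function name and default year range here are hypothetical:

```python
from datetime import date

def years_on_weekday(month, day, weekday_name, start=1800, end=1900):
    """List the years in [start, end] in which month/day falls on the
    named weekday, skipping years where the date does not exist."""
    weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday",
                "Friday", "Saturday", "Sunday"]
    target = weekdays.index(weekday_name)
    matches = []
    for year in range(start, end + 1):
        try:
            if date(year, month, day).weekday() == target:
                matches.append(year)
        except ValueError:  # e.g., February 29 in a non-leap year
            continue
    return matches

# All 19th-century years in which October 6 fell on a Tuesday:
print(years_on_weekday(10, 6, "Tuesday"))
```

A reader dating an undated 19th-century letter headed "Tuesday, October 6" could use such a tool to narrow the candidate years at a glance.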
Institutions Should Not Expect to Recover Costs Incurred in Digitization
No consensus has been reached about what it costs to create (much less maintain and make accessible) digital image files. The cost figures that are available vary tremendously, depending on the types of material being scanned, the image conversion requirements, the hardware and software used, and the range of functions covered in the calculations. There is no consistent price for outsourcing image conversion from vendor to vendor, or even from project to project, that is analogous to what we experience in other conversion efforts such as preservation microfilming.
We probably know the most about text scanning of disbound volumes, with estimates ranging from $.10 to $.30 per image for large production projects.5 Figures for bound volume scanning are perhaps twice that amount. A number of institutions have found that they can obtain a better product and faster production rate when bound items are rendered into single leaves for scanning, even when the costs of rebinding are included (MacIntyre and Tanner 1998; ILEJ 1999).
Although production rates for film scanning theoretically are very high, in practice, current limitations pose difficulties that have reduced scanning rates considerably, and today, the costs of film scanning remain equal to or higher than the costs of paper scanning at the same resolution and bit depth. In a recent project to convert preservation-quality film, Cornell paid nearly twice what it would have paid a vendor to do single-sheet scanning. The Internet Library of Early Journals Project involved both bound-volume scanning and film scanning. The project concluded that microfilm scanning cost more and yielded lower quality than bound-volume scanning (ILEJ 1999).
Thanks to advances in capture technology, grayscale scanning will soon rival bitonal scanning in speed. When one moves from grayscale to color scanning, however, the time and costs increase significantly, on the order of two to three times.6 Scanning figures for graphic materials represent an order-of-magnitude increase in cost over scanning text. Steve Puglia of the National Archives has completed a comparative analysis of digital imaging costs, the results of which have been presented in RLG DigiNews (1999). His findings offer a sobering reminder that imaging is not an inexpensive proposition. In the National Archives’ Electronic Access Project, which included manuscripts as well as graphic and photographic materials, image acquisition costs averaged $7.60 per image. These figures go up when one considers high-end imaging projects of museum holdings; the reported production rates to create 70-100 MB files range from 15 to 70 images a day.7
More significant than image acquisition costs are the total costs associated with digital projects. Although figures vary from project to project, it appears that digital conversion represents one-third or less of total costs, with the other two-thirds going to metadata creation, administration, and the like. More sobering yet are the ongoing maintenance costs, which prove difficult to calculate because there are few production figures available. Some claim that the majority of costs are incurred in the first five years after creation and that they decrease significantly thereafter. Others claim that the maintenance costs will dwarf the costs of image acquisition. Unofficial figures from the Environmental Protection Agency peg the total costs of supplies, services, and hardware to maintain digital material for ten years at four to seven times the cost of creation. At a recent conference, presenters argued that digital images need to be migrated every three to five years at a cost equivalent to 50-100 percent of the costs associated with the original imaging project (Kenney 1997; MacTavish 1999).
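The migration figures cited above imply a simple, assumption-laden back-of-the-envelope projection; every number below is hypothetical:

```python
def maintenance_cost(creation_cost, years, cycle_years, cycle_fraction):
    """Projected migration cost over a horizon, assuming one migration
    every cycle_years at cycle_fraction of the original project cost."""
    cycles = years // cycle_years
    return cycles * cycle_fraction * creation_cost

initial = 100_000  # hypothetical imaging project cost, in dollars

# Optimistic case: one migration every 5 years at 50% of creation cost.
low = maintenance_cost(initial, 10, 5, 0.50)   # 2 cycles -> 100,000
# Pessimistic case: one migration every 3 years at 100% of creation cost.
high = maintenance_cost(initial, 10, 3, 1.00)  # 3 cycles -> 300,000

print(low, high)
```

Even under the optimistic assumptions, ten-year maintenance matches the original imaging outlay, which supports the text's warning that ongoing costs can rival or dwarf acquisition costs.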
Most digital conversion projects have been funded by one-time appropriations from government, foundation, or institutional sources. Ultimately, an institution must assume the ongoing costs of maintaining its digital assets. Facing these costs leads to considering the economic sustainability of digital image conversion efforts. Such conversions can be accomplished in two ways. First, an institution can realize cost savings in other operational areas and divert those resources to the digital effort. This method seems more suited to libraries than to museums. Second, institutions can recover costs associated with digitization by selling or licensing their digital products. This approach may be more comfortable for museums than libraries, given their historic commitment to free access to information.
Reducing Institutional Expenses
The potential for cost savings is at the heart of one of the largest digitization projects today. JSTOR is based on the premise that space is a key cost factor for libraries and that collective space savings will make digital conversion projects economically viable. JSTOR contends that a single library cannot save money by digitizing its older holdings, but that cooperative, multilibrary agreements might be economical. This cost assessment is based on the assumption that libraries can pay for their subscription fees by discarding paper holdings or by moving them to cheaper, less accessible offsite storage. By taking these actions, libraries presumably make space for other materials, reduce the need to build new libraries, and accrue additional operational savings (for example, in binding, preservation, retrieval, and reshelving). This model presumes that libraries can trust JSTOR to maintain its digital holdings in perpetuity. To date, the promise of cost savings has yet to be realized; few institutions, if any, have taken steps to wean themselves from the hard-copy versions. For the time being, one can assume that JSTOR members are subscribing to avail their constituencies of the enhancements and convenience it affords and thus have increased, rather than decreased, expenses. JSTOR offers little incentive to museums because they do not hold as many items in common and are even less likely than libraries to dispose of their physical collections (Guthrie 1999; De Gennaro 1997).
Some institutions hope to cover costs by generating revenue, which conflicts with the assumptions of many Web users that everything on the Internet should be free. Indeed, many institutions currently provide free access to their digital holdings, in part because they have received outside funds and in part because their administrations have supported the expense of maintaining the electronic presence. As institutions face the need to fund digital efforts from internal sources, the pressure to recover costs will grow.
A number of initiatives to develop cost-recovery solutions have been advanced, but little hard evidence is available to show that they will succeed. For example:
- In 1997, The British Library developed a business case to seek private sector collaboration to create a self-sustaining digital library service. Unfortunately, after a year, the library and the bidding consortium agreed to discontinue negotiations, maintaining that it had proved impossible to “balance the objectives of the Library with the commercial operating requirements of the consortium” (The British Library 1998).
- The University of Toronto has developed a business plan for selling paper versions of digitized books. Currently, its customers are limited to Japanese institutions, and the university is not breaking even. The university has concluded that the market must be expanded to make the program economically viable. The library has been willing to subsidize the operation to build its collection of digitized books but has not yet fully embraced this broader marketing strategy.8
- The MESL (Museum Educational Site Licensing) Project attempted to address many of the issues related to consortial licensing of museum images to universities for educational purposes. A detailed financial assessment concluded that consortial distribution of digitized museum objects to educational institutions will likely not be an economically sustaining, revenue-producing venture for some years to come. A collaborative initiative stemming from the MESL experience, The Museum Digital Library Collection, Inc. (MDLC), aspired to become a nationwide image licensing enterprise but now appears to be moribund (Besser and Yamashita 1998).9
- AMICO (Art Museum Image Consortium) is moving the concept of consortial licensing of museum images to educational institutions one step closer to large-scale reality. The project focuses on taking advantage of emerging education opportunities, but supporters also expect that it will bring new revenue sources and greater economic stability to the museums that participate in it (Bearman 1996; Trant and Bearman 1997). The Research Libraries Group (RLG) and AMICO have joined forces to provide access to the AMICO library. At this point, it is unclear how the Academic Image Cooperative (sponsored by the Digital Library Federation and the College Art Association to provide free access to art images for educational and nonprofit use) will affect AMICO’s market.
This economic assessment leads one to question the cost-effectiveness of retrospectively digitizing library and museum holdings. Clearly, digitization efforts will have a greater chance of becoming sustainable if
- institutions consider digital material critical assets and create digital files in a manner to ensure their long-term value and utility;
- digital initiatives are mainstreamed within library and museum operations;
- libraries and museums can substitute digital for traditional means of access;
- researchers embrace the use of digital image collections and are willing to pay for some added value or convenience that digital versions offer; and
- institutions are prepared to cooperate with one another to share the rewards and responsibilities of the digital world.
More important, cultural institutions should come to view digital conversion as a means to other things, not an end in itself. Susan Yoder, director of Integrated Information Services at RLG, has suggested that digitization efforts will be sustainable if they are justified by at least one other institutional goal beyond generating revenue (Yoder in press). For the foreseeable future, the digitization of retrospective collections will not pay for itself, but it may be a legitimate loss leader in a new service paradigm, enabling libraries and museums to compete successfully in reaching a broad range of cultural consumers.
Web site addresses noted in this paper are valid as of January 20, 2000.
Bearman, David. 1996. New Economic Models for Administering Cultural Intellectual Property. Paper presented at the Digital Knowledge Conference, Toronto, Ontario, February 7, 1996. Also presented at EVA Florence, Italy, February 9, 1996. Available from www.archimuse.com/papers/db.mesl/economics.html.
Besser, Howard, and Robert Yamashita. 1998. The Cost of Digital Image Distribution: The Social and Economic Implications of the Production, Distribution, and Usage of Image Data. Report produced by the School of Information Management and Systems, UC Berkeley. Available from http://sunsite.Berkeley.edu/Imaging/Databases/1998mellon/.
Church of Jesus Christ of Latter-day Saints. 1999. New Family Tree Internet Service Swamped by Demand. Press release, May 26. Available from www.lds.org/med_inf/new_upd/19990526_FIGS_Demand.html.
Cohen, Elizabeth. 1999. Internet2 and the Next Generations. NARAS Journal. 8(2): 51-56.
Dahl, Kenn. In press. OCR Trends and Implications. In Moving Theory into Practice, Digital Imaging for Libraries and Archives, edited by Anne R. Kenney and Oya Y. Rieger. Mountain View, Calif.: Research Libraries Group.
Dale, Robin. 1999. Lossy or Lossless? File Compression Strategies Discussion at ALA. RLG DigiNews 3, no. 1 (February 15). Available from www.rlg.org/preserv/diginews/diginews3-1.html.
Darnton, Robert. 1999. The New Age of the Book. The New York Review of Books. March 18, pp. 5-7.
Day, Michael. 1998. Issues and Approaches to Preservation Metadata. Paper presented at the Joint RLG/NPO Conference on Guidelines for Digital Imaging, at the University of Warwick, Coventry, England, September 28-30, 1998. Available from www.rlg.org/preserv/joint/day.html.
De Gennaro, Richard. 1997. JSTOR: Building an Internet Accessible Digital Archive of Retrospective Journals. 63rd IFLA General Conference, Conference Programme and Proceedings, August 31-September 5, 1997. Available from http://ifla.inist.fr/IV/ifla63/63genr.htm.
Eakins, John. In press. Content-Based Image Retrieval. In Moving Theory into Practice, Digital Imaging for Libraries and Archives, edited by Anne R. Kenney and Oya Y. Rieger. Mountain View, Calif.: Research Libraries Group.
Ester, Michael. 1996. Digital Image Collections: Issues and Practice. Washington, D.C.: Commission on Preservation and Access.
Federal Communications Commission (FCC). 1998 (November). Digital Television Consumer Information. Available from www.fcc.gov/Bureaus/Engineering_Technology/Factsheets/dtv9811.html.
Guthrie, Kevin M. 1999. JSTOR: The Development of a Cost-Driven, Value-Based Pricing Model. In Technology and Scholarly Communication, edited by Richard Ekman and Richard E. Quandt. Berkeley and Los Angeles: University of California Press.
Handel, Mark. 1995. Issues of Scale and Size in Visual Databases. Unpublished paper. Available from http://sunsite.Berkeley.EDU/Imaging/Databases/Fall95papers/handel.html.
Internet Library of Early Journals (ILEJ). 1999 (March). Final Report. Available from www.bodley.ox.ac.uk/ilej/papers/fr1999/fr1999.htm.
JSTOR. 1999. JSTOR: The Need. Available from www.jstor.org/about/need.html.
Kenney, Anne R. 1997. Digital to Microfilm Conversion: A Demonstration Project, 1994-1996. Final Report to the National Endowment for the Humanities. Available from www.library.cornell.edu/preservation/com/comfin.html.
Kuny, Terry. 1998. The Digital Dark Ages? Challenges in the Preservation of Electronic Information. International Preservation News 17 (May):13.
Lesk, Michael. 1990. Image Formats for Preservation and Access: A Report of the Technology Assessment Advisory Committee to the Commission on Preservation and Access. Washington, D.C.: Commission on Preservation and Access.
_____. 1998. Finding Pictures. RLG DigiNews 2(1) (February 15). Available from www.rlg.org/preserv/diginews/diginews21.html.
MacIntyre, Ross, and Simon Tanner. 1998. Nature-A Prototype Digital Archive. Paper presented at the Sixth DELOS Workshop Preservation of Digital Information, Tomar, Portugal, June 17-19, 1998. Available from http://heds.herts.ac.uk/HEDCinfo/Papers/HEDSnature.pdf.
MacTavish, Sue. 1999. DoD-NARA Scanned Images Standards Conference. RLG DigiNews 3(2) (April 15). Available from www.rlg.org/preserv/diginews/diginews3-2.html.
National Information Standards Organization, Council on Library and Information Resources, and Research Libraries Group. 1999. Initiative on Standardizing Metadata. NISO/CLIR/RLG Technical Metadata Elements for Images Workshop. Available from www.niso.org/image.html.
National Initiative for a Networked Cultural Heritage (NINCH). 1999. Guide to Good Practice in Networking Cultural Heritage. Available from www.ninch.org/PROJECTS/practice/why.html.
Odlyzko, Andrew. 1999. The Economics of Electronic Journals. In Technology and Scholarly Communication, edited by Richard Ekman and Richard E. Quandt. Berkeley and Los Angeles: University of California Press.
Price-Wilkin, John. In press. Enhancing Access to Digital Image Collections: System Building and Image Processing. In Moving Theory into Practice, Digital Imaging for Libraries and Archives, edited by Anne R. Kenney and Oya Y. Rieger. Mountain View, Calif.: Research Libraries Group.
Puglia, Steven. 1999. The Costs of Digital Imaging Projects. RLG DigiNews 3(5) (October). Available from www.rlg.org/preserv/diginews/diginews3-5.html.
Sullivan, Terry. 1998. The Modern’s Digital Archive. Photo District News (October/November).
The British Library. 1998. The British Library Digital Library Programme. Available from www.bl.uk/.
Trant, Jennifer, and David Bearman. 1997. The Art Museum Image Consortium: Licensing Museum Digital Documentation for Educational Use. Available from www.archimuse.com/papers/amico.spectra.9708.html.
Wu, Yecheng. In press. Raster, Vector, and Automated Raster-to-Vector Conversion. In Moving Theory into Practice, Digital Imaging for Libraries and Archives, edited by Anne R. Kenney and Oya Y. Rieger. Mountain View, Calif.: Research Libraries Group.
Yoder, Susan. In press. Sustainability Through Integration. In Moving Theory into Practice, Digital Imaging for Libraries and Archives, edited by Anne R. Kenney and Oya Y. Rieger. Mountain View, Calif.: Research Libraries Group.
1 Day’s work focuses on requisite preservation metadata. The NISO/CLIR/RLG initiative on standardizing metadata should provide specific preservation guidelines for digital image collections.
2 Institutions experimenting with new file formats and compression schemes include the Library of Congress, the Library of Virginia, the University of Michigan, the U.S. Geological Survey, the Fine Arts Museums of San Francisco and the University of California at Berkeley, and the Cornell Johnson Art Museum.
3 I am indebted to my archival colleagues at Cornell for many of these suggestions.
4 Technical development at the Blake Archives Project includes a Java applet (The ImageSizer) to view Blake’s work on screen at the actual physical dimensions. Available at http://www.iath.virginia.edu/blake/.
5 These figures have been reported by Cornell, Michigan, and JSTOR (Journal Storage) (see also Odlyzko 1999). The Andrew W. Mellon Foundation has funded a project at the University of Michigan to document the full range of costs associated with digitization in a production environment. The results of that study will be available in late 2000.
6 Gray-scale and color production figures reported in projects at the Library of Congress, the Smithsonian Institution, and at the Beinecke Library at Yale University.
7 Production at the Johnson Art Museum at Cornell University averages 70 images a day for a nine-hour shift using one digital camera and two photographers. The Museum of Modern Art reports scanning and editing 20 images a day.
8 E-mail, Karen Turko to Anne R. Kenney, 31 May 1999.