The work of the Joint Task Force on Text and Image was supported by a generous grant from the Getty Grant Program.
NB: The printed version of this document contained images which are not included in this electronic version
The “brittle books” problem has been recognized largely in terms of the pressing need to preserve texts in danger. Many of these texts include a variety of images, often just as significant as, if not more significant than, the words that accompany them. However, present preservation practices are relatively insensitive to the vast majority of these images and fail to capture them with sufficient fidelity to be useful.
The mission of the Joint Task Force on Text and Image was to inquire into the problems, needs, and methods for preserving images in text which are important for scholarship in a wide range of disciplines and to draw from that exploration a set of principles, guidelines, and recommendations for a comprehensive national strategy for image preservation. During its two-year inquiry, the Task Force studied the cognitive relationships between words and pictures; the basic attributes of images; and the distribution of images in books and periodicals during the time when acidic paper was used for scholarly publication. Since little useful information was found on the last area of inquiry, the Task Force also informally surveyed its own membership and colleagues regarding types of images found in publications in the disciplines of anatomy, architecture, art history, cultural history, entomology, geology, history (general), medieval archaeology, and photographic history in various periods between 1850 and 1950.
I. An important beginning can be made on the preservation of books and periodicals in the 1850-1880 era in almost all disciplines that depend on images in texts.
- Current microfilm technique, i.e., high-quality, high-contrast black-and-white filming, can be used for preserving most of the books in this era, since colored or halftone illustrations are uncommon, and the bulk of illustrations are line cuts or drawings.
- Most exceptions to this generalization can be handled by conservation rather than preservation, since these relatively small numbers of materials are likely to be rare and/or intrinsically valuable.
- The film archive can serve as an intermediate technology until it can be converted to a standardized electronic medium.
II. The preservation of halftone illustrations in text, increasingly common after 1880, requires further exploration, including additional data about the distribution of images in texts and exploratory trials of alternative technologies for preservation.
- High-contrast black-and-white microfilm does not reproduce halftones satisfactorily for scholarly purposes. Available alternatives (e.g., color or continuous-tone filming, and electronic scanning with bit-mapped storage) require further study and experimental trials to estimate cost and time requirements and quality of results.
III. The available information about the number and types of images in various kinds of publications in various epochs is insufficient and undependable for large ranges of time and materials.
- Sample surveys that collect data on image characteristics and frequency of occurrence are needed across a variety of materials in several disciplines and time periods, to determine the most common kinds of image attributes.
- Surveys should be done on publications from 1850 to 1950 in selected disciplines in the arts and humanities; the biological, physical, and social sciences; and technology and engineering.
IV. Further information is needed about the effectiveness, costs, and requirements of alternative technologies for preservation of post-1880 text-cum-image material.
- A series of pilot projects begun by the Commission should be continued and expanded to learn what time, effort, and special problems are involved in capturing text and image by scanning and by continuous-tone black-and-white and color microfilming, and in converting among these media for archival storage.
Task Force conclusions are based on the following findings.
Uses and Attributes of Images
It is essential to recognize the considerable diversity of image types and their physical and conceptual relationship to texts. The relative importance of images and the significance of their specific attributes must be taken into account when determining preservation priorities and strategies. The Task Force concludes that:
- A visual record of the images in a book, in their original relationship to the text, should be preserved in some usable form, regardless of the specific preservation strategy chosen, since such a record makes it possible to reconstruct what the writer saw and used in developing the text, and what the audience read.
- For some materials, “mixed” technologies may be the preservation strategy of choice, for example, high-contrast microfilm for text and digitized images for illustrations when images are collected in one place in a book.
- It is essential that preservation strategy take the broadest possible view of future use, since images, however field specific in initial appearance, have a variety of present and future users, both scholarly and general.
Recommended strategies should include conservation of some volumes as physical items in the context of a national (or even international) plan. Since the per-unit costs for artifact conservation are considerably higher than for preservation by converting content to a different medium, a well-worked-out strategy for selecting items for conservation is of utmost importance. Conservation decision principles include:
- Any candidate for conservation should also have a surrogate made for continued use and/or access to its intellectual contents.
- In cases where conservation actions have been taken because items are rare or have intrinsic value, those items should be passed to a category of “limited use.”
- To avoid unnecessary duplication of effort, nationally accessible bibliographic records should be created for materials that have been conserved, in the same manner that is now done for preserved items.
Technological strategies should attempt to couple dependability of storage with flexibility to exploit new technologies as they develop. Conversion of text and image to microfilm is a stable and widespread technology with product longevity. Electronic scanning and storage is potentially a much superior method which is changing rapidly; it has not demonstrated exceptional product longevity, and is not yet available widely for routine use. Microfilm can be readily scanned to produce bit-mapped images. The Task Force concludes that:
- The rapidly changing technological situation argues for maximum flexibility and the widest range of options for achieving the required functionalities for the scholarship of the 21st century.
- Until standardized digital technologies become routinely available for mass production, the most practical medium for preservation of material that cannot be successfully converted to high-contrast black-and-white microfilm is probably color microfilm, from which digital images can subsequently be generated. If continuous-tone black-and-white microfilming becomes available, it could substitute for or supplement color film.
The “brittle books” problem has been recognized largely in terms of the pressing need to preserve texts in danger. Many of these texts include a variety of images, often just as significant as, if not more significant than, the words that accompany them. However, present preservation practices are relatively insensitive to the vast majority of these images and fail to capture them with sufficient fidelity to be useful. The mission of the Joint Task Force was to inquire into the problems, the needs, and the methods for preserving images in text which are important for scholarship in a wide range of disciplines; and to draw from that exploration a set of principles, guidelines, and recommendations for a comprehensive national strategy for text-cum-image preservation.
In 1989 the Commission on Preservation and Access asked the Getty Grant Program to support a two-year project to be concerned with problems of preserving text-cum-image publications for research and scholarship. This request itself emerged from an earlier Commission activity sponsored by the Getty Grant Program, namely a conference of art historians assembled at Spring Hill, Minnesota, in 1988 to consider the challenges posed by the alarming deterioration of research resources published on acid paper since the mid-19th century. The participants in this conference had reached agreement on three fundamental points: (1) scholarship in art history (and in a number of other disciplines) is dependent upon both text and image; (2) the current preservation process of high-contrast black-and-white microfilm is not satisfactory for the reproduction of halftone and continuous-tone images; and (3) the optimal preservation process must result in enhanced access to the full range of scholarly resources.
The principal recommendation coming out of the Spring Hill conference was for the appointment of a Joint Task Force to be composed of scholars from a variety of fields that depend upon text-cum-image for their advancement, for example, archaeology, architecture, biology, geography, geology, and medicine, as well as art history. It was recognized that the interests of these disciplines are characterized by a substantial core of commonalities as well as a range of strikingly dissimilar needs and purposes. The primary function of the Task Force then would be to sort out these similarities and differences and to develop an improved understanding of the characteristics of images and their uses in the various fields of scholarship, as a basis for beginning to develop strategies for preservation. A second concern would be to analyze the problems of material conservation and to suggest strategies for identification and selection of items to be conserved as artifacts as well as being converted to other media. Thirdly, the rapidly changing electronic technology for image capture, storage, retrieval, and networked access makes it imperative to understand the relation between scholarly needs and alternative technological capacities for preservation. Underlying all three of these ends is the ultimate goal of developing a comprehensive plan for preserving text-cum-image materials. It is critically important to agree upon a coordinated strategy that would allow preservation to go forward in some areas while simultaneously acknowledging the need to accommodate significant distinctions and complexities that inhibit action in other areas. In short, it is important to disaggregate the issues and to manage them with appropriate measures taken as needed and as feasible.
The Getty Grant Program made the award that enabled the Joint Task Force to be organized in the spring of 1990. Its membership included senior scholars, librarians, and practitioners in several fields that use text and image extensively for research, teaching, and professional purposes. (A list of members is included at the conclusion of this report.) The Task Force held three meetings, at some of which consultants extended the range of disciplinary coverage by explaining how images were used in their fields. What follows is a report of the principal findings of the Joint Task Force, together with its recommendations for further work.
Statement of the Problem
Throughout this country–indeed throughout the world–the pages of books and journals in libraries are gradually, but steadily, turning brown, becoming brittle, and crumbling to dust. The reason is the acidic paper on which most of them were printed between 1860 and 1950, and the temperature and humidity conditions under which they were stored. About 80 million volumes in U.S. libraries are seriously deteriorated. More each year are becoming brittle and unusable. The earliest books in the threatened period are, generally, in poorest condition, and their content is most likely to be lost forever unless some measures are taken soon to save it. There is not time enough, nor are there likely to be resources enough, to save all the volumes that are at risk. Choices must be made, and the process of selecting what is to be preserved (and implicitly deciding what is to be left to crumble) is agonizing as well as difficult. Attempts to save a book by deacidifying its pages result at best in halting further deterioration, but not in returning the pages to their initial strength. So far, furthermore, successful deacidification remains a labor-intensive, costly “hand” process carried out on one book at a time under the supervision of highly skilled conservators. Mass deacidification has not been developed to the point of practical application. The most promising strategy for saving the intellectual and artistic content of endangered volumes is to transform or convert it to a different medium–by copying it to film or digital electronic form.
Many of the books and journals published between 1850 and 1950 contain images that may be (a) embedded in the text, (b) adjacent to text with which the images are meaningfully linked, or (c) gathered together in one or more places in the volume. There are also instances in which the images are included as foldouts or contained in pockets, and instances in which images are bound in a separate volume that was intended to be used together with a specific text. These last two types present special problems in handling, but should be considered to be cognitively equivalent to category (c).
The basic question is: what attributes of the visual image in a text are important to preserve because they serve scholarship and other professional purposes in a given field? An alternative formulation of the question might be: how should the attributes of the images, the distribution of their occurrence, and the purposes for which scholars will want to use them inform the strategies for preservation of these text-cum-image publications? As a start on the problem, this report outlines some things that scholarly opinion identifies and asserts about images and their uses in various academic disciplines.
In thinking about the preservation of images-cum-text, it is important to recognize that the developing technology of recording may change the way in which the materials of the past can be used in the future, and that the methods and purposes of scholars in the future may not be the same as those of the present and earlier generations. For example, the ability of computers and word processing programs to create machine-readable text made it possible to do much more extensive text comparisons than literary scholars had been able to do with “hand” methods, thereby virtually creating a new field of scholarship. It is entirely possible that art history or taxonomic biology could be similarly affected by the capacity to manipulate and compare images that electronic scanning and storage, color “enhancement,” and other technological devices offer. In our zeal to preserve images and text in faithful relationship to each other, we must recognize that future generations may no longer be as bound by the book as earlier ones have been and may wish to have greater flexibility in handling the data of the past.
Another important consideration in preservation is the varying needs and wishes of different kinds of users. A biological scientist whose primary interest is in the physical structure of a botanical specimen may be indifferent to color representation or even regard it as a nuisance, while “secondary users” of the same literature, such as artists, gardeners, decorators, or poets, may find the structurally explicit black-and-white line drawing uninspiring and irrelevant to their purposes.
In deciding how to preserve text-cum-image, one must appreciate the cognitive relationships between the words and the pictures. Conventionally, this relationship has been differentiated into instances where: (a) the image and text conveyed the “same” information (“redundant”); (b) the information in each was “complementary” (i.e., the text explicates the image or the image conveys more clearly something that can be only clumsily or partially conveyed in words); (c) the information provided by each enlarges the immediate context (“supplementary”). There is also a rhetoric of presentation in the sense that the physical placement of text and image can have a bearing upon the argument being made. The important question to confront is: what is the significance of these different text and image relationships? Are they both intrinsic parts of the argument and not dispensable? Intellectual, social, and cultural historians assert the significance of the “actual relationship” of image to text as it exists on the printed page in understanding both the intent of the author and the connection between text and image that the contemporary reader perceived and reacted to. This point of view argues for the preservation of the original relationship or the artifact itself.
Color in illustrative images plays at least two roles. In most works of art, the colors of the original object are significant in themselves, whereas in many geological and geographic images, as well as some medical images, color is employed as a way of distinguishing adjacent areas of the image that have different identities or functions. Ideally, the preserved color in works of art should be true to the original. On the other hand, “encoding” colors need only be distinguishable from each other in the preserved form–a much less stringent requirement. In fact, in some but not all colored images in anatomy the information could probably be saved by reproduction in black-and-white, supposing adequate gray-scale differentiation.
On the other hand, encoding colors in some complex geological maps, for example, are used to depict multidimensionally differentiated areas of a region of the earth. A single map of, say, a county in the northeastern United States may present simultaneously: soil types, topographic or physiographic features, epoch of deposition or formation, and even direction of glacial movement. Such complex combinations of data may employ as many as a dozen different colors and stippling or cross-hatching in order to convey the variety of surface and subsurface characteristics. Reproduction in continuous-tone black-and-white microfilm would probably fail to preserve the full range of distinctions that the encoding colors of the original intend to convey.
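The difference between these two uses of encoding color can be sketched in a short illustrative computation, using invented palette values that are not drawn from the Task Force's data. The sketch converts each color to an approximate gray value and measures the smallest brightness gap in the palette; a few well-separated colors survive black-and-white reproduction, while a dozen hues of similar brightness would merge.

```python
# A minimal sketch (hypothetical palette values) of testing whether a set of
# "encoding" colors would remain distinguishable after gray-scale conversion.

def luminance(rgb):
    """Approximate perceived brightness (ITU-R BT.601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def min_gray_separation(palette):
    """Smallest brightness gap between any two colors in the palette."""
    grays = sorted(luminance(c) for c in palette)
    return min(b - a for a, b in zip(grays, grays[1:]))

# A simple anatomical plate might use a few colors of very different
# brightness (light pink, medium green, dark blue) ...
anatomy = [(240, 180, 180), (80, 160, 80), (40, 40, 120)]

# ... while a complex geological map may use many hues of similar brightness.
geology = [(230, 200, 60), (60, 200, 230), (200, 60, 230),
           (210, 220, 70), (70, 210, 220), (220, 70, 210)]

print(min_gray_separation(anatomy))   # large gap: survives gray-scale
print(min_gray_separation(geology))   # small gap: colors would merge
```

This is the quantitative sense in which some anatomical plates could "probably be saved by reproduction in black-and-white, supposing adequate gray-scale differentiation," while multicolor geological maps could not.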
Most black-and-white illustrations in books fall into two broad classes: (1) line drawings, woodcuts, and other “edge-based” representations, where the significant information is carried by an area that is internally undifferentiated while itself sharply distinct from its surroundings; and (2) halftone reproductions (principally of photographs), shaded drawings, monochrome paintings, and other graphic techniques where the surface of an image is internally differentiated and this “texture” carries information. The degree of differentiation may have a bearing on the conversion process preferred. There may be image types that are hard to classify.
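The two classes above differ measurably in their tonal distribution, which is why high-contrast filming suits one but not the other. The following rough sketch (illustrative pixel data, not a production classifier) counts the fraction of intermediate gray values: line cuts are nearly bimodal, while halftones and shaded drawings carry their information in midtones.

```python
# A rough sketch of the distinction drawn above: "edge-based" line cuts are
# nearly bimodal in tone, while halftones carry information in midtones.

def midtone_fraction(pixels, low=64, high=192):
    """Fraction of pixels whose gray value falls between the extremes."""
    flat = [p for row in pixels for p in row]
    mid = sum(1 for p in flat if low <= p <= high)
    return mid / len(flat)

# A line drawing: pixels are almost entirely pure black (0) or white (255).
line_cut = [[0, 255, 255, 0], [255, 0, 255, 255], [0, 255, 0, 255]]

# A halftone: shading produces a spread of intermediate gray values.
halftone = [[30, 90, 140, 200], [70, 120, 170, 220], [50, 100, 160, 210]]

print(midtone_fraction(line_cut))   # near 0: high-contrast filming suffices
print(midtone_fraction(halftone))   # large: the gray scale must be preserved
```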
The size of the original object (i.e., the illustration or image in the publication to be preserved) and the size of the image to which it is converted for preservation interact complexly. In geological and geographic maps, for example, the original object may be large in size in order to capture both the totality of the area being depicted as well as fine distinctions among portions of the entirety. When the size of the original exceeds the capacity of the microfilm camera, the conversion may take place by subdividing the map into many parts. The result is unsatisfactory for users, since they can view only a part of what they are interested in at a time. Alternatively, if the conversion process produces an entire map on a single frame of film, the result may be unreadable under magnification because of the coarseness of “resolution” or “register.” Furthermore, geologists and geographers require maps that can accurately display fairly fine detail, be used in the field, and be easily carried. Essentially, this means a (foldable) paper copy at high resolution. So far, this problem seems most acute in the earth sciences, but it could surface elsewhere. The embedded problem of inadequate resolution certainly has wider implications.
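The single-frame trade-off can be made concrete with back-of-the-envelope arithmetic. The figures below (frame width, film resolving power) are assumed for illustration rather than taken from any filming standard; the point is that the reduction ratio, not the film alone, sets the finest detail that survives.

```python
# Back-of-the-envelope arithmetic (assumed figures) for the resolution
# trade-off when an oversize map is filmed on a single frame.

FRAME_WIDTH_MM = 32.0         # usable width of a 35mm microfilm frame (assumed)
FILM_RESOLUTION_LPMM = 120.0  # line pairs per mm the film/lens resolves (assumed)

def finest_detail_mm(original_width_mm):
    """Smallest feature on the original that one frame can still resolve."""
    reduction = original_width_mm / FRAME_WIDTH_MM
    # One resolvable line pair on film corresponds to this distance on the map.
    return reduction / FILM_RESOLUTION_LPMM

# A 90 cm (roughly 36-inch) geological map filmed onto a single frame:
print(finest_detail_mm(900))  # features much finer than this (in mm) are lost
```

Under these assumptions, fine stippling or cross-hatching below roughly a quarter of a millimeter on the original would be irrecoverable, which is precisely the kind of detail complex geological maps depend on.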
Adjacency of Text and Image
Inevitably the cultural historian’s interest in preserving all aspects of the original relationship between text and image raises the question of how important it is to preserve (or achieve?) adjacency between text and image, meaning that the text and image to which it is related should be easy for the user to view simultaneously or nearly so. Opinion seems to be coalescing around the idea that adjacency is a desideratum for preservation. To be sure, functional adjacency was not always achieved in the original publication, but conversion should not lessen adjacency. Clearly, convenience of the user in referring from text to image and vice versa will be affected by the technology of conversion. For example, digitization allows manipulation of images and texts and can actually enhance adjacency if desired. On the other hand, the fixed linear order of presentation of microfilm frames makes cross-reference difficult. Because the use of text-cum-image materials varies so widely from scholar to scholar, the widest range of needs would be served by preserving the original character and relationship of image to text, while permitting each user to control “enhancement.” This approach would ensure that no important information would be lost through technical error or misunderstanding of need.
Distribution of Images in Texts
There is little useful information about the frequency and types of images in the endangered books and periodicals of various disciplines. Most studies of collection condition have treated illustrated materials as an undifferentiated mass. Furthermore, there is not an abundance of prior experience with the preservation of materials from fields that are known to be text-cum-image dependent, nor sensitivity to the special problems presented by images in text. These facts emerged from the Task Force members’ analysis of the literature in their respective fields, and led to the conclusion that a fundamental piece of information upon which to base a preservation strategy is missing or incomplete.
An examination of the material on preservation programs collected by the Commission over the last decade and a half persuaded the Task Force that the great bulk of activity had been in areas and disciplines whose materials are primarily text (without important images). Only three studies that we were able to locate had even attempted to collect data about the kinds of images that have been published in the books and periodicals of various disciplines and over a reasonable span of time. Furthermore, these three studies had been carried out for specific purposes that differed sufficiently from those of the Task Force to make them inconclusive from its viewpoint. In addition, one of these studies had never been completely analyzed, another was limited in the scope of the collection surveyed, and the third was unable to provide cross-tabulated data of critical importance (e.g., the distribution of types of illustrations by era of publication and by discipline).
Since it appeared that there was little objective evidence from studies of collections about the distribution of images in texts of various fields, the Task Force undertook to survey its own membership (supplemented by their consultation with colleagues) as to their opinions about what types of images were likely to be found in various eras of 19th- and 20th-century publications in the disciplines the Task Force encompassed. The fields of knowledge included were: Anatomy, Architecture, Art History, Cultural History, Entomology, Geology, History (General), Medieval Archaeology, and Photographic History. The dates of publication were grouped into three periods: 1850-1879, 1880-1919, and 1920-1949. Respondents to the survey were asked to indicate the frequency with which each of several image attributes occurred in books and in serials of each epoch, and the importance of each attribute for scholarly purposes.
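The cross-tabulation that the earlier studies could not supply (image types by era and by discipline) is simple to produce once the observations exist. The sketch below uses invented survey records purely to illustrate the form of the tabulation, not any actual Task Force data.

```python
# A minimal sketch of the missing cross-tabulation: counting image types
# by era, using invented survey records for illustration only.

from collections import Counter

# Each record: (discipline, era, image_type) -- hypothetical observations.
records = [
    ("geology", "1850-1879", "line cut"),
    ("geology", "1880-1919", "encoding color"),
    ("art history", "1880-1919", "halftone"),
    ("art history", "1920-1949", "halftone"),
    ("anatomy", "1850-1879", "line cut"),
]

# Cross-tabulate image type by era across all disciplines.
by_era = Counter((era, image_type) for _, era, image_type in records)

for (era, image_type), count in sorted(by_era.items()):
    print(f"{era:10s} {image_type:15s} {count}")
```

The same counting, broken out additionally by discipline and by books versus serials, would yield exactly the distributions the recommended formal surveys are meant to estimate.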
Although this survey was informal and was certainly not based on carefully designed samples, or rigorous methods, the members did examine some of the books in collections at their home institutions and did more than make an offhand guess as to the image content of scholarly or scientific works in various fields. The results are not conclusive, and it might endow them with too great an air of certainty if detailed tables were presented here; but several generalizations seem to emerge from the data–generalizations that seem plausible enough to be at least suggestive of strategy for preservation of text-cum-image materials. Finally, the results do not conflict with any of the available data from the three formal surveys already referred to as having been conducted at major libraries that included observations on images in text.
These are some selected findings from the Task Force exercise:
- Color for representational purposes is very rare until the latter part of the 19th century except in geology, natural history, medieval archaeology, and photo history. Even in the 1880-1920 era, representational color occurs with modest frequency only in a few fields: entomology (serials, not books), cultural history, and medieval archaeology.
- Line cuts show an opposite trend: very common in the earliest era, and declining in frequency in most fields in the 20th century (with the exceptions of entomology and geology, where line cuts are fairly common throughout).
- Encoding color occurs frequently in a few fields (geology, anatomy), but rarely or not at all in most of the disciplines surveyed.
- Halftones do not show clear trends, but, in general, tend to be used infrequently during the earliest epoch surveyed and much more commonly in the latest one.
- Oversize is a common problem in a small number of fields (e.g., architecture books (not serials) in earlier epochs and photographic portfolios in all periods).
There is some hint in the data that the importance of representational color imagery is associated with its rarity, for example in anatomy, architecture, and photo history. Interestingly, the art historians did not rate representational color as an important attribute regardless of how common or uncommon its use was, whereas the medical historian consistently rated representational color as very important.
It is important to repeat the caution that the foregoing results are derived from a quite small, informal, and exploratory survey of expert opinion. It is unquestionably important to broaden the scope and depth of our knowledge in this area by carrying out carefully specified and rigorously executed studies that actually examine well-designed samples of volumes on the shelves of major library collections in various areas of scholarship.
What is needed is a half-dozen or so sample surveys that collect data on image characteristics and frequency of occurrence across a variety of materials in several disciplines and time periods. The basic objective is to estimate, for the purpose of developing preservation strategies, the most common kinds of image attributes in both books and periodicals of scholarly importance. From that estimate the size of the preservation problem and the extent to which various sorts of technologies can handle it in various epochs can be more accurately judged. We emphasize the phrase “most common kinds” of image attributes because we recognize that in almost every discipline, every era, and all kinds of publications, there will be unusual occurrences that are exceptions to any flat rule. These exceptions, while important, simply signal the need for special treatment and do not impugn a general policy–such as using high-contrast black-and-white microfilm for collections in taxonomic botany which contain mostly line drawings that illustrate plant structures.
The design of the surveys will require the joint expertise of librarians who manage and scholars who use the collections, as well as personnel who are familiar with sampling and survey form design. The design will not require exotic talent, but it cannot be emphasized too strongly that sampling technique, subject matter knowledge, and familiarity with the physical features of the collections to be surveyed are all of equal importance and can be slighted only at peril of misleading or irrelevant results.
It may be necessary ultimately to survey collections in all disciplines that are dependent upon text-cum-image publications. Certainly, art and architectural history, systematic botany and zoology, geophysical science, technology, and engineering would be appropriate places to begin since they are so image-dependent, and since the variety of images and users is so great. The collections surveyed should include ample numbers of publications from 1860 to at least 1930 in order to learn the full range of image types and their distribution at various stages of embrittlement. The opinions of scholars and their acquaintance with major collections in these various disciplines should inform the selection of sites where surveys are to be carried out.
While the surveys we recommend focus on image types and distribution, other features of publications that may be of more general interest to librarians of the participating collections could be included in the data collection as a service to and some compensation for the efforts of those libraries. For example, brittleness of pages, condition of binding, size of volume, year(s) of publication, and so on can be incorporated into the survey form without requiring gross additional time and effort to collect the data. Finally, it is essential to train the survey staff to understand and follow the sampling procedure strictly, to understand and record accurately the observations required, and to understand the criteria for judgments and apply them consistently. Inconsistent or careless execution can make the best design produce faulty data.
This program of surveys will provide information that is needed to estimate the preservation needs for materials that cannot be handled with high-contrast black-and-white microfilm, and will help to guide formulation of policies for those materials. These policies should be promptly tested in demonstrations that bring the new technologies of preservation into play.
Uses and Users
In analyzing the preservation problems presented by text-cum-image materials it is essential to recognize the considerable diversity of image types and their physical and conceptual relationship to texts. The most commonly found images in books, at least up to the 20th century, are black-and-white illustrations, of various sorts, ranging from simple line cuts to halftone reproductions, and not uncommonly including original woodcuts, etchings, lithographs, and tipped-in photographs. Line cuts are most common in earlier years and halftones do not appear in quantity until late in the 19th century. Illustrations incorporating color vary from those using a single or a few colors for encoding purposes to those including a wide range of hues and values for purposes of representation. Images also vary markedly in physical size, from tiny decorated chapter initials to huge maps and diagrams, and such differences will often have dictated their presentation, for example, as foldouts or in albums or pockets of single sheets.
As noted earlier, images in a book may be placed adjacent to the text to which they refer, presumably in order to reinforce the conceptual link between the two and to make cross-reference easy. On the other hand, images may be widely separated from text, in one or several gatherings in a volume, or in independent, complementary volumes. The latter arrangement may result from purely practical publication considerations or may be conceptually meaningful, in an effort, for example, to assemble a unified corpus of illustrations supplementing but having also an independent value. In the preservation process it is therefore essential to consider the rationale behind the arrangement of images in relation to text.
For the purposes of preservation, all images are valuable in themselves. Images store information that may be useful independently of the text they illustrate. They can often be critical for scholarship in fields not addressed or even contemplated by the writer of the text containing them. The most obvious example is probably historical scholarship, which may be concerned with the reconstruction of the body of visual knowledge available at a given time. For example, even the images contained in advertisements appearing in art and architecture periodicals are significant documents for tracing the evolution of taste.
It would be easy, especially considering current interest in cross-disciplinary scholarship, to cite other examples. One must conclude, therefore, that strong efforts should be made to preserve the images appearing in a text, and the closer to their original appearance the better. Various practical considerations more often than not dictate the need to compromise this ideal. The nature of that compromise depends on an informed assessment of the importance of particular kinds of images. Some images, such as original prints or photographs bound or tipped into a book, have artifactual value and therefore an importance that is independent of the information they convey. But in general the significance of an image must be measured on a gauge of probable informational usefulness.
There is no completely satisfactory procedure for making the difficult judgment as to which images, or which attributes of a class of images are important to preserve. Certainly, the opinion of contemporary scholars should be consulted, even though their estimates of importance may be different from those of future scholars. It is possible, furthermore, to err in both directions: failing to retain images or attributes that future scholars would have prized, or insisting on a level of completeness and fidelity that later users will find unnecessary. The second type of error may seem, at first glance, to be vastly preferable, but one must recognize that it will ordinarily consume more resources–time, equipment, material–and, with inevitably limited budgets, effectively mean that fewer works are preserved. Limitation of resources also means that a volume-by-volume expert inspection and decision as to what to preserve will prove an unfeasibly long and costly process. In short, painful judgments about whether to save or not will have to be made about whole classes of images and attributes. Unavoidably, some mistakes will be made. We should expect an imperfect process and recognize that the urgency of embrittlement and the constraints of resources leave no alternative.
Information stored in the image or body of imagery must be considered to have especially great importance if it is in some sense "unique." Maps, botanical or zoological type specimens, and drawings and photos documenting archaeological excavations are among the many image types that may in actuality be unique. Once the specimen withers or dies, once the artifacts are removed from the archaeological site, the "original" has been lost and only the representation of it remains–as a unique item. Other images, though not truly unique, may appear in books with sufficient infrequency to warrant a high priority in the preservation process. Judgments in this area are difficult, for it is clearly unfeasible to scrutinize each illustration (for rarity) in every volume that is a candidate for preservation.
The important attributes of images, from the viewpoint of specialists in a scholarly or scientific discipline, are often surprising to an outsider and may seem to violate common sense; an art historian, for example, may not need to have an accurate copy of a colored illustration of a painting in a book of an earlier era because the color reproduction itself is such an unfaithful copy of the original. A taxonomic botanist would prefer a well-executed black-and-white line drawing to a beautiful color photograph of a specimen, simply because the former displays the structure of the plant better. A geologist may be content to have stratigraphic charts reproduced in black-and-white, as long as the distinctions among strata that were established by colors in the original are preserved in the black-and-white copy. An historian of 19th-century photography may be interested in the tone of a brown albumen print that, to a non-expert, might seem no different from a "black-and-white" photograph.
Images may retain their significance long after the words that surround them lose theirs. For example, many reference works containing images, such as archaeological reports, ethnographic and geological surveys, corpora of all kinds, remain fundamental for ongoing research and have, therefore, a great claim on the preservation and access process. Due consideration must be given to the claims of scholars in related or collateral fields. Geological text-cum-image materials serve cartographers, evolutionary biologists, and others as well as geologists. Finally, the claims of future scholarship must be considered, and even though, in many cases, they can only be guessed at, it is at least evident that historians of specific disciplines will always want access to the visual materials of the past. All this requires curatorial consultation with the various constituencies of a library and a balanced assessment of needs and resources.
In addition to the relative importance of various images, considered in terms of what they represent, the significance of their specific attributes must be taken into account in the determination of preservation priorities, and especially strategies. The value of an image may depend largely or entirely on the information conveyed through its tonal range or its color. But it is important that the contribution of image attributes be properly weighed. Color, for example, may in some instances be little more than a publisher’s cosmetic device, and therefore of low priority for preservation. Furthermore, an attribute of the original image may not be intrinsic to its informational content. In the case of encoded color it is the encoding rather than the color that is important. Size, as often in the case of maps, may be essential to the capacity of an image to convey information, but in other cases an inconsequential attribute.
Some specific suggestions concerning the choice of preservation strategies are addressed in the sections of this report on material conservation and digital technology, but it is appropriate in the context of this discussion of the use and attributes of images to stress two conclusions of the Task Force.
First, it seems essential that, regardless of the specific strategy chosen, a visual record of the images in a book, and of their relationship to the text, be preserved in some usable form, so that it is possible to reconstruct what the writer saw and used in developing the text. For example, it might not be feasible, or considered necessary, to strive for high-quality, high-cost preservation of images such as standard, frequently reproduced views of St. Peter’s. This might dictate the choice of high-contrast microfilming, which yields only poor quality images, but would preserve the identification and relationship of the image to the corresponding text.
Second, in some instances “mixed” technologies may be the preservation strategy of choice. For example, a book with an appended volume of unique or otherwise valuable views of St. Peter’s might, perhaps in the interest of cost containment, or because of an anticipated very low use of its verbal content, have its text preserved on high-contrast microfilm, but its illustrations on color microfilm or in electronic form. This would especially be the case when ongoing or future research is perceived as benefiting from the possibility of using the preserved imagery interactively with other electronically stored material.
Finally, an assessment of the importance of images and their attributes in a given field evidently requires the advice of their users. But it might be stressed again that images, however field-specific in initial appearance, have a variety of present and future users, and it seems essential that preservation strategy take the broadest possible view of future use.
Conservation Versus Preservation
The Task Force was charged to search for solutions to the various problems of preserving the intellectual content of brittle books containing a combination of text and image. Early in its work, it became clear that because the number of volumes at risk was enormous, conversion to a different format was the only practical approach to saving the intellectual content of the bulk of these materials. Nonetheless, in spite of the sense of urgency imposed by the dimensions of the problem, a recurrent theme of the discussions was the need to include among the recommended strategies some whose goal would be to conserve the physical volumes themselves in the context of a national (or even international) plan. Although the decision to conserve has traditionally been considered a strictly local matter, the Task Force members agreed that it was important to emphasize strategies for national coordination of efforts to enhance the conservation and durability of some of the artifacts as well.
From the small survey informally conducted by various members of the Task Force in their respective collections, it became clear that preservation through conversion to black-and-white microfilm may be suitable for a substantial portion of embrittled materials containing text and illustrations. The major determinant is date, since the number of books illustrated with photographs prior to the introduction of the halftone processes about 1880 might total fewer than ten thousand titles, including those that contain only frontispiece portraits. Much of the illustrated material prior to the advent of photographic processes can be said to be suitable for black-and-white microfilming preservation. Nevertheless, we need to develop a clearer idea of what portion of the literature of each discipline for which the image carries significant intellectual information belongs in this category. This can only be determined through detailed surveys of the sort described earlier.
If we can only preserve (through conversion to a different medium) a small set of the endangered brittle materials, we can only conserve an even smaller subset, as the process is time consuming and per-unit costs of conservation are considerably higher. Thus, a clear understanding of the reasons for undertaking conservation and a careful development of strategies in setting priorities are of utmost importance.
The Task Force identified a number of reasons for which certain objects in a collection could be selected for physical conservation, then proceeded to categorize the kinds of conservation activities to be considered, and concluded by enumerating some principles that should be kept in mind when developing strategies and selecting materials for significant conservation treatment.
Reasons for Conservation
There are three principal reasons for conserving the physical object:
- As a hedge against time to await better conversion technologies for materials containing text-cum-image.
- In order to return objects to normal use after preservation reformatting.
- Because items are recognized as having intrinsic value for exhibition, teaching, or research.

Of these reasons, only the first is specific to illustrated materials. The second can generally be considered as a service-oriented decision of primarily local interest. It is particularly the third reason that requires some special attention for the development of judicious strategies. A library or archives may decide to conserve an item that is judged to have intrinsic value, for example, because it is the only example of its kind or an important variant. Among other reasons are that, although there may be several copies extant, the institution holds the one in the best condition, or can offer the best storage environment, or because the item forms part of a collection of national or international significance.
There are a number of actions that can be taken for the physical conservation of books and other library materials, and libraries have been taking some of these, such as rebinding, since their inception. Conservation actions can be grouped into three broad categories:
- Prevention of future deterioration. This can be done through the improvement of environmental conditions, development of practical means for deacidifying new items on acid paper as they are acquired, fostering of better paper-making and binding techniques in the publishing industry, etc.
- Maintenance of the existing collections. In addition to improving environmental conditions, measures such as the decision not to rebind any acidic material without deacidifying it first, limiting circulation, and the like can maintain the integrity of the collection.
- Rehabilitation or salvage of brittle materials. This is primarily accomplished through reformatting and physical conservation. This is the kind of action on which the Task Force focused its attention, although the others are not to be neglected.
Conservation Decision Principles
As mentioned above, priorities for conservation are harder to determine because less can be accomplished at greater cost. The Task Force identified some principles to be followed in the implementation of decisions to conserve. They are as follows:
First, any candidate for conservation should also have a surrogate made for continued use and/or access to its intellectual content. This principle should apply to all conservation actions taken to provide a copy for normal use of an item with intrinsic value. It may sometimes apply to cases where conservation is a hedge against the time when better conversion technologies become available.
Second, items that have been conserved in a collection should be passed to a category of “limited use.” This principle clearly does not apply to those cases where the goal of the conservation action is to return to use “till destroyed” items whose intellectual content has been satisfactorily preserved. Rather it applies primarily to cases in which a conservation action has been taken because the item is rare and considered to have some intrinsic value worthy of considerable investment in its physical conservation.
Limited use means one or more of the following:
- The item is removed from use until a better preservation technology becomes available.
- Only persons with special scholarly qualifications or justification will be allowed to use it.
- It can be used only under controlled conditions (e.g., with a curator turning the pages with white gloves), in effect making it a museum object.
- It is reserved for display only, under carefully controlled conditions.

Third, there should be a nationally accessible bibliographic record created for treated materials which indicates that they have been conserved. This information is particularly important to avoid unintended duplication of effort for items on whose conservation a great amount of time and money have been expended.
An example of a conservation decision following the above guidelines and principles would be the conservation of one complete serial set in better condition than the one that was surrendered for microfilming, storage of the conserved set in a repository with superior environmental conditions, and creation of a nationally available bibliographic record with an indication of the action taken.
Another example would be the boxing of brittle items where color and/or continuous tone images are essential to the preservation of the intellectual content. Such boxing should be done in addition to providing a surrogate for use and limiting access to the originals. This will ensure that the conserved item is protected from further degradation from exposure to the atmosphere or through unnecessary use until such time as more satisfactory conversion technologies become available. Again, the creation of a nationally available bibliographic record would signal the availability of the surrogate and the conservation action taken.
Alternative Technologies for Preservation of Text and Image
At the present time, there is a substantial variety and a great fluidity in the technical capabilities available for preservation of text-cum-image materials. The encoding of text and the digitizing (bit-mapping) of images hold great promise for the capture, storage, and communication of such scholarly materials, and, at the same time, pose a dilemma for preservation strategies now. On the one hand, the new capabilities make it imperative that whatever preservation strategies are adopted now result in the enhancement rather than the obstruction of future research; that, for example, the texts and images of historic books which must be preserved in the near future can ultimately be made available in the superior electronic formats of the future. On the other hand, the electronic media are in constant, seemingly inexorable change. The technologies that offer the greatest potential for access and manipulation are, unfortunately, the least stable and the least mature in terms of standards.
Electronic technology is entering scholarship of all varieties at an accelerating pace and promises to enhance, if not to revolutionize, scholarly research in the future. Technology will continue to influence both research methods and information requirements. Most striking is that it offers far wider geographic, and more rapid, access to information than has ever been possible before. It also provides greater portability of information than does our massive paper-based culture, while at the same time allowing for rapid retrieval and conversion to paper form. At the workstation, ever more rapid and sophisticated scanning or browsing capabilities will be available to the scholar. These advantages are beginning to be manifest and the technology is almost, but not quite, within our grasp.
The questions confronting the preservation community in regard to digital technologies seem to fall into two categories, one conceptual, concerning the desirability of digital reformatting for scholarly purposes; the other practical, involving the feasibility of such reformatting on a large scale at the present time. These questions are of particular moment in relation to text-cum-image preservation.
At the conceptual level, the capacity to enhance or manipulate images at the workstation makes digitizing an exceptionally attractive option for preservation in some disciplines. It becomes possible for art historians, for instance, to "see through" obscuring deterioration due to time and pollution. Stains and discoloration can be removed electronically, and textures and cross-hatching made clearer. Faded areas in works of art can be strengthened for viewing, and thus made more legible. Corrections can be made for changes that have altered the color in an original image. In other disciplines, for example, geology and architecture, the capacity to use two-dimensional images to create three-dimensional simulations has important ramifications for present and future research. Clearly, too, the possibility of using images from scholarly works of the past interactively with newly generated images especially increases the attractiveness of digital preservation.
Storage of bit-mapped images requires large computer memory, larger for color images, and still larger for high resolution. Yet, in capturing the image it seems unwise to record at anything but the highest practical resolution, even at the cost of increased memory demands, since details left undiscriminated (by coarser register) can never be recaptured, while a very finely registered image can be displayed on a low resolution screen if nothing better is available.
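The memory arithmetic behind these trade-offs is simple to sketch. In the illustration below, the page dimensions, resolutions, and bit depths are assumptions chosen for demonstration, not figures drawn from this report:

```python
# Approximate uncompressed storage for one bit-mapped page image.
# Page size, resolutions, and bit depths here are illustrative assumptions.

def page_bytes(width_in, height_in, dpi, bits_per_pixel):
    """Uncompressed size in bytes of one scanned page."""
    pixels = (width_in * dpi) * (height_in * dpi)
    return pixels * bits_per_pixel // 8

# A 6 x 9 inch page under three capture regimes:
bitonal_300 = page_bytes(6, 9, 300, 1)    # 1-bit line art at 300 dpi
gray_300    = page_bytes(6, 9, 300, 8)    # 8-bit continuous tone at 300 dpi
color_600   = page_bytes(6, 9, 600, 24)   # 24-bit color at 600 dpi

print(f"bitonal, 300 dpi : {bitonal_300 / 1e6:5.1f} MB")
print(f"gray,    300 dpi : {gray_300 / 1e6:5.1f} MB")
print(f"color,   600 dpi : {color_600 / 1e6:5.1f} MB")
```

Under these assumptions a simple line cut occupies well under a megabyte, a continuous-tone page several megabytes, and a high-resolution color page tens of megabytes, which is the ordering of memory demands the paragraph above describes.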
Although digital reformatting of text-cum-image materials may often seem desirable, the necessary technologies are not fully available for routine use. The Joint Task Force has seen striking examples of visual material that has been digitized and subsequently reproduced in paper copy, sometimes clearer and more legible than the original document from which the digital copy was made. We have been shown superb color images on photographic film and outstanding paper copies made from them. We have seen microform images that were produced from digitized format, and digitized text produced from microform without apparent loss of fidelity or information. These examples have, however, been generated by experimental or pilot projects carried out under laboratory conditions, rather than in routine production processes.
Because the technical capabilities in digitization are still developing, it is not clear what investment in equipment and what costs of operation will be incurred in making it a standard operating procedure. The required infrastructure for broad access to electronic formats does not currently exist, and technical questions about methods and standards of digital access remain to be resolved.
Finally, in order to reap the full scholarly harvest from electronic technology, the text of illustrated works should be captured in encoded (rather than bit-mapped) form. Encoded text is machine-readable, hence easily searchable, which makes it a powerful tool for comparative studies, indexes, concordances, and similar scholarly enterprises. A bit-mapped text, however, is no more–and no less–searchable than is a microfilm copy. Moreover, encoded text occupies far less memory than bit-mapped text.
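The memory disparity between encoded and bit-mapped text can be illustrated with a rough calculation; the page size and character count below are assumptions chosen only for illustration:

```python
# Contrast encoded (one byte per character) storage with a bit-mapped
# scan of the same page. All figures are illustrative assumptions.

CHARS_PER_PAGE = 2500              # a dense printed page of prose
ENCODED_BYTES = CHARS_PER_PAGE     # ASCII: one byte per character

# The same page as a 300 dpi, 1-bit bitonal image, 6 x 9 inches:
BITMAP_BYTES = (6 * 300) * (9 * 300) // 8

print(f"encoded text : {ENCODED_BYTES:>9,} bytes")
print(f"bit-mapped   : {BITMAP_BYTES:>9,} bytes")
print(f"ratio        : roughly {BITMAP_BYTES // ENCODED_BYTES}x")

# And only the encoded form is directly searchable:
page_text = "the preservation of text and image"
print("preservation" in page_text)
```

On these assumptions the bit-mapped page is some two hundred times larger than its encoded equivalent, even at the lowest useful bit depth, before any question of searchability arises.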
In order to convert printed text to machine-readable (encoded) form a device that functions like an Optical Character Recognition (OCR) machine is required. (Otherwise the text must be keyboarded by a human typist.) Current OCR machines, however, have an unacceptably high error rate in reading text, an error rate that gets worse when the machine encounters uncommon type faces, unfamiliar orthographies, faded or imperfect imprints–which probably occur frequently in the older publications that are the prime candidates for preservation. In the absence of an adequate OCR technology, a bit-mapped image of the text is preferable, of course, to none at all.
One consideration in choosing technologies for text-cum-image preservation is comparative cost. It is also a complex matter whose full analysis exceeds the scope of the present report, both because there is not a well-defined cost-accounting framework in common use, and because there are few studies of costs that yield a full and fair comparison. In comparing digital scanning with microfilming, for example, the costs of storage and transmission (dissemination) of the preserved item should be taken into account as well as the costs of preparation and handling of items for preservation, the capital costs of equipment (or a surrogate therefor) and, of course, the costs of materials and labor required for preservation operations. We have not been able to uncover cost studies of digital preservation that can be fairly compared to the few adequate studies of microfilm preservation costs. Conservation processes involve such a large amount of highly skilled labor that we believe their costs far exceed the equipment and materials costs of the two preservation techniques, making conservation the most expensive alternative as well as the slowest. Furthermore, the principles for estimating dissemination costs of conserved material are not well established.
Several analysts have concluded that the most expensive parts of any preservation process are turning the pages and preparing the material for capture: i.e., selecting the volumes to be preserved, removing them from storage, making the bibliographic record, and transporting the books to the scanner or camera. The cost of actual microfilm, of exposing, processing, and quality-checking it is small by comparison. These latter costs are well documented.
There is not such solid documentation of digital scanning costs, though it seems entirely possible that actual scanning (i.e., post-preparation) costs will be as low as microfilm or even less. Until cost studies have been carried out, however, there is no assurance of that. A Commission-sponsored study of scanning costs is under way at Cornell, and cost data are expected later in 1992.
In addition to developmental work in this country on electronic capture, storage, and transmission of images, there are a number of substantial projects being carried out abroad that should soon add considerably to the store of information about effectiveness, costs, and problems of various methods. It is significant that opinion has by no means solidified on what approach to take. The "Seville Project" is devoted to preserving the Archivo General de Indias, 45 million documents and 7,000 maps and blueprints which record Spain's 400 years of control of the Americas. The texts are being scanned directly, while all materials in color (primarily maps) are first captured in color microfilm, then converted to digitized form. On the other hand, the National Museum of Denmark has elected to make 35mm color slides of the objects (including documents) in its Ethnographical Department's collection before transferring the images to videotape. This decision was based upon the high resolution and long storage life of film, as well as its potential for automated transfer to some new medium, and the ease and low cost of using film-making equipment. The Royal Library of Copenhagen is evaluating three alternative filming techniques and a high-resolution digital format for setting up a National Picture Database from its collections. One Danish authority familiar with the project sums up the situation as he sees it:
. . . all cultural artifacts deteriorate with time, some faster than others. There is therefore a premium on storage media which will still be intact and accessible well into the next century. As regards images, we are at a critical point in the transition from analogue to digital storage. As far as alphanumerical data is concerned, the transition is complete, whereas this is not so for sound and images. Images represent the biggest difficulty, as there are no universally accepted digital formats which offer the same combination of resolution, contrast, cheapness and longevity as film. It is likely therefore that film will continue to be used as an intermediate high-resolution format for the rest of this decade while the battle between analogue and digital storage in the outside world has been settled.
Continuous-tone black-and white microfilming is not currently available in the United States, as far as the Task Force has been able to discover. We did investigate a continuous-tone technique that has been developed by a European concern and attempted to acquire it for use in a pilot project through the MicrogrAphic Preservation Service (MAPS). The negotiation failed, however, because of the developer’s proprietary concerns, and the project could not be pursued as we had wished. An alternative procedure might be to ship the material to a site where the developer could have full control over the microfilming process and thus shield its trade secrets. The Task Force is still investigating this admittedly cumbersome possibility to determine whether there is a sufficient likelihood of obtaining informative results to justify the effort.
One alternative is to film black-and-white (halftone) illustrations with color film. This produces more satisfactory results but at a very much increased cost. Such costs may be justified by the importance of the images being preserved, if indeed the process does produce satisfactory results. The Commission is therefore exploring the question of quality in a pilot project described below.
Commission-sponsored studies of Cibachrome microfilm by the Image Permanence Institute have been exploring the rate at which color fades and the film base deteriorates under various storage conditions. Results are incomplete but strongly suggest that the useful life of the film “will likely be several hundred years at room temperature and . . . [it] is an excellent preservation medium for color imagery.” In contrast, electronic storage media–compact and other disks, tape–appear to deteriorate within a decade and, more importantly, to require repeated and systematic “refreshing.”
In general, it appears that at the present time conversion to microfilm–color or black-and-white, high contrast or continuous tone, depending on image characteristics–remains the preservation process of choice for most materials and certainly for archival purposes. Digital imaging, however, may already be the choice for capture and transmission, for some materials at least, with microfilm being produced directly from the electronic record. This suggests that combinations of technologies may provide useful solutions today for urgent preservation problems. The use of dual technologies, however, for economic and other reasons, may not always be feasible. Until the rapidly evolving digital technology becomes more stable, microfilm, from which digital images can be generated, is probably the most practical medium for preservation in the largest number of cases. The choices made, however, should depend on a thoughtful and judicious assessment of scholarly as well as practical considerations, of the needs of the broad constituency for materials requiring preservation, and of future as well as present demands on them.
On the basis of its study, the Joint Task Force has reached four principal conclusions about the problems of preserving text-cum-image publications, and offers these as a basis for a series of actions designed to refine the strategy for saving that portion of the intellectual heritage in which images and text are both necessary for a full understanding of the scholarly ideas being presented.
First, the Task Force is persuaded by its own experience and the fragmentary data available that an important beginning can be made on the preservation of both books and periodicals in the 1850-1880 era in almost all disciplines that depend on images using only current microfilm technique–i.e., high-quality, high-contrast black-and-white filming. The reason for this recommendation is that, in most disciplines, scholarly publications in this era employed line drawings, woodcuts, or other simple "edge-based representations" for illustrative purposes. There were comparatively few publications with colored illustrations in most fields until the late 19th century. The principal exceptions occur in natural history, geology, and medieval archaeology. Halftone illustrations (mostly photo-engravings from photographs) did not play a major role in illustration until the last two decades of the century. The resulting film archive can serve as an intermediate technology until it can be converted to a demonstrably superior and standardized electronic storage medium.
Such exceptions to these generalizations as do exist should probably be handled by conservation rather than preservation, for they are likely to be rare, intrinsically valuable publications, containing perhaps tipped-in photographs, hand-painted or tinted pictures, original prints, pochoir, or other exotic images. Such items, it must be stressed, represent a small, perhaps very small, proportion of the total corpus of publications at risk. Their identification and selection for special treatment can be incorporated into the procedure for moving materials into mass preservation. Such special treatment is currently recognized by the National Endowment for the Humanities (NEH) as a permissible option for a certain portion of its preservation grants.
Second, the preservation of halftone illustrations and accompanying text cannot be handled satisfactorily–for most scholarly purposes–by high-contrast black-and-white filming. Too much differentiation within the image is lost, and essential details needed for interpretation simply vanish. The Joint Task Force believes that the alternatives to black-and-white microfilm require further study, some experimental trials to estimate cost and time requirements and quality of results, and, perhaps, some additional technology development.
Electronic scanning and bit-mapped storage of continuous-tone black-and-white images is an available alternative, though not without its own problems, to which we have alluded earlier. It is a still-developing technology, whose standards have not been settled, suggesting that a heavy investment in equipment at this time may not be wise, since it may soon be outmoded. At the rate electronic technology is developing, however, it would seem prudent to keep exploring and planning for its ultimate domination, and equally prudent to begin now to preserve what can be saved with whatever dependable means are available. Since high-quality microfilm can handle most of the problems of the earliest works at risk, and since microfilm can be readily converted to digitized format, there seems to be a clear path to follow.
Third, the available information about the number and types of images in various kinds of publications in various epochs is insufficient and undependable for large ranges of time and materials. It is obviously both dangerous and inefficient to base a preservation policy on inadequate information, so the Task Force recommends an investment in systematic surveys of research library collections to cover a variety of disciplines and time periods.
The program of collection surveys ought to take a coordinated, national approach. The participation of several major libraries will have to be enlisted, the cost of the effort will require outside funding, and the results of the surveys should be made freely available to inform local as well as national strategies for preservation. Comparability and consistency of data across sites are important features of a useful program. One model might be the design of a single, standard survey form and data collection instructions to ensure that comparable information is collected at all sites. A uniform sampling procedure should be prepared to accompany the form, and it should indicate, to the extent possible, whether local variations in physical environment, collection size, and other significant independent variables might require adaptation of the procedure and, if so, how this could be done while preserving consistency in the results. The surveys can provide a model for others to follow or to confirm in their own collections by modest verification procedures. The effort might well be coordinated or managed by the Commission, with data collection and analysis delegated to a professional survey group that could do the actual surveys at all sites.
The basic purpose of the surveys is to inform the design of text-cum-image preservation strategies for the post-1880 era, when images that cannot be adequately handled by high-contrast black-and-white microfilm appear and begin to supplement or replace line drawings in many fields. The results of the surveys should lead directly to pilot projects that test the suitability of alternative technologies for capturing halftone and color images–the next big step in preservation of text-cum-image volumes.

Fourth, the Task Force recommends continuing and expanding a series of pilot projects to apply a variety of technologies to preservation of defined bodies of text-cum-image material for the dual purposes of learning (1) what time, effort, and special problems are involved in employing these techniques; and (2) how satisfactory the results are for scholarly purposes.
An example is the project already set in motion by the Commission with Getty funding, of color-microfilming a brittle portion of a scientific serial publication (the New York State Museum Bulletin) which covered a number of natural sciences in the last half of the 19th century. Its illustrations possess all the varieties and combinations of attributes to which we have earlier alluded, and they document features of the natural world that have since disappeared or been so altered that the images are important to preserve accurately for comparative purposes. Among the questions to be answered in this project are whether current microfilm technique can produce a user-acceptable copy of the images, whether usable hard copy can be produced from the film, and, of course, how great the cost and how long the time involved will be. A further step in this experiment is the scanning of the color microfilm produced to understand better the problems of conversion of complex colored images.
A related project is the “multimedia conversion” experiments proposed by the Task Force. Briefly stated, this project is a study of image-and-text capture, transmission, and output resulting from applying microfilm and digital scanning techniques to a specific body of material, then transforming each output into the other medium. The trial envisioned will use about 1,000 books drawn from various disciplines and include a variety of image types, but predominantly or exclusively black-and-white. The four products–direct film, film made from a digital scan, direct digital, and a digital copy from a film original–will be compared as to quality, time, and cost required, and user acceptability of the several products. Two trials currently underway with Commission support will provide information on some of these issues. At Yale, the Commission is supporting a project to digitize materials from the preservation microfilm collection. At Cornell, the reverse is the case: a bit-mapped scan is being converted to paper, and the feasibility of producing microfilm from the digital files is being explored. The two projects are using different subject matter, however, so a direct comparison of user acceptability will not be as strong as it would be if all four processes were applied to the same material, giving users full, direct comparisons and standardizing the handling problems by eliminating variance in materials characteristics.
The next steps in this series of experimental trials will be influenced, of course, by the outcome of work already under way, but the preliminary results from current trials strongly support the Task Force recommendation for continuing this sort of work.
Looking back on the deliberations of the Task Force, it is evident that unanticipated obstacles were encountered. The first of these was the dearth of information about the kinds and numbers of images in books that need to be preserved. The wide variety of illustrative techniques that have been used in an equally wide variety of humanistic and scientific disciplines, furthermore, makes the technical problems of conversion daunting for much material in many eras. Fortunately, the simplest era of illustration coincides with the oldest–and hence, on average, most decayed–material, and allows a major recommendation about where and how to begin the preservation task.
The second obstacle, wholly outside of Task Force control, is the state and the rate of development of technologies for preserving the more graphically complex publications of later epochs. The future looks clearer than the present, especially in respect to electronic capture and storage; and that leads to what may seem a paradoxical recommendation: to stick with an old technology–microfilming–because it is dependable, long-lived and, above all, convertible to future electronic media that will unquestionably offer advantages that cannot yet be grasped.
A beginning has been made to preserve the text-cum-image heritage of our society. It is time to move forward to concerted action, aware as we must be that, with the advent of new technologies, the course we are advocating will inevitably be modified. So long as the need and goals of preserving text-cum-image remain strongly established, effective response to new technologies will be assured.
Membership of the Joint Task Force on Text and Image
Nancy S. Allen
Librarian Museum of Fine Arts, Boston
Patricia Battin ***
President Commission on Preservation and Access
Thomas C. Battle
Director, Moorland Spingarn Research Center
Professor, Department of History
University of California, Berkeley
Richard Brilliant (Chair)
Professor, Department of Art History and Archaeology
David B. Brownlee
Associate Professor, Department of the History of Art
University of Pennsylvania
Harvard University Extension
Librarian, Avery Architectural and Fine Arts Library
Anne R. Kenney
Assistant Director, Department of Preservation and Conservation,
Geological Sciences Librarian
Lamont-Doherty Geological Observatory of Columbia University
M. Stuart Lynn *
Vice President, Information Technologies
The Winterthur Library
James R. McCredie **
Institute of Fine Arts
New York University
Robert G. Neiley
Robert Neiley Architects
R. Nicholas Olsberg **
Head of Collections
Canadian Centre for Architecture
Chief, History of Medicine Division
National Library of Medicine
Professor, Institute of Fine Arts
New York University
NB: The printed version of this document contained images which are not included in this electronic version
The text-cum-image photographs in this report are from books held by Columbia University’s Lamont-Doherty Geological Sciences and Avery Architectural and Fine Arts Libraries and the National Library of Medicine. The Commission is grateful for their cooperation and participation. The Committee thanks Henry W. Riecken for his useful contributions to its deliberations.
Page 7. Poirier, Paul (ed.) Traité d’anatomie humaine vol. 2, fasc. 3. Paris, Masson, 1898. Courtesy of National Library of Medicine.
Page 12. Rivoira, Giovanni Teresio. Roman architecture and its principles of construction under the empire: with an appendix on the evolution of the dome up to the XVIIth century. Oxford: Clarendon Press, 1925. Courtesy of Avery.
Page 13. Jahrbuch der jungen kunst. Leipzig, Klinkhardt, 1921. Courtesy of Avery.
Page 22. Clarke, John M. “Report of the State Paleontologist 1902.” New York State Museum Bulletin 69, p. 851-1311, 1903. Courtesy of Lamont-Doherty.
Page 25. Scheuer, Alfred. Die fälschung der Ludovisischen thronlehne. Teplitz-Schönau, Selbstverlag, 1934. Courtesy of Avery.
Page 30. Eckhardt, Theodore. An atlas of anatomy of the human body. Indianapolis, Keyler, 1881. Courtesy of National Library of Medicine.
Design and typesetting by Design Innovations/Ten Point Type, Washington, DC. Photography by Burwell/Burwell, Washington, DC. Printing by Todd Allan Printing, Washington, DC.
1. A “bit-map” is functionally the electronic equivalent of a photograph in which the “pixels” (picture elements) on the cathode ray tube (the screen) play the same role that the silver halide “grains” of a photographic film emulsion play–that is, by darkening they provide the “gray-scale” of a black-and-white picture, or by taking on various hues, the color image.
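The footnote’s notion of a bit-map, and the report’s earlier point that high-contrast capture discards the differentiation within a halftone, can be sketched in a few lines of Python. This is purely an illustration with made-up pixel values, not a description of any actual scanning system:

```python
# A bit-mapped gray-scale image is simply a grid of pixel values,
# here 0 (black) through 255 (white). The values are illustrative.
bitmap = [
    [0,   64, 128],
    [96, 160, 224],
    [32, 192, 255],
]

# Continuous-tone storage keeps every gray level. High-contrast capture
# behaves like thresholding each pixel to pure black or pure white,
# discarding the intermediate tones that carry a halftone's detail.
THRESHOLD = 128
high_contrast = [
    [0 if value < THRESHOLD else 255 for value in row]
    for row in bitmap
]

gray_levels_before = len({v for row in bitmap for v in row})
gray_levels_after = len({v for row in high_contrast for v in row})
print(gray_levels_before, gray_levels_after)  # prints "9 2"
```

The nine distinct tones in the sample collapse to two, which is the loss the Task Force describes when halftones are preserved on high-contrast black-and-white film.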
2. Rütimann, Hans, and M. Stuart Lynn. Computerization Project of the Archivo General de Indias, Seville, Spain. Washington, DC, Commission on Preservation and Access, March 1992.
3. Looms, P.O. “Economic and Design Issues of Large-Scale Multimedia Databases,” in Bearman, D. (ed.), Hypermedia and Interactivity in Museums. Archives and Museum Informatics Technical Report #14. Pittsburgh, PA: Archives and Museum Informatics, 1991.
The Commission on Preservation and Access
1400 16th Street, NW, Suite 740
Washington, DC 20036-2217
Reports issued by the Commission on Preservation and Access are intended to stimulate thought and discussion. They do not necessarily reflect the views of Commission members.
Additional copies are available from The Commission on Preservation and Access, 1400 16th Street, NW, Suite 740, Washington, DC 20036-2217 for $10.00. Orders must be prepaid, with checks made payable to “The Commission on Preservation and Access,” with payment in U.S. funds.
This publication has been submitted to the ERIC Clearinghouse on Information Resources.
Copyright 1992 by The Commission on Preservation and Access. No part of this publication may be reproduced or transcribed in any form without permission of the publisher. Requests for reproduction for noncommercial purposes, including educational advancement, private study, or research will be granted. Full credit must be given to both the author(s) and The Commission on Preservation and Access.