1999 Preliminary Abstracts
HSS Semisesquicentennial Anniversary
1999 Annual Meeting, Pittsburgh, Pennsylvania

3-7 November 1999

Meeting Program

Please Note: All abstracts are arranged alphabetically. If you are a participant and would like to make changes to your abstract, please e-mail the changes to the HSS Executive Office at hssexec@u.washington.edu.

Masanori Kaji
Tokyo Institute of Technology

D.I. Mendeleev and the Concept of Chemical Elements

Dmitrii Ivanovich Mendeleev's firm belief in the individuality of each chemical element, expressed in his distinction between "simple bodies" and "elements," played an important role in his discovery of the periodic law. Through his participation in the Karlsruhe Congress in 1860, he developed a clear idea of how to determine atomic weights. At the same time, he explicitly distinguished between "bodies" and "radicals" in his Organic Chemistry, a distinction that can be regarded as the beginning of the one above. When Mendeleev became Professor of General Chemistry at St. Petersburg University in 1867, he began writing a new textbook, The Principles of Chemistry, the first part of which is based on the principle of valency. He did not seem to pay much attention to atomic weights, because his studies during the 1860s on so-called indefinite compounds, including solutions, made him think that atomic weights were very narrow in scope and valid only for definite compounds. This also led him to consider "element," not "atom," as the unit of matter that represents an individual chemical entity and forced him to distinguish it explicitly from the term "simple body." He tried to find a fundamental property that was not peculiar to certain compounds of an element only, but common to all its compounds and simple bodies. In February 1869 Mendeleev discovered that atomic weight was the fundamental numerical property of the elements that he sought. Having succeeded in compiling the table of elements based on their atomic weights, he was convinced of the validity of this idea. Thus the individuality of each chemical element received a new confirmation from its unique position in the periodic table. Later, Mendeleev's concept of the chemical elements encountered various difficulties that seemed to contradict the individuality of each element, such as the positions of the rare earths in the system and of the radioactive isotopes.

Paul Kelton
Southern Connecticut State University

Avoiding the Smallpox Spirits: Epidemics and Southeastern Indian Survival to 1800

Smallpox and other illnesses that accompanied Europeans and Africans to the Americas caused massive mortality and, for some Indian groups, outright disappearance, but the major confederacies of the American Southeast–the Cherokees, Creeks, Chickasaws, and Choctaws–withstood the epidemiological onslaught. Their survival was due in part to innovative responses, which have gone unnoticed by current scholars who focus primarily on the destruction of native peoples rather than on their persistence. The Southeastern Indians, however, took effective action in the face of disaster. They enacted quarantines to prevent the spread of infectious disease and avoided certain areas that commonly harbored illnesses such as malaria, yellow fever, and typhoid. They also changed their ceremonial practices to curtail the exchange of pathogens among villages and developed new settlement patterns better suited to the new disease environment. Southeastern Indians constructed their strategies within their own cultural framework. They explained diseases as the result of witchcraft, ghosts, and other supernatural phenomena. They also performed medical rituals with attention to gaining divine favor.

Daniel J Kevles
California Institute of Technology

"The Baltimore Case: Obligations, Judgment , and Data"

The so-called Baltimore Case involved charges of scientific fraud against Theresa Imanishi-Kari, one of the coauthors, along with the Nobel laureate biologist David Baltimore, of a paper published in Cell in April 1986. The case, which lasted a decade, was marked by investigations by two universities, a panel of the National Institutes of Health, the Office of Scientific (later Research) Integrity (ORI), and a congressional committee. In June 1996, Imanishi-Kari was exonerated on all counts. While attending to the politics of the affair, this paper will dwell on issues that the politics have tended to overshadow–the scientist's obligations and judgment in the handling of data. Imanishi-Kari was not only charged with fabricating data but also faulted for failing to meet those obligations the way ORI defined them.

Dong-Won Kim
School of Humanities & Social Sciences, Korea Advanced Institute of Science and Technology

Y. Nishina and the Japanese Physics Community in the 1930s

Yoshio Nishina is often called "the pioneer of modern physics in Japan." Trained at the Cavendish Laboratory and Niels Bohr's Copenhagen institute, he was well known for his work with Oskar Klein on Compton scattering. At home, Nishina opened his laboratory at the Institute of Physical and Chemical Research (better known as Riken) in 1931, and it soon became the center of quantum mechanics, nuclear physics, and cosmic-ray research in Japan. In this paper, I will analyze his role in and contribution to the Japanese physics community from three different angles: Nishina as researcher, Nishina as teacher, and Nishina as organizer. I will argue that Nishina's contribution to the Japanese physics community was deep and enduring because he did not just add new elements but instead changed the whole "milieu." In doing so, he contributed to the emergence of a research network during the 1930s and 40s that would not only produce two Nobel Prize winners (Yukawa and Tomonaga) but also raise the level of Japanese physics as a whole after the Second World War.

Mina Kleiche
Université Paris 7-CNRS (France)

To convert Morocco into a vast orchard: introducing new agricultural methods from California to Morocco during the 1930s

In the first decades of the French Protectorate, during the 1920s, French settlers, entrepreneurs, and protectorate administrators devoted their efforts "to convert Morocco to a granary of wheat" for the motherland. When the economic crisis struck in 1929, the French settlers had to turn their production toward new crops that would be complementary to the agricultural production of the motherland. A major new colonial agriculture thus emerged in Morocco, based on irrigation and on the export of citrus fruits and early vegetables on the California pattern. The many natural parallels between California and Morocco attracted the attention of French settlers during the 1930s crisis of colonial agriculture: located at approximately the same latitude on the west coasts of their respective continents, with similar prevailing winds and cool ocean currents offshore, the two regions are roughly equivalent in size and shape, and both have predominantly Mediterranean climates and sizeable desert areas. California had succeeded through the development of the state's agricultural resources, which by 1930 accounted for 90 percent or more of all U.S. production of almonds, apricots, avocados, dates, and dried plums. In the eyes of the French, the Californians had created from a desert area, through irrigation, over a million hectares of admirably maintained fruit orchards; California had achieved phenomenal success. Some policy-makers aspired to create in Morocco "une nouvelle Californie" (a new California). Morocco indeed presented two advantages over California: it was located near the large European markets, and it possessed cheap native labor. Several French agricultural study missions visited California, and influential colonists and agronomists went to study horticulture at the University of California at Berkeley. The California example was introduced in 1928. During a subsequent "période d'implantation" (period of implantation), from 1929 to 1933, a major new orientation toward California-style irrigation agriculture emerged. Finally, after 1933, there was a broad-based effort to convert Morocco into a vast orchard, following techniques pioneered in California. The agronomists transferred en masse agricultural methods, irrigation techniques, and select crop varieties from California to Morocco. The paper's first purpose is to explain how the agronomists, as a scientific community, took part in introducing a modern, highly capitalistic, agribusiness approach. This approach, a clean break with past French tradition, became tightly woven into the agricultural development of Morocco in the 1930s, where it has remained ever since. I will therefore analyze the agronomists' practices on their experimental farms ("stations expérimentales," "fermes expérimentales") as documented in their scientific papers. The paper's second purpose is to demonstrate how science, through agricultural practice and the agronomists' community, was not only in the service of the French "mission civilisatrice" but also participated in building the myth of universal science in the service of development.

Judy Klein
Mary Baldwin College

Controlling Gunfire, Inventory and Expectations with the Exponentially Weighted Moving Average

In World War II, applied mathematicians studied the ball-disc integrators that were effective in lead-computing gun sights for marksmen who fended off enemy fighter planes from atop US bombers. After the war, a generation of engineers and economists turned to the mathematical models of servomechanisms to search for linear decision rules that engendered stable, automatic, and optimal behavior in the control of production, inventories, and even national economies. The adaptive expectations model, as it came to be called in the economics literature, dominated empirical work in macroeconomics for fifteen years. Mathematical statisticians, impressed with the unusually good forecasting results of the EWMA, achieved with a parsimony of variables, parameters, data input, and computational power, sought the theory and structure that cast the shadow of this ad hoc procedure. George Box and Gwilym Jenkins generalized the non-stationary quality, constructed a family of models for which the EWMA was a special case, and derived criteria for model selection. John Muth, Marc Nerlove, and Andrew Harvey were among those who treated the EWMA as a reduced form and who pursued the structural models that held a key to how the world works. From exponential smoothing in gun sights to the random-walk-plus-white-noise of structural time series models, this paper illustrates how statisticians and economists used concrete mechanisms of war as material models from which they could construct abstract models of processes and human behavior.
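In its standard textbook statement (a gloss added here for orientation, not a formulation drawn from the paper itself), the exponentially weighted moving average forecast is the recursion

$$\hat{y}_{t+1} = \lambda\, y_t + (1-\lambda)\,\hat{y}_t, \qquad 0 < \lambda \le 1,$$

or, equivalently, $\hat{y}_{t+1} = \hat{y}_t + \lambda\,(y_t - \hat{y}_t)$: the forecast is revised by a fixed fraction of the latest forecast error, which is the adaptive-expectations idea referred to above, and it is also the steady-state optimal predictor for a random-walk-plus-white-noise process.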

Ursula Klein
Max Planck Institute for the History of Science

Paper-tools and the formation of a new experimental culture in 19th-century chemistry

In the early 19th century, experimental investigations of chemical reactions of organic substances were extremely rare. Chemists assumed that organic substances underwent exclusively chaotic decompositions in the laboratory, which were too complex for any reconstruction. This changed dramatically from the late 1820s, when French and German chemists began to focus their research on organic chemical reactions. Within a period of fifteen years a new experimental culture of carbon chemistry evolved around this new kind of experiment and its combination with quantitative analysis. The paper argues that chemical formulae were necessary paper-tools in this process.

Ronald R Kline
Cornell University

Historical Writing on Business, Technology, and Industrial Research

Historical writing on business, technology, and industrial research in the United States has changed rather dramatically in some areas since the Osiris volume on American Science was published in 1985. As the influence of the Pulitzer Prize-winning business historian Alfred Chandler has waned, many historians have written more social and cultural histories of American business and technology. As part of this trend, several historians have turned from looking at the so-called producers of technology (scientists, engineers, and managers) to the so-called consumers of technology, who are now seen as active agents of technological change rather than passive recipients of new technology. The result has been that such venerable topics as the relationship between science, technology, and business structures have been reconsidered from points of view that attempt to broaden the field of the history of science and technology.

Scott G. Knowles
Johns Hopkins University

The Symbol of Safety: Underwriters' Laboratories and the Rise of Consumer Product-Testing in the United States, 1903-1917

An average day at the Underwriters' Laboratories in Chicago, just after the turn of the century, found a team of physicists and chemists consulting with an electrical engineer on the best way to replicate a burning safe falling from a four-story building. Not satisfied with a single test, these researchers then subjected the safe to a stay in the 1500-degree furnace. Then there was a blast from the water cannon for good measure–to simulate the arrival of the fire brigade at a burning building. What, exactly, were these scientists all about? What exactly were they testing for, and who would pay for such a grab-bag of rigorous tests anyway? The answers to these questions reveal an overlooked chapter in the history of American research and development. Furthermore, they give us new insight into the coalitions of scientists and engineers that ushered in the first strong relationships between Progressive-era reform and scientific activity. Americans after 1900 began to buy consumer products on a large scale for the first time. Many of these products, especially those of an electrical or mechanical nature, carried with them the threat of bodily injury and material damage. Originally founded by the National Board of Fire Underwriters, the Underwriters' Laboratories of Chicago emerged to mitigate the risks of fire associated with new goods such as electric lamps and wires. Soon, however, the mandate of the Lab grew to encompass broad-based materials testing, disaster modeling, and the publishing of standards for product performance across the nation. Most importantly, the Lab began to issue to manufacturers its famous symbol, the UL tag, which became a recognizable sign to consumers that the product they were about to buy had been proven "safe" by the methods and practitioners of modern research. In its attempts to test a broad range of products and materials, the UL hired university-trained chemists, physicists, and engineers, along with insurance professionals and architects/designers. The goal was to develop a broad base of expertise with which to corner the market on product information–then use this advantage to influence building codes, product designs, and consumer behavior generally. The UL turned out fundamental research in chemistry and thermodynamics and applied studies alike. This included developing an array of testing innovations, such as thermocouples, electrical signaling devices, and stress-loading presses. The UL, to say the least, worked on a variety of scientific fronts to press its "symbol of safety" into the consumer's consciousness. This paper traces the beginnings of the UL, its successes and failures in early product testing, and its ultimate victory in establishing its symbol as a trademark of scientific consumerism. I strive to work between the historiographies of R&D and Consumer Culture to tell a story of laboratory diversity and Progressive-era science and reform. Turn back the shade of any lamp today and one finds the UL symbol–how and why this legacy of safety grew out of the overlapping early twentieth-century environments of science, physics, and engineering comprises the task of this paper.

Alexei B Kojevnikov
American Institute of Physics

Freedom, Collectivism, and Quasiparticles: Social Metaphors in Quantum Physics

Disagreements over the large issue of freedom played a particularly important role during the formative stages of the quantum physics of matter from the 1920s through the 1950s. A variety of specific approaches and rival theories relied on their authors' conflicting intuitions regarding whether particles in dense bodies could be described as free or not, in what sense, and to what degree. In their attempts to conceptualize these intuitions in mathematical terms, physicists often used social metaphors which reflected their varying interpretations of freedom, their political philosophies–liberal and collectivist, among others–and their personal experiences of social life in different countries and regimes. The paper studies one major line in this debate, the collectivist approach in condensed matter physics. It follows the attempts by socialist-minded physicists to describe the collective behavior of particles, attempts which resulted in the discovery of new types of natural objects and in the development of a new fundamental language of modern physics: the concept of collective excitations, or quasiparticles.

Richard L Kremer
Dartmouth College

From Text to Trophy: Shifting Functions for Regiomontanus's Library

In Worldly Goods: A New History of the Renaissance (1996), Lisa Jardine argues that cultural artifacts of the Renaissance had significant economic as well as intellectual value for their owners, users and producers. This paper will offer a similar analysis of the shifting cultural functions played by "Regiomontanus's library" from the sixteenth through the early nineteenth centuries. Initially containing more than 300 codices and incunables in Latin and Greek, Regiomontanus's library has long been praised as one of the largest fifteenth-century personal libraries north of the Alps. Previous historians have decried the dissolution of the library, and attempted to locate all the extant books. Rather than simply reconstructing the library, this paper will examine how subsequent owners, from B. Walther, W. Pirckheimer, J. Schoener or J. Camerarius (in the sixteenth century) through Ch. von Murr and Czar Alexander I (in the nineteenth century), used Regiomontanus's books. As texts, the codices provided sources for printed editions of classical authors, of Regiomontanus's own works, or of works (wrongly) attributed to Regiomontanus. As trophies, the books became highly prized by collectors for the status to be attained by their possession or by giving them away. And as both texts and trophies, the mystique of this library has contributed to the construction of an image of Regiomontanus that has dominated Regiomontanus historiography until the present.

Gary M Kroll
University of Oklahoma

The Self-Effacing Hero of Science: William Beebe and his Literature of Oceanic Natural History

The many books and articles written by William Beebe on the natural history of the ocean demonstrate an interesting contradiction. In the first place, they highlight Beebe as an heroic adventurer. He was the man who sought to unveil the mystery of the dangerous Sargasso Sea, and the naturalist who put his own life in danger in order to practice natural history some three thousand feet beneath the surface of the Atlantic Ocean. At the same time, his writings consistently call our attention to the insignificance of humans in the economy of nature's history. At times, Beebe's literature takes on the overtones of a nascent biocentrist metaphysics: human presence on this earth, according to Beebe, is as transitory as the wake left by an oceanographic vessel. While these two positions are not necessarily mutually exclusive, they do create a tension that runs throughout Beebe's writings. The purpose of this paper is to describe and explain this dualism in Beebe's oceanic natural history (1925-1935). I will argue that the tension originated in the shifting matrix of the discipline of natural history in the first quarter of the twentieth century. Natural history was being marginalized by two forces: the rise of experimental procedures in biology, and, concomitantly, the tendency to equate natural history with hunting and travel expeditions by the leisured class (this became especially burdensome in the 1930s). Beebe's heroic and adventurous literature is thus a continuation of this older tradition, but his language of self-effacement was a deliberate strategy to legitimize his own natural historical practice as a serious scientific endeavor. In short, Beebe's biocentric writings demonstrate an important shift in natural history's representational conventions, a shift necessitated by natural history's all-too-unstable status as a modern science.

Andris V. Krumins
Institute for the History and Philosophy of Science and Technology

Symmetry, Conservation Laws, and Nuclear Interactions

In the early 1930s, the study of the nucleus revealed new interactions of short range. Many attempts were made to explain these processes electromagnetically, using a proton-electron model, but they were all unsuccessful. Beta-decay was so troubling, in fact, that it even threw the conservation of energy into doubt. Pauli saved the conservation law by postulating an undetected particle, now known as the neutrino, which could carry off the missing energy. After the neutron was discovered experimentally, Heisenberg exploited an exchange symmetry between the proton and neutron to help explain nuclear stability, while Fermi, on the other hand, incorporated the neutrino in his theory of beta-decay. It was still far from obvious how electrons were emitted from the nucleus, but Yukawa made an effort to explain both the theories of Heisenberg and Fermi using phenomenological considerations, rather than symmetry arguments. He proposed the emission of a new, heavy particle, and his theory was successful until it ran into trouble with conflicting experimental data. Following the discovery of strange particles in cosmic rays, two kinds of nuclear interactions were differentiated: the strong and the weak. This paper will examine the efforts to understand these two interactions using conservation laws and symmetry arguments.

Henk Kubbinga
University of Groningen

Laplace and the rise of molecularism

An in-depth study of the history of the concept of molecule has revealed its XVIIth-century roots. It has further become evident that later, in the XVIIIth century, it became the core of any theory of matter, not only in physics (Newton, Laplace) and chemistry (Stahl, Lavoisier), but also in the life sciences (Leibniz, Buffon). In Laplace's Exposition du systeme du monde and his Traite de mecanique celeste the molecular theory is indeed paramount. It is presented as a Theory of Everything in the modern sense: each and every physical phenomenon is analyzed in terms of molecular entities and their dynamic interactions. In a way molecularism superseded classical atomism, at least in a first approximation of the problems involved. In my paper I shall highlight some neglected aspects of the history of the molecular theory in the XVIIIth century and sketch why it was so compelling. It will turn out that Ivor Grattan-Guinness, in his epoch-making Convolutions in French Mathematics, 1800-1840 [..] (1990), was obviously right when he honored Laplace for his mathematization of molecularism.

Kalevi Kull
Institute of Zoology and Botany, University of Tartu

A case of anancasm: Nomogenetic school in biology

In his 1893 article "Evolutionary Love," C. S. Peirce distinguished between three major types of theories of evolution–tychastic, anancastic, and agapastic: "Three modes of evolution have thus been brought before us: evolution by fortuitous variation, evolution by mechanical necessity, and evolution by creative love". The neo-Darwinian, the nomogenetic, and the contemporary biosemiotic modes of evolutionary explanation can serve as examples of these approaches. In this paper, a review of the nomogenetic approach in twentieth-century Russian biology will be given. The main figures of the nomogenetic theory of evolution have been the Russian scientists L. S. Berg (1876-1950), A. A. Lyubischev (1890-1972), and S. V. Meyen (1935-1987). In 1922, L. Berg published his book "Nomogenesis, or Evolution by Law". All the phylogenetic laws known at that time (relations between ontogeny and phylogeny, the irreversibility and directedness of evolution, homologous variability, etc.) were used by Berg in his attempt to prove the lawful and purposive character of the evolutionary process. The teleological statements of K. E. von Baer served as a source for these conclusions, and from him followed the line of "nomogenetic pre-adaptationism" in Russian evolutionism. A. Lyubischev, an entomologist and influential theoretician, made a critical analysis of Berg's views and developed the concept of natural classification as distinct from phylogenetic classification. S. Meyen, a paleobotanist and evolutionist, made nomogenetic views widely known in Russia through his writings. The meetings on theoretical biology organised by groups of biologists in Moscow, St. Petersburg, and Tartu in the 1970s gave an organisational basis for the nomogenetic school. Some of the papers from these meetings have been published in English in two volumes of "Lectures in Theoretical Biology". The fundamental ideas of the nomogenetic approach can be formulated as follows: (1) the taxonomy of organisms should be something like Mendeleyev's table of chemical elements, i.e., a non-phylogenetic, combinative classification; the natural classification should reflect the internal laws of structure and variation without being phylogenetic; (2) congruent to taxonomy is meronomy, which structures the components of holistic systems, e.g., organs in organisms; (3) natural selection plays only a secondary role in evolution; (4) organisms of different taxa have different archetypes, which determine their possible ways of evolution; (5) morphogenetic rules rather than ecology are responsible for the limits and paths of variation; accordingly, problems of form rather than function have primary importance for the theory of evolution.

Jens Lachmund
Max Planck Institute for the History of Science

Maps, Biotopes and the City. Urban Bio-Ecological Mapping in Germany, 1970-1998

Maps have become an important form of knowledge both in ecological science and in environmental policy. The paper examines the relationship between mapping and ecological expertise in urban planning. It focuses on the history of so-called "urban biotope mapping," which has been carried out in various German cities since the 1970s. These mapping projects aimed at a comprehensive inventory of the ecological structure of urban space. They were closely related to the rise of new strategies of nature conservation and environmental protection in the realm of urban planning. It will be argued throughout the paper that maps are cultural practices which are actively involved in the construction of the spaces they represent. Urban biotope mapping, accordingly, did not only shed light on an aspect of the city which hitherto had been overlooked. It took part in the production of the city as an ecological space of knowledge. The maps created new meanings for urban places, reconfigured the relations between them, and thus transformed the geography of the city. This also had far-reaching implications for the definition of the culture-nature divide: whereas nature had long been the "other" of the city, biotope mapping inscribed "naturalness" into the urban environment itself. The paper examines the various processes which shaped the development and stabilization of biotope mapping as a new field of ecological expertise. How did mapping come to be considered a useful tool for handling environmental problems of the city? How, and to what extent, did biotope maps become part of the administrative practices of urban planning? What were the main controversies surrounding the rise of urban biotope mapping, and how were they settled? The rise of urban mapping will be described as the development of a heterogeneous network of practices which implied both the epistemic reconstruction of urban space and new modes of political regulation.

Thomas C Lassman
Johns Hopkins University

University-Industry Relations in Pittsburgh: Edward Condon and the Rebirth of Industrial Research at Westinghouse, 1937-1945

Throughout most of the twentieth century, Pittsburgh was America's dominant steelmaker, and much of the city's history has been written within that context. Less well known to historians, however, is how Pittsburgh's industrial success depended upon a regional network of industrial laboratories (Westinghouse, Gulf Oil, Alcoa), private research institutions (Mellon Institute), and universities (Carnegie Institute of Technology, University of Pittsburgh) to generate scientific knowledge that could be applied to the development of new technologies. I will address the issue of university-industry relations in Pittsburgh by examining physicist Edward Condon's efforts to strengthen fundamental science at Westinghouse during the last years of the Great Depression and World War II. For years, Westinghouse engineers had received advanced training at local universities. Condon successfully exploited these institutional connections in order to expand the company's knowledge base in modern theoretical physics. Westinghouse scientists acquired expertise that would be useful as the company established new markets in nuclear power and solid-state electronics after the war. As an industrial research manager, Condon was different from his better-known predecessors. Unlike Willis Whitney at GE and Frank Jewett at AT&T, both of whom were more successful as laboratory managers than as practicing scientists, Condon brought to Westinghouse extensive research experience and a sophisticated working knowledge of quantum mechanics. The best academic scientists, however, do not always make good managers. For Condon, the transition was facilitated by the ease with which he was able to build upon the existing relations between Westinghouse and Pittsburgh's academic community.

Timothy Lenoir
Stanford University

Science and Technology in the Making (STIM): Media-Intensive Tools for Teaching and Research

Workers in science and technology studies need new tools to meet the demands of working with the types of information, distributed modes of work and interaction, and computer-mediated methods of constructing scientific and technological objects in recent technoscience. The aim of this presentation is to discuss these new challenges in the context of an ongoing experiment with web-based technologies. A project on the history of human-computer interaction will illustrate one effort to use media-intensive tools to encourage dialogue among key participants about their roles in a series of technological innovations–the computer mouse, hypertext, networked collaboration, and more–and an associated effort to provide online forums and an interactive archive in which focus groups are invited to contribute commentary and source materials. Their contributions, in the form of stories, artifacts, and written accounts, are part of the history presented through the web site. This leads historians to multiply the perspectives of contemporary history and to engage the community who made the technology in a collaboration to write their own history. Thus, the traditionally silent subjects of a historical study become personally involved in the writing of their community's history. These materials are being used in a collaborative teaching initiative involving historians of science and technology at Stanford and the Georgia Institute of Technology during the spring of 1999. Our use of live video connections for real-time collaboration via Internet2 will be described. The course also facilitates collaborative research and the presentation of projects by teams assembled from students of both institutions. A typical student presentation and resulting course project will be described.

Tim Lenoir
Stanford University

"To make sensuous man rational, you must first make him aesthetic."–Physiological Aesthetics and the Normalization of Taste in Germany, 1860-1895

Realism was a much contested category in German science, politics, and art throughout the later decades of the nineteenth century. This paper explores the collaboration between scientists and state ministers to establish aesthetic standards by raising conformity to the "real" and the "natural" to normative status. Concerned that the taste and sensibility of German artisans and manufacturers of decorative art objects should not be left to the vulgar whims of fashion, ministers of art and education in centers such as Berlin and Vienna created training programs aimed at instilling a "naturalistic aesthetics" for producing the desired materials and artifacts of everyday life. Such programs were defended as important means for protecting civilization and instilling patriotic feeling. Natural science would instruct the artisan in ways to escape mischievous influences external to true art itself, thereby preparing sensibilities receptive to higher moral ideals and the beautiful. Instructional programs at the Museum für Kunst und Industrie incorporated physiological optics and physiological color theory as key elements in constructing this natural aesthetic, while scientific instruments, such as Ernst Brücke's schistoscope, and color wheels constructed according to James Clerk Maxwell's methods were employed as means for inculcating a "proper sense" of color harmony. Another element of the program for educating the senses was the application of chemical principles to the production of color pigments and the establishment of standards for regulating the naming and production of color throughout German-speaking lands, if not throughout Europe. While ostensibly directed to improving the mundane taste of artisans, I will argue that scientists such as Ernst Brücke, Hermann Helmholtz, and Emil DuBois-Reymond also urged the adoption of their standards for a "natural aesthetic" as a counterweight to certain pernicious elements of high art at the end of the century which they regarded as decadent.

Stuart Leslie
Johns Hopkins

Korean Science at the Crossroads

This paper will compare science in North and South Korea from the Japanese occupation before World War II, through the Korean and Cold Wars, to the present. For most of the postwar period, this is essentially a comparison of two very different types of military dictatorships and the science policy they pursued. This comparison will include universities, science-based industry, and "Big-Science" projects like nuclear power.

Theresa Levitt
Harvard University

Le rouge et le vert: Colors of opposition in Restoration France

The concept of complementary colors, although long used by artists and dyers, did not become a part of Newtonian optics in France until the early nineteenth century. Of critical importance was the development of an instrument, the polarimeter, that split polarized light into two beams of opposite colors. Although on the one hand, this instrument alleviated fears of a Goethean subjectivism through a purely physical generation of complementary colors, it also contained within it new assumptions about the role of the observing subject. This paper traces two alternative designs and uses for the polarimeter. One effort, put forward by Jean-Baptiste Biot, sought first to use the instrument to standardize colors, and then adapted it to the needs of large-scale chemical industry. Working within this context, Eugene Chevreul, industrial dye manufacturer, cited the use of the polarimeter as crucial in the development of his theory of simultaneous contrast. The co-inventor of the polarimeter, Francois Arago, criticized both the project of standardization and involvement with industry. These alternative designs carried with them competing visions of the individual practitioner of science in post-revolutionary France.

Trevor Levere
University of Toronto

Handling and Conceptualizing Airs around 1800

Once the pneumatic devices of Stephen Hales and Peter Woulfe had entered into laboratory practice, the way was open for chemists to refine the concept of airs, and to move from the washing and storing of airs to their measured transmission in experiments. Lavoisier's simple but elegant research apparatus was followed by the construction of his gasometers, the most elaborate and the most expensive pieces of chemical apparatus of their time. Ingenious simpler and cheaper alternatives proliferated, and were soon in use in laboratories from the Mediterranean to the Baltic Sea. Their use underlay what Fourcroy insisted should be called the pneumatic revolution, centered on the study of the quantitative and qualitative transformations of airs in chemical reactions. But chemical gasometers were in turn soon displaced by the simpler gas holders developed by William Hasledyne Pepys and others. Simplification and standardization in instrumentation accompanied the reconceptualization of gas chemistry from a focus of theoretical debate into an accepted and unproblematic component of early nineteenth-century chemistry.

Bruce V Lewenstein
Cornell University

Have Books Mattered in American Science since 1945?

Since at least the beginning of the 20th century, the publication of original research in science has moved almost entirely from books into journals. Yet books remain important in science–as texts, as philosophical statements, as polemics. This paper will report on a project looking at the relationship between books and science since World War II. The paper is being prepared as a contribution to the multi-volume "History of the Book in America" being funded by the NEH and coordinated by the American Antiquarian Society. A major theme of the paper will be showing changes over time in the multiple roles that books play in science. By mid-century, books were rarely occasions for the presentation or working out of major intellectual issues; rather, they were repositories of well-established knowledge–sometimes as introductory textbooks, sometimes as more advanced treatises drawing together information previously published in journals. By the end of the century, books again became a place for new ideas in science–but not so much cutting-edge research as the intellectual implications of science, illustrated by the rise of blockbuster "popular science" authors. A second major thread will be the economic presence of European publishers in the American science book industry, especially the role that American operations played in globalizing the overall science publishing industry. An American counterpoint to these technical publishers, however, will be the growth of American textbook publishers, especially those for the primary and secondary school markets stimulated by the baby boom and by the renewed national commitment to science education after Sputnik. Finally, a third thread will be the changing elements of scientific "style," looking at the effects of "objective" (read: "passive") writing styles and codified rules for the presentation of scientific information (including the rules generated by the need to be able to retrieve material from the rapidly expanding "information explosion"). Some of the key texts to be explored in the paper will be Linus Pauling's Chemical Bond (1939) and General Chemistry (1947), the field-changing textbooks Chemistry (1957) by Michell [sic] Sienko and Robert Plane and Molecular Biology of the Gene (1965) by James Watson, and the cultural mileposts Silent Spring (1962, Rachel Carson), Double Helix (1968, James Watson), Chaos (1987, James Gleick), and A Brief History of Time (1988, Stephen Hawking). I will deal with the oeuvres of Stephen J. Gould and Carl Sagan as variations on the relationships between science and public (high culture vs. low culture, intellectual vs. proletarian). Among the publishers I'll discuss will be Pergamon, Elsevier, Wiley, W. H. Freeman, Addison-Wesley, McGraw-Hill, and perhaps others.

David E. Lewis
University of Wisconsin–Eau Claire

Zinc Alkyls in Synthetic Organic Chemistry: Cutting Edge Chemistry at Kazan'

The development of structural theory during the period 1858-1862 was a watershed event in the development of organic chemistry. While the timing of the contributions by various individuals to this theory may be debated, there is little doubt that Aleksandr Mikhailovich Butlerov, who began his career at Kazan' University in Eastern Russia, was a seminal force in its development and later acceptance. Butlerov, more than Couper or Kekulé, saw at an early stage the potential of this new theory for organizing organic chemistry—his was the first organic chemistry textbook based solely on structural theory. It is equally clear that he realized that this theory could only become firmly established on a footing of experimental evidence supporting the theory's predictions and that this would require, inter alia, the synthesis of new compounds. It may be fitting that the first example of an organometallic synthesis of a tertiary alcohol should have been carried out by Butlerov, who prepared tert-butyl alcohol by the reaction between phosgene and dimethylzinc in 1860. The study and extension of this reaction was taken up at Kazan' by Butlerov's student, Aleksandr Mikhailovich Zaitsev, possibly one of the most talented synthetic organic chemists of his time. He established a school that, over the course of three decades, developed the addition of organozinc reagents to carbonyl compounds into a major synthetic method. Zaitsev himself developed a general synthesis of tertiary alcohols by the reaction between zinc metal, an alkyl halide, and an acid chloride. The use of aldehydes as the carbonyl component in the reaction was developed by Zaitsev's student, Egor Egorevich Vagner (Wagner), and the use of α-haloesters as the alkyl halide component of the reaction by another Zaitsev student, Sergei Nikolaevich Reformatskii. It was not until the advent of the Grignard reaction in 1900 that the organozinc chemistry developed by the Kazan' school was superseded in the arsenal of synthetic organic chemists as the method of choice for forming carbon-carbon bonds.

Bernard Lightman
York University

The Story of Nature: Victorian Popularizers and Scientific Narrative

In his Writing Biology, Greg Myers asserts that all twentieth-century science writing falls into one of two categories, the "narrative of natural history" or the "narrative of science." The latter describes any work which is written to meet the standards of a discipline, heavily committed to model building, and designed to establish the credibility of a scientist within the scientific community. However, work which appears in natural history magazines, for example, is structured by the "narrative of natural history," which is intended as a popular account of nature that is diverting, full of anecdotes, and nontheoretical. In Natural Eloquence: Women Reinscribe Science, Barbara Gates and Ann Shteir argue that Myers's distinctions can easily apply to the Victorian era. But they also maintain that a third narrative of science was in use then, which they title the "narrative of natural theology." This narrative, they argue, embodied the structure of Paley's natural theology, with its emphasis on science as providing proofs of the wisdom and power of a divine creator, and it was the narrative which informed the popularizations of science produced by women in the early nineteenth century. However, Gates and Shteir believe that the "narrative of natural theology" lost its appeal by the 1840s with the rise of evolutionary theory. Scholars have only in the last decade begun to examine the literary structures which shape the theories and concepts of modern science. But the majority of scholars have dealt with particular scientific theories rather than attempting the more ambitious goal of outlining a sophisticated typology of modern scientific narrative. Myers's twentieth-century typology, refined by Gates and Shteir for the nineteenth century, is therefore a significant step forward in making sense of the larger relationship between modern science and literature. In this paper I wish to focus on the distinction between the "narrative of natural theology" and the "narrative of natural history." In my work on popularizers of Victorian science, I have been impressed by the persistence of religious themes right until the end of the nineteenth century. Important popularizers were enthralled by the religious implications of scientific discoveries and often interpreted the religious meaning of contemporary science for their readers. This raises some questions about the disappearance of the "narrative of natural theology" in the 1840s and the distinctiveness of the "narrative of natural history." Did the two exist in the early nineteenth century as separate narrative traditions? If not, was the "narrative of natural history" merely a not-so-secularized continuation of the "narrative of natural theology"? I will explore these questions of mutual interest to scholars in the history of science and in the field of science and literature by drawing upon the work of important popularizers of science such as J. G. Wood, Grant Allen, and Robert Ball.

Susan Lindee
Department of History and Sociology of Science, University of Pennsylvania

Squashed spiders: The standardization and medicalization of the human chromosomes, 1959-1965

The University of Glasgow pathologist Bernard Lennox complained in 1961 of the "exasperating pictures" appearing in the Lancet and resembling "masses of squashed spiders." The pictures were in fact a triumph of human cytogenetics, images of highly processed human chromosomes retrieved from blood, a readily available clinical material. The techniques perfected in the late 1950s for observing human chromosomes permitted these cellular objects to become explicitly medical entities. Human cytogenetics became the point at which classical genetics and clinical medicine intersected. Between 1959 and 1965 human chromosomes acquired numbers, names, measurements and clinical stability, and they became quantitative signs of disease. In this paper, I explore the process of standardizing the human chromosomes and suggest that this process helped to produce a new kind of genetic disease, one that was visible both in the body and the cell. The ability to literally see the physical cause of a complicated syndrome–to count it out in that most elementary form of scientific observation–was particularly gratifying to the medical community. The technological success of cytogenetics enhanced the status of genetic disease and played a role in its increasing visibility in American medical education.

Diana E. Long
University of Southern Maine

Their secret gardens: Women and the pleasures of endocrine laboratory life, 1930-1960

From the time that I worked in the endocrine physiology lab of my father, C.N.H. Long, in 1954-55, I have wondered why the women who worked in the lab came to work there, put up with the difficulties and abuses of their situations, and stayed in the enterprise. Understanding the women's roles, behaviors, and motives in this setting is not easy: the women's life situations are complex (as scientists, women, and workers) and the records of their laboratory experiences are scant. The scientific record and, as Parlee notes, the participants themselves routinely erase their work. Yet everyone knew they were there, in some cases conspicuously so. Based on about 100 interviews of men and women that a colleague and I did in the late 1970s, this paper looks at the (self)representation of a half dozen women whose names appeared on published work from 1930 to 1960, attending to both the invisibility and the pleasures that marked their work. I. Dorothea Raacke, who died in 1985, did most of these interviews while working on a biography of Herbert MacLean Evans and the Berkeley laboratory in which she had been a graduate student. I conducted interviews with scientists who had taken part in the development of sex endocrinology, starting with scientists who had worked at University College London, McGill, and Yale University with C.N.H. Long.

Maria M Lopes and Silvia Fernanda de Mendonca Figueiroa
Instituto de Geociências, Universidade de Campinas (UNICAMP)

Natural Sciences in Brazil: Local aspects of the 'mondialization' of sciences in the 19th century

This paper comments upon some aspects of the institutionalization of the natural sciences in Brazil from the end of the 18th century to the beginning of the 20th century. It presents the main institutions that housed this process–exploratory expeditions, botanical gardens, museums, and scientific publications–as well as the educational initiatives that implemented them and the communities of naturalists that forged them. A critical assessment of the historiography's interpretation of the institutionalization of the natural sciences in Brazil during the 19th century is presented. After reviewing the main approaches, features, and interpretations of this process, a new position is offered, focused on the way these sciences were practiced in Brazilian society during the 19th century. In order to illustrate this new approach, the main empirical contributions and major features of a series of recent studies are presented. To sum up, this paper sketches the outstanding general traits of this process, traits that in several respects run counter to what the historiography of science in our country until recently maintained: Brazil did have scientific activities in the 19th century, and these activities, contrary to what was held earlier, did not appear only in the last decades of that century and were marked by the continuity of the scientific institutions that supported them.

Jeff Loveland
University of Cincinnati

Did Buffon Copy Price? When Bayesian Results are not Necessarily Bayesian

Much of Georges-Louis Leclerc de Buffon's "Essai d'arithmetique morale" (1777) consists of minimally edited reprints of materials he wrote in the 1730s and 1740s, but some scholars have speculated that he added to and completed the "Essai" around 1760. Recently, Sandy Zabell has argued that one key paragraph of Buffon's "Essai"–the one in which he calculates the probability of sunrise–was borrowed "almost word-for-word" and without attribution from Richard Price's appendix to Thomas Bayes' posthumous "Essay Towards Solving a Problem in the Doctrine of Chances" (1764). Since the paragraph in question is fully integrated in and critical to Buffon's "Essai," Zabell's attribution would prove that the "Essai" was still a work in progress in 1764. Furthermore, it would constitute an exception to Stephen Stigler's assertion that Bayes' results remained unknown on the Continent till about 1780. Here I will argue, though, that Zabell's conclusions are far from certain. While it is impossible to rule out the possibility that Buffon was familiar with the Bayes-Price "Essay," his prior experience with probability and philosophy gave him all the resources he needed to write the allegedly copied paragraph. In fact, though his results in the paragraph may seem to follow from Bayes' famous theorem of inverse probability, they very likely depend on a different mode of reasoning–one Buffon was already using in the 1740s.
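For orientation only–the formulation below is a later canonical gloss, not a quotation from Buffon or from the Bayes-Price "Essay"–the inverse-probability treatment of the sunrise problem, with a uniform prior on the unknown chance of sunrise, gives after $n$ observed sunrises and no failures a probability of

$$P(\text{sunrise on day } n+1 \mid n \text{ sunrises}) = \frac{n+1}{n+2}$$

for another sunrise (Laplace's rule of succession); whether Buffon's own calculation proceeds along this route or by a different mode of reasoning is precisely what is at issue here.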

Paul Lucier
Rensselaer Polytechnic Institute

The Great California Oil Swindle: Silliman, Whitney, and the Ethics of Scientific Consulting

The Silliman-Whitney controversy was a sensational event of the 1860s and 1870s. It was, arguably, one of the most important cases of alleged scientific misconduct in 19th century America. The controversy pitted Benjamin Silliman, Jr., a paid consultant for a number of California oil companies, against Josiah Dwight Whitney, the state geologist of California, over the question of oil in the area around Los Angeles. Whitney accused Silliman of being a mercenary professor–an expert who abuses the public's trust for his own monetary gain. He regarded Silliman's fee (reported at more than $45,000) along with his commercial behavior as prima facie evidence of fraud (or what he called a great swindle). Whitney turned to the National Academy of Sciences to judge whether Silliman's conduct (and scientific consulting in general) brought disgrace upon American science and thus warranted Silliman's removal from that distinguished body. Silliman's defense and the debates within the National Academy reveal the ethical and moral dilemmas of the relations of science and industry in the Gilded Age.

Michael Lynch
Brunel University

The composition of objects: False colour and digital images

The paper is based on a study several years ago of digital image processing in contemporary astronomy. The astronomers worked with digital information–digital signals from ground-based or orbiting detectors and emulsion photographs which had been digitally scanned–and used grey-scale and false-colour palettes to compose and recompose objects of interest. In principle, the equipment enabled researchers to produce an infinite variety of images from the same 'raw data', but in practice both the data and the images were modified in order to 'see' and to 'show' phenomena of interest. What was of interest, and how it was shown, varied with the purposes of the research and the intended audience. This paper focuses on the uses of colour. Consistent with the traditional Galilean view of colour as a secondary quality, many astronomers described colour as an option to be used when composing images for popular audiences. Black-and-white and graphic renderings tended to be preferred as documents for specialized research and publication. However, a close examination of the care that went into the composition of false-coloured and colour-enhanced images revealed a more subtle sensibility to the uses of colour for naturalistic and quasi-naturalistic compositions.

Peter K Machamer
University of Pittsburgh

Origins of Science as Mechanisms

The 17th century began the rule of science as the search for mechanisms. Mechanisms, of course, had been known and appreciated in Antiquity and the Middle Ages. But in the 17th century, natural knowledge became defined as discovering and knowing the mechanisms. The mechanical philosophy was the codification of this new way of looking at the world. By the end of the 17th century, mechanisms had begun to lose their importance in physics when Newton changed the rules of that discipline, but they had become dominant in all other modes of natural inquiry, often going under the name of Newtonianism. Whatever their name, in the life sciences, the human sciences, and even in the social sciences, science became identified with the search for mechanisms. Except for some late 19th and early 20th century philosophical aberrations, this conception of science as understanding through knowing the mechanism is very much with us today.

Maura P Mackowski
Arizona State University

Human Factors: Science, Technology, and Cold War Politics in the NASA Astronaut Selection Process

Cold War tensions during the period between the capture of German V-2 rocket technology and the beginning of the Apollo moon program in the mid-1960s forced the NASA astronaut selection process away from a civilian subject pool and toward a military population. Early decisions as to screening and selection were not uninfluenced or unchallenged by those in the scientific, medical, and engineering fields, however. Prior to the evaluation of any candidate for the Mercury program, Air Force and NASA scientists had spent more than a decade simulating and probing the environment of space using balloons, sounding rockets, jets, and ground equipment, and evaluating the human organism's ability to withstand these surroundings. This paper will trace the development of human factors research in aerospace science and medicine between 1946 and 1959 (ironically, in competition with Soviet researchers who were also using Luftwaffe equipment, methodology, and scientists), and link the lab and field test results with the political decision to use military jet test pilots as America's first astronauts.

Dave Madden
University of Chicago

Culture, Personality, and the Philosophy of Social Science in American Anthropology

I will present the work of three anthropologists in the Chicago area between the wars and use their ideas to highlight several philosophies of social science that emerged during the period. The presentation will begin by suggesting that a loose consensus formed among a large community of anthropologists (as well as several sociologists and social psychologists) around a set of concepts about the nature of culture and the relations between culture and personality to attempt to explain (or describe) the ways in which civilizations changed. The paper will conclude by detailing how they modified their philosophies of social science and their research methods in response to their discussions about culture, personality, and the process of social change.

Constance A. Malpas
Princeton University

Organizing Pathology: the Architecture of Anatomy at Mid-Century

In the middle years of the nineteenth century, Spanish physician Pedro Gonzalez Velasco toured the leading anatomical collections of London and Paris, noting the size, scope, and design of museums he considered worthy of emulation. He returned to Madrid convinced that the state government would need to play an active role in the creation and subvention of anatomical museums if local institutions were ever to rival those of other European nations. Dr. John Rhinelander, of New York, agreed–but while he regretted that the "public bounty" of the American states had not been "extended to Anatomists" like himself to further the creation of medical museums, he acknowledged that the entrepreneurial approach to institution-building had its own advantages, for private collections like his own were a significant source of personal financial gain. In effect, the penury of anatomical resources and the lack of government regulation in the Anglo-American context encouraged a proliferation of independent, for-profit didactic museums. The situation was very different on the Continent, and especially in France, where a centrally administered, state-run educational apparatus regulated practical instruction in anatomy, ruthlessly eliminating (or incorporating) all rival claims to pedagogical authority. This paper will examine the structural peculiarities of medical museology in the American and European contexts, providing a comparative perspective for the history of these under-examined institutions. Particular attention will be devoted to the adaptive emulation of European models by American physicians like Thomas Dent Mütter, Caspar Wistar, and John Collins Warren–all independent collectors whose museums exist to this day.

Roxani E Margariti
Princeton University

Navigational Encounters: Theory and Practice of Indian Ocean Navigation by Arabs, Ottomans and Portuguese in the 16th Century

In his navigational work entitled "al-Muhit" ("the Encompassing," the Arabic name for the Indian Ocean), Sidi Ali Celebi, admiral of the Indian Ocean fleet of Suleyman the Magnificent, quotes among his sources the writings of his near-contemporary Arab navigators Suleyman al-Mahri, from Shihr in Yemen, and Ahmad Ibn Majid, from Julfar in Oman. After the partial publication of the Ottoman work in the early 19th century, the 1912 discovery and presentation of two manuscripts at the Bibliotheque Nationale in Paris, containing the very works of Suleyman al-Mahri and Ibn Majid upon which the "Muhit" was based, led to the recognition of these two men as outstanding figures in the history of Arab navigation. The Ottoman treatise proved, in fact, to be a poor translation of its stated sources. The Arab works, on the other hand, emerged as mature and accomplished specimens of a long and well-established tradition of Arab navigational writing. Ibn Majid himself attracted attention, not only for his codification of Arab navigational knowledge in verse and in prose, but also for his alleged role as the pilot who first revealed the Indian Ocean route to India to Vasco da Gama's expedition. Although all contemporary Portuguese sources make clear reference to the crucial services of a Muslim pilot, it is doubtful that the man was, in fact, Ibn Majid himself. Despite its apocryphal nature, however, this tradition encapsulates the encounter, at the beginning of the 16th century, of discrete Indian Ocean seafaring cultures and their navigational traditions. A firm grasp of general navigational theory was a prerequisite for sailing the open-sea route to India, but the crossing could not be efficiently accomplished without the contribution of local knowledge. The study of Arab navigational works, their Ottoman successors, and their Portuguese counterparts highlights the navigational theory-practice nexus and addresses questions of the transmission of navigational knowledge within and across cultural boundaries.

Anna K. Mayer
Department of HPS, Cambridge

Setting up a discipline: disputes on the Cambridge History of Science Cttee, 1936-51

From its foundation in 1936, the membership of the Cambridge (UK) History of Science Committee reflected what later came to be recognized as the disparate roots of the field in Britain: 1. science (represented by its founders, Joseph Needham and Walter Pagel, as well as by J.D. Bernal, the crystallographer and Marxist activist, and the botanist H. Hamshaw Thomas); 2. general history (represented by Herbert Butterfield, G.N. Clark, Michael Postan, Jean Lindsay and A. Rupert Hall); 3. natural theology (represented by Charles Raven); 4. English literature (represented by Basil Willey). (Philosophy was a postwar addition to the Committee.) Initially, the chief goal of the Committee was merely to organize occasional lectures, but the ambitions of some of its members soon focused on the idea of establishing a new subject for undergraduate studies at Cambridge. It was partly due to this expansion of its aims that the work of the Committee became marked by the kinds of alliances and discords one must expect from a gathering in which so many different interests and experiences were represented. The history of the Committee hence reflects what was later dubbed the "two cultures" debate. However, it is also a history of engagement with the cultural and ideological issues the post-war world inherited from the 1930s. Materialist interpretations of science and its past are a theme in the work of the Committee throughout the period 1936 to 1951, with Boris Hessen's famous paper of 1931 and the subsequent work of Bernal, J.G. Crowther and L. Hogben becoming the target of increasingly hostile criticism from the early 1940s onwards. This latter period marks the 'rise to power' of an alliance around Butterfield and Raven, who successfully secured the support of the Cambridge scientific establishment. Strangely, the entire historiography of the relations of science and magic, which the German refugee Walter Pagel had introduced, was eclipsed from the Committee's changing agenda.

Craig S McConnell
University of Wisconsin

Universal Myths: Narrative Expectations and the Origin of the Cosmos

In the 1950s and 1960s a debate over the origin of the universe raged between two camps of cosmologists. Though cosmology lacked credibility in the minds of many astronomers and physicists, the debate attracted the attention and allegiance of a growing number of scientists and captured the imaginations of vast public audiences. The debate was resolved in the late 1960s and early 1970s as a variety of observational discoveries gave progressively more support to the big bang theorists and made it increasingly difficult for the opposing steady state theorists to make their theory conform to observational data. As the debate was resolved, a number of young astrophysicists devoted their careers to cosmological work. Simultaneously, the presentation of cosmological arguments in the public sphere waned. By the 1980s, a growing number of popular and academic histories had reduced the decade-long process of resolution to a single event: the discovery of the cosmic background radiation in 1965. Though this discovery had been provocative, in 1965 it in no way discriminated between the big bang theory and the steady state theory. It is this meta-narrative, at odds with contemporaneous accounts, that is the subject of this paper–I will historicize the emergence of this meta-narrative and place it in the context of the prevailing narrative expectations visible in popular science literature.

Maureen A. Mccormick
University of Oklahoma

The Intersection of Environmental Determinism and Reproductive Limits in Frank Fraser Darling's Ecology

While historians of biology frequently interpret the environment as the antithesis of biological determinism, my purpose is to suggest that biologists could understand the environment in a variety of ways that did not necessarily allow for greater human agency. Furthermore, ecologists, through their cautionary tales about the limits of the earth, were able to carve out a new niche for themselves on issues related to human reproduction. Through the work of ecologist Frank Fraser Darling, this paper explores the regulation of animal and human population size. For Fraser Darling, humans were an integral part of nature and therefore of ecology. Through his studies of mammalian, avian and human populations, Fraser Darling argued that the environment regulated the optimal size of the population. On this basis, when commissioned to study the conservation of natural resources in Kenya, Northern Rhodesia, and the Sudan, Fraser Darling advised governments of the need for population control.

J.E. McGuire
University of Pittsburgh

Capturing the Past to Seize the Future: Tradition and the Emergence of the New Science

I want to look at two notable phenomena that characterize the historical trajectory along which the "New Science" emerged. The first is the dynamic that allowed new forms of cognition to detach themselves from the dominant theological culture during the early modern period. The second is the later separation of "science" from "metaphysics" at the hands of Hobbes, Berkeley, and Hume. These two phenomena are interconnected. My discussion does not presuppose the standard historiography of the "scientific revolution".

Erin H. McLeary
University of Pennsylvania

Pathologists, Professionalism, and the Public: the Medical Museum enters the Twentieth Century

At the fifth annual meeting of the International Association of Medical Museums in April of 1912, pathologist A.S. Warthin, director of the University of Michigan's pathology museum, took the floor to discuss his recent work on industrial diseases and injuries. Warthin had found that the health situation of the working class was indeed urgent, yet he did not advocate protective legislation, higher levels of medical care, or improved working conditions to combat the problem. In the fight against disease and injury, Warthin concluded, what was needed was a medical museum. Medical museums are often conceived of as nineteenth-century institutions, exemplars of the Victorian urge to collect, arrange, and display. Yet the idea of the medical museum remained, for its North American supporters, vital and vibrant well into the twentieth century. Medical museum proponents were organized and optimistic, and the American museum world in general was enjoying a period of unprecedented expansion. For advocates like Warthin, the medical museum offered a unique, objective sensory experience that possessed a pedagogical power unmatched by any other educational medium. This pedagogical power had already been applied–to varying degrees in North America–to teaching medical students pathology, and in the first four decades of the twentieth century American medical museum proponents began to seek ways to extend this educational experience to the general visitor and to use the museum to solve medical problems. This paper will examine the efforts of medical museum advocates, primarily pathologists, to "go public" while maintaining professional control of the museum, at a time when the American museum world itself was professionalizing and expanding.

Everett Mendelsohn
Harvard

Science at a Crossroads: Defining and Prescribing an Uncertain Future

The paper focuses on two influential figures of the 1930s-'50s, Robert K. Merton and J.D. Bernal, who entered the debates on the roles and functions of science and, in writings both historical and political, influenced critical research trends in the history of science and related fields. The similarities in their early years and their divergence in the post-war period pose interesting questions about the politics of the emerging disciplines of the history of science in particular and of the less defined science studies of the period.

Andrew Mendelsohn
Max Planck Institute for the History of Science

The Scientist as Technocrat

We know of the existence of a technical or technocratic kind of scientist–and science–primarily from its critics. Certain scientists or schools, or indeed modern science itself, are said to be "narrow", "one-sided", and so on. Full-blown, the charge is that science brings with it scientism. Rather than regard these vices as merely part of the history of polemics about and within science, this paper seeks to identify their corresponding virtues–such as focused, precise, solid, sober, practical, non-ideological, above debate–and thus take the critiques as clues to the existence of a model of scientific style and self-conduct. Drawing primarily on examples from 19th-century German and French science, particularly fields related to or part of "applied" areas such as medicine, agriculture, and engineering, the paper will seek to distinguish the technical or technocratic persona from that of the expert or professional at large and to relate it to changing styles of self-conduct and self-representation outside science, for example, in politics and the military. Indeed, sources of the technocratic persona, as it emerged in the 19th century, are to be found in the prior and concurrent history of state administration, the military, and engineering. One focus here will be on groups displaying aspects of the persona (one thinks of the French chemist-administrators Lavoisier, Chaptal, Dumas, Berthelot, and the German district medical officials, or Kreisärzte, of whom the bacteriologist Robert Koch was one). Another focus will be on institutions where the model was instilled (École polytechnique, Kaiser-Wilhelms-Akademie für das militärärztliche Bildungswesen). A wider goal of the paper is to illuminate the nature of technical and administrative rationality. Elusive in the abstract, these are readily recognizable in the concrete, the embodied, the personal. The persona, though itself not easy for the historian to describe, may be the means by which something otherwise ineffable, such as a problem-solving, technocratic style of science, may crystallize and be made available.

Margaret Meredith
University of California, San Diego

How Knowledge Travels: Collaboration and Credit in Early American Natural Historical Inquiry

In recent years historians of science have recognized that natural knowledge and the practices for producing it have their origins in specific locations. How scientific knowledge travels remains one of the most persistent problems the history of science has faced. Older historiographies unsatisfyingly resort to vague models of dissemination, whereas more recent localist studies generally do not address the problem at all. Early American natural history provides an exceptional opportunity for examining how knowledge travels. During the eighteenth century, knowledge about remote North American animals, plants, and minerals was integral to American and pan-European natural historical inquiry. The remoteness of the American frontier from metropolitan sites of knowledge-making rendered access to such knowledge a challenge for both American and European naturalists. Rather than presuppose that knowledge flowed freely within an international community, this paper examines how collaboration among investigators in the Atlantic world was achieved in the first place. Drawing upon extensive research on the correspondence of American and European naturalists, it argues that the acquisition of knowledge about American natural objects and its subsequent circulation within the transatlantic world were mediated through trusted sources. Knowledge and intelligence of all kinds flowed through networks of persons personally known to each other through previous face-to-face encounters. Thus, travel and polite conversation were as important as correspondence for the circulation of natural knowledge, since they were the main sites where friendships were established. These friendships, which formed the basis of collaboration and thus the exchange of knowledge, also served as an important means of sorting out which accounts were valid or accurate and which were not. Networks of personal acquaintances, which I refer to as "familiar channels," enabled observations or claims about the natural world in one locality to be conveyed and credited in another. Thus, the travel of knowledge from its site of production to elsewhere was mediated by and contingent upon the establishment of familiar channels that made collaboration among dispersed savants possible.

Susan A Miller
University of Pennsylvania, History & Sociology of Science Dept.

'She Knows She is Master': Eugenics and the Camp Fire Girls

In June 1915, the Journal of Heredity ran an article by A.E. Hamilton of the extension office at Cold Spring Harbor in which he explained how the Camp Fire Girls had become "an organization which will create eugenic ideals in women in an indirect but effective way." Though this article about a recreational program for adolescent girls was unusual for the Journal, its focus well reflected an attitude pervasive among the founders of girls' organizations in the first decades of the century. Leaders of the Girl Scouts, the YWCA's Girl Reserves, and the Girl Pioneers of America, as well as the Camp Fire Girls, all concurred that outdoor recreation could put girls in touch with their racial past in order to ensure a bright eugenic future. In this paper I explore how Luther Gulick, founder of the Camp Fire Girls and head of the recreation department at the Russell Sage Foundation, created a pastiche of positive eugenic rhetoric which he believed could serve as the educational underpinnings of his organization. "Camp Fire is an army, not a hospital," he wrote, and would therefore "recruit its ranks from those who have the ability to do and to help rather than from those who need help." Gulick believed the best way to create this "army of splendid women" who would "affect for good the parenthood of the next generation" was through a program that restored lost "romance" to domestic skills, while encouraging "intelligent control of one's own body and mind." Camp Fire Girls achieved the eugenic ideal of improved motherhood by developing self-reliance and showing mastery over both themselves and the natural environment. Both Hamilton's article in the Journal and the 1915 edition of the Camp Fire handbook contain a photograph of a girl paddling a canoe in choppy water, an activity that conferred "fitness for motherhood" by proving the girl had developed "self-reliance." The captions read: "Only those who have tried paddling a birch-bark canoe in rough water and wind can appreciate the kinds of qualities that this girl is developing... She knows she is master." It was the adolescent girl who could claim mastery over herself and her environment who would be able to bear the eugenic responsibility for the future of the race.

David E Millett and Cornelius Borck
Committee on the Conceptual Foundations of Science, University of Chicago

Navigating the sea of brain waves: Electroencephalography in the 1930s and 1940s

In the late 1920s, the human brain became an electrical component. It constituted the power source of an electrical circuit, and its electrical output was registered by a graphical recording device. The two-dimensional graph of the brain's electrical activity developed by Hans Berger soon became known as the electroencephalogram (EEG) and opened up a new space for inscribing psychological functions, neurological signs and psychiatric symptoms. This paper argues that, rather than simply adding a new layer of electrical knowledge, the practices of electroencephalography constructed an electrical brain, which emerged from superimposing the recording on its object, and which took various shapes at different locations. The simple oscillations of a marking pen or cathode ray reduced the activity of the human brain to a pattern of single continuous lines in which amplitude, frequency and regularity were the only degrees of freedom. However, the scribbles were immediately discriminated, classified, and quantified as this space filled with correlations with a startling array of human behaviors, psychological processes, and clinical disorders. Whether thinking or consciousness, intelligence or alcoholism, disobedience or epilepsy–everything with a supposed link to the brain could be inscribed in the recorded traces. The blending of these two spaces, the chart and the brain, may be traced back to Berger's practice of sketching the brain directly onto the record. The most elaborate amalgam of biology and technology in EEG was perhaps Grey Walter's "toposcope," which displayed the four-dimensional electrical activity of the human brain in a mesmerizing cerebral array of spinning cathode ray tubes. By 1950, EEG had appeared in an astonishing heterogeneity of instrumental designs and institutional settings, reflecting various methods of data acquisition, analysis, and display as well as the clinical and social applications of EEG. These local explorations of the new technique became entangled with the cultural history of the brain as a mind machine.

Philip Mirowski
University of Notre Dame

From Quantum Mechanics to Cyborgs: John von Neumann and 20th Century Economics

Probably the most significant figure in 20th-century economics, all the more impressive since for him it was a mere sideline. vN is the figure that links all the protagonists in the rise of neoclassical economics in postwar America: he is the linchpin, as well as a major intellectual inspiration for cyborgs. Economics at One Remove. Marshals the evidence that vN was never very favorably inclined towards neoclassical economics throughout his career. Phase I: Purity. vN's early career up until 1932. Hilbert's program. The early paper on game theory. Themes out of quantum mechanics, and their relationship to game theory and thermodynamics. Phase II: Impurity. vN from the mid-30s to 1943. The beginning of his extraordinary collaboration with the military. The relationship of this turn to "applications" and the breakdown of the Hilbert program. Theory of Games and Economic Behavior as a new social physics, and a transitional meditation on the meanings of rationality. Phase III: Worldliness. vN from 1943 until his death in 1957. His major efforts shift to the computer and to the theory of automata. Themes out of automata theory. Hints that vN sought to interest economists in this shift in priorities. Increasingly, vN sees himself not as engaged in the simulation of brain functions, but rather as attempting to extract the fundamental formal principles of computation, under the rubric of the theory of automata. vN also thinks of automata theory as beginning to formalize theories of evolution through the appearance of new levels of complexity. vN's development is met with incomprehension in economics, and even some hostility (Samuelson). Neoclassicals only love the axiomatization of expected utility, which for vN was just a detour to formalize game payoffs and sell the package to the economics community. They evince great respect for vN as a math genius, but rarely engage his ideas on an intellectual level. But they don't buy game theory, or the motivations behind it, which are to begin serious consideration of agents as information processors. Major point to be made: game theory has a role in the first two programs, but shrinks to relative insignificance in the third. However, vN's increasingly cozy relationship with the military after the war keeps game theory alive. It was not initially welcome in economics.

Adam Mosley
Cambridge University

Truth and Correspondence: Some Comments on the Epistolary Genre and Early-Modern Astronomical Writings

Historians of early-modern astronomy have long made use of letters as documentary sources. However, relatively little attention has been paid to letters as a distinct genre of astronomical text, or to the role of correspondence in the practice of astronomy. The portion of Tycho Brahe's correspondence published by him as the Epistolarum astronomicarum liber primus (1596) provides an especially striking example of the use of letters, both for the exchange of astronomical data and for the promotion of an astronomer's reputation and claims to priority. With particular reference to the case of Tycho, I shall explore the nature of letters on astronomical subjects, both published and unpublished, and their use for a variety of purposes. It is my argument that the construction, circulation and deployment of such texts are illuminated by reference to the broader context of early-modern epistolography, but that there were also reasons for the construction and maintenance of correspondence networks specific to the discipline of astronomy.

Janice L Neri
University of California, Irvine

Truth, Deception and Illusion in Sixteenth-Century Images of Nature

In the late 1590s, Anselm de Boodt, a Flemish doctor living in Prague, commissioned two hundred watercolor paintings of plants from a painter identified only as "Elias," presumed by modern scholars to be the Flemish painter Elias Verhulst. When de Boodt received the paintings, he was very unhappy with the artist's work: in his opinion, the images had not been painted "from life." This incident involving an artist and his patron raises a number of interesting questions about the ways in which "truth" and "deception" were conceived of in relation to images of nature in early modern Europe. The Verhulst watercolors are found interspersed within the pages of the Clutius botanical books, now owned by the Jagiellon University Library in Krakow and once greatly admired by de Boodt. Why were the Clutius paintings prized for their life-like qualities while the Verhulst paintings were scorned as mere copies? If Verhulst took his images from an artist's model book rather than "from life," what set of skills or expectations did his patron possess that allowed him to make this distinction? In what context was it acceptable for an artist to use models, and in what context was this practice deemed deceitful? A comparison of the Verhulst and Clutius paintings with an extraordinary model book created by Joris Hoefnagel for the Emperor Rudolph II will show that such questions were addressed by the artists themselves in works whose ostensible goal was to copy nature truthfully and accurately. It will be argued that in examining late sixteenth-century images of the natural world, we must be careful to distinguish between what Martin Kemp has called the "rhetoric of the real," the techniques of pictorial illusionism developed by Renaissance artists to present convincing images of objects and people in space, and an image's accuracy or "truthfulness."

William R Newman
Indiana University

An Ungentlemanly Gentleman: Boyle's Appropriation of Chymical Knowledge

Robert Boyle is often lauded as the first mechanical philosopher to ground his matter-theory firmly in experiment. Boyle viewed this work as part of natural philosophy rather than "chymistry." The latter was the realm of iatrochemistry, chemical technology, and transmutatory alchemy rather than the place for micro-structural explanations of matter. As I have argued elsewhere, however, Boyle appropriated a substantial part of his corpuscular matter-theory directly from alchemical sources, some originating in the early seventeenth century, others going back to the Middle Ages. Nowhere in his printed work does Boyle acknowledge this debt, or for that matter his other borrowings from chymical theory. In this paper I will examine Boyle's suppression of his sources and place it within the context of early modern literary practice. In addition, I will explore the historiographical consequences that have followed from Boyle's lack of candour, for these have had a considerable impact on the received view of matter-theory in the Scientific Revolution.

Tara E. Nummedal
Stanford University

"Proper Bees" and "Rotten Drones": True and False Alchemists in Early Modern Europe

In the late sixteenth and early seventeenth centuries Europeans suddenly perceived a multitude of "false alchemists" in their midst. Certainly there were those who doubted alchemy's claims altogether, for practical, religious or philosophical reasons, but these were not the loudest voices crying out against alchemical foul play. Rather, alchemy's harshest critics were those who believed most zealously in its potential. Princes in the Holy Roman Empire who supported alchemists most generously, for example, were also the most likely to execute practitioners for fraud if they failed to fulfil their promises. Similarly, authors of natural-philosophical, mining and practical metallurgical treatises simultaneously praised alchemists' contributions to their knowledge of nature and condemned the art's more devious practitioners, criminals and tricksters all. Finally, even alchemists distanced themselves from their false imitators in polemics reminiscent of those which raged in the medical marketplace. Such debates raise an obvious question: what was a "true" alchemist in early modern Europe? This paper seeks to explore this issue by examining several alchemists' trials for fraud as well as polemical treatises written by practitioners themselves. A complicated picture emerges: the "true" alchemist, it appears, was defined as much by practices as by product and social identity. Perhaps more importantly, different groups involved with alchemy defined its "authenticity" differently. As a result, any attempt to pin down the "true" early modern alchemy, paradoxically, only reveals the stunning diversity of alchemy in early modern Europe.
