Meeting Program | Abstracts
The term "big science" covers a multitude of practices that need to be disentangled in order to develop sophisticated accounts of what has and has not changed in the working lives of physical scientists since World War II. This paper will offer a characterization of the types of scientific opportunities that have stimulated physical scientists and their patrons to form multi-institutional collaborations. Then it will suggest the extent to which there are patterns in the social frameworks physical scientists construct and the goals they pursue. Findings are based on a qualitative analysis of structured interviews conducted with hundreds of participants in high-energy physics, space science, geophysics, and several other specialties in which scientists formally collaborate.
In the early 20th century scientific bureaus of the USDA became highly visible, not only serving the public but also showing the extent of federal sponsorship of science in the government. As part of the USDA, the Bureau of Entomology, which began as the Division of Entomology in 1878 and was given Bureau status in 1904, was the insect-fighting organization of the federal government. Leland O. Howard, who steered the Division's development into the more autonomous Bureau, used his position as director to advance the burgeoning science of economic entomology. Fieldwork was the backbone of the research done in the field stations distributed throughout the nation. The institutional form of the bureau influenced the shaping of field science, and in particular, Howard directed the development of economic entomology as a science. Following the bureaucratic rules and codes of conduct was no less fundamental to the social dynamics among field scientists than producing scientific knowledge itself. As both bureaucrats and scientists, field scientists had to work efficiently and link their research with public interests. Conflicts over scientific practices in the field, including arguments over scientific credit, arose. The roles of Bureau field scientists were largely shaped through interaction with the leaders of the Bureau, all of whom used different styles to foster the institutional development of entomological service. This study of the Bureau of Entomology contributes to our understanding of bureaucratic practices of science in the government.
Throughout the late eighteenth and early nineteenth centuries, the Vienna School of Medicine underwent a transformation from a medicine based in the empirical-humoral-pathological approach of the Hippocratic tradition, to a medicine based increasingly on scientific methodology. Although it has long been thought that the Romantic movement impeded the development of scientific medicine, this paper will demonstrate that the brand of natural philosophy which permeated Vienna in the early nineteenth century played a key role in the transition from the Hippocratic tradition to scientific medicine. It will likewise be demonstrated that Romanticism exerted a tremendous influence on the development of several medical specialties that were to become the defining contributions of the great Vienna School of Medicine in the nineteenth century.
Philosophers of science have repeatedly suggested that the ability to predict new facts about nature plays a large role in the acceptance of scientific theories. So far, however, few case studies support this suggestion. Recent geophysical research provides an example in which, prima facie, the ability to predict new facts should be a major criterion for the selection of theories. The dilatancy model of earthquake prediction was first developed in late 1972 to account for a number of empirically observed earthquake precursors, particularly a noticeable decrease in the velocity of seismic waves prior to an earthquake. The model explained these precursors as resulting from the opening of aligned cracks in response to the buildup of stress before an earthquake. This model quickly aroused considerable excitement among geophysicists eager for a comprehensive theory to guide earthquake-prediction research and to justify increased funding for it. In mid-1973, the dilatancy model led to successful prediction of a small thrust earthquake in New York State. Efforts to predict the strike-slip earthquakes that predominate in California, however, met with equivocal success. These efforts did not rigorously falsify the dilatancy model. Nevertheless, American geophysicists abandoned the dilatancy model in favor of using foreshocks to predict earthquakes, a method that had even less of a track record of successful prediction but that promised prediction for all types of earthquakes. This case study thus shows that, in the field of earthquake prediction, proven predictive ability was less important for theory choice than the perceived potential for offering a generally applicable method.
This paper borrows its title from a series of lectures that Herman Kahn delivered at the RAND Corporation in the mid-1950s. The lectures were part of a week-long workshop on systems analysis offered to Air Force and aviation industry officials in 1955. Entitled "An Appreciation of Analysis for Military Decisions," the RAND course was intended to familiarize civilian and military consumers with the pitfalls as well as the possibilities enabled by the RAND approach to long-range planning. Kahn addressed the methodological problems involved in employing systems analysis, Monte Carlo, game theory, and war-gaming in order to generate fictitious data about future nuclear war. The lectures are representative of RAND's distinctive approach to future war and the culmination of a decade of interdisciplinary research. In this paper, I will discuss a number of methodological and epistemological themes to which Kahn repeatedly referred: the problem of the referent of hypothetical simulations, the problem of the distortions necessitated by efficient idealization of the model, the problem of detecting obscure but significant variables, the problem of modeling uncertainty, the epistemological function of intuition, the constraints and discipline of cost-accounting, and the ambivalent nature of the epistemological "freedom" afforded by simulation.
The aim of this paper is the examination of Matthew Hale's notion of natural active power with reference to other contemporary solutions, in particular those of Henry More, Robert Boyle and Francis Glisson. In his writings concerning natural philosophy, Hale regarded More's spirit of nature as a result of his religious anxieties, but completely useless as an intermediate principle between nature and God. In addition, Hale asked More if the plastic spirit could be considered as "a kind of connatural sense, suted to all material beings, together with a sentient perception and appetite, proportionate to the nature of every material or substantial created being, as Campanella and our learned countrimen hath lately asserted". The "learned countriman" to whom Hale is referring is Francis Glisson. Glisson's tacit equation that every substance as such is living (that is, endowed with perception, appetite and movement), and that everything that is capable of perception and movement is a substance, could be accepted only partly, since the first part of the equation was completely wrong in Hale's opinion. In fact, matter is a kind of substance which is devoid of any vital power. While Glisson's theory of biusia can rightly be considered an ante litteram evolutionary conception of life, Hale's view of vital principles, on the contrary, still rests on a "fixist" metaphysics of the species. The cornerstone of Glisson's philosophy of nature-prime matter as the adequate and autonomous subject of its own transformation by virtue of its being originally and immediately endowed with life-and the scabrous view of natural forms as temporary variations of one material substratum, were rejected unhesitatingly. Neither Hale nor More could accept the shift-wrong from their point of view, but consistent within Glisson's theory-according to which every substance, including material substance, involves the notion of activity.
Accordingly, in order to lay the philosophical foundations of the origin and nature of the vital principles, the hypothesis of matter as a natura naturans is the first to be discarded. Being passive, receptive, undifferentiated, devoid of vital reactiveness, matter is incapable of moving, living and self-differentiating into the variety of forms, and above all of giving itself a teleological development. According to Hale, nature's activity is the energy distributed to each being at the time of creation, each of them individuated within its own form and species. This is the energy that allows them to persist as such in the circuit of second causes. It is not therefore because of an immanent activity in matter, capable of self-differentiation ab intus (see Glisson), that the forms are educed and then fall again into the receptacle of matter. Activity is "superadded" to matter according to a pre-established plan. The dogma of the fixity of the species ("fixdness and determination of the species of Animals and Vegetables") runs parallel to that of their original institution by means of a supreme creative principle which puts "these common and homogeneal principles into determinate species." The hierarchy of ranks and species originated from God's initial creative act. The "Spirit of God" (Hale does not say exactly whether it is to be considered as the third person in the Holy Trinity, the Spirit of Nature or the anima mundi) introduced "powers or energies" into the primordial confused mass, so as to start the processes of differentiation. These "energies" rose already individuated within their species as "particular inherent active principles in particular subjects." In Hale's view the law-like aspect of natural activity is predominant: natural operations are in the final analysis the result of an arbitrary and positive, albeit divine, lawgiving.
The same can be said of magnetism and gravitation, the movements of the animals and other natural operations-in any phenomenon of nature it is the "impression and signature of the Supreme Lawgiver" that acts. Nature itself is nothing but the accomplishment of a primitive divine command. The presence of a "radical activity and causality" in each natural being does not mean that nature can be considered as the ultimate immanent cause in the material world. The ground is transcendent and lies in God's will. Contrary to the idea of nature as an autonomous entity, Hale states that the course of nature requires "a knowing and perfectly intelligent being, that operates ex cognitione, intentione, et voluntate." Even though Hale's main aim was to save the real character of the activity of second causes, nevertheless, nature's operations were considered as the result of a divine, albeit conventional, institution, as regards both the ultimate origin of the activity and the arrangement of the ends to be accomplished. From this point of view, Hale's interpretation of natural teleology appears to be quite similar to that of Boyle and More, in that all three do not question the validity of the argument by design, tracing finality back to an external and heaven-sent disposition-a pre-established mechanical system in Boyle, a plastic disposition in More, a law given to each species and to each individual thing in Hale. One way or another, the order and the system of the ranks was safe.
From the sixteenth century onwards, art historians have debated the working methods of medieval artists. More precisely, commentators have queried the extent to which medieval artists modeled their productions on nature rather than other works of art. The paper takes up this issue by probing the descriptive qualities of a diverse group of thirteenth-century images to ask how and in what ways these images functioned as what Charles Ivins called "containers" for visual information. In response to the question of how we might know whether any specific image was drawn from life observation, it offers a refined descriptive vocabulary, one that distinguishes the "naturalism" of an image from its "descriptiveness" as a key step in assessing the maker's intention and practice.
My paper is the culmination of three years of research into a virtually unknown territory: the history of scientific expert testimony. In previous HSS meetings my papers addressed specific case studies. This time I want to present my overarching argument that the lively contemporary debate concerning the problem of scientific expert testimony in America's courts is misconstrued. One camp, as represented by Peter Huber's "Galileo's Revenge: Junk Science in the Courtroom," formulates the problem in terms of the growing volume and complexity of scientific evidence, and the difficulties the court, and especially the jury, have in handling it. The other camp, represented by Sheila Jasanoff's "Science at the Bar: Law, Science, and Technology in America," formulates the problem in terms of the growing mistrust and alienation from science and technology in our culture. Both camps, however, share one basic premise: that the problem of expert testimony is relatively new and escalating in nature. I argue that the problem is neither new nor escalating. I demonstrate the problem to be at least two centuries old, and argue that an historical perspective fundamentally reshapes the debate. Instead of representing the problem of expert testimony as pathological and demanding a quick cure, I suggest, we should see it as a chronic condition that the courts have been constantly struggling to live with. The legal entity of the scientific expert witness, as we know it today, was molded during the late 18th century, when the newly adversarial legal system had to cope with the increasing load of industrial cases. While the volume of scientific expert testimony constantly increased during the 19th century, the respect paid to it by the courts and the public was steadily diminished by the spectacle of the experts contradicting each other on the witness stand.
The chronic inability of the court to bridge the widening gap between experts and juries finally forced the American courts in the early 20th century to abandon their traditional neutral role and become increasingly active in the development of scientific proofs in their courtrooms. Frye v. U.S., the 1923 precedent described in the current scholarship as the founding rationale for the admissibility of scientific evidence, is revealed as the culmination of 150 years of legal disenchantment with the scientific expert.
This is a geographical analysis of the purposes and distribution patterns of local scientific societies in nineteenth-century America. These organizations were the most ubiquitous institutions supporting scientific activity during this period. Both their founders and modern historians have emphasized the autonomy and indeed the isolation of each society. However, they are more fruitfully understood as being embedded in a complex set of relationships with the rest of the scientific world. This paper draws on models developed by cultural geographers to portray local societies as urban places in a scientific landscape. Their variety and function in the scientific world were analogous to those of nineteenth-century towns and cities in the world of commerce. In addition to sponsoring local science, these societies also shaped activity throughout more or less extensive hinterlands. However, their most important function was to link the local community and its hinterland to all the other institutions and individuals that comprised the national scientific community. The proliferation of these societies and the corresponding growth of the national scientific community in the second half of the century was thus made possible in part by the decreasing cost and increasing reliability of communication-specifically in publishing and transportation-during this period.
Several historians have argued that electrical projects of the late 19th century were constituted by a holistic "seamless web" of diverse resources from science, technology and society. Others have argued that there was a strong but disputed demarcation between the electrical theory of "professors" and its application in practice by non-academic "practitioners." This paper aims to reconcile these two historiographical positions by investigating the different ways in which late-Victorian physicists and electrical engineers sought to quantify the peculiar electrical phenomenon of self-induction. I will show that the resolution of their diverse approaches to mathematics, based respectively on algebraic and geometrical techniques, demonstrates that these two "cultures" could be constructively complementary whilst maintaining considerable mutual autonomy.
Dmitrii Mendeleev is generally known for his periodic table, which was produced when he was a young professor at St. Petersburg University. After retiring from that institution amidst political scandal, he became the first director of the newly reconstituted Central Bureau for Weights and Measures, the Russian equivalent of the Bureau of Standards, charged with the task of putting Russia on the path to adopting the metric system in place of its old system of units. This new metrological bureau was self-consciously formed by Mendeleev into a modern scientific laboratory founded on ideals of precision measurement. While he was working on this metrological project, he was simultaneously completing the last stage of a lifelong quest to find a place for the luminiferous ether in the periodic table. The ether was not an abstract mathematical model for Mendeleev, but a primary atomic substance which could be measured in a laboratory. This paper is an attempt to show how the metric project and the ether search were part of the same endeavor to find a source of unity in late Imperial Russia: on the one hand, unity for an empire in danger of disintegration; on the other, unity to undergird the Newtonian worldview that Mendeleev saw himself as completing. In the process, I hope to provide an alternative view of the periodic table, which has all too often been examined only through Mendeleev's work on it in 1869, and to carry the analysis of its development in Mendeleev's own mind as he approached the end of his life, and the life of the empire he had tried so hard to serve.
In 1570, Henry Savile (1549-1622) was appointed the ordinary lecturer in astronomy at the University of Oxford. Others who had held this position had done little more than read through a simple handbook such as Sacrobosco's Sphaera. Savile, in contrast, chose Ptolemy's Almagest as a text, and over the next few years taught both ancient planetary astronomy and the new developments of Regiomontanus and Copernicus. To these lectures (now in the Bodleian Library, Oxford) he prefaced a long exhortation to the study of astronomy, an introduction to the individual mathematical arts, and a history of mathematics from Adam to Ptolemy. In this introductory material he returns again and again to the same question: why should students at Oxford study the mathematical sciences? In answering this question, he attempted to overcome the prejudice of his students that mathematics was a practical skill, synonymous with the expertise of instrument-makers and navigators. This paper will explore the alternative justifications he advanced for the study of mathematics by gentlemen: that it was an essential part of the humanist programme, and that it was central to Platonist (and even Peripatetic) philosophy; that it was, in short, profoundly "liberal." Although the sincerity of his apology for the sciences cannot be questioned, I shall also explore the larger motives behind his ambitious lectures, among them the reestablishment of English universities as the "agenda-setters" among European institutions of higher learning-as they were, he maintained, during the medieval heyday of the Merton School. To some extent he accomplished this renewal at the end of his life, with his foundation of the Savilian Chairs in Astronomy and Geometry. By then, however, he had at least partially acknowledged the place of expertise in the mathematical sciences.
In the exact sciences of early nineteenth-century Germany, it is impossible to extricate exact measurement from issues of transparency and visibility. Quantitative statements required, as one of their conditions of possibility, making the object of study visible in some way against a background with equally visible, fixed divisions (such as a scale or ruler). At the same time, devices used for making phenomena visible must themselves be constructed to have as little influence on the phenomena as possible. In his work in terrestrial magnetism, C. F. Gauss's first task was to formalize the way in which imponderable magnetic fluids manifest their visible actions in ponderable bodies by establishing a system of absolute units. His conceptualization of terrestrial magnetic force as displacement of a ponderable body through which it manifested itself gave values independent of the magnetic state of the particular body (needle) used. In doing so, Gauss aimed to make the ponderable object (the needle) a transparent medium through which the imponderable force could present itself. But securing the relationship between needle movements and terrestrial magnetic force did not make magnetometers transparent enough. Oscillations in single-thread magnetometers only gave the average value of the field's intensity over a period of time, within which, of course, the intensity could change dramatically. Gauss and Weber's solution was to build a magnetometer that allowed one to measure the displacement of the needle from an equilibrium state constituted by the simultaneous actions of the magnetic force acting continuously on the needle and the restoring force of the apparatus when displaced from equilibrium. This apparatus, the bifilar magnetometer, made it possible to measure intensity independent of the particular needle used. In this way, Gauss achieved an even more neutral, more transparent medium by means of which he could observe imponderable forces acting on ponderable bodies.
In this paper, I will discuss how Gauss's notion of precision was coupled to the development of ever-more transparent devices that would in turn make phenomena ever more visible, and I will discuss these findings in light of recent historical work connecting instruments, objectivity, and precision.
In his seminal papers, "Work and Waste," Norton Wise demonstrated the intimate relations between natural philosophy and political economy in nineteenth-century Britain. In particular, he noted a concomitant shift in the early 1800s from theories which emphasized a balance of forces to theories modeled on the steam engine, which explained change and harnessed energy with maximum efficiency. I will argue that in this same period British natural philosophers underwent a similar shift in modelling within mental-moral philosophy (physiology, moral philosophy, philosophy of mind, and philosophy of science). In the late eighteenth century, discussions of the nervous system followed the same model of balancing forces which Wise elucidates. In the first half of the nineteenth century, natural philosophers increasingly described the body as a machine that required the same kind of careful management and attention to efficiency that steam engines did. In this latter model, natural philosophers could acknowledge their own bodily abnormalities as the kind of natural inefficiency one expected from a machine, and also achieve recognition for their superior skills in managing a complex engine. I therefore object to the conventional wisdom that modern (male) scientists have always sought to erase any signs of their bodies' presence in scientific practice. Rather, the very act of managing physical abnormality conferred authority upon the natural philosophical elite. I will use the specific case of hemiopsy to make this point. Hemiopsy was a temporary but recurring nervous affliction that affected the vision of several prominent natural philosophers (including Wollaston, Brewster, Airy, and J. Herschel). Each interpreted his affliction as an inefficiency issue that required the kind of corrective measures the new machine philosophy could offer.
One of the first working cyclotrons outside of Berkeley was built in Leningrad in 1933. Throughout the 1930s institutes in Leningrad, Moscow, and Kharkov entertained ambitious plans for various regimes of particle accelerators, plans which foundered in part because of acrimonious competition for government resources and in part because of disagreements about whether Soviet physics could pursue other research programs to greater effect. My talk will be devoted to Soviet attempts to build a cutting-edge accelerator up through the work of Vladimir Veksler on phase stability at the end of World War II. For the moment, it appears that this account will be dominated by the vagaries of Stalinist politics more than by instrumental practices, but I believe the experience of the Ukrainian Physico-Technical Institute in Kharkov may lend itself to conclusions of broader interest to historians of physics. Accelerator advocates in Kharkov were at odds with a growing contingent of cryogenics experts who suspected that pouring resources into the "Lawrence machines" in an attempt to best the Americans with a show of physics on the "Soviet scale" was an exercise calculated to appeal to Party bureaucrats, but was not necessarily the right route to the best physics. It may well be that the social dynamics of these exchanges will help explain in part why the Soviet contribution to the archetypal Inward Bound narrative of high-energy physics in the postwar era proved to be disappointingly modest in the end.
In his Gifford Lectures of 1963, The Living Stream, Sir Alister Hardy referred to Form and Function, Edward Stuart Russell's well-known philosophical history of morphology, as a "zoological classic." Hardy, by then Linacre Professor of Zoology at Oxford, began his scientific career as an assistant naturalist working under Russell in the Lowestoft laboratory of the Fisheries Ministry. In his opinion, Russell would "be remembered by the historians of zoology for his influence in three directions: in fishery science, in the study of animal behavior, and especially in the fundamental philosophy of our subject." Russell's scientific career reflects the important growth of non-university sites of research during what Lynn Nyhart has referred to as the institutional expansion of zoological research in the early twentieth century. As a student, Russell worked on collections from the fishery vessel Goldseeker. When the Board of Agriculture and Fisheries took over the North Sea fishery investigations and advertised for a zoologist, he turned down a fellowship to take the position. When the Board became a Ministry, Russell was appointed Director of Scientific Investigations, and Hardy joined him at Lowestoft. This was Hardy's "good fortune," for Russell "encouraged each of us to develop research along the lines in which we were especially interested. In so doing he got the best out of his staff." In this paper, I will investigate the character of Russell's research program at the Lowestoft laboratory and its significance for understanding the history of zoological research in the early twentieth century.
Historians of science have argued that radar research in World War II laid the foundation for the invention of the transistor. This paper contrasts the well-known story of the transistor's invention at Bell Labs with a lesser-known history of the European development of transistors. I argue that while the Mott-Schottky theory of semiconductors provided a useful working model for the production of radar detectors, it became a substantial hindrance for European scientists in the invention of the transistor in the late 1940s. An examination of the continuity between the radar research of Heinrich Welker and Herbert Mataré and their postwar search for transistor action shows how the success of the theory for radar detection inhibited further development. This paper contends that intimate involvement in radar research hindered the invention of the transistor. Therefore, it was not by chance that scientists at Bell Labs, who had not been closely involved in wartime radar research, produced the first working transistors.
Historians of science are increasingly sensitive to the ways in which women have been excluded from traditional historiography-excluded by authorities in their own time, and often overlooked by later historians. This paper will offer strategies for reconceptualizing the many, complex ways that early modern European women participated in natural philosophy and medicine. Mainstream women's history, with its traditional emphasis on contributionist, heroic, and oppression/resistance models, now includes two newer approaches that have the potential to dramatically affect the future historiography of science: gender history and the history of women's experiences. When combined with the recent move within the history of science to examine the history of practice, gender history and the history of women's experiences can illuminate how early modern women negotiated and reconfigured the boundaries that strove to exclude them from natural philosophical activities. Case studies from early modern England-including female herb-sellers and their interaction with London apothecaries, female medical practitioners who clustered around London hospitals, the daughters of Matthew L'Obel who were known for their gardening expertise, and female scientific "managers" like Lady Mountjoy-will provide specific examples and illustrations.
This paper takes a micro-historical approach to study mathematics in its cultural context-the humanistic scholarship of the late Ming. The philosophical system outlined in the Records of Music (an example of what Needham termed "correlative thinking") linked proper governance to proper expressions of music: the rites ordered heaven and earth; music harmonized them. The perceived decline in the late Ming moral order led to efforts to reconstruct the ritual systems outlined in the Confucian classics. Zhu Zaiyu systematically studied musical ceremony-instruments, dance, musical scores, and mathematical harmonics-to recover these systems and presented his proposals to the imperial court as solutions to Ming crises. He developed the base-nine number system and the equal temperament of the musical scale. Placing nine abacuses together, he calculated the lengths of musical pitch-pipes to twenty-five decimal places.
Duns Scotus redefined the final cause as a representation of the soul. Jesuit commentators adopt the language of Scotus, claiming that the final cause only causes metaphorically, not by a real influence. However, they still ascribe final causation to physical things even though physical things lack cognition. The seventeenth-century anti-Aristotelian philosopher Sebastian Basso attacks this teleological view of nature via his critique of the Jesuit view of God's concurrence. According to the Jesuits, created things act toward a goal by their own powers, but they do so simultaneously and by the same action as God. Basso argues that for this to be the case all created things must possess cognition so that they can anticipate God's goals and match their causal action to his. Since physical things lack cognition, Basso concludes that they only act insofar as they are moved by God as his instruments. Basso's attack on the Jesuit view of causation supports his anti-Aristotelian view of nature, which does away with substantial forms and active powers in matter. For Basso, matter consists in particles that are moved externally by the physical ether. Basso's causal explanations have much in common with René Descartes'. Both philosophers explain natural phenomena in terms of the local motion of material parts. However, while Basso locates the source of motion in the world soul which pervades nature, Descartes locates it in God. Furthermore, Descartes draws on the Jesuit view of concurrence to distinguish between the first cause of motion and the second causes. Descartes' distinctly modern view of causation thus contains both Aristotelian and anti-Aristotelian elements.
In the late 1810s and early 1820s John Herschel was involved in the three activities out of which spectroscopy developed: chemistry, optics, and astronomy. Suggestive comments Herschel made in the late 1820s about spectra and flame colors have led historians to wonder why he did not put these components together in such a way as to develop spectroscopy. He had all the components of spectroscopy, yet its development had to wait another thirty years. What, then, was Herschel doing? How was he putting these pieces together? I argue that Herschel's work in optics during this period was aimed at investigating the physical behavior of light. Specifically, he was trying to ascertain the laws that govern the motion of light as it passes through various media. He labeled this project "physical optics." Within the limits of this project, his observations of spectra and flame colors begin to take on a richer meaning. Furthermore, his optical investigations prompted him to adopt a particular conception of spectra that facilitated these experiments but deterred him from asking other questions about the nature of spectra. I also suggest that his motivation for this research was the improvement of refracting telescopes.
Historians have long recognized the ascension of science to a position of pre-eminent intellectual authority in the Anglo-American world as a central feature of the intellectual history of the late nineteenth and early twentieth centuries. Yet despite wide acknowledgement that varying forms of naturalism dominated the creation of knowledge by the early twentieth century, scholars have concentrated their work primarily on the negative aspects of the emerging scientific naturalism, particularly as it relates to the corresponding marginalization of religion from American intellectual life. This project lays out the positive arguments offered by the promoters of science to justify its claim to pre-eminence in the creation of knowledge in America. The public allies of science were not shy in advocating a new way of thinking for Americans after the publication of Darwin's Origin of Species in 1859. Their vigorous efforts to promote science centered upon the claim that science could more accurately fit the needs of a changing world than the antiquated knowledge formed by the religious mind. This was particularly true in the promise of science to provide hope for progress and unity, with intellectual, technological, moral, and social implications. This dialogue, as found primarily in the learned periodical press, highlighted the potential of science to meet the demands of a progressive and expanding, even evolving, intellectual world. This paper aims to place the arguments used to promote science in the context of the professionalization of scientific endeavors. Central in linking the rapid professionalization of American life with the growing authority of science was the concept of consensus. To speak with an authoritative voice, the assembling community of experts needed to develop, and present, a robust notion of agreement amongst themselves. Partly embedded in the theoretical writings of C. S. 
Peirce, but also embodied in the practice and rhetoric of other scientists, consensus was fundamental in developing professionalism, cementing scientists' growing authority, and promoting the impression that knowledge of the natural world was steadily progressing toward unity. Allowing the community of experts to speak as "we scientists," consensus proved fundamental in confirming and expanding their intellectual authority and prestige in the economy of ideas.
Terms such as scholastic, peripatetic, modern, and innovator were thrown about with almost reckless abandon by practitioners of the natural sciences during the Scientific Revolution. This usage was often purely the rhetorical deployment of familiar tropes as a form of community and boundary building, but such terms could, at times, also accurately describe practitioners' own self-perceptions. However, terms such as scholastic, peripatetic, and Aristotelian were by no means synonyms. Although self-professed moderns tossed such terms almost interchangeably at their opponents, those who employed these terms self-descriptively were more discerning, accepting, for example, the term peripatetic, but vigorously refusing the label scholastic. Using the example of the German Jesuits in the seventeenth and eighteenth centuries, this paper will unpack terms such as modern, scholastic, and peripatetic. It will examine what it meant to describe oneself as a peripatetic at that time. It will also show how such peripatetics' often biting denunciations of "modern" physics can be quite harmoniously reconciled with the fact that Jesuit physics was suffused with aspects of the new.
Renaissance artists such as Giorgio Vasari were able to differentiate the social status of architects, sculptors, and artists who employed the mathematical arts of design (disegno) from other more common artists who remained in masonry, carpentry, or other artisanal trades. This more elevated artist re-invented the relationship between theory and practice, and deeply informed new notions about the relationship between art and science, indeed between art and experience, in elite circles. Design stood intermediate between the fantasy (or ingegno) of the artist and actual created things, and could be seen representationally as a diagram or object. The artistic representation could serve as a bridge between texts and the world texts were meant to refer to, but was also cultivated as a place-holder for memory itself. The Italian expositors of disegno in fact claimed artistic, mathematical design was better and more accurate than either words or memory. We can see these two strains coming together in pedagogical literature such as Comenius's schemes for language learning. Finally, the very act of image making fit into notions of art as an imitation of nature's own material thing-making. This analogy resonated deeply with both Paracelsian alchemical practitioners and seemingly orthodox physicians such as William Harvey. Harvey's preface to Animal Generation discusses the relation between art, nature, and experience at length, and employs the analogy in discussing both his method and generation itself. The former discussion locates art primarily in terms of memory, the latter in terms of creation, and both in terms of imagination. This paper will explore the developing notions of artistic design in relation to developing notions of scientific discourse and practice, and as a crossroads between the new roles of mathematics, memory, and material creation within emerging science.
In the 1930s, American archaeologists used the metaphor "archaeology is history" in order to establish themselves as scholarly professionals and as public advocates for the preservation of archaeological sites as national landmarks. The metaphor not only served as rhetoric, it reflected the effort to connect written history with archaeological evidence. For the developing community of professionals and amateurs in Nebraska, it also reflected their self-image. As their field journals reveal, W. Duncan Strong and his field party acted out an historical pageant of the Old West; they emulated their icons, the nineteenth-century American frontiersmen.
Writing about the lives of the Russian women scientists of the 1860s is perhaps particularly difficult because of their unique group consciousness, and because their activities appear to have been much more varied (and overtly political) than those of the average woman in science. Particular attention is paid to the women's roles as science popularizers, pioneers in graduate education in the sciences, and conscious feminists and social activists. Emphasis is also placed on the unusual characteristics of their shared vision of science and the scientific community, and on their relationships with prominent male scientists of their time.
Had it been completed, the Superconducting Super Collider (SSC) would have been the largest (and most costly) scientific tool ever built. Seen in America as a frontier project, internationally the SSC was widely viewed as the result of nationalistic "coopting" of the cooperative world accelerator proposed by the International Committee on Future Accelerators. This paper discusses the period of the SSC before the project moved to Waxahachie, Texas, encompassing "Phase 0," the SSC's Reference Designs Study (RDS) (1983-84), and "Phase 1," the Central Design Group (CDG) (1984-88), both based at the Lawrence Berkeley Laboratory, in terms of American themes of frontier venture, settlement, and conquest. We employ as a model the construct of the frontier outpost, a temporary fort or staging ground at the border of unknown territory. Traditionally the outpost phase is a period of excitement, courage, and uncertainty. The disillusionment of pioneers when their outpost falls is a familiar theme in American history. When the Berkeley outpost of the SSC collapsed at the end of 1988, all members of the project's upper level of management abruptly left the project, creating a disjunction between those who designed the machine and those who controlled its future course. This disjunction resulted from a clash between the traditional research practices of physicists in the outpost community and new ones demanded by the United States Department of Energy (DOE). The clash helped set the stage for the demise of the SSC in October 1993.
The history of Swedish astronomy from 1860 to 1940 is surveyed. The period saw changes in the practices and theories of astronomers. Patterns of international contact changed from mostly collaboration with Europe to increased contacts with the U.S. Spectroscopy and photography changed the practice of astronomy. Stellar statistics utilised the increased volume and mobility of astronomical data that came as a result of the new technologies. Knut Lundmark and Bertil Lindblad participated in the development of new astronomical models. Comparisons are made with the results in John Lankford's book, American Astronomy.
This paper examines the mediating role of hybrid figures between physics and practical engineering, men who were interested in, and contributed to, both science and technology. William Thomson (later Lord Kelvin) worked on both electromagnetic theory and submarine telegraphy, combining theory and practice into one. He thereby provided a model for scientific electrical engineering. Oliver Heaviside, in telephone engineering, followed the route that Thomson had paved, combining mathematical techniques with the real world of telephone technology. In power engineering, John Ambrose Fleming played a significant role in constructing the core of engineering knowledge. I will contrast the differences in mediation made by Heaviside and Fleming, and explore the personal, methodological, and social factors that produced these differences.
Julius von Rohr, a medical doctor of German birth, was appointed municipal buildings inspector and government land surveyor of the Danish West Indies (now the United States Virgin Islands) in 1757. His commission also directed him to busy himself with the study of the natural history of the islands, and this-particularly botany-was clearly his passion. He corresponded with major figures of natural history in Denmark and elsewhere. He established a botanical garden on the island of St. Croix. At the government's expense, he made an agronomical study of cotton all down the Antilles to South America. In the 1790s, when Denmark's abolition of the Atlantic slave trade was imminent-a measure which would doom the rich sugar plantations of the Danish West Indies-the government asked von Rohr, who by this time had more than thirty years' scientific and colonial administrative experience in the tropics, to undertake an expedition to the Danish slaving establishments on the West African coast to assess the potential of the territory around the forts for plantation agriculture. He sent ahead of him a natural historian's equipment, surveying instruments, and a substantial library, which included a considerable number of books and periodicals obtained for him and forwarded to the Coast from England by Joseph Banks. The ship carrying von Rohr to Africa vanished at sea, but the record of his earlier work, of the government's reliance on him, and of the literature he expected to be useful to him in this African venture is interesting evidence of the place of natural history in colonialism, of contemporary views of the coherence of the Atlantic plantation world, and of the importance of the West Indian model in European speculations and projects for the African tropics at the turn of the nineteenth century.
Beginning in the 1840s, American ethnologist Lewis Henry Morgan became fascinated by the kinship system which seemed to form the basis of the famous Iroquois League. During the 1850s and 1860s he conducted field work on a variety of Native American reservations and distributed kinship questionnaires throughout the world. On the basis of that research Morgan developed a conception of the relations between "blood" (inheritance) and social structure. Within that analysis, Morgan focused on the role of various approaches to property as the central locus of social change. His views of property reflect the changing approaches to property in the U.S. as representatives of the modern market confronted subsistence farmers and Native Americans. In this paper I argue that Morgan's kinship research was a synthesis of both the polygenist and monogenist positions which had divided the ethnological community in the United States and Europe. By shifting from race to a conception of social inheritance, Morgan was able to use the study of language and social structure to supersede generalizations based on anatomical differences while still asserting a biological basis for his conclusions. I will show that the roots for the project lay in the epistemological struggle between polygenists and monogenists and the divisiveness of the republic during the Secession Crisis (1850-1860).
This paper will consider Anne Conway's theory of substance in relation to that of some of her contemporaries, especially John Finch and Francis Mercury van Helmont. In playing Platonic themes against the demands of the mechanical philosophy, she dealt with questions about the activity of matter, matter and spirit, and related issues.
This paper examines the early career of Charles Christopher Adams in the context of the development of natural history and the emergence of ecology. Adams began as a student of variation under C. B. Davenport, and his early career was a continuous attempt to reform natural history. He considered that the experimental study of evolution as pursued by embryologists and physiologists was bound to be incomplete, since it did not pay attention to the evolution of the natural environments in which organic evolution takes place. At the same time, he protested the static condition of natural history, claiming that naturalists had become mere collectors. Torn between these two ways of practising biology, Adams's foremost concern during the first two decades of the twentieth century became the articulation of what he called the "new natural history," epitomized in his call for naturalists to move from "things" to "processes." This paper examines how Adams's idea of a new natural history was expressed at three different levels: methodologically, in Adams's notion of the "genetic" method; institutionally, in his initiatives as head of several museums and at the Roosevelt Wild Life Forest Experiment Station; and, finally, at the level of content, in Adams's increasing interest in the study of the behavior of organisms.
In 1961 Alabama Governor John Patterson commissioned a study to explore the "scientific dimensions of the race issue." The study was written by Wesley C. George, professor emeritus of histology and embryology at the University of North Carolina. George delivered his report, "The Biology of the Race Problem," to Governor Patterson in October 1962. George argued that the brains of African Americans were "less developed" than the brains of white Americans, particularly in the frontal lobes, "the part of the brain most involved in the activities of higher civilization." Because these morphological characteristics were largely hereditary, George concluded that "the ability to develop a high culture is conditioned by the genetic endowment of a population group" and that "to compel White parents to send their children to school in as bad an environment as Negroes can and do create is neither social justice nor wise national policy." "The Biology of the Race Problem" was ignored by the scientific community and hooted at by civil rights organizations. It would have sunk into obscurity but for its tireless promotion by Carleton Putnam, the retired CEO of Delta Airlines. In the late 1950s Putnam became a tireless segregationist pamphleteer, arguing that integration was the product of a "scientific hoax" perpetrated by Franz Boas and his students. Putnam had connections both to the segregationist political establishment of the South and to a network of white supremacist scientists. For Putnam, the evidence presented by George was decisive, for "this was the substance of inheritance, a disclosure of just what it was the Negro inherited." George's scientific views on African Americans were shared only by a tiny minority of practicing scientists.
Nonetheless, by relentlessly promoting the report and connecting it to the vital political issue of the day, segregation, Putnam was able to transform "The Biology of the Race Problem" into a central document in the South's fight to preserve segregation, as well as a document debated in the pages of mainstream scientific journals. The historical episode outlined above is important for three reasons. First, by unpacking the network of political and scientific defenders of segregation I hope to show how scientists could be activists for conservative, rather than progressive, social causes. Second, I hope to show how biology had persuasive power over and above that of the social sciences. George, as a biologist, was able to offer arguments that were more persuasive to the general public because of his ability to point to physical, rather than psychological, traits as the alleged cause of the inferiority of African Americans. Third, science is more easily enrolled in social causes when it is "preaching to the choir." Because the white South was convinced of the inferiority of African Americans, George's scientific proof was all the more persuasive to them.
Sarah Jansen (Max-Planck-Institute for the History of Science)
Entomologists' conceptions of insect-related plant diseases underwent a pronounced transformation between 1880 and 1920: The outbreak of a disease, which had earlier been understood as occasioned by a variety of conditions, was now ascribed to a single disease-causing agent. This development parallels the emergence of the bacteriological paradigm in human medicine, in which the germ was posited as the cause of disease. Rather than investigating a transformation of ideas, or the social history of this development, I will reconstruct how this transformation in etiology traced out changes in the representation of and intervention in the insect/plant complex. Emphasis will be given to the introduction of warfare techniques such as chemical agents and aircraft into economic entomology.
In 1967, RAND re-deployed its intellectual weapons so as to provide research support for Lyndon Johnson's War on Poverty and other social welfare initiatives. RAND's first decisive step in this diversification was the corporation's partnership with the City of New York in creating the New York City-RAND Institute, a "think tank" devoted to the application of "systems" processes to urban problems. The institutional and intellectual history of the New York City-RAND Institute, which is the topic of this paper, provides an ideal opportunity to analyze the widespread push to convert military methodologies, especially "systems" processes, to civilian purposes during the 1960s and 1970s. The relationship between the municipal government of New York City and RAND began in January 1968, when Mayor John Lindsay and RAND President Henry Rowen signed four contracts for six-month studies of the New York Police Department, the Fire Department, the Housing and Development Administration, and the Health Services Administration. In 1969, the partnership between RAND and the Lindsay administration was formally institutionalized with the joint creation of the New York City-RAND Institute. This institute remained in operation for six years, and its staff members engaged in a broad range of research efforts, which are well-documented in RAND's previously inaccessible archives. While short-lived, the New York City-RAND Institute provided RAND with an exceptionally rich "laboratory" in which to test its retooled intellectual arsenal, and provides historians of science a unique window on attempts to de-militarize systems methodologies.
Bringing together different mechanical arts, drawings of instruments, and defenses of practical knowledge, Renaissance manuals and treatises developed a discourse on techniques which is the origin of what we call technology. Before it caught the attention of 16th- and 17th-century natural philosophers, technê was usually considered an inferior form of knowledge. The exception was the skeptics. They attacked metaphysical philosophy but respected what they considered more trustworthy technical knowledge. Thanks to Popkin's work, ancient skepticism has been recognized as having played an important role in the rise of modern thought, especially in its mitigated and constructive formulations (Mersenne, Gassendi, Boyle). However, not enough attention has been given to the skeptics' notions of the arts and nature. Widely acknowledged for his effort in bringing together arts and sciences, Francis Bacon endorsed part of the skeptical outlook but disapproved of its more extreme conclusions. Perez-Ramos (1989, 1996), in his interpretation of Bacon's idea of science and of the maker's knowledge tradition, makes an effort to clarify Bacon's ambiguous doctrine of form. His reevaluation takes into account the operative aspects of Bacon's criticism of the sterility of the philosophical tradition, i.e. his ars destruans, but also Bacon's attempts to provide an epistemological basis for the mechanical arts. Attention will be given to the views of the skeptical philosophers of the period and to their notions of the arts and nature.
Hobbes's encounter with the mathematics of the seventeenth century has been described as a love affair that ended in tragedy. Aubrey tells us that Hobbes fell "in love with geometry," and his infatuation with the subject led the author of Leviathan to declare geometry "the only science it hath pleased God hitherto to bestow upon mankind." The source of Hobbes's admiration for geometry lies in the fact that it is a model of proper method: secure first principles lead by rigorous deduction to recondite conclusions, which are thereby established as solidly as the first principles themselves. Unfortunately for Hobbes, his own attempts to solve such problems as the quadrature of the circle led to disaster. His efforts were refuted by John Wallis, and the principal outcome of their long and bitter public controversy was the utter demolition of Hobbes's mathematical reputation. My concern here, however, is not with the downfall of Hobbes's mathematics. Instead, I will investigate his account of mathematical method and compare it with Jesuit teaching on the nature of mathematics and the criteria for demonstrative knowledge. Jesuit discussion of the philosophy of mathematics may seem an unlikely source for Hobbesian methodological doctrines. After all, Hobbes was second to none in his profession of distaste for scholastic philosophy; moreover, his theory of sovereignty treats papal claims of authority with contempt, and he regarded the Society of Jesus as not far removed from a criminal organization. Nevertheless, Jesuit authors participated in an important debate over the status of mathematics among the sciences, and their conclusions on the requisites for demonstrative knowledge are quite close to Hobbes's own. 
Further, Hobbes's critique of Cavalieri (and his account of the proper formulation of the method of indivisibles) can be seen as an attempt to put the new method on a secure footing, and indeed a footing that satisfies the criteria for demonstrative reasoning propounded by the Jesuits.
During the 1920s and 1930s, a series of theoretical and applied developments at the Bell Laboratories of AT&T provided the basic theory and tools for the modern field of control engineering. In 1927, working with the mathematician H. Nyquist, H. S. Black developed the first stable feedback amplifier for use in telephone applications. Five years later, Nyquist published his theoretical treatment of stability in feedback circuits, terming his approach regeneration theory. Finally, just prior to the war, H. Bode synthesized a general method for the design of stable feedback circuits. Subsequently, in 1945, he published these ideas as Network Analysis and Feedback Amplifier Design. In the postwar period, as the field of control engineering shifted from wartime weapons research to peacetime applications in the steel, chemicals, and machine tool industries, these men, especially the theorists Bode and Nyquist, took on the roles of founding fathers in the field, widely respected and acknowledged as providers of insights fundamental to the formation of the fledgling discipline. In Japan, as in the United States, the field of control engineering took off following the war. Led by Y. Takahashi, Y. Ōshima, and R. Sawai, the Jidō Seigyō Kenkyūkai promoted the use and theory of control engineering. Although these men had been well taught in the control and stability theories of German control engineers such as W. Oppelt and A. Leonhard, the American occupation provided them with the opportunity to study American ideas. Easy access to American engineering information by way of the Economics and Science section of GHQ and the GHQ Technical Library at Hibiya shifted the focus of Japanese control engineering theory and practice from Germany to America. Although Takahashi and his colleagues adopted an essentially complete American theory of analog control, they did not adopt the same pantheon of founding fathers. 
In the chaos, hunger, and strife of postwar Japan, these men searched for practical solutions to the problems confronting the Japanese nation. While useful, theory could be more of an impediment than an aid in the rapid solution of pressing problems. For an engineering community still more at ease with the German language than English, American engineers who wrote in a simple, clear style with a strong emphasis on application made the most convincing arguments and drew the most attention. Consequently, while the Japanese control community recognized Nyquist, Bode, and Black for their contributions, their respect and admiration went to engineers, like S. Nichols, H. Brown, and R. Oldenberger, who simplified and clarified these methods for easy and general application.
In early eighteenth-century London, two commercial prospects seemed particularly appealing to the struggling small craftsman. Newspapers offered an opportunity to profit almost immediately without demanding vast initial investment; and patent medicines promised even greater rewards for perhaps no investment at all. So it was that the entrepreneurial creators of the newspaper business in the age of Walpole were also among the peddlers of universal elixirs. The coincidence of credit-building strategies in these twin realms of news and medicine is particularly instructive, coming as it does at a pivotal moment in the development of both commercial authorship and medical credit. This paper will discuss these implications by examining perhaps the most important of all these figures: the printer, plotter, and "alchymist" William Rayner. As well as being a major creator of the newspaper industry in the mid-eighteenth century, Rayner aggressively marketed "universal elixirs" from his warehouse in Southwark. He was instrumental in creating an oppositional press, only to prove equally instrumental in destroying it through both piracy and collaboration with government informers. His reprinting activities gave him a walk-on part in the history of copyright. And his restless career culminated in 1745, when sympathy for the Jacobite cause led to Rayner's involvement in a bizarre plot to blow up Parliament. Yet Rayner himself survived to retire in modest affluence, testimony to the success his commercial and creative strategies had fostered. In retracing his story we see the character and importance of those strategies.
Protozoa were important objects of biological research at the turn of the century, and the psychological study of single-celled organisms was also an active domain of scientific investigation. The study of the "psychic life" of microorganisms, often referred to as "cellular psychology," was the stage upon which some of the most important issues in the life sciences were debated: vitalism versus mechanism, the evolution of consciousness, the objectification of psychological phenomena, and the disciplinary boundaries between biology and psychology. In this paper we examine the historical circumstances surrounding the appearance and eventual disappearance of microorganisms as valid objects of psychological study, and the impact of these studies upon subsequent developments in psychology. We do so by treating the career of the French psychologist Alfred Binet as indicative of these broader movements. Binet promoted the study of the psychic life of microorganisms as "individual psychology," before turning to his better-known studies of child development and mental testing. We trace the origins of these studies to another, lesser-known tradition: proponents of "general physiology" attempted to shift physiological study away from its focus on humans and higher organisms, and to ground the science anew in the study of life's most basic units, cells, making it applicable to all of the living world. Under the influence of Ernst Haeckel, Wilhelm Preyer, and Max Verworn, general physiology also encompassed the psychological study of the most fundamental forms of life. During this period, researchers such as Karl Möbius, Jacques Loeb, Herbert Spencer Jennings, and Robert Yerkes investigated psychological phenomena in lower organisms. Notions of the uniqueness of the individual, which were established in these studies of microorganisms, were taken up later in developmental psychology and the psychology of personality.
The success of German chemists before the First World War in classical synthetic organic chemistry was due in part to a remarkable symbiotic relationship between academic and industrial development. Although the war decade brought about significant restructuring of both academic and industrial institutions, there is still no clear consensus among historians as to its consequences for the professional and disciplinary development of chemistry in Germany. To what extent did other fields such as physical chemistry develop similar patterns of academic-industrial interaction in competition with organic chemistry, and with what success? To what extent did rhetoric reflect the situation of the typical academic and industrial chemist in the "lower ranks"? How might these conditions have affected chemical research in the various subdisciplines? The goal of this paper is to provide a more solid basis for a comparative analysis of patterns of professionalization and academic-industrial interaction in chemistry, so as to examine the realities behind the rhetoric of symbiosis that became especially common between the wars. Utilizing quantitative evidence compiled from personnel files of the Bayer corporation, selected comparative data from other corporate archives, and a database of academic chemists, I propose to examine the social origins and careers of sample generations of German academic and industrial chemists at intervals between 1909 and 1939, suggesting the interaction between the chemists' social and disciplinary characteristics and their changing roles and career prospects in corporate as well as academic settings. I will pay particularly close attention to variations in industrial recruiting and research support by subdiscipline, to the possible effects of these demand shifts on academic chemists, and to career mobility between industry and academe. I will conclude with some general reflections on the social construction of professions, careers, and scientific research.
Descartes famously complained that the mathematics of his day rested on the fortuitous discovery of results through some combination of luck and genius. New results were produced neither systematically nor with certainty. To make mathematics secure and productive demanded effacing occult and tacit expertise in favor of transparent and reproducible techniques. Yet Descartes did not go so far as to offer an algorithm for the production of mathematical truths. He offered his method and his geometry as systematic "spiritual exercises," designed to help produce new knowers with new sorts of certain knowledge. From his earliest sketches of mathematics to his Géométrie, Descartes enunciated the simultaneous reform of geometry and geometers, in a vision of the dual function of philosophy captured by the notion of "spiritual exercises." In seeking tools to characterize "expertise" during the period, we should ask how contemporaries understood how philosophy altered philosopher and content at once. Such a contemporaneous view of expertise will help not only to see how Descartes's geometry could epitomize his method, but equally to resolve persistent questions about the apparently arbitrary limitations Descartes placed on the range of mathematical objects.
Ralph Gerard, in a 1952 address before the American Physiological Society, proposed that the professional identity of physiology was shaped largely by its "ambivalent attachment to the two worlds, of biological science and medical practice." During the first half of the twentieth century, American physiologists confronted not one but two crises of disciplinary identity. The first was the tension between physiological research and medical science, as physiologists struggled to create a professional identity independent from clinical medicine. By mid-century, a second crisis emerged as the disciplinary boundaries between physiology and biophysics, molecular biology, and other emerging fields became blurred. Under these circumstances, the questions "what is physiology?" and "who are physiologists?" were not easily answered, least of all by physiologists themselves. In the immediate post-World War II period, American physiologists feared that their discipline was disappearing by virtue of splintering into a mélange of new fields, each with its own professional organizations, meetings, journals, research techniques, agendas, and questions. The professional anxiety posed by these developments was best reflected in two important surveys conducted by the American Physiological Society. The first survey, in 1946, led by Edward Adolph, was meant largely to assess the impact of World War II on the American physiological community. The second survey, led by Ralph Gerard, began in 1952 and was sponsored by the newly established National Science Foundation. The results from both surveys reveal a discipline in dramatic flux, examining and negotiating its relationship to the new biological sciences dominated by molecular and biophysical approaches. In doing so, physiologists reasserted the perspective that their discipline, concerned with the functional study of biological processes, was not peripheral but central to the future of all biological research.
Placing Max Planck in the wider context of debates concerning the proper approach to the creation of physical theory, I present a new picture of Planck's physics. Viewing Planck as engaged in debates about the proper approach to take in investigating the micro-world, and as having developed a coherent approach to that question, makes his work appear highly consistent and structured. Planck's work on the quantum is interpreted within this framework as flowing from his broader methodology. Pace Kuhn, the influence of statistical considerations appears less important, while an approach derived from the work of Helmholtz, Kirchhoff, and others takes a more central place. It is demonstrated that Planck consciously sought to localize and approach the micro-world using a structured set of theoretical tools and an overall strategy. After developing a description of the tradition surrounding the work of these predecessors, and showing the coherence of Planck's research methodology with that tradition, his work leading to the blackbody formula is then viewed as tightly connected to this larger program, yielding a new perspective on its meaning and function. This perspective at once treats the details of Planck's work and connects it to larger debates about the proper character of physical theory.
14 March 2001
© 1995-2001 by the History of Science Society, All Rights Reserved