Dang, Haixin. Do Collaborators in Science Need to Agree?
2019, Philosophy of Science 86, 1029-1040
Added by: Björn Freter, Contributed by: Dana Tulodziecki
Abstract: I argue that collaborators do not need to reach broad agreement over the justification of a consensus claim. This is because maintaining a diversity of justifiers within a scientific collaboration has important epistemic value. I develop a view of collective justification that depends on the diversity of epistemic perspectives present in a group. I argue that a group can be collectively justified in asserting that P as long as the disagreement among collaborators over the reasons for P is itself justified. In conclusion, I make a case for multimethod collaborative research and work through an example in the social sciences.
Comment: Reading connecting philosophy of science and social epistemology; suitable for lower-level classes and up; good article for highlighting one way in which science is a social epistemic enterprise
De Toffoli, Silvia, Giardino, Valeria. An Inquiry into the Practice of Proving in Low-Dimensional Topology
2015, in From Logic to Practice, Gabriele Lolli, Giorgio Venturi and Marco Panza (eds.). Springer International Publishing.
Added by: Fenner Stanley Tanswell
Abstract: The aim of this article is to investigate specific aspects connected with visualization in the practice of a mathematical subfield: low-dimensional topology. Through a case study, it will be established that visualization can play an epistemic role. The background assumption is that the consideration of the actual practice of mathematics is relevant to address epistemological issues. It will be shown that in low-dimensional topology, justifications can be based on sequences of pictures. Three theses will be defended. First, the representations used in the practice are an integral part of the mathematical reasoning. As a matter of fact, they convey in a material form the relevant transitions and thus allow experts to draw inferential connections. Second, in low-dimensional topology experts exploit a particular type of manipulative imagination which is connected to intuition of two- and three-dimensional space and motor agency. This imagination allows recognizing the transformations which connect different pictures in an argument. Third, the epistemic—and inferential—actions performed are permissible only within a specific practice: this form of reasoning is subject-matter dependent. Local criteria of validity are established to assure the soundness of representationally heterogeneous arguments in low-dimensional topology.
Comment (from this Blueprint): De Toffoli and Giardino look at proof practices in low-dimensional topology, and especially a proof by Rolfsen that relies on epistemic actions on a diagrammatic representation. They make the case that the many diagrams are used to trigger our manipulative imagination to make inferential moves which cannot be reduced to formal statements without loss of intuition.
De Toffoli, Silvia. Groundwork for a Fallibilist Account of Mathematics
2021, The Philosophical Quarterly, 71(4).
Added by: Fenner Stanley Tanswell
Abstract: According to the received view, genuine mathematical justification derives from proofs. In this article, I challenge this view. First, I sketch a notion of proof that cannot be reduced to deduction from the axioms but rather is tailored to human agents. Secondly, I identify a tension between the received view and mathematical practice. In some cases, cognitively diligent, well-functioning mathematicians go wrong. In these cases, it is plausible to think that proof sets the bar for justification too high. I then propose a fallibilist account of mathematical justification. I show that the main function of mathematical justification is to guarantee that the mathematical community can correct the errors that inevitably arise from our fallible practices.
Comment (from this Blueprint): De Toffoli makes a strong case for the importance of mathematical practice in addressing important issues about mathematics. In this paper, she looks at proof and justification, with an emphasis on the fact that mathematicians are fallible. With this in mind, she argues that there are circumstances under which we can have mathematical justification, despite a possibility of being wrong. This paper touches on many cases and questions that will reappear later across the Blueprint, such as collaboration, testimony, computer proofs, and diagrams.
Demarest, Heather. Fundamental Properties and the Laws of Nature
2015, Philosophy Compass 10(5): 224-344.
Added by: Laura Jimenez
Abstract: Fundamental properties and the laws of nature go hand in hand: mass and gravitation, charge and electromagnetism, spin and quantum mechanics. So, it is unsurprising that one's account of fundamental properties affects one's view of the laws of nature and vice versa. In this essay, the author surveys a variety of recent attempts to provide a joint account of the fundamental properties and the laws of nature. Many of these accounts are new and unexplored. Some of them posit surprising entities, such as counterfacts. Other accounts posit surprising laws of nature, such as instantaneous laws that constrain the initial configuration of particles. These exciting developments challenge our assumptions about our basic ontology and provide fertile ground for further exploration.
Comment: The article introduces in a simple way some fundamental concepts such as ‘law of nature’ and ‘properties’, the notions of ‘categorical’ and ‘dispositional’, and the distinction between the governing and the systems approaches. It could serve as an introduction for those undergraduates who have never heard of these concepts before, or as further reading for those in need of clarification. Some cases from modern fundamental physics are used as examples.
Dick, Stephanie. AfterMath: The Work of Proof in the Age of Human–Machine Collaboration
2011, Isis, 102(3): 494-505.
Added by: Fenner Stanley Tanswell
Abstract: During the 1970s and 1980s, a team of Automated Theorem Proving researchers at the Argonne National Laboratory near Chicago developed the Automated Reasoning Assistant, or AURA, to assist human users in the search for mathematical proofs. The resulting hybrid humans+AURA system developed the capacity to make novel contributions to pure mathematics by very untraditional means. This essay traces how these unconventional contributions were made and made possible through negotiations between the humans and the AURA at Argonne and the transformation in mathematical intuition they produced. At play in these negotiations were experimental practices, nonhumans, and nonmathematical modes of knowing. This story invites an earnest engagement between historians of mathematics and scholars in the history of science and science studies interested in experimental practice, material culture, and the roles of nonhumans in knowledge making.
Comment (from this Blueprint): Dick traces the history of the AURA automated reasoning assistant in the 1970s and 80s, arguing that the introduction of the computer system led to novel contributions to mathematics by unconventional means. Dick’s emphasis is on the AURA system as changing the material culture of mathematics, and thereby leading to collaboration and even negotiations between the mathematicians and the computer system.
Dissanayake, Ellen. Becoming Homo Aestheticus: Sources of Aesthetic Imagination in Mother-Infant Interactions
2001, Substance 30(1/2): 85.
Added by: Chris Blake-Turner, Contributed by: Christy Mag Uidhir
Introduction: Along with the vital abilities to cry and to suckle, human neonates are born with remarkable capacities that predispose them for social interaction with others. For example, newborns prefer human faces and human voices to any other sight or sound (Johnson et al. 1991, 11). They can imitate face, mouth, and hand movements and respond appropriately to another person's emotional expressions of sadness, fear, and surprise. It is perhaps less well known that at birth, infants can also estimate and anticipate intervals of time and temporal sequences (DeCasper and Carstens 1980). They can remember these temporal patterns and categorize them in both time and space, and in terms of affect and arousal (Beebe, Lachman and Jaffe 1997). By six weeks of age, these innate perceptual and cognitive abilities permit normal infants to engage in complex communicative interchanges with adult partners--the playful behavior that is commonly or colloquially called "babytalk."
Comment:
Dōgen. Dōgen 道元 (1200–1253)
2011, In James W. Heisig, Thomas P. Kasulis and John C. Maraldo (eds.) Japanese Philosophy. A Sourcebook. Honolulu: University of Hawai’i Press, pp. 141-162
Added by: Björn Freter
Abstract: In Japanese religious history, Dōgen (1200–1253) is revered as the founder of the Japanese school of Sōtō Zen Buddhism. Tradition says he was born of an aristocratic family, orphaned, and at the age of twelve joined the Tendai Buddhist monastic community on Mt Hiei in northeastern Kyoto. In search of an ideal teacher, he soon wandered off from the central community on the mountain and ended up in a small temple in eastern Kyoto, Kennin-ji.
Comment (from this Blueprint): Excerpts from Shōbōgenzō (Repository of the Eye for the Truth), the major philosophical work of Dōgen (1200–1253), founder of the Japanese school of Sōtō Zen Buddhism, allowing readers to deepen their understanding of his philosophy of nature.
Donaldson, Sue, Kymlicka, Will. Zoopolis: A Political Theory of Animal Rights
2011, Oxford University Press
Added by: Björn Freter
Publisher’s Note: Zoopolis offers a new agenda for the theory and practice of animal rights. Most animal rights theory focuses on the intrinsic capacities or interests of animals, and the moral status and moral rights that these intrinsic characteristics give rise to. Zoopolis shifts the debate from the realm of moral theory and applied ethics to the realm of political theory, focusing on the relational obligations that arise from the varied ways that animals relate to human societies and institutions. Building on recent developments in the political theory of group-differentiated citizenship, Zoopolis introduces us to the genuine "political animal". It argues that different types of animals stand in different relationships to human political communities. Domesticated animals should be seen as full members of human-animal mixed communities, participating in the cooperative project of shared citizenship. Wilderness animals, by contrast, form their own sovereign communities entitled to protection against colonization, invasion, domination and other threats to self-determination. "Liminal" animals who are wild but live in the midst of human settlement (such as crows or raccoons) should be seen as "denizens", residents of our societies, but not fully included in the rights and responsibilities of citizenship. To all of these animals we owe respect for their basic inviolable rights. But we inevitably and appropriately have very different relations with them, with different types of obligations. Humans and animals are inextricably bound in a complex web of relationships, and Zoopolis offers an original and profoundly affirmative vision of how to ground this complex web of relations on principles of justice and compassion.
Comment (from this Blueprint): An introduction to the groundbreaking theory of Zoopolis, focusing on developing a political vision of human animals and non-human animals living together.
Douglas, Heather. Inductive Risk and Values in Science
2000, Philosophy of Science 67(4): 559-579.
Added by: Nick Novelli
Abstract: Although epistemic values have become widely accepted as part of scientific reasoning, non-epistemic values have been largely relegated to the "external" parts of science (the selection of hypotheses, restrictions on methodologies, and the use of scientific technologies). I argue that because of inductive risk, or the risk of error, non-epistemic values are required in science wherever non-epistemic consequences of error should be considered. I use examples from dioxin studies to illustrate how non-epistemic consequences of error can and should be considered in the internal stages of science: choice of methodology, characterization of data, and interpretation of results.
Comment: A good challenge to the "value-free" status of science, interrogating some of the assumptions about scientific methodology. Uses real-world examples effectively. Suitable for undergraduate teaching.
Douglas, Heather. Science, Policy, and the Value-Free Ideal
2009, University of Pittsburgh Press.
Added by: Simon Fokt, Contributed by: Patricia Rich

Publisher's Note: The role of science in policymaking has gained unprecedented stature in the United States, raising questions about the place of science and scientific expertise in the democratic process. Some scientists have been given considerable epistemic authority in shaping policy on issues of great moral and cultural significance, and the politicizing of these issues has become highly contentious.

Since World War II, most philosophers of science have held that science should be “value-free.” In Science, Policy and the Value-Free Ideal, Heather E. Douglas argues that such an ideal is neither adequate nor desirable for science. She contends that the moral responsibilities of scientists require the consideration of values even at the heart of science. She lobbies for a new ideal in which values serve an essential function throughout scientific inquiry, but where the role values play is constrained at key points, thus protecting the integrity and objectivity of science. In this vein, Douglas outlines a system for the application of values to guide scientists through points of uncertainty fraught with moral valence.

Following a philosophical analysis of the historical background of science advising and the value-free ideal, Douglas defines how values should, and should not, function in science. She discusses the distinctive direct and indirect roles for values in reasoning, and outlines seven senses of objectivity, showing how each can be employed to determine the reliability of scientific claims. Douglas then uses these philosophical insights to clarify the distinction between junk science and sound science to be used in policymaking. In conclusion, she calls for greater openness on the values utilized in policymaking, and more public participation in the policymaking process, by suggesting various models for effective use of both the public and experts in key risk assessments.

Comment: Chapter 5, 'The structure of values in science', is a good introduction to the topic of the role of values in science, while defending a particular perspective. Basic familiarity with philosophy of science or science itself should be enough to understand and engage with it.
Douglas, Heather. Values in Social Science
2014, in Philosophy of Social Science: A New Introduction, Nancy Cartwright and Eleonora Montuschi (eds.)
Added by: Simon Fokt, Contributed by: Karoline Paier

Introduction: The social sciences have long had an inferiority complex. Because the social sciences emerged as distinct disciplines after the natural sciences, comparisons between the mature and successful natural sciences and the fledgling social sciences were quickly made. One of the primary concerns that arose was over the role of values in the social sciences. There were several reasons for this. First, the social sciences did not have the clear empirical successes that the natural sciences did in the seventeenth and eighteenth centuries to bolster confidence in their reliability. Some postulated that an undue influence of values on the social sciences contributed to this deficit of empirical success. Second, social sciences such as economics and psychology emerged from their philosophical precursors gradually and often carried with them the clear normative trappings of their disciplinary origins. Third, although formal rules on the treatment of human subjects would not emerge until the second half of the twentieth century, by the time the social sciences emerged, it was obvious there were both ethical and epistemic challenges to experimenting on human subjects and human communities. Controlled settings were (and are) often difficult to achieve (or are unethical to achieve), making clear empirical success even more elusive. Finally, there is the additional complication that social sciences invariably study and/or comment upon human values. All of these considerations lent credence to the view that social sciences were inevitably more value-laden, and as a result less reliable, than the natural sciences.

Comment:
Drewery, Alice. Essentialism and the Necessity of the Laws of Nature
2005, Synthese 144(3): 381-396.
Added by: Laura Jimenez
Abstract: In this paper the author discusses and evaluates different arguments for the view that the laws of nature are metaphysically necessary. She concludes that essentialist arguments from the nature of natural kinds fail to establish that essences are ontologically more basic than laws, and fail to offer an a priori argument for the necessity of all causal laws. Similar considerations carry across to the argument from the dispositionalist view of properties, which may end up placing unreasonable constraints on property identity across possible worlds. None of her arguments preclude the possibility that the laws may turn out to be metaphysically necessary after all, but she argues that this can only be established by a posteriori scientific investigation. She argues for what may seem to be a surprising conclusion: that a fundamental metaphysical question - the modal status of laws of nature - depends on empirical facts rather than purely on a priori reasoning.
Comment: An excellent paper that could serve as further or specialized reading for postgraduate courses in philosophy of science, in particular, for modules related to the study of the laws of nature. The paper offers an in-depth discussion of essentialist arguments, but also touches upon many other fundamental concepts such as grounding, natural kinds, dispositions and necessity.
Duflo, Esther. Field Experiments in Development Economics
2006, Advances in Economics and Econometrics: Theory and Applications, Ninth World Congress (Econometric Society Monographs), R. Blundell, W. Newey, & T. Persson (eds.), 322-348
Added by: Björn Freter, Contributed by: Johanna Thoma
Abstract: There is a long tradition in development economics of collecting original data to test specific hypotheses. Over the last 10 years, this tradition has merged with an expertise in setting up randomized field experiments, resulting in an increasingly large number of studies where an original experiment has been set up to test economic theories and hypotheses. This paper extracts some substantive and methodological lessons from such studies in three domains: incentives, social learning, and time-inconsistent preferences. The paper argues that we need both to continue testing existing theories and to start thinking of how the theories may be adapted to make sense of the field experiment results, many of which are starting to challenge them. This new framework could then guide a new round of experiments.
Comment: Duflo, of the MIT Poverty Action Lab and a recent Nobel Prize winner, summarizes some of the successes of randomized field evaluations in development economics. She then argues that the way forward for development economics should indeed involve some theorizing, but theorizing on the basis of our new empirical evidence - which might end up looking quite different from standard economic theory. This is a very useful (opinionated) introduction to field experiments for a week on field experiments in a philosophy of economics or philosophy of the social sciences course.
Dutilh Novaes, Catarina. Formal Languages in Logic: A Philosophical and Cognitive Analysis
2012, Cambridge: Cambridge University Press
Added by: Jie Gao
Publisher’s Note: Formal languages are widely regarded as being above all mathematical objects and as producing a greater level of precision and technical complexity in logical investigations because of this. Yet defining formal languages exclusively in this way offers only a partial and limited explanation of the impact which their use (and the uses of formalisms more generally elsewhere) actually has. In this book, Catarina Dutilh Novaes adopts a much wider conception of formal languages so as to investigate more broadly what exactly is going on when theorists put these tools to use. She looks at the history and philosophy of formal languages and focuses on the cognitive impact of formal languages on human reasoning, drawing on their historical development, psychology, cognitive science and philosophy. Her wide-ranging study will be valuable for both students and researchers in philosophy, logic, psychology and cognitive and computer science.
Comment: This book addresses important questions about formal languages: why formalization works and what the limitations of formalization are. The questions are answered from cognitive, historical and logical points of view. It is good introductory material for teaching on formal languages and the psychology of reasoning.
Dutilh Novaes, Catarina. The Dialogical Roots of Deduction: Historical, Cognitive, and Philosophical Perspectives on Reasoning
2020, Cambridge University Press.
Added by: Fenner Stanley Tanswell
Publisher’s Note: This comprehensive account of the concept and practices of deduction is the first to bring together perspectives from philosophy, history, psychology and cognitive science, and mathematical practice. Catarina Dutilh Novaes draws on all of these perspectives to argue for an overarching conceptualization of deduction as a dialogical practice: deduction has dialogical roots, and these dialogical roots are still largely present both in theories and in practices of deduction. Dutilh Novaes' account also highlights the deeply human and in fact social nature of deduction, as embedded in actual human practices; as such, it presents a highly innovative account of deduction. The book will be of interest to a wide range of readers, from advanced students to senior scholars, and from philosophers to mathematicians and cognitive scientists.
Comment (from this Blueprint): This book by Dutilh Novaes recently won the coveted Lakatos Award. In it, she develops a dialogical account of deduction, where she argues that deduction is implicitly dialogical. Proofs represent dialogues between Prover, who is aiming to establish the theorem, and Skeptic, who is trying to block the theorem. However, the dialogue is both partially adversarial (the two characters have opposite goals) and partially cooperative: the Skeptic’s objections make sure that the Prover must make their proof clear, convincing, and correct. In this chapter, Dutilh Novaes applies her model to mathematical practice, and looks at the way social features of maths embody the Prover-Skeptic dialogical model.