Deep South Philosophy of Biology Workshop

University of Alabama at Birmingham, March 31 - April 1, 2017

 

Abstracts (in author name order)

 

Selection for Misperception?  Prediction Error Minimization, R-strategies, and Culture

Marshall Abrams, Department of Philosophy, University of Alabama at Birmingham

 

Karl Friston and others have developed Prediction Error Minimization (PEM, also known as predictive coding, predictive processing, or free-energy minimization) models of how nervous systems implement perceptual processes.  There is substantial though inconclusive evidence that PEM is realized in some parts of the brain.  Friston argues that PEM allows brains to infer the causes of stimuli, and more generally to infer even complex causal relationships realized in the environment.  Because of this, he argues that PEM, if present, would be favored by natural selection.  I argue that there are evolutionary contexts in which natural selection would instead favor PEM-like processes that result in individuals who routinely misperceive.  These are certain cases in which what's known as an r-strategy is selected for.  In such cases, natural selection would favor something like prediction error minimization distributed across individual organisms.  I argue that though it seems unlikely that human perceptual processes exemplify this kind of case, some of the ways in which we learn about the world through social learning (learning from others) do follow the r-strategy model that I describe.


Further details: Natural selection sometimes favors evolutionary "r-strategies" over "K-strategies".  A type of organism implements an r-strategy when it is evolutionarily successful by producing many offspring at relatively low cost even though many of them fail to reproduce.  Some organisms, by contrast, implement K-strategies, investing a lot of energy in producing few offspring who succeed in reproducing with relatively high probability.  Thus there may be cases in which accurate PEM-based perception is too costly, so natural selection favors cheaper neural processes--such as simpler implementations of PEM--that lead to many offspring having systematically inaccurate perceptions in common environmental conditions.  I illustrate this possibility with an agent-based simulation in which animals with a simpler PEM system compete against those with a more sophisticated one (a schematic sketch of such a competition appears below).  Such a case is not merely an instance of the kind of evolutionary tradeoff that most advocates of PEM would accept--namely, that PEM systems will result in misperception in some conditions because of tradeoffs in the costs of developing a system for each individual.  By contrast, selection for an r-strategy with respect to PEM can favor perceptual processes so environmentally inappropriate, for many individuals, that they routinely fail to reproduce.  Nevertheless, the PEM model may still be partly applicable to such perceptual r-strategy cases, but at a higher level.  Where natural selection favors an r-strategy with respect to PEM, it favors perception that's accurate on average within a lineage though inaccurate in many individuals.  In PEM r-strategy cases, natural selection favors a generalized PEM process that's distributed over members of a lineage, either because communication allows a multi-individual PEM process, or in a more abstract sense.  It might be thought that these points are irrelevant to the species of greatest interest to PEM researchers: humans, who invest a great deal of energy in producing few offspring who often reproduce, and who are thus K-strategists par excellence.  However, being a K- or r-strategist is a matter of degree, and "perceptual" processes can be accurate about some aspects of the world but inaccurate about others.  It's been argued that cultural complexity in human social life routinely leads to maladaptive assessments of social and environmental conditions, producing extinction of groups, migration, or adoption of new cultural variants from more successful individuals.  I suggest that we may be r-strategists with respect to higher-level, culturally mediated "perceptual" processes, even though we are K-strategists for basic perceptual and cognitive systems.
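The competition between the two strategies can be gestured at with a toy model.  The following is a minimal sketch, not the actual simulation from the talk: the budget, cost, and survival parameters are invented for illustration, and "perception" is reduced to a single inherited bias that lowers survival probability.

```python
# Minimal toy competition between K-strategist perceivers (few, costly,
# accurate offspring) and r-strategist perceivers (many, cheap offspring
# with an inherited perceptual bias).  All numbers are hypothetical.
import random

N = 1000                      # fixed carrying capacity
ENERGY = 100.0                # per-parent reproductive budget
COST_K, COST_R = 25.0, 2.0    # per-offspring investment for each strategy
GENERATIONS = 30

def survives_k():
    # Costly, accurate perception: survival is high and uniform.
    return random.random() < 0.9

def survives_r():
    # Cheap perception: each offspring inherits a perceptual bias;
    # survival falls with the size of the bias, so many individuals
    # misperceive and die, though perception is accurate on average
    # across the brood.
    bias = random.gauss(0.0, 1.0)
    return random.random() < max(0.0, 0.9 - 0.5 * abs(bias))

def surviving_offspring(parents, cost, survives):
    per_parent = int(ENERGY / cost)
    return sum(survives() for _ in range(parents * per_parent))

k, r = N // 2, N // 2
for _ in range(GENERATIONS):
    ks = surviving_offspring(k, COST_K, survives_k)
    rs = surviving_offspring(r, COST_R, survives_r)
    if ks + rs == 0:
        break
    # Survivors compete for N slots in proportion to lineage output.
    k = round(N * ks / (ks + rs))
    r = N - k

print(f"K-strategists: {k}   r-strategists: {r}   (of {N})")
```

On these hypothetical numbers the cheap, frequently misperceiving lineage typically displaces the accurate one: its surviving offspring per parent are more numerous even though many individual offspring perceive poorly.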

 


Tool Development: How Experiment-Driven Sciences Progress

John Bickle, Philosophy, Psychology, and Institute for Imaging and Analytic Technologies (I2AT), Mississippi State University; Neurobiology, University of Mississippi Medical Center

 

Philosophers of science have explored numerous ways that science progresses, but one way that hasn't been explored enough is tool development for novel experimental interventions. This is unfortunate, because tool development of this sort is the principal way that contemporary experiment-driven sciences progress. Cellular/molecular neurobiology coupled with behavioral neuroscience using animal models are such sciences, and numerous case studies are available for reflection. I'll focus here on optogenetics and DREADDs (Designer Receptors Exclusively Activated by Designer Drugs), technologies which offer experimenters unprecedented control over activity in specific neurons in behaving organisms. This case is especially interesting because it involves experimental tools still being developed--tool development progress in the making, right before our eyes. I'll use metascientific concepts I've developed (Bickle 2016)--motivating problem, and initial and second-phase hook experiments--to investigate some details of this case study. The results are philosophically interesting for at least three reasons: (1) they offer a distinctively un-Kuhnian model of what drives actual revolutions in experiment-driven sciences; (2) they focus attention on features of this case which are highly reminiscent of Ian Hacking's earlier suggestions about the relative independence of experiment from "theory"; and (3) they challenge the still-prevalent assumption that physics constitutes the "paradigmatic" science. I'll develop all three of these philosophical implications.

Bickle, J. (2016). "Revolutions in neuroscience: Tool development." Frontiers in Systems Neuroscience 10: 24. http://journal.frontiersin.org/article/10.3389/fnsys.2016.00024

 


Inferentialism in Biological Practice

Daniel Burnston, Department of Philosophy, Tulane University

 

I offer a practice-based argument for inferentialism about scientific representation.  The vast majority of views on scientific representation are referentialist in one form or another--they posit that scientific representations have their explanatory value in virtue of being similar to, isomorphic to, or producing a model of the system of interest.  The inferentialism defended by Suárez is the lone non-referentialist view currently on offer.  Suárez claims that referentialist views posit the wrong semantic properties for explaining representation in general, and thus that inferentialism is the best option.  While these arguments are effective as far as they go, they do not connect inferentialism to scientific practice.  My aim is to give a positive argument for inferentialism based on a case study from mammalian chronobiology.  Chronobiologists study biological time.  The case study I will discuss involves the representation of molecular "clock" systems within individual cells, wherein the interaction of gene regulators and promoters produces oscillations in molecular quantities over a 24-hour period.
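The molecular clock loop just described is, at its core, a delayed negative feedback: a gene product represses its own transcription, and the lag around the loop yields oscillation.  The sketch below is a generic Goodwin-style toy of that motif, not any of the specific models in the chronobiology literature under discussion; the rate constants and Hill exponent are invented, chosen only so that the toy oscillates with a roughly daily period.

```python
# Generic Goodwin-style negative-feedback loop: mRNA (m) -> protein (p)
# -> nuclear repressor (z) -| transcription of m.  A toy illustration,
# not the published clock models; all parameters are invented.

N_HILL = 12    # steepness of repression (oscillation needs roughly > 8)
A = 0.15       # shared turnover rate per hour, tuned toward a ~24 h period

def step(m, p, z, dt):
    dm = 1.0 / (1.0 + z**N_HILL) - A * m    # repressible transcription
    dp = A * (m - p)                        # translation and decay
    dz = A * (p - z)                        # nuclear import and decay
    return m + dm * dt, p + dp * dt, z + dz * dt

m, p, z, dt = 1.0, 1.0, 1.0, 0.01
peaks, prev, rising = [], z, False
for i in range(int(480 / dt)):              # 480 simulated hours
    m, p, z = step(m, p, z, dt)
    if rising and z < prev and i * dt > 120:   # record repressor peaks
        peaks.append(i * dt)                   # after transients decay
    rising, prev = z > prev, z

gaps = [b - a for a, b in zip(peaks, peaks[1:])]
print(f"approximate period: {sum(gaps) / len(gaps):.1f} hours")
```

On these toy parameters the period comes out near a day; the point is only that phase-locked oscillation of the interacting quantities drops out of the feedback structure that the case study's representations depict.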


I will focus on a particular form of representational practice: that of representing the same data set in distinct ways.  In a series of papers, Ukai-Tadenuma and colleagues developed a variety of ways of representing relationships between promoters and gene products, specifically to show important phase relationships between their activity and quantities.  These representations include data graphs, a vector model for representing phase relationships, and a network diagram for showing the causal sequence between interacting components.  I argue that each of these representations contributes something ineliminable to the explanation of circadian rhythms in mammalian cells, and that inferentialism is the only account that can explain this ineliminability.


The main argument against referentialism is that the distinct representations play distinct explanatory roles that are not based on their having distinct referential roles.  In fact, a vital aspect of the respective uses of the representations is that they represent the same relationships in distinct ways.  Since referentialism ties explanatory role to a reference relation, it cannot accommodate distinct explanatory roles where there is referential overlap.  I claim that this is true for any variety of referentialism.


I then offer a characterization of inferentialism that can explain the representational practice in the case study.  I argue that a given representation should be characterized as a triple, comprising (1) the set of inferences entailed by the representation (absent defeaters), (2) the set of inferences compatible with, but not entailed by, the representation, and (3) the set of inferences incompatible with the representation.  Individual representations are developed to convey specific inferences which they entail--however, these limited entailments can explain only parts of the phenomenon.  Alternative forms of representation are developed to make explicit inferences about other aspects of the phenomenon, inferences that fall within the set of compatible inferences for the already established types.  I claim that this view adequately explains both the development and the explanatory import of the representations in the case study.  I thus offer both a defense and an extension of inferentialism through a detailed analysis of practice.
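Read as a specification, the triple lends itself to a direct rendering.  The sketch below is my own toy encoding, not anything from the paper or the case study; the class name and the sample "inferences" (loosely inspired by circadian vocabulary) are invented placeholders.

```python
# Toy encoding of a representation as a triple of inference sets:
# entailed (licensed absent defeaters), compatible (permitted but not
# licensed), and incompatible (ruled out).  Examples are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Representation:
    name: str
    entailed: frozenset
    compatible: frozenset
    incompatible: frozenset

    def complements(self, other: "Representation") -> bool:
        # A new representation earns its keep when what it entails was
        # merely compatible with (and never excluded by) this one.
        return (other.entailed <= self.compatible
                and not other.entailed & self.incompatible)

phase_model = Representation(
    name="vector model of phase relations",
    entailed=frozenset({"promoter activities peak ~12 h apart"}),
    compatible=frozenset({"a repressor mediates the two promoters"}),
    incompatible=frozenset({"the oscillations are uncoupled"}),
)
network_diagram = Representation(
    name="network diagram of causal sequence",
    entailed=frozenset({"a repressor mediates the two promoters"}),
    compatible=frozenset({"promoter activities peak ~12 h apart"}),
    incompatible=frozenset({"the oscillations are uncoupled"}),
)

# Same subject matter, distinct explanatory roles: each entails what
# the other merely permits.
print(phase_model.complements(network_diagram))     # True
print(network_diagram.complements(phase_model))     # True
```

The referential overlap is deliberate: both toy representations concern the same relationships, yet their inferential profiles, and hence their explanatory roles, differ.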

 


A Diagrammatic Framework to Aid Discovery of Genetic Disease Mechanisms

Lindley Darden, Professor of Philosophy and of History at the University of Maryland College Park, and member of the Moult Laboratory for Computational Biology at the Institute for Bioscience and Biotechnology Research at UM Shady Grove

 

The nature of the product to be discovered guides the reasoning to discover it. Biologists and medical researchers often search for mechanisms. The "new mechanistic philosophy of science" provides resources concerning the nature of biological mechanisms that aid their discovery, and philosophers have begun applying it to the discovery of mechanisms in medicine. A diagrammatic representation of disease mechanisms, indicating both what is known and what is not known at a given time, guides the researcher and collaborators in discovery. Mechanisms of genetic diseases provide the examples.
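One minimal way to encode such a diagram is to list a mechanism's stages in productive order and flag each as known or as a black box still to be filled in.  The sketch below is my own invented illustration (using a deliberately simplified cystic fibrosis example), not Darden's actual framework or notation.

```python
# A mechanism schema as an ordered list of stages, each flagged as
# known (True) or a black box (False).  Deliberately simplified CFTR
# example; the encoding and notation are invented for illustration.
mechanism = [
    ("CFTR gene variant", True),
    ("misfolded CFTR protein", True),
    ("fate of the misfolded protein (degradation? mistrafficking?)", False),
    ("reduced chloride transport at the cell membrane", True),
    ("thickened mucus and disease phenotype", True),
]

def render(stages):
    # Bracket the unknown stages so gaps are visible at a glance and
    # can be targeted by the next experiment.
    return "  -->  ".join(desc if known else f"[?? {desc} ??]"
                          for desc, known in stages)

print(render(mechanism))
```

The explicit False flags are the point: the diagram records not just the current account of the mechanism but where discovery should go next.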

 


Non-Mechanistic Explanations with Mechanistic Grounds

Nicholaos Jones, Department of Philosophy, University of Alabama in Huntsville

 

Paradigmatic explanations in molecular and cell biology are mechanistic. Recent systems approaches, however, pursue strategies that depart from this paradigm, appealing to topological properties, organizing principles, and other non-mechanistic features of biological systems. The explanatory status of these strategies is unclear. Insofar as the non-mechanistic features presuppose mechanistic details, the strategies seem to be explanatory by virtue of being mechanistic; but insofar as the same features float free from such details, the strategies seem either to lack explanatory power or to provide only sketches awaiting proper mechanistic detail. I resolve this issue by interpreting the systems-oriented strategies as explanatory and non-mechanistic despite their reliance upon mechanistic detail. I do so by distinguishing explanatory power from explanatory grounding, arguing that explanations with mechanistic grounding need not receive their power from mechanistic details.

 


Mathematical Explanation and the Biological Optimality Fallacy

James Justus, Department of Philosophy, Florida State University

(with Samantha Wakil, Department of Philosophy, University of North Carolina)

 

Pure mathematics can play an indispensable role in explaining empirical phenomena if recent accounts of insect evolution are correct. In particular, the prime life-cycles of cicadas and the geometric structure of honeycombs are taken to undergird an inference to the best explanation about mathematical entities. Neither example supports this inference or the mathematical realism it is intended to establish: both incorrectly assume that facts about mathematical optimality drove selection for the respective traits and explain why those traits exist. Finally, we show how this problem can be avoided, identify limitations of explanatory indispensability arguments, and attempt to clarify the nature of mathematical explanation.
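For concreteness, the optimality reasoning under critique can be put numerically.  The sketch below reproduces the usual cicada argument in toy form: prime life cycles maximize the interval between coincidences with hypothetical periodic predators.  The 2-to-9-year predator range is an illustrative assumption, not a biological datum.

```python
# Toy version of the prime-life-cycle optimality argument: how often
# does a cicada emergence coincide with a periodically peaking predator?
# The assumed predator periods (2-9 years) are purely illustrative.
from math import lcm

PREDATOR_CYCLES = range(2, 10)   # hypothetical predator periods, in years

def mean_coincidence_interval(cycle):
    # An emergence meets a p-year predator every lcm(cycle, p) years;
    # average that interval over the assumed predators.
    intervals = [lcm(cycle, p) for p in PREDATOR_CYCLES]
    return sum(intervals) / len(intervals)

for cycle in range(12, 19):
    print(f"{cycle}-year cycle: coincidence every "
          f"{mean_coincidence_interval(cycle):5.1f} years on average")

# The primes 13 and 17 -- the observed Magicicada periods -- score
# best here.  That is the putative mathematical fact said to do the
# explanatory work; the paper argues the inference is fallacious.
```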

 


What Role for "Fitness"?

Charles H. Pence, Department of Philosophy, Louisiana State University

 

A number of recent works in philosophy of biology have--in perhaps unexpected ways, I will argue--raised an interesting question about fitness. Namely, what is fitness actually for in the philosophy of biology? I will offer some novel arguments about the status of "predictive fitness" (in the sense of Matthen and Ariew 2002), and will connect these to recent work by Abrams, Birch, and Millstein. I conclude by drawing some speculative morals about where we should direct future work.

 


Niche Construction and the Biology of Normativity

Richard Richards, Department of Philosophy, University of Alabama

 

A typical response to naturalistic accounts of normativity focuses on the so-called "normativity gap" between explanatory reasons and normative reasons, arguing that science can only provide explanations of why we do things, but cannot provide the reasons we have to do things, independent of actual desires, preferences, etc. But standard naturalistic accounts are not based on a full naturalistic framework that includes human ecology and niche construction. When we look at human niche engineering, we can see that there is a niche-dependent normativity that is social and is based on the cognitive and institutional technologies within a niche. But there is also a niche-independent normativity that is personal and individual, and independent of these cognitive and institutional technologies. One insight of this account of normativity is the recognition of a common conflict between these niche-dependent and niche-independent streams of normativity.

 


Christianity and Darwinism: The Paradox of War

Michael Ruse, Lucyle T. Werkmeister Professor and Director of the History and Philosophy of Science Program at Florida State University

 

This spring marks the centenary of the entry of the United States into the Great War (WWI) in 1917. To pay tribute to the men and women of all sides in that dreadful conflict, I look at the rival views of Christians and Darwinians about war, and especially about that war. I argue that there is an interesting paradox: Christians hate war but think it inevitable; Darwinians think that in some sense war was a good thing, but that now we must and can transcend it.

 



My Unlucky Generation: The Unanticipated Casualties of the Chickenpox Vaccine

Laura Seger, Department of History and Philosophy of Science and Medicine, Indiana University, Bloomington; and Department of Philosophy, University of Alabama at Birmingham

 

"They get younger every year" is a common anecdotal observation among physicians regarding patients with shingles. What was once (and, for most of the public, still is) regarded as an old person’s disease, shingles is now appearing more and more frequently among those in their 30s. One theory for this decrease in average susceptible age centers on the US introduction of the chickenpox vaccine in 1995. Shingles and chickenpox are caused by the same virus, which lies dormant in nerve clusters along the spinal cord following the primary infection (i.e. chickenpox). Reactivation of the virus results in a painful, localized rash known as shingles. Compromised immune systems are less able to suppress viral reactivation, and immune systems typically weaken with age, which is why shingles has historically been viewed as a disease of the elderly. But following the introduction of the chickenpox vaccine in 1995, those with the dormant virus stopped receiving "booster shots" of antibodies from being exposed to the live virus shed by those suffering from chickenpox. In simpler terms, parents are now getting sick because their kids aren't. Since the shingles vaccine (which is essentially just a larger dose of the chickenpox vaccine) is only approved by the FDA for those aged 50 and over, anyone younger than that who had chickenpox prior to 1995--which includes almost everyone now in their 30s or 40s--has no way to boost their chickenpox/shingles antibodies. This unlucky collection of approximately 84 million individuals constitutes a full quarter of the entire US population. My paper explores the bioethical ramifications of lowering the approved age for the shingles vaccine and argues, instead, for public health initiatives aimed at disseminating the early warning signs of the disease, particularly to those who mistakenly believe they're too young to be at risk.