Ontology & Methodology 2013
Titles and Abstracts
Special Invited Speakers
David Danks, “The Myriad Influences of Goals on Ontology”
Much of the normative literature on scientific ontology focuses on
correspondence with “truth” (in some sense of that word). Roughly, the
thought is that our scientific ontology ought to correspond to the
“true” divisions in the world, where deviations arise because of
epistemic constraints (e.g., we are unable to acquire the relevant
data) or accidental features of our scientific investigations (e.g.,
the particular devices that we use in our experiments). On this
picture, pragmatic elements such as goals or intended functions do not
play any normative role, but simply act as distractions. In contrast, I
will argue in this talk that such pragmatic factors actually have a
normative role in science: the ontology that we ought to have depends
in part on our goals. Moreover, multiple aspects of these pragmatic
elements matter, including (but not only) type and timescale of desired
prediction, relevant computational constraints, and range of
applicability of a theory. That is, the normative role of pragmatic
factors is not limited to just one small aspect, but rather permeates
scientific theories and ontologies.
Peter Godfrey-Smith, "Evolution and Agency: A Case Study in Ontology and Methodology"
Kevin Hoover, “The Ontological Status of Shocks and Trends in
Macroeconomics”
Modern empirical macroeconomic models, known as structural vector
autoregressions (SVARs), are dynamic models that typically
claim to represent a causal order among contemporaneously valued
variables and to merely represent non-structural (reduced-form)
co-occurrence between lagged variables and contemporaneous variables.
The strategy is held to meet the minimal requirements for identifying
the residual errors in particular equations in the model with
independent, though otherwise not directly observable, exogenous causes
(“shocks”) that ultimately account for change in the model. In
nonstationary models, such shocks accumulate so that variables have
discernible trends. Econometricians have conceived of variables that
trend in sympathy with each other (so-called “cointegrated variables”)
as sharing one or more of these unobserved trends as a common cause. It
is possible to back out estimates of the values of both the otherwise
unobservable individual shocks and the otherwise unobservable common
trends. The issue addressed in this paper is whether and in what
circumstances these values can be regarded as observations of real
entities rather than merely artifacts of the representation of
variables in the model. The issue is related, on the one hand, to
practical methodological problems in the use of SVARs for policy
analysis – e.g., does it make sense to estimate shocks or trends in
one model and then use them as measures of variables in a conceptually
distinct model? The issue is also related to debates in the
philosophical analysis of causation – particularly, whether we are
entitled, as the developers of Bayes Nets approaches assume, to rely
on the causal Markov condition (a generalization of Reichenbach’s
common-cause condition) or whether cointegration generates a practical
example of Nancy Cartwright’s “byproducts” objection to the causal
Markov condition.
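As a rough illustration of how such shock estimates can be “backed out” in practice, the sketch below simulates a bivariate VAR, estimates it by least squares, and recovers the structural shocks from the reduced-form residuals under a recursive (Cholesky) identification assumption. The identification scheme, variable ordering, and all numbers are assumptions made for the illustration, not details taken from the abstract.

# Illustrative sketch (not from the abstract): backing out otherwise
# unobservable structural shocks from a bivariate VAR(1) under an assumed
# recursive (Cholesky) identification. All names and numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T = 500

eps = rng.standard_normal((T, 2))          # "true" structural shocks (unobserved)

B0 = np.array([[1.0, 0.0],                 # structural impact matrix (lower
               [0.5, 1.0]])                # triangular = recursive ordering)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])                # VAR(1) lag matrix

# Simulate y_t = A1 y_{t-1} + B0 eps_t.
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + B0 @ eps[t]

# Estimate the reduced-form VAR(1) by OLS and keep the residuals u_t.
X, Y = y[:-1], y[1:]
A1_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
u = Y - X @ A1_hat.T

# Identification step: factor the residual covariance and back out the
# shock estimates as eps_hat = B0_hat^{-1} u.
B0_hat = np.linalg.cholesky(np.cov(u.T))
eps_hat = np.linalg.solve(B0_hat, u.T).T

# The recovered series track the unobserved shocks up to sampling error.
print(np.corrcoef(eps[1:, 0], eps_hat[:, 0])[0, 1])
print(np.corrcoef(eps[1:, 1], eps_hat[:, 1])[0, 1])

Whether such recovered series deserve to be treated as observations of real exogenous causes, rather than artifacts of the chosen representation and identification scheme, is exactly the question the abstract raises.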
Laura Ruetsche, "Method, Metaphysics, and Quantum Theory"
I will express suspicion of the idea (call it “naturalism”) that there exist scientific grounds for choosing any unified metaphysics, much less some best metaphysics. And I will express support for a counter position (call it “locavoracity”) according to which the “metaphysics” of modern science are various, and healthily so. To motivate both my suspicion and my support, I will distinguish the “naturalist” position from the locavore one, paying attention to the ways the project of interpretation informs each. Then I will champion a methodological commitment that, in the arena of interpretation, gives the “naturalist” position a fighting chance to establish itself, but leaves room for the locavore to operate. Finally, I will draw from projects of interpreting a significant family of physical theories, including quantum field theories, considerations in favor of locavoracity.
James Woodward, "The Problem of Variable Choice"
Issues about the choice of variables for causal analysis (which one can think of as a sort of naturalistic analog of issues about choice of ontology) arise across many different disciplines. For example, the results of any procedure for inferring causal relationships from statistical information (plus other assumptions) will be sensitive to the variables one chooses to employ because (among other considerations) statistical independence and dependence relations depend on variable choice: if X and Y are independent, then V = X + Y and U = X - Y will be dependent. A procedure that finds no causal relationship between X and Y may find such a relationship between U and V. Standard procedures for causal inference give us advice about what conclusions to draw given statistical relationships among some set of variables but have little or nothing to say about where the variables themselves come from or when one choice of variables is superior to another. As another illustration, causal inference for complex systems requires (among other things) variable reduction, in which causal relationships in systems consisting of components characterizable by a very large number of variables are represented by a much smaller number of aggregate or coarse-grained variables, with the results of the analysis again being sensitive to the choice of aggregation procedure.
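The point that statistical (in)dependence shifts under a redescription of the variables can be made concrete with a small simulation; the particular distributions below (independent Gaussians with unequal variances) are illustrative assumptions, chosen so that the sum and difference come out correlated.

# Minimal sketch (not from the abstract): redefining variables changes
# statistical (in)dependence. X and Y are independent, but V = X + Y and
# U = X - Y are correlated whenever Var(X) != Var(Y), since
# Cov(V, U) = Var(X) - Var(Y).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

X = rng.normal(0.0, 1.0, n)      # Var(X) = 1
Y = rng.normal(0.0, 2.0, n)      # Var(Y) = 4, drawn independently of X

V = X + Y
U = X - Y

print(np.corrcoef(X, Y)[0, 1])   # ~ 0.0: X and Y are independent
print(np.corrcoef(V, U)[0, 1])   # ~ -0.6: U and V are clearly dependent

Any inference procedure keyed to independence tests will therefore reach different conclusions depending on whether it is handed (X, Y) or (U, V).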
One possible attitude toward the issue of variable choice is that there is nothing systematic to say about it—appropriate choice of variables depends upon the details of the particular system under analysis in a way that defies description by any general principles. (It is an “art” belonging to the context of discovery, or the product of some hazy, holistic (and ineffable) Quinean process of belief adjustment.) At the end of the day, this position may be correct but I propose, quixotically, to explore the opposite possibility: that there are useful heuristics of some generality that can be used to guide variable choice. Here are some possible candidates (there are lots of others):
1) Choose variables that have unambiguous effects under manipulation, rather than variables that have heterogeneous effects. So total cholesterol is a bad variable if we are interested in effects on cardiac health, and splitting it into low- and high-density cholesterol is better (see the sketch following this list).
2) Choose variables which allow for the formulation of causal relationships satisfying the Markov Condition or which satisfy standard expectations about the relationship between causation and screening-off relations.
3) Choose variables that are causally specific in the sense that inference procedures applied to those variables yield the result that they affect only a limited number of other variables in some set of interest rather than indiscriminately affecting all of them.
4) Choose variables so that causes are proportional to their effects in Yablo’s sense. Woodward (2010) attempts to spell out this idea within an interventionist framework. The idea is also related to the notion of a minimal model understood as a model that includes all and only difference making factors for some set of explananda.
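The sketch promised under heuristic 1: a toy simulation in which an aggregate variable has heterogeneous effects under manipulation. The linear risk model and all numbers are invented for illustration; the only point is that an intervention on “total cholesterol” has no unambiguous effect, whereas interventions on its components do.

# Toy illustration of heuristic 1 (all quantities invented): the aggregate
# "total cholesterol" has heterogeneous effects under manipulation.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

ldl = rng.normal(130.0, 20.0, n)
hdl = rng.normal(55.0, 10.0, n)

def cardiac_risk(ldl, hdl):
    # Hypothetical structural equation: LDL raises risk, HDL lowers it.
    return 0.02 * ldl - 0.03 * hdl + rng.normal(0.0, 1.0, n)

baseline = cardiac_risk(ldl, hdl).mean()

# Two interventions that raise "total cholesterol" by the same 20 units
# move mean risk in opposite directions:
via_ldl = cardiac_risk(ldl + 20.0, hdl).mean() - baseline   # ~ +0.4
via_hdl = cardiac_risk(ldl, hdl + 20.0).mean() - baseline   # ~ -0.6
print(via_ldl, via_hdl)

# What a manipulation of the aggregate does to risk depends entirely on how
# it is realized at the finer-grained (LDL/HDL) level, which is what makes
# it a poor causal variable here.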
One should not think of these as heuristics that can be applied completely de novo or in a presuppositionless Cartesian fashion, guaranteeing success regardless of starting point. I assume instead that we begin with an initial set of variables (which correspond to some initial empirical assumptions about the things and relationships in the domain of interest) and then apply the heuristics to them, which may mean that the heuristics won’t work at all if we begin at a sufficiently unfavorable starting point (e.g., some grotesquely gruified initial set of variables). But to the extent that the heuristics turn out to be useful, one may perhaps think of them as part of a process of learning an appropriate ontology for some domain of interest, and in this respect potentially more illuminating than analytic metaphysicians’ appeals to ideas about perfectly natural properties and the like.
References
Woodward, J. (2010). “Causation in Biology: Stability, Specificity, and the Choice of Levels of Explanation.” Biology and Philosophy 25: 287-318.
Yablo, S. (2003). “Causal Relevance.” Philosophical Issues 13 (Philosophy of Mind): 316-328.
Virginia Tech Speakers
Benjamin Jantzen, “The Algebraic Conception of Natural Kinds”
Deborah Mayo & Aris Spanos, “Ontology &
Methodology in Statistical Modeling”
Lydia Patton, “Theory Assessment and Ontological Argument”
Contributed Papers
Erik Angner, “Behavioral vs. Neoclassical Economics: A
Weberian Analysis”
This paper examines the epistemological status of
neoclassical economic theory within behavioral economics: the attempt
to increase the explanatory and predictive power of economic theory by
providing it with more psychologically plausible foundations, where
“psychologically plausible” means consistent with the best available
psychological theory. Behavioral economics, the runaway success story
of contemporary economics, is often described as having established
that the assumptions underlying orthodox, neoclassical economic theory
of rational decision making are false. Yet, behavioral economists have
gone out of their way to praise those very assumptions. Consider
Matthew Rabin, who writes that behavioral economics “is not only built
on the premise that economic methods are great, but also that most
mainstream economic assumptions are great. It does not abandon the
correct insights of neoclassical economics, but supplements these
insights with the insights to be had from realistic new assumptions.”
These apparently contradictory attitudes toward neoclassical theory
raise the question of what, exactly, the epistemological status of
neoclassical theory within behavioral economics is. This paper argues
that the apparently contradictory attitudes can be reconciled, and the
question answered, if we think of the epistemological status of
neoclassical economics within behavioral economics as that of Max
Weber’s ideal types: analytical constructs which are not intended to be
empirically adequate but which nevertheless can be used for a variety
of theoretical purposes. The episode illustrates (a) how initial
conjectures about the entities and processes under scrutiny influence
the structure of theories, the choice of variables, and how to
interpret what they really say about the world; (b) how historical
analysis of the development of scientific theories illuminates the
interplay between scientific methodology, theory building, and the
interpretation of scientific theories; and (c) how specifications of
data generation, statistical modeling and analysis influence the
construction and appraisal of theories.
Hayley Clatterbuck, “Drift Beyond Wright-Fisher: The
Predictive Inequivalency of Drift Models”
A lively debate in the philosophy of biology of late has
been concerned with the very nature of evolutionary theory. The
dominant view, which I will call the causal theory, holds that the
theory of evolution by natural selection describes how various causes
impinging on individuals and populations interact to produce changes in
trait frequencies (Sober 1984; Stephens 2004; Shapiro and Sober 2007;
Reisman and Forber 2005). A competing view, the statistical theory,
inspired by the mathematization of population genetics, holds that
evolution is a statistical theory which abstracts from the causes of
individual lives and deaths to describe statistical trends in trait
frequencies (Matthen and Ariew 2002; Walsh 2000; Walsh, Lewens, and
Ariew 2002). Much of this debate has been concerned with the status of
drift, in particular. On the causal theory, drift is counted as one of
the causes of evolution, and to explain some evolutionary outcome as
the result of drift is to offer a causal explanation. On the other
side, statistical theorists often hold that drift is not a cause of
evolution, and further, that drift explanations are non-causal, purely
statistical explanations. Proponents of both of these theories have
implicitly, and oftentimes explicitly, assumed that there is some
unitary measure of drift. Typically, this measure is assumed to be the
effective population size as described by the Wright-Fisher model of
genetic drift. However, recent work by Der, Epstein, and Plotkin (2011)
has shown that there are alternative models of drift that share several
key features of drift as described by the Wright-Fisher model (features
that most authors have taken to be constitutive of drift) but differ
significantly in their predictions regarding other evolutionary
outcomes. While previous work in philosophy of biology has acknowledged
the presence of alternative models of drift, it has largely assumed
that these models are predictively equivalent and differ only in
tractability or the biological plausibility of their assumptions
(Millstein, Skipper, and Dietrich 2009). My goal will be to examine the
extent to which arguments regarding the causal nature of drift rely on
the assumption that there is a single model or a plurality of
predictively equivalent models of drift. If we reject the notion that
the effective population size always determines (either causally or
statistically) the outcome of drift the same way in all populations, as
the work of Der, Epstein, and Plotkin suggests that we should, does
either theory become less plausible? I will argue in the affirmative;
once it is acknowledged that there are different kinds of drift,
arguments for the causal nature of drift and drift explanations can be
buttressed against criticisms from the statistical theorists. The two
key statistical features of drift are (a) that the expected (mean)
allele frequency in generation k+1 is the same as in generation k, and
(b) that the variance of allele frequencies in the k+1 generation is a
function of the effective population size, where allele frequencies
departing from the mean are increasingly likely as the effective
population size decreases. The type of drift described by the standard
Wright-Fisher model displays these formal properties, as does drift as
described by other models, such as Moran, Eldon-Wakeley, or Cannings
models. This equivalency with respect to the first two statistical
moments of drift processes, paired with the common assumption among
philosophers of biology that the standard Wright-Fisher model is the
model of drift, has created the illusion that there is a single type of
drift with a magnitude determined by the effective population size.
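For concreteness (a standard formulation of the Wright-Fisher moments, not taken from the abstract; here $N_e$ is the diploid effective population size and $p_k$ the allele frequency in generation $k$): the next generation is formed by binomial sampling of $2N_e$ gene copies, so

\[
  p_{k+1} \mid p_k \sim \frac{1}{2N_e}\,\mathrm{Binomial}(2N_e,\ p_k),
  \qquad
  \mathbb{E}[p_{k+1} \mid p_k] = p_k,
  \qquad
  \mathrm{Var}(p_{k+1} \mid p_k) = \frac{p_k(1-p_k)}{2N_e},
\]

which are exactly features (a) and (b): the mean frequency is unchanged, and the variance grows as the effective population size shrinks. Models that match these two conditional moments can nonetheless differ in their higher moments, which is the wedge exploited in what follows.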
However, as Der, Epstein, and Plotkin (2011) show, different models do
make different predictions about outcomes corresponding to higher
statistical moments of drift, such as the expected time to fixation of
a neutral allele and the probability of fixation of a selectively
favored allele. Importantly, some of these alternative models are more
accurate models of certain populations and drift processes. While the
Wright-Fisher model is a good model of gametic sampling, it is an
unrealistic model for some forms of parental sampling and for
populations with skewed offspring distributions. More accurate models
of these populations make strikingly different predictions. For
instance: “In particular, the form of genetic drift that arises in
populations with reproductive skew tends to amplify the effects of
selection, relative to the standard form of Wright–Fisherian drift—even
when both models are normalized to have the same variance-effective
population size. The extent of selection amplification in the
Eldon–Wakeley model can be so dramatic that advantageous alleles may
even be guaranteed fixation in a population, despite the presence of
genetic drift” (Der, Epstein, and Plotkin 2012, 1332). These results
have important ramifications for the debate regarding the causal status
of drift and drift explanations. Statistical theorists have argued that
drift explanations cite merely mathematical features of populations and
therefore do not reveal drift to be a cause; instead, they show that
drift is the type of statistical error common to any stochastic process
with a limited number of trials. However, I argue that to explain why
some particular allelic state obtains in a population evolving by
drift, merely citing the population’s size is sometimes insufficient.
The explanation can be deepened by citing causal features of the
evolving population which make some outcomes more probable than others.
In support of this argument, I explain the alternative models
discovered by Der, Epstein, and Plotkin and then generalize their
theoretical work to identify some causal features of populations
evolving under drift that make a difference to their dynamics. I will
then show how these features can be manipulated to yield a change in
evolutionary outcomes. These manipulations redeem drift as a cause of
evolution, and are not subject to the criticisms that have been leveled
against Reisman and Forber’s (2005) manipulationist argument for the
causal theory. Lastly, I will illustrate the form that causal drift
explanations take with the example of a debate regarding the
plausibility of the drift processes postulated by Wright’s Shifting
Balance Theory.
Koray Karaca, “The method of robustness analysis and the
problem of data-selection at the ATLAS experiment”
In the first part, I characterize and distinguish between
two problems of “methodological justification” that arise in the
context of scientific experimentation. What I shall call the “problem
of validation” concerns the accuracy and reliability of experimental
procedures through which a particular set of experimental data is first
acquired and later transformed into an experimental result. Therefore,
the problem of validation can be phrased as follows: how to justify
that a particular set of data as well as the procedures that transform
it into an experimental result are accurate and reliable, so that the
experimental result obtained at the end of the experiment can be taken
as valid. On the other hand, what I shall call the “problem of
exploration” is concerned with the methodological question of whether
an experiment is able, either or both, (1) to provide a genuine test of
the conclusions of a scientific theory or hypothesis if the theory in
question has not been previously (experimentally) tested, or to provide
a novel test if the theory or hypothesis in question has already been
tested, and (2) to discover completely novel phenomena; i.e., phenomena
which have not been predicted by present theories and have not been
detected in previous experiments. Even though the problem of validation
and the ways it is dealt with in scientific practice have been thoroughly
discussed in the literature on scientific experimentation, the significance of
the problem of exploration has not yet been fully appreciated. In this
work, I shall address this problem and examine the way it is handled in
the present-day high collision-rate particle physics experiments. To
this end, I shall consider the ATLAS experiment, which is one of the
Large Hadron Collider (LHC) experiments currently running at CERN. In
present-day particle physics experiments, such as the ATLAS experiment,
due to technical limitations in both data storage capacity and
recording rate, only a very small fraction of the total collision events can
be recorded by the detectors when two beams of particles collide inside
the collider. What are called “interesting events” are those collision
events that are taken to serve to test the as-yet-untested predictions
of the Standard Model of particle physics (SM) and its possible
extensions, as well as to discover completely novel phenomena not
predicted before by any theories or theoretical models. Interesting
events have very small occurrence rates and are swamped by the background
coming from more probable events. In present-day particle physics
experiments, the process of data-acquisition is carried out by means of
“trigger systems” that select data according to certain predetermined
criteria. In the case of the ATLAS experiment, the data-acquisition
process is performed in three sequential levels of increasing
complexity and delicacy. Potentially interesting events are first
acquired at what is called the “level-1 trigger.” This first level
provides a “crude” selection of the interesting events out of the total
collision events detected by the sub-detectors; within 2.5 microseconds
it provides a trigger decision and reduces the LHC event rate of ~ 40
MHz down to the range of 75-100 kHz. The second stage in the data
selection consists of two sub-stages, namely “level-2 trigger” and
“event building.” The second level of data selection is much more
fine-grained than the first level and its event accept rate is around 2
kHz. Finally, the third level, which is called the “event filter,” has an
accept rate of 200 Hz and thus can perform a more refined event
selection than the second level. It is to be noted that the level-1
trigger is of great importance for the data selection process at the
ATLAS experiment. The event data are first acquired at this level; if
an interesting event is missed at that level, it is strictly
impossible to retrieve it at the subsequent levels of selection. This
built-in selectivity causes a problem of exploration that concerns, in
the first place, the justification of the selection criteria adopted at
the level-1 trigger; namely, how to justify that the selection
criteria yield a range of data broad enough to be of use in testing
not only the SM’s prediction of the Higgs particle but also certain
predictions of some possible extensions of the SM—such as models based
on supersymmetry, technicolor and extra dimensions—which the ATLAS
experiment aims to explore. In the final part, referring
to the publications of the ATLAS Collaboration, I shall first show that
the problem of exploration I have characterized above is of great
concern for the physicists involved in the ATLAS experiment. Then, I
shall argue that this problem of exploration is solved through a
particular method of robustness that consists of four steps. In Step-1,
in addition to the SM, a sufficiently diverse set of models/theories
beyond the SM (such as super-symmetric models, extra-dimensional models
and models with heavy gauge bosons, etc.) are considered. In Step-2,
various models/theories considered in Step-1 are examined and a
common/robust property among them is identified. The robust property
identified in the case of the ATLAS experiment is the prediction of a
heavy particle by each of the models/theories considered in Step-1. In
Step-3, a hypothesis whose phenomenological conclusions can be
experimentally explored is formulated. In the ATLAS case, the relevant
phenomenological hypothesis is: “Heavy particles predicted by each of
the theories/models considered in Step-1 decay into ‘high-transverse
momentum’ particles or jets up to the TeV energy scale.” It is to be
noted that this hypothesis can be experimentally ascertained in the
ATLAS experiment, which can probe energies up to the TeV energy scale.
The final step, namely Step-4, consists of two sub-steps: First,
various selection criteria relevant to the phenomenological hypothesis
constructed in Step-3 are established. Second, those selection criteria
are slightly changed to check whether experimental results remain
nearly stable under slight variations of selection criteria. Lastly, I
shall point out that the robustness analysis used in the case of the
ATLAS experiment serves to secure the data-selection procedures rather
than the accuracy of experimental results obtained, and in this sense
it qualitatively differs from the form of robustness—previously
discussed by Allan Franklin (1998, 2002) in various experimental
contexts—that amounts to the stability of the experimental results
across different subsets of the data sample.
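The stability check in Step-4 can be sketched schematically; the toy event sample, the pT thresholds, and the “result” (a simple selected-event count) below are all invented for illustration and are not taken from ATLAS publications. The sketch only shows the shape of the procedure: fix a selection criterion, vary it slightly, and see whether the outcome remains nearly stable.

# Schematic sketch of the Step-4 stability check (not ATLAS code; the toy
# event model, thresholds, and "result" are invented).
import numpy as np

rng = np.random.default_rng(3)

# Toy events, each summarized by its leading transverse momentum in GeV:
# a steeply falling background plus a small high-pT "signal".
background_pt = rng.exponential(scale=40.0, size=1_000_000)
signal_pt = rng.normal(loc=400.0, scale=50.0, size=5_000)
events_pt = np.concatenate([background_pt, signal_pt])

def analysis_result(pt_threshold):
    # Apply a level-1-style high-pT selection and return a crude
    # experimental "result": the number of selected events.
    return int(np.sum(events_pt > pt_threshold))

# First sub-step of Step-4: fix a nominal selection criterion.
nominal_cut = 275.0
nominal = analysis_result(nominal_cut)

# Second sub-step: vary the criterion slightly and inspect how much the
# result moves. Small shifts indicate the result is robust to the exact
# cut; large shifts suggest it is an artifact of the chosen criterion.
for cut in (250.0, 275.0, 300.0):
    result = analysis_result(cut)
    shift = (result - nominal) / nominal
    print(f"cut = {cut:5.1f} GeV  selected = {result:6d}  shift = {shift:+.1%}")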
References:
Franklin, A. (1998): “Selectivity and the production of experimental results,” Archive for History of Exact Sciences 53: 399-485.
Franklin, A. (2002): Selectivity and Discord: Two Problems of Experiment, University of Pittsburgh Press.
Steinle, F. (1997): “Entering New Fields: Exploratory Uses of Experimentation,” Philosophy of Science 64 (Supplement): S65-S74.
Steinle, F. (2002): “Experiments in History and Philosophy of Science,” Perspectives on Science 10: 408-432.
Alexandre Marcellesi, “Counterfactuals and ‘nonmanipulable’
properties: When bad methodology leads to bad metaphysics (and
vice-versa)”
I argue that advocates of the counterfactual approach to
causal inference (e.g. Rubin and Holland) are wrong to dismiss
so-called ‘nonmanipulable’ properties such as race and sex as lacking
well-defined causal effects. I argue that they are led to this
problematic metaphysical conclusion by following a rule, which I call
‘Holland’s Rule’, that is inadequate given the way the counterfactual
approach defines causal effects.
Elay Shech, “Phase Transitions, Ontology and Earman's Sound
Principle”
I present a novel approach to the recent scholarly debate
that has arisen with respect to the philosophical import one should
infer from scientific accounts of “Phase Transitions,” by appealing to
a distinction between “representation” understood as “denotation,” and
“faithful representation” understood as a type of “guide to ontology.”
It is argued that the entire debate over phase transitions is misguided,
for it stems from a pseudo-paradox that does not license the type of
claims made by scholars, and that what is really interesting about
phase transitions is the manner in which they force us to rethink issues
regarding scientific representation.