Value Theory Workshops
Workshops for 2020-21
All workshops will take place online via Zoom until further notice.
Session 1: Sunday September 27
10:00 am - 11:15 am MST
H. Orri Stefánsson (University of Stockholm & Swedish Collegium for Advanced Studies)
Jake Nebel (University of Southern California)
"Calibration Dilemmas in the Ethics of Distribution"
This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several "calibration dilemmas," in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities (e.g., inequalities in which half the population would gain an arbitrarily large quantity of well-being or resources). We first lay out a series of such dilemmas for a family of broadly prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that, despite avoiding the dilemmas for prioritarianism, they are subject to even more forceful calibration dilemmas. We then show how our results challenge common utilitarian accounts of the badness of inequalities in resources (e.g., wealth inequality). These dilemmas leave us with a few options, all of which we find unpalatable. We conclude by laying out these options and suggesting avenues for further research.
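For orientation, broadly prioritarian theories are standardly represented as evaluating a distribution of well-being levels by a sum of concavely transformed individual levels; the schematic below is a textbook form with notation introduced here for illustration, not drawn from the paper itself.

\[
  V(w_1, \dots, w_n) \;=\; \sum_{i=1}^{n} f(w_i), \qquad f \text{ increasing and strictly concave},
\]

where $w_i$ is the well-being of individual $i$. A calibration dilemma arises when the degree of concavity needed to register aversion to small-stakes inequalities commits such a theory to implausibly extreme aversion to some large-stakes inequalities.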
Session 2: Sunday November 1
10:00 am - 11:15 am MST
Glauber De Bona (Universidade de São Paulo)
Julia Staffel (University of Colorado at Boulder)
"Updating Incoherent Credences - Extending the Dutch Strategy Argument for Conditionalization"
In this paper, we ask: how should an agent who has incoherent credences update when they learn new evidence? The standard Bayesian answer for coherent agents is that they should conditionalize; however, this updating rule is not defined for incoherent starting credences. We show how one of the main arguments for conditionalization, the Dutch strategy argument, can be extended to devise a target property for updating plans that can apply to them regardless of whether the agent starts out with coherent or incoherent credences. The main idea behind this extension is that the agent should avoid updating plans that increase the possible sure loss from Dutch strategies. This happens to be equivalent to avoiding updating plans that increase incoherence according to a distance-based incoherence measure.
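For reference, the standard statement of conditionalization (not taken from the paper itself) is that an agent with prior credence function $c$ who learns evidence $E$ with $c(E) > 0$ should adopt the new credences

\[
  c_{\text{new}}(A) \;=\; c(A \mid E) \;=\; \frac{c(A \wedge E)}{c(E)}.
\]

As the abstract notes, this rule is not defined for incoherent starting credences, which is the gap the extended Dutch strategy argument is meant to address.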
Session 3: Sunday December 6
10:00 am - 11:15 am MST
Ittay Nissan and Jonathan Fiat (Hebrew University)
"Attitudes to Risk when Choosing for Others"
There is an apparent difference between the normative status of adopting different attitudes to risk when choosing for oneself and when choosing for others. While in the case of choosing for oneself agents can adopt either a risk-avoidant or a risk-inclined attitude, in the case of choosing for others they are required to be risk-avoidant (or risk-neutral). Here we suggest a justification for this intuitive claim. The justification limits the scope of the claim: while it holds in cases in which the decision affects more than one patient, it does not hold in cases in which the decision affects a single patient.
Session 4: Sunday January 24
10:00 am MST
Kenny Easwaran (Texas A&M)
"A New Method of Value Aggregation"
Many ethical theories say that the rightness or wrongness of options is in some sense grounded in the aggregate of the goodness or the badness of these options for distinct individuals. This is most obvious for utilitarian consequentialism, but other theories have this feature as well. Most commonly, this is done by giving a numerical representation for these goodnesses, summing (or averaging) them over all individuals, and taking the expectation of this result if there is uncertainty about the outcome of an option. Philosophers like Mark Johnston, Nick Bostrom, and Frank Arntzenius say this theory has problems dealing with infinite populations, for which sums or averages are infinite or undefined. It seems to fetishize certain mathematical operations, in a subject that is not inherently mathematical. And perhaps most significantly, the fact that it is the sum or average of the individual goodnesses that is the object of the theory is said to mean that the theory ignores the separateness of persons, and treats the individuals as mere receptacles of value, which matters in an impersonal way.
The method I propose is an extension of work in my 2014 paper, "Decision Theory without Representation Theorems". I start with a partial ordering on options that is grounded only in individual goodnesses, without using any representation of aggregate goodness, and supplement it with various accounts of when one option is equally good as another. I illustrate how the resulting theory can account for a case involving an infinite population, dealing with the objections by Johnston, Bostrom, and Arntzenius (and in a more elegant way than the responses by Bostrom and Arntzenius). I connect this to the theory of measurement, to explain why the mathematical operations of addition and expectation can be coextensive with the results given by this method in cases where they are defined, without grounding it. And because the method works in cases with an infinite population, where various results by economists show that there can be no numerical representation of aggregate value, I address the worry that utilitarianism treats individuals only in the aggregate.
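For comparison, the standard aggregative procedure described in the first paragraph can be written schematically (this rendering is not the paper's own) as the expectation of a sum of individual goodnesses:

\[
  V(O) \;=\; \mathbb{E}\!\left[ \sum_{i \in I} g_i(O) \right],
\]

where $g_i(O)$ is the goodness of option $O$ for individual $i$ and $I$ is the population. When $I$ is infinite, the sum (and hence the expectation) can be infinite or undefined, which is the difficulty the ordering-based method above is designed to avoid.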
Session 5: Sunday February 21
Jennifer Carr, Assistant Professor of Philosophy, UCSD
"The Hard Problem of Intertheoretic Comparisons"
Metanormativists hold that moral uncertainty can affect how we ought, in some sense, to act. Many metanormativists aim to generalize expected utility theory for normative uncertainty. Such accounts face the easy problem of intertheoretic comparisons: the worry that distinct theories' utility or choiceworthiness assignments are incomparable. While there are reasons for optimism that the easy problem is resolvable, another problem looms: while some moral theories assign cardinal degrees of choiceworthiness, other theories' choiceworthiness assignments may be merely ordinal. When an agent assigns positive credence to such a theory, expected choiceworthiness is undefined. Call this the hard problem of intertheoretic comparisons.
This paper argues that to solve the hard problem, we should model moral theories, and moral hypotheses in general, with imprecise choiceworthiness. Imprecise choiceworthiness assignments can model incomplete cardinal information about choiceworthiness, with precise cardinal choiceworthiness and merely ordinal choiceworthiness as limiting cases. Generalizing familiar decision theories for imprecise choiceworthiness to the case of moral uncertainty generates puzzles, however: natural generalizations seem to require reifying parts of the model that don't correspond to anything in normative reality. Metaphorically: if we interpret imprecise choiceworthinesses on the familiar committee model, moral theories each have their own committees. But many kinds of decision theory for imprecise choiceworthiness will require that committee members serve on multiple competing committees. I discuss three ways of addressing this problem: (i) by constructing alternative decision theories that don't consult individual committee members; (ii) by avoiding arbitrariness by positing cross-committee members as promiscuously as possible; and (iii) by reassessing the objects of moral uncertainty that are relevant to metanormative decision theory.
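For orientation, the expected-choiceworthiness rule targeted by the hard problem is standardly written (a schematic form, not taken from the paper) as

\[
  EC(A) \;=\; \sum_{T} \mathrm{cr}(T) \cdot CW_T(A),
\]

where $\mathrm{cr}(T)$ is the agent's credence in moral theory $T$ and $CW_T(A)$ is the choiceworthiness $T$ assigns to act $A$. If some theory with positive credence assigns only ordinal choiceworthiness, $CW_T(A)$ has no cardinal value and the sum is undefined.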
Session 6: Sunday March 21
Krister Bykvist, Professor of Philosophy, University of Stockholm and the Institute for Futures Studies
"Value Magnitudes"
Recently, there has been a revival in taking empirical magnitudes seriously. Weights, heights, velocities, and the like have been accepted as abstract entities in their own right rather than just equivalence classes of objects. The aim of my paper is to show that this revival should include value magnitudes. If we posit such magnitudes, important value comparisons (cross-world, cross-time, mind to world, cross-theory, cross-polarity, ratio) can be easily explained; it becomes easier to satisfy the axioms for measurement of value; goodness, badness, and neutrality can be given univocal definitions; and value aggregation can be given a non-mathematical understanding which allows for Moorean organic unities. Of course, this does not come for free. One has to accept a rich ontology of abstract value magnitudes, but, to quote David Lewis, 'The price is right; the benefits in theoretical unity and economy are well worth the entities.'
Session 7: Sunday April 18
Wlodek Rabinowicz, Professor Emeritus of Philosophy, Lund University
"Incommensurability Meets Risk"
This talk focuses on value comparisons between risky actions whose outcomes are bound to be incommensurable in value, whatever state the world is in. It might seem obvious that such actions would themselves have to be incommensurable. Relatedly, it might seem obvious that one action cannot be better than another if its outcome would not be better than that of the other action, whatever state the world is in. But, as I argue, these intuitions are misleading. There are cases in which they lead us astray.
The problem in its main outline is due to Caspar Hare (2010), who presents a case of this kind. While Hare views it as a problem for rational preferences and rational choice, one can also see it, as I do, as a challenge for formal axiology. I approach it using a formal account of value relations that is inspired by the Fitting-Attitudes Analysis. This allows me to explain how one action can be better than another even though their outcomes are bound to be incommensurable. But the account I rely on also leads to a residual issue that I do not know how to resolve.