“A Unificationist Defense of Revealed Preferences”, Economics and Philosophy (forthcoming) Penultimate Version
Abstract: Revealed preference approaches to modeling agents’ choices face two seemingly devastating explanatory objections. The no-self explanation objection imputes a problematic explanatory circularity to revealed preference approaches, while the causal explanation objection argues that, all else being equal, a scientific theory should provide causal explanations, but revealed preference approaches decidedly do not. Both objections assume a view of explanation, the constraint-based view, that the revealed preference theorist ought to reject. Instead, the revealed preference theorist should adopt a unificationist account of explanation, allowing her to escape the two explanatory problems discussed in this paper.
Selected Works in Progress
“Is There a Right to Explanation?”
I argue for a right to explanation of automated decisions, against the threats posed by algorithmic opacity. This right is grounded in individuals’ interest in fair treatment, and takes shape in institutional contexts that rely on individual self-advocacy for their fair and efficient functioning. I also argue for two specific rights protections: free access to expert advice, and algorithmic impact statements.
“An Interventionist Defense of Revealed Preferences”
This paper picks up a thread in “A Unificationist Defense of Revealed Preferences.” In it, I argue that hypothetical revealed preferences do sometimes causally explain an agent’s choices, on an interventionist account of explanation.
“Discrimination and Causal versus Statistical Remedies” (with Cat Wade)
Computer science work on discrimination seems to be locked in disagreement over which formal tools are better suited to avoiding discrimination. We consider two prominent research programs: causal modeling and statistical criteria in fair machine learning. We suggest that the apparent disagreement between these programs is in fact due to differing background assumptions regarding two normative issues: 1) the ‘standard of discrimination’, i.e., what makes discrimination wrong, and 2) the metaphysics of social identity. We argue that accounts of 1) and 2) that mutually support each other in fact justify one or the other of these programs. Moreover, we propose that attending to the normative assumptions of these research programs can provide insight into how to better develop the formal tools that they champion.
“Beyond Unenviable Matches: Preferences, Priorities, and Reasons”
Many welfare economists deny that they are engaged in a normative enterprise at all. Among the remainder, there is a common picture that welfare economics adopts fairly uncontroversial standards based on facts about agents’ preferences. In this paper, I argue that many economists and others have mischaracterized their practice, using the example of mechanism design. Furthermore, economists are right to draw on normative considerations beyond preferences, and should do so more systematically, by incorporating reasons into their models.
My dissertation examines how epistemic, practical, and ethical commitments influence modeling in the social sciences. It does so using two case studies: revealed preference approaches and market design. First, I argue that revealed preferences do sometimes explain an agent’s choices, on either a unificationist or an interventionist account of explanation. Here I oppose a widespread consensus that revealed preferences have no power to explain consumption and other choices. This explanatory defense is driven by the epistemic and practical goals behind revealed preference modeling, such as the efficient summary of patterns. Second, I argue that ethical commitments other than preference utilitarianism, such as equality of opportunity and concerns about inequality, guide market design, that they should do so, and that economists should draw on these other commitments more systematically.