Publications

“A Unificationist Defense of Revealed Preferences”, Economics and Philosophy (forthcoming). Penultimate Version

Abstract: Revealed preference approaches to modeling agents’ choices face two seemingly devastating explanatory objections. The no self-explanation objection imputes a problematic explanatory circularity to revealed preference approaches, while the causal explanation objection argues that, all else being equal, a scientific theory should provide causal explanations, which revealed preference approaches decidedly do not. Both objections assume a view of explanation, the constraint-based view, that the revealed preference theorist ought to reject. She should instead adopt a unificationist account of explanation, which allows her to escape the two explanatory problems discussed in this paper.

Selected Works in Progress

“Is There a Right to Explanation?”

I argue for a right to explanation of automated decisions, against the threats posed by algorithmic opacity. This right is grounded in individuals’ interest in fair treatment, and takes shape in institutional contexts that rely on individuals’ self-advocacy for their fair and efficient functioning. I also argue for two specific protections of this right: free access to expert advice, and algorithmic impact statements.

“An Interventionist Defense of Revealed Preferences”

This paper picks up a thread in “A Unificationist Defense of Revealed Preferences.” In it, I argue that hypothetical revealed preferences do sometimes causally explain an agent’s choices, on an interventionist account of explanation.

“Discrimination and Causal versus Statistical Remedies” (with Cat Wade)

Computer science work on discrimination seems locked in disagreement over which formal tools are better suited to avoiding discrimination. We consider two prominent research programs: causal modeling and statistical criteria in fair machine learning. We suggest that the apparent disagreement between these programs is in fact due to differing background assumptions about two normative issues: 1) the ‘standard of discrimination’, i.e., what makes discrimination wrong, and 2) the metaphysics of social identity. We argue that mutually supporting accounts of 1) and 2) jointly justify one or the other of these programs. Moreover, we propose that attending to the normative assumptions of these research programs can provide insight into how to better develop the formal tools they champion.

“Beyond Unenviable Matches: Preferences, Priorities, and Reasons”

Many welfare economists deny that they are engaged in a normative enterprise at all. Among the remainder, there is a common picture on which welfare economics adopts fairly uncontroversial standards based on facts about agents’ preferences. In this paper, I argue that many economists and others have mischaracterized their practice: using the example of mechanism design, I show that the practice already draws on normative considerations beyond preference satisfaction, such as priorities. Furthermore, economists are right to draw on these other normative considerations, and should do so more systematically, by incorporating reasons into their models.

Research

The overarching motivation guiding my research is to understand how background commitments influence modeling in the social sciences and computer science, to reflect on how they should do so, and to build fairer models on that basis. I’m also interested in the political and ethical questions raised by corporate and governmental uses of technology and social science.