Research
My current research interests are broadly in ethics, social/feminist philosophy (especially microaggressions), and philosophy of AI.
Papers Under Review
Paper on a new structural account of microaggressions (Draft available upon request)
**This paper won the Minorities and Philosophy Ethicist Prize at the Rocky Mountain Ethics Congress 2024.**
Theories of microaggressions collectively face the following two problems: (i) the problem of reliable identification and (ii) the problem of unresolved disagreements. I first argue that four existing theories of microaggressions—psychological, experiential, structural, and expressivist—fall short of addressing these problems because they rely on knowledge tied to individual actors, individual recipients, or the causal impacts of individual acts in identifying microaggressions. I then argue that to make progress, we need a theory that draws on knowledge of microaggressions that is (i) reliably accessible to marginalized groups in light of their experiential expertise and (ii) partly transmissible to other groups who may lack such expertise. Lastly, I propose a new approach to structuralism as a promising alternative. Unlike the original structural approach, which identifies microaggressions based on whether individual acts causally contribute to an oppressive system, the new structural account examines how these acts constitute patterns of injustice that distinctly contribute to this system. In particular, it identifies microaggressions by their distinctive feature of ‘passability,’ which enables them to carry out the roles of normalizing injustice and policing resistance.
Paper on microaggressions and blame (Draft available upon request)
In this paper, I question one popular philosophical argument against blaming individual actors of microaggressions—namely, that blame is not a fitting response, as most of these actors are not blameworthy. I first argue that we ought to reframe this argument as an epistemic argument. The problem is not that most individual microaggressors are in fact not blameworthy but that, due to the characteristic nature of microaggressions that makes them a distinct species of injustice, individual recipients cannot be warranted in judging whether these actors are blameworthy even when they are in fact so. I argue that properly understanding the scope of this epistemic challenge posed by microaggressions unveils the limits of blame as a ‘fault-finding’ response. I then propose an alternative conception of blame as a response to the ‘meaning’ of our actions, which is in part determined by and imported from facts external to individual agents’ internal states of mind. I close by examining whether our responses to an action’s ‘external’ meaning should be construed as a species of blame or as a distinct ‘blame-like’ moral response.
Selected Papers in Progress
Humanness in the Loop (scheduled to be presented at the Penn-Georgetown Digital Ethics Workshop 2025 and the Mentoring Workshop for Early Career Women* Faculty in Philosophy 2025)
In this paper, I introduce a dilemma in human-machine teaming (HMT) systems, which arises when the very “human errors” that we want to correct with our use of AI constitute the very elements of “humanness” that we want to preserve in these systems. I show how this dilemma arises in the context of risk assessment and military scenarios and argue that the two prominent solutions to it—(i) encoding humanness in the machines and (ii) outsourcing humanness to humans in the loop—are both limited.
Can AI Systems Be Moral Agents without Being Moral Patients? (accepted as a workshop paper at NeurIPS 2023)
A standard assumption in contemporary philosophical debates on moral status is that moral agency imposes a higher bar than moral patiency—all moral agents (e.g., humans) have moral patiency, but many moral patients (e.g., non-human animals) lack moral agency. I argue that recent developments in artificial intelligence (AI) may challenge this assumption. Some AI systems may meet the bar for moral agency well before they meet the bar for patiency; if so, there could be periods during which we have artificial moral agents lacking moral patiency. This observation has interesting implications for both fields. In philosophy, it may imply that, contrary to our assumption, our moral circle allows for moral agents lacking moral patiency. Alternatively, it may imply that moral agency and patiency are not independent notions; rather, there may be a deeper, constitutive relation between them. In AI development, it may reveal that discussions of consciousness or the anthropomorphizing of AI may be secondary, if not orthogonal, to the role of AI systems as moral agents.
Believing for Reasons of Love (scheduled to be presented at APA Pacific 2025)
Epistemic partialism posits the following two key claims: (i) love and friendship demand that we form certain partial beliefs about our loved ones, and (ii) these demands are in conflict with canonical epistemic norms to believe in accordance with evidence. Critics of this view argue that accepting (i) and (ii) commits us to the contentious theses of pragmatism and doxastic voluntarism. I argue that accepting (i) and (ii) need not commit us to pragmatism or doxastic voluntarism. Drawing on the interaction between the characters in David Auburn’s play Proof (2000), I illustrate that properly carving out space for love and friendship in the doxastic domain may involve more than having partial beliefs about our loved ones; it may also involve forming these beliefs for distinctive non-evidential reasons that importantly come apart from pragmatic reasons, which I tentatively call reasons of love. I further explain that believing our loved ones on the basis of reasons of love must remain non-voluntary for it to be a genuine manifestation of love.
Sulking as Performance (presented at the 2024 meeting of the Southern Society for Philosophy and Psychology)
Studies in psychology and psychotherapy typically investigate sulking as an instantiation of particular emotional states, such as anger or hurt feelings. This approach emphasizes the vulnerable or narcissistic selves of individuals disclosed by their sulking behaviors. In this paper, I examine sulking as a distinct type of performative act. Rather than focusing on the internal states of sulking individuals, my account shifts our attention to what sulking does and instigates within an interpersonal relationship situated in a broader sociopolitical context. Specifically, I examine how sulking as performance is enabled by, and in turn reinforces, patriarchal social structures.
Anti-solutionism as a Pedagogical Strategy (with Joel de Lara; presented at the 2024 Association for Practical and Professional Ethics International Conference)
Thought experiments remain a popular pedagogical tool in philosophy. While these experiments can be helpful for practicing philosophical argumentation, some have criticized them for obscuring complex, real-world problems. We argue that a deeper concern underlying the use of thought experiments in philosophy classrooms (especially in applied ethics) is the broader focus on solutionism and its worrisome impacts on students. We then argue for the value of promoting anti-solutionism as an alternative pedagogical strategy. In defense of this approach, we demonstrate two class activities designed for bioethics and environmental ethics courses with the goal of cultivating an anti-solutionist mindset. Rather than challenging students to solve these problems, we guide them to understand the complex and multifaceted nature of these issues and to recognize the value of moral sensitivity, collaboration, and epistemic and moral humility.
Translational Ethics Project
In 2024-2025, I am a project manager for a collaborative interdisciplinary research project between Georgetown's Ethics Lab and the Center for Security and Emerging Technology on developing an ethical framework for human-machine teaming (HMT) systems in the military context.
In particular, we are currently working on the following two papers (drafts available soon!):
- The fluctuating status of AI in military operations
- A systems approach to military applications of AI
Dissertation
Fitting Blame without Blameworthiness (under the supervision of Dr. Susan Wolf)
Recall the worst decision that you ever made. Perhaps you passed up an exciting opportunity to pursue your dream, playing it safe by remaining in a secure but unfulfilling job. Looking back, you know that you meant well and were doing the best you could for your future self. You could not have known that your decision would end up being a terrible mistake. Still, you blame yourself. Or, suppose your mother discouraged you from a path that you knew was right for you (e.g., pursuing a Ph.D. in philosophy). You know that she meant well. Your mother is a good person, who loves you and wants what is best for you. She just did not understand what philosophy meant to you or why you were not interested in marrying a ‘nice boy.’ Still, you find yourself blaming your mother.
Fitting blame is commonly thought to require a blameworthy agent, who is in some sense ‘at fault’ for their problematic behavior. Warranted blame requires a warranted judgment that the behavior in question manifests some kind of fault in the agent (e.g., problematic motives, faulty character, or a deficient quality of will). However, in everyday life, we often find ourselves blaming an agent both (i) when we cannot reasonably judge whether they are blameworthy and (ii) when we can reasonably judge that they are not blameworthy. Within the conventional framework, our phenomenology of blaming, and of striving to forgive, people in these types of situations is rendered incoherent or unwarranted. Instead of dismissing our phenomenology in such cases, I articulate an expanded picture of blame and forgiveness that vindicates these experiences, which play a vital role in shaping our interpersonal lives and in fighting structural injustice.