apriiori

two answers(?) from a moral intuitionist-sympathizer(?)

am i a moral intuitionist? i might be a moral intuitionist

In a recent article, Moralla W. Within poses some questions to moral intuitionists. She defines moral intuitionism as follows:

I define a “moral intuitionist” as someone who thinks (i) we can’t reason our way to correct moral conclusions “from the outside,” so to speak; we have to take certain substantive moral claims for granted (perhaps defeasibly) and reason from there, otherwise we would never be able to show that e.g. it’s wrong to torture people for fun[1][2]; and that (ii) correct claims of moral obligation have authority even over those who cannot reason their way to them, e.g. those who start off with inaccurate moral intuitions.

I do think (i) seems somewhat correct, though I am a little confused on the point. It seems conceivable that, even without taking any substantive moral claims for granted, one could come to understand things like “ᴛɪᴛ-ꜰᴏʀ-ᴛᴀᴛ ᴡɪᴛʜ ꜰᴏʀɢɪᴠᴇɴᴇꜱꜱ is a good strategy in some iterated prisoner’s dilemmas”, “humans in modern cultures tend to think murder is bad”, or “to comprehend the totality of mathematical theory about how agents can cooperate with each other, it is helpful to read these three thousand, one hundred and seventy-eight technical papers, these several dozen textbooks, and also these glowfics.”
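As an aside, claims of that first sort really can be checked without any moral premises. Here is a minimal sketch of ᴛɪᴛ-ꜰᴏʀ-ᴛᴀᴛ ᴡɪᴛʜ ꜰᴏʀɢɪᴠᴇɴᴇꜱꜱ in an iterated prisoner’s dilemma; the payoff matrix and the 10% forgiveness rate are illustrative choices of mine, not anything from Ms. Within’s article:

```python
import random

# Standard prisoner's dilemma payoffs for (my_move, their_move).
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat_forgiving(opp_history, forgive=0.1, rng=random.Random(0)):
    """Copy the opponent's last move, but forgive a defection 10% of the time."""
    if not opp_history:
        return "C"  # open by cooperating
    if opp_history[-1] == "D" and rng.random() < forgive:
        return "C"  # occasionally forgive, breaking defection spirals
    return opp_history[-1]

def always_defect(opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two forgiving tit-for-tat players cooperate every round and each collect the full mutual-cooperation payoff, while a match against an unconditional defector collapses into mostly mutual defection and lower total welfare, which is the (entirely non-moral) sense in which the strategy “is good.”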

But when you've collected enough claims of this sort, is there still some additional thing you need to say, to actually get a moral ought? Or is the idea of the ought somehow fully implicit in that massive network of is statements, such that even if someone did not find “morality” a compelling reason to change their behavior, they would certainly be able to understand themself as evil? And if there is something left to say, how much is it? Is it just a short sentence where you can say “this shiny standout cluster of ideas that anyone would notice is morality, and you ought to act in accordance with it”? If that’s all you need to say, is there any difference between the so-called “is-ought” gap and, say, the “the-word-horse–🐴” gap? People[3] don’t seem very scared of the “horse-🐴” gap[4].

But regardless, I don’t think I strongly object to the idea that you need, or at least probably want, substantive moral premises to reason from. This seems basically right to me, hypotheticals about people who deeply study everything conceivably related to morality without ever taking any moral premises for granted notwithstanding.

And regarding (ii), I do think that it remains wrong to murder even if you lack the intuition that murder is wrong. It’s mitigating, perhaps, and maybe for some particular sorts of acts it could be so mitigating as to make the act not immoral at all[5], but in general I do not think a lack of moral intuition is arbitrarily exculpatory.

So, sure. I have moral intuitionist sympathies, at the very least.

Permissibility vs. Requirement

To not just copy the entirety of Ms. Within’s post, I had Claude Sonnet 4.5 summarize:

People can act differently for two distinct reasons. Sometimes different actions simply reflect different preferences, where each person chooses their preferred option from among several acceptable alternatives. For instance, one person might go hiking while another visits a museum, with neither viewing the other’s choice as wrong—they’re just pursuing different but equally permissible activities[6]. Other times, people act based on what they believe is required, genuinely disagreeing with alternative courses of action and ruling them out as impermissible. Someone who refuses to torture innocents, for example, doesn’t merely prefer a different activity than someone who tortures for pleasure—they reject torture as fundamentally wrong. The key question is whether this distinction between acting from preference versus acting from requirement can be explained by differences in practical attitudes, such as the intentions people form, the principles they adopt, or the plans they make for handling various situations.

I suppose one of my worries here is that I’m not sure how much about a person could ever fail to be explicable in terms of “intentions, the principles for how to act that one has actually adopted, plans for how to act given that certain contingencies arise, and so on.” Perhaps I might wish to draw on concepts like capability or knowledge, but I don’t really think those are going to be what draw the line between actions out of mere preference and actions out of requirement, and really I’m not sure knowledge couldn’t be explained in terms of plans for how to act, or especially in terms of “and so on.” I think capabilities probably can’t, at least?

Maybe one tack you could take to explaining the difference between permissibility and requirement is to consider what one’s attitude would be towards a pill that would alter one’s behavior. I might not really want to take a pill that changes my ice cream preferences, but it’s less horrifying than a pill that makes me want to (and decide to) torture innocents. I’m not sure this gets at the heart of the difference.

Another distinction is that I would, if in a position to do so, be much more interested in preventing someone from torturing innocents than from engaging in a harmless leisure activity I’m not personally interested in. So maybe if I say that I act out of requirement rather than preference, this means that I wish for the action to somehow be enforced, or at least strongly incentivized.

Ms. Within says:

I’d guess that most intuitionists would say that no, the difference is not reducible to a mere difference in practical attitude. Rather, the difference is determined by whether one’s action is accompanied (in a proper way) by a certain normative belief: Aaron acts out of mere permission because his action is accompanied by a belief that it would be okay to go to the museum if one wanted to, whereas Dexter refrains from torturing innocents because he acts with a belief that one shouldn’t torture innocents no matter how much one wants to.

I think this statement about a “normative belief” is a pretty good way of describing the difference. I’m just not totally sure whether it makes sense to consider such normative beliefs as largely shorthand for or otherwise somehow equivalent to some set of differences in practical attitudes. The idea of a normative belief seems like it might be a little bleggy—maybe once you know all about your practical attitudes, it might feel like you still haven’t decided whether to have a normative belief, even though in general normative beliefs are basically just a quick way to group together things that you have similar practical attitudes towards.

In contrast, if you’re a Kantian, you’ll think the difference comes down to how one is motivated. Aaron judges going to the museum is permissible because he does not determine himself to go for a hike conditional on finding himself with Betty’s preferences, whereas Dexter acts out of requirement because he decides not to torture innocents even conditional on being like Caligula. This kind of understanding is associated with Kant, but an intuitionist could still have the view that whether someone acts out of mere preference vs. out of requirement is determined by something about their motivations/practical attitudes.

This also basically sounds like a reasonable way to view it. I think I like the normative belief one a tad better?

RightA and RightB

Again a Claude summary, with a slight clarification by me:

Consider two philosophers, Alice and Bob, who have both achieved reflective equilibrium—they’ve examined every argument, thought experiment, and piece of empirical data such that no new consideration could change their judgments. They’re intellectual equals in every respect and possess remarkable integrity, always following through on their deliberative conclusions without being swayed by emotion or weakness of will.

Alice and Bob don’t use standard normative vocabulary like “right” or “courageous.” Instead, they employ their own idiosyncratic terms: Alice speaks of what is “rightA” and what she has “reasonA” to do, while Bob talks about what is “rightB” and what actions are “courageousB.” Despite this linguistic peculiarity, these terms function identically to how ordinary normative language works—Alice acts on what she considers rightA, Bob feels shame at doing what he considers immoralB, and so forth.

Crucially, Alice and Bob disagree about which actions fall under their respective concepts. [FOR EXAMPLE, MAYBE] Alice considers those actions rightA that maximize happiness and minimize suffering, while Bob considers those actions rightB that satisfy Kant’s Formula of Universal Law. In essence, both use their normative concepts to guide action in subjectively identical ways, differing only in that Alice’s concept picks out utility-maximizing actions while Bob’s picks out universalizable actions.

And the question:

My first sub-question: given what I have said, do Alice and Bob necessarily have any beliefs that disagree with each other’s?

I think Alice and Bob do not disagree. Perhaps if they both maintain that there is some shared concept of right, such that it is either right to behave rightAly or to behave rightBly, then we could say they disagree about what’s right. But I do not think they inherently disagree on anything otherwise.

A modification: I have not yet wanted to prejudice your opinion about what rightA and rightB refer to, but now suppose we change things as follows. Suppose Alice and Bob take themselves to be referring to different properties with rightA and rightB. So they discuss with each other what’s rightA and what’s rightB, and find themselves in perfect agreement. But Bob, for example, says “I agree with you that we shouldA pull the lever in the Trolley Problem, since it is rightA to maximize utility. But I will not pull the lever, because doing so is immoralB, and I see no reasonB to care about what’s rightA.” And Alice says similar to Bob. Now do you think they have any beliefs that disagree with each other, such that they are mistaken in thinking “Pulling the lever is rightA” and “Pulling the lever is rightB” express different propositions? Or do they agree with each other about everything, as they believe they do, and merely act differently?

This is what I imagined the first time, and it doesn’t change my answer.

  1. Unless they’re into it.

  2. The discussion of torture, which is scary, is my excuse for publishing this article during October.

  3. Or at least, the sort of people to have attitudes I consider reasonable towards LLMs

  4. Though you will sometimes hear linguists mention a horse-🐴 merger.

  5. Anyone have ideas for examples?

  6. In a footnote, Ms. Within says:

    One way of interpreting this case would have it that they really have the same practical attitudes and so on, but merely find themselves in different circumstances. E.g. they both intend to do what they enjoy most, and they happen to enjoy different things. I intend to rule this out by saying their activities here are non-instrumentally preferred, and not e.g. taken as a means to some further thing, like enjoyment.

    I am not certain whether I believe in non-instrumental values in this sense. I’m not convinced that people aren’t typically able to goal factor all the way up. Or maybe eventually you goal factor your way into a loop. Or maybe you goal factor until your goals start fraying at the seams and you have no idea why you want the things you list but also they don’t quite seem fully terminal per se?

I think I am capable of avoiding the idea that Aaron and Betty “have the same practical attitudes and so on”; I’m sure there are cases where saying they have the same practical attitudes doesn’t make sense. So maybe my suspiciousness doesn’t really matter to the question, but I’m a little suspicious of this whole “non-instrumental value” thing.