Response: The Path of the Righteous Man…

…is beset on all sides by the inequities of the selfish and the tyranny of evil men. Blessed is he who, in the name of charity and good will, shepherds the weak through the valley of darkness. For he is truly his brother’s keeper and the finder of lost children. And I will strike down upon thee with great vengeance and furious anger those who attempt to poison and destroy my brothers. And you will know I am the Lord when I lay my vengeance upon you.

– Ezekiel 25:17, the Quentin Tarantino Version

In which I address some recent comments e-mailed to me which, I believe, for the most part represent a profound misreading of the corpus at hand, but which nonetheless raise some points I might as well answer.

I suppose it’s slightly unfair of me to go off on you without giving you some background on where I’m coming from, but that comment chain touched a little on an issue that I’ve been turning over in my head for a long while, both in my worldbuilding as a core theme of the storyline within the setting (one of these days I’ll actually write it down instead of building “castles in the air” in my imagination…) and in my own life: What is the nature of violence? What is the proper role of force in relations between two rational creatures? Is it possible for a “reasonable person” to desire the death of another — even if they would never act on that desire outside of certain “acceptable” boundaries? In cases where retaliatory force is justified, where does the boundary lie between “acceptable” and “overkill”?

I wouldn’t exactly call myself a pacifist (although certain strains of pacifism have probably influenced my thought in the course of my investigation),

As I’ve implied before, say, here for example, pacifism is very poorly thought of in Imperial culture, because in their opinion it’s a self-justifying morally supine position; which is to say, it’s the position of “First they came for the $VICTIM, and I did not do a single gorram useful thing because it was more important to me not to get my hands wet.” Shrugging at evil to its face, and saying, “Well, at least I didn’t…”.

Blessed is he who, in the name of charity and good will, shepherds the weak through the valley of darkness.

And accursed is he who leaves the weak to suffer what they must.

but I’ve come to the conclusion that when it comes to the use of force by one being capable of reason against another, there are essentially two elements, each of which is a morally and ethically independent consideration from the other: The external *means and circumstances of application*, and the internal *motivation of the applicator*; or, in short, the “use of force” vs. the “will to kill.”

The “use of force” consideration is essentially what people talk about when debating the merits of “coercion” vs. “self-defense.” In that sense, I consider myself a conventional believer in the Non-Aggression Principle: Initiating force — even non-lethal force — without cause is always wrong; using retaliatory force — even lethal force, and even *wittingly* lethal force — is right when done in an appropriately proportional manner to deflect, oppose, or counteract an illegitimate act of force.

(Note that, above, I’m drawing a distinction between a *witting* — performing an action with foreknowledge of a certain or highly probable consequence; the desirability of that particular consequence being, for the moment, irrelevant — and *willing* — that is, acting with the intention of causing a specific consequence.)

However, that seems to be only half the battle.

Violence against another living thing is, in a fundamental sense, an inherently entropic act: The violent actor is expending energy by applying force against an ordered system (the living target) with the aim of causing that system to break down and expend its energy chaotically. It would seem to me that acting with the specific intention of causing that sort of outcome is, essentially, acting with the desire for entropy to win, however limited the scope of that particular “victory” may be.

If entropy is a thing that should rationally be avoided, then it stands to reason that a reasoning sophont is no more capable of willing the death of one of its peers while remaining rational than it is of desiring the destruction of the Universe Entire while remaining rational. This is a consideration entirely independent of the *external* context of the use of force.

Here is the obvious question they would ask at this point:

Is it moral to cure cancer?

Obviously it is when you can use sophisticated medicine to retrain the cancer cells into being honest, upstanding members of their tissue.

But what if you’re using carcinophages, or chemotherapy, or radiotherapy, or old-fashioned surgery to cut the tumor out? That’s entropic in the exact same way: you are forcibly destroying an ordered, living system, and you are, in fact, hoping for your tightly-focused entropy to win this small victory. Is that wrong?

No, says the Healer’s Code, because what the above argument fails to recognize is that the tumor is an entropy generator which is itself destroying a more complex ordered system, and the position you are in is having to apply this focused entropy in order to preserve that greater system.

(There is more on this here from the point of view of the Stratarchy of Indirection and Subtlety, and this should also illuminate just how far Imperial doctrine goes to use minimal force for necessary effect. As residents of a planet that bans quiet assassination in favor of mass warfare, I don’t think they’d be willing to accept correction from us on this point.)

I have, in the past, described the Imperial justice system as surgical in its approach. This is the underlying truth: some cancers have to be cut out, in order to save the patient. It is an unfortunate circumstance that such things exist at all in the first place, but since they do, this is the choice with which one is presented.

(At this point, usually someone complains that you can’t compare a sophont being to cancer.

Indeed you can’t, they say. The cancer is merely programmed tissue acting out its programming; its destructiveness is entirely unintentional, no more willful than a mosquito, a virus or a falling rock. The sophont, on the other hand, has the power of choice, and willingly chose against the good; it is thus far worse and merits destruction substantially more than, say, the unfortunate bacteria we poison with vancomycin to save sophont lives.)

In short, I believe that it’s possible to act in a way that any third-party observer with knowledge of both the cause and effect would consider to be justifiable self-defense, while also being guilty of murder because you acted with *murderous intent* independently of whether the action itself was the correct thing to do at the time. Even if you balk at calling it “murder” and ascribing to it the culpability thereof, I still consider it a species of viciousness that should be neither tolerated nor encouraged.

Or, still more briefly: While *wittingly* causing someone’s death may be justifiable if one does so for the right reasons, *willingly* causing someone’s death is always wrong — even if the circumstances and the actual actions taken are exactly the same in both situations.

(Or, perhaps more pointedly: “While lethal force may be unfortunately necessary to deal with the worst sorts of scum, anyone who both claims to be rational and *willfully* kills or causes the death of another soph — or endorses such an action — is either deluding themselves or committing the most dangerous and fundamental sort of fraud possible.”)

To which the obvious follow-up question would be:

Is it immoral to be happy that you’ve cured cancer, even if you had to kill the cancer to do it?

…no.

And while the ignorant can be educated, the primitive uplifted, and the sick-in-mind cured, likewise, it’s not immoral to be happy that you have killed a walking sophont cancer whose very existence made the world around them worse. The doctor has repaired the future life of her patient and those around him; the sentinel has repaired the lives of everyone who would otherwise have been harmed, directly or indirectly, by the ex-soph in question.

This is, so far as their ethical calculus is concerned, an inarguably good act of entropy-minimization.

What worries me when I read things like the excerpt below from this post ( https://eldraeverse.com/2016/12/04/a-question-grab-bag/ ):

But once you have cold-mindedly ensured that you have the right target and have done the proper strategic and tactical planning, then go ahead and strike down upon those who attempt to poison and destroy your brothers with great vengeance and furious anger, and other colorful metaphors. It is… appropriate. Empowering one for such unpleasant necessities is what wrath is for.

I refer you here to the empowering paradox of passion and reason.

Or from here ( https://eldraeverse.com/2014/05/31/the-bear-necessities-historical-trivia/ ):

After hearing the testimony of the children and bystanders, the Near Orbit District Court ruled that ‘***they needed killing***; jolly well done’.

The people in question were child kidnappers. If that’s not an example of people whose existence poisons the world and who need killing, both individually and as a class, then who in all the world is?

And slogans like:

> Civilization has enemies; kill the bastards.

ObReference in canon, from here:

The official motto of the Imperial Military Service is “Between the Flame and the Fire”. Unofficially, the paraphrase “civilization has enemies; we kill the bastards” has been usually tolerated.

Which is to say: it’s an unofficial military motto. (I’ll leave it to any actual veterans reading this to supply examples of the real thing, by which standard this is kinda milquetoast.) This is the self-summary and mutual reminder of the rough men who stand guard on the walls mentioned below. If you want a good reference for actual sentinel attitudes, it’s here. (Scroll down.)

I should like to draw your attention to this part:

We live in Utopia. We have no war, no crime. No disease, barely any injury, and certainly no death that can’t be easily reversed. Thanks to the autofac, we’ve never known poverty, and we live on worlds where no-one for generations ever has. In societies where, by the Contract and the Code and the tireless efforts of archai like Unification, we can always trust, people always care, and happy endings always happen for good people, which is to say, everyone. We go through our lives without experiencing more than the briefest moments of the mildest pain, or even inconvenience, and few but the eldest of us remember the true taste of suffering, or injustice, or fear, or loss.

That’s right, folks. Remember, the Empire was founded by people who, essentially, read through some trope pages for things like Mary Suetopia, and Sugar Bowl, and said: Yes. This is right, this is true and beautiful, this is how the universe ought to work. And then made both it (locally) and themselves that way. They have the sort of rates of crime, social dysfunction, anomie and alienation otherwise best seen “once upon a time, in the magical land of Equestria”. (At least if you discount the monster attacks.)

So let’s just look at our world through today’s Twitter, as an example.

  • The “Leader of the Free World” is an orange fascist who would lose an intellect contest with a bowl of jello.
  • At least two of our supposedly-civilized, advanced, etc. countries run concentration camps specifically for children.
  • Then there’s the ongoing #MeToo scandal, in which it seems increasingly clear that much of Hollywood and more than a few other places are stuffed with people now suffering social sanctions for things that, *there*, would unquestionably count as rape, straight up.
  • Not to mention all those places in the world where such things, and even worse variants on them, don’t even get remarked upon.
  • And at this point, I’ve stuck to things that even the average human finds offensive. I haven’t even started touching on things that are specifically offensive to Imperial sensibilities…

And there are lots of places in the galaxy that are just like us, though the details differ, and I’m not talking about the Iltine Union or the Theomachy of Galia. I’m talking about places whose self-image is at least as smug as that of the average First World country.

There are certainly, all praise to Rúnel, plenty of more civilized places than Earth around – hell, even the Vonnies do somewhat better – but nonetheless, if the hainadar appear sometimes to be channeling the attitude of the Roman legionary watching the dark forests across the Rhine, or the guards posted along the Great Wall – well, that’s because they do see themselves as the thin indigo line between the warmly-lit, gentle garden of civilization and a never-ending parade of savages and atrocities, and have perfectly legitimate reasons for so doing.

They want them on that wall. They need them on that wall.

You want to explain to them how they’re wrong about that, Earthling? Maybe tell them how the barbarians haven’t earned the name a dozen or two times over?

Myself, I think it’s a bloody miracle and possibly a tribute to self-control and respect for freedom of choice that what you get is attitude, overt and covert manipulation towards improvement, and a few Renegades – and not, say, The Ultimate Crusade of Ultimate Destiny…

Is that — and I hope you interpret this as coming from a friend expressing concern, and not an enemy seeking to condemn — something of this distinction is being either lost or glossed over without serious examination, and that all this talk of “barbarians” tacitly divides the Universe into an “elect” chosen few and a vast mass of “damned” whom it is alright to want to kill provided you can find the right opportunity to do so. Even if the eldrae themselves might find such a view abhorrent if presented that way, I worry that that’s what their philosophy towards force and violence adds up to when all the pieces are put together.

…which would be justified, if it came to that, not by some sense of the elect, but by the things that its carefully selected targets have actually done and continue to do.

If you see a murder, a rape, a kidnapping, a robbery, etc., then by ethics and the Contract and the Charter, you are obliged to intervene to stop it, and if stopping it and preventing it from happening again and again and again requires it, then in the absence of proper formal process, whether or not you want to, you are obliged to do so with lethal force.

But more, if you see people who fit that latter definition, you should want to, because you should want to do the right thing, and when faced with cancer, the right thing is to cure it.

This argument does not lose any of its force when you scale it up; an organization, or a culture, that institutionalizes these things is no less guilty than an individual that does so. The problems with the Ultimate Crusade of Ultimate Destiny are (a) its impracticability – as demonstrated in the small by our various failed efforts at nation-building – and (b) the difficulty of appropriately handling the majority – the ignorant, the primitive, and the mind-sick. These make the slow extension of cultural spheres, educational efforts, and the aforementioned overt and covert manipulation the optimal path in the long run, Renegades and proscribed groups notwithstanding. But there’s nothing wrong with its ethical justification.

Because, as it turns out, the wild universe is dark and full of horrors.

 

16 thoughts on “Response: The Path of the Righteous Man…”

  1. Is it moral to cure cancer?

    Obviously it is when you can use sophisticated medicine to retrain the cancer cells into being honest, upstanding members of their tissue.

    But what if you’re using carcinophages, or chemotherapy, or radiotherapy, or old-fashioned surgery to cut the tumor out? That’s entropic in the exact same way: you are forcibly destroying an ordered, living system, and you are, in fact, hoping for your tightly-focused entropy to win this small victory. Is that wrong?

    No, says the Healer’s Code, because what the above argument fails to recognize is that the tumor is an entropy generator which is itself destroying a more complex ordered system, and the position you are in is having to apply this focused entropy in order to preserve that greater system.

    The irony is, I actually don’t disagree with this at all, for precisely the reason you’ve given — in this case, preserving the patient’s life is the greater ethical good, and if you can’t do that without killing the cancerous cells, then by all means, that is the option you should take.

    The flipside of this, however, is that if you do have access to the ability to genetically retrain the cancer cells, then why — outside of a triage situation, where you may have to evaluate the priority for treatment of any given patient to judiciously apply the resources at hand — would you have recourse to the “old-fashioned” methods when they are less efficient and may expose the patient to needless collateral damage?

    Is it immoral to be happy that you’ve cured cancer, even if you had to kill the cancer to do it?

    …no.

    And while the ignorant can be educated, the primitive uplifted, and the sick-in-mind cured, likewise, it’s not immoral to be happy that you have killed a walking sophont cancer whose very existence made the world around them worse. The doctor has repaired the future life of her patient and those around him; the sentinel has repaired the lives of everyone who would otherwise have been harmed, directly or indirectly, by the ex-soph in question.

    Again, I don’t disagree with this notion in abstract, but is the life of “the ex-soph in question” itself too insignificant to have any bearing on the equation?

    Hell, it’s even possible to have your cake and eat it, too: Given that the medical knowledge and sophotech of the Associated Worlds are as advanced as they are, it’s presumably perfectly feasible that you could “kill” someone, recover and preserve their corpse or their personality data, and repair or reinstantiate them yourself, complete with a newly-“patched” personality that would no longer have whatever broken drives that prompted them to act in the first place. Even if you have some sort of brain-bug about that being too close to coercion or “paracoercion” for personal taste, would it be so infeasible to temporarily resuscitate them in a controlled environment, so that they can be the ones who definitively make the final choice as to whether or not they want to “die as they are” or be repaired into someone better than they are?

          • More that going further into “the empowering paradox of passion and reason” by referencing one of its inspirations would make much more sense with the full context available, but having the full context available really requires having read the books…

            But here, try this excerpt from his Gradual Interview:

            November 2004

            …which deals with Mhoram’s discovery of a flaw in the application of the Oath of Peace that has been handicapping the New Lords all along, and finding an insight which is very similar, if not strictly identical to, the eldraeic concept mentioned above.

    • Again, I don’t disagree with this notion in abstract, but is the life of “the ex-soph in question” itself too insignificant to have any bearing on the equation?

      Ethically speaking, a Defaulter on the Fundamental Contract (and thereby by definition a mass of spiritual entropy) sums to an ethical significance of zero.

      Most folks’ll (including the Imperial legal system, given the opportunity) give them a second chance (but only a second chance) because it satisfies their moral values, but they aren’t owed anything, and such clemency as they may receive is a supererogatory gift and to be understood in that sense.

      Which is to say, you are given a second chance, emphasis on the given.

      Hell, it’s even possible to have your cake and eat it, too: Given that the medical knowledge and sophotech of the Associated Worlds are as advanced as they are, it’s presumably perfectly feasible that you could “kill” someone, recover and preserve their corpse or their personality data, and repair or reinstantiate them yourself, complete with a newly-“patched” personality that would no longer have whatever broken drives that prompted them to act in the first place.

      There is both a metaphysical and practical problem with that in many cases.

      The metaphysical problem is that you only have so much divergence to play with before you are creating a – technically, ethically, and legally – brand new person. Tangled legal issues aside, this has the same problem as Babylon 5’s death of personality; you’re just killing someone and resurrecting someone else wearing their body, which, superstitions about meat-continuity aside, is utterly pointless.

      The practical problem in other cases is that there is very little point in resurrecting someone who, once patched to a proper, civilized sense of mélith, will immediately throw themselves on their sword to balance their accounts.

      • Ethically speaking, a Defaulter on the Fundamental Contract (and thereby by definition a mass of spiritual entropy) sums to an ethical significance of zero.

        Small problem here: Given that the Fundamental Contract itself covers such a broad range of behavior, and that within-delta-of-zero of all sophont life has adhered to its strictures perfectly, doesn’t this ultimately mean that, by a strict reading, damn near everyone ever sums to an ethical significance of zero?

          • In rough order: Inadvertency, paid-off past sins, false obligations, resentment, whining, specious legal arguments, and miscellaneous other things falling under the ancient legal maxim, stercore bovem non stercorem est.

          • “[With / in / of / by the manure] [to / toward / into the cow] [is not] [to / toward / into the manure]?” I could be missing something here — it’s been a few years, and my textbooks and dictionaries are a few hundred miles away right now — but that maxim seems to make no grammatical sense.

            I notice that amid all that, there is no definition of what default actually is in this context.

          • Translated idiomatically from the vaguely Pratchettian dog-Latin, “Bullshit ain’t shit.”

            Didn’t think it needed explanation, but advertent, unamended breach.

            And I hereby declare this falls under the prior-mentioned generalities rule. Unless I’m writing an obligator-focused piece, trying to replicate every refinement of a general principle derived by 9,000 years of transawesome lawyers is neither fun nor useful – actually, that’s probably true even if I am writing an obligator-focused piece – and endless arguing over assorted edge-cases and the resulting minutiae even less so.

            It’s also, more importantly, toxic to my actual writing.

            So while I have no problem with a few questions, I’m done here. (And in future similar situations.)

          • I do feel it prudent to make a few final remarks on the matter.

            Didn’t think it needed explanation, but advertent, unamended breach.

            Which is why I asked for a definition in the first place, because the default (heh) definition of “default” here cares nothing about intent, only the question of fact of “Did you fail to fulfill your contractual obligations?”

            Re the main topic at hand: All I will be saying, in closing, is to make a callback to a comment I posted on the topic of “Worldbuilding: Intellectual Integrity and Non-Utopia” ( https://eldraeverse.com/2017/01/16/worldbuilding-intellectual-integrity-and-non-utopia ) a little over a year ago:

            I do want to point out that self-consistency is only half the battle — it’s entirely possible to have a system of ethics where all the internal logic hangs together with admirable consistency, yet that outputs badly wrong conclusions because it is founded on one or more fundamental axioms that turn out to be untrue.

            I am still firmly convinced that, however well-constructed and internally consistent the eldraeic code of ethics may be, it is built on faulty premises that lead to perverse conclusions; that the issue I have raised is not tangential, but fundamental; and that this crooked foundation (in my mind) puts not only their claims of ethical superiority but their legitimacy as a “society of consent” in jeopardy. Nevertheless, since you’ve drawn the line in the sand, I will pursue it no further.
