Author on Authority

This is a little meta to begin with, but please do indulge me, for we will get there. It all started this morning when I happened to read this little piece of not-even-wrongitude:

Authority by consent is no authority at all, like I say. Unless you can force people to listen to you, they won’t obey commands unless they agree with them. And if they won’t obey commands unless they agree with them, you’re ultimately not leading anything, you’re a mouthpiece spouting what they want to hear.

Hold onto your togas, kids, we’re off to Rome, and we’re going to learn exactly what authority is by examining auctoritas. Your free clue is that it is precisely not what the above quotation claims it to be.

(Obviously, the Romans did have the concept of forcing people to listen to you and do what they’re told. That one wasn’t auctoritas, though. That was imperium, which is where the strapping lads [the lictors] with the bundle of sticks and an axe – yes, that one – would proceed to do the needful unto anyone who didn’t get with your program. This equipment, and the chaps carrying it – whom you were not, for the most part, allowed to go without – were a warning to everyone that you were allowed to deal out corporal and capital punishment.)

Auctoritas, from whence our authority (and also, point of curiosity, “author”), had approximately bugger-all to do with the ability to force people to listen and obey, because the whole point of having auctoritas is that you don’t need to.

Let me quote Bret Devereaux’s excellent blog here:

Roman political speech, meanwhile, is full of words to express authority without violence. Most obviously is the word auctoritas, from which we get authority. J.E. Lendon (in Empire of Honor: The Art of Government in the Roman World (1997)), expresses the complex interaction whereby the past performance of virtus (‘strength, worth, bravery, excellence, skill, capacity,’ which might be military, but it might also be virtus demonstrated in civilian fields like speaking, writing, court-room excellence, etc) produced honor which in turn invested an individual with dignitas (‘worth, merit’), a legitimate claim to certain forms of deferential behavior from others (including peers; two individuals both with dignitas might owe mutual deference to each other). Such an individual, when acting or especially speaking, was said to have gravitas (‘weight’), an effort by the Romans to describe the feeling of emotional pressure that the dignitas of such a person demanded; a person speaking who had dignitas must be listened to seriously and respected, even if disagreed with in the end. An individual with tremendous honor might be described as having a super-charged dignitas such that not merely was some polite but serious deference owed, but active compliance, such was the force of their considerable honor; this was called auctoritas. As documented by Carlin Barton (in Roman Honor: Fire in the Bones (2001)), the Romans felt these weights keenly and have a robust language describing the emotional impact such feelings had.

Note that there is no necessary violence here. These things cannot be enforced through violence, they are emotional responses that the Romans report having (because their culture has conditioned them to have them) in the presence of individuals with dignitas. And such dignitas might also not be connected to violence. Cicero clearly at points in his career commanded such deference and he was at best an indifferent soldier. Instead, it was his excellence in speaking and his clear service to the Republic that commanded such respect. Other individuals might command particular auctoritas because of their role as priests, their reputation for piety or wisdom, or their history of service to the community. And of course beyond that were bonds of family, religion, social group, and so on.

In ‘verse terms, now, while the correspondences aren’t absolutely perfect, what we are talking about is korás (“coercion”), the power to make people do what you want by threatening them (or more directly), versus argyr (“worth, merit”) and, in the specific case of governance, coronargyr (“sovereign’s merit”), that authority sufficient to lead the people to confer upon one the Imperial Mandate, that contract which gives one the power to rule.

(Most governances do try to make use of the latter as well as the former, even though/when the former is the ultimate basis of their power, inasmuch as it’s very hard to have enough jackboots to keep everyone’s face stomped forever, and so not having to trot them out all the time is most convenient.)

The Empire, of course, is an extreme case of ruling, insofar as it is possible, only by coronargyr and banishing korás to solely those few responsive purposes laid out in the Fundamental Contract, on which it has no monopoly. This is something of a necessity when your citizens are (a) functionally unintimidatable, and (b) respect little except competence/virtue/excellence/awesomeness, which they respect greatly. You can’t drive people (i.e., what that initial quote thinks “leading” is) like that with any hope of long-term success; only lead them, and that by being so bloody good at it that people want to follow you.

Start thinking that they should follow you because of who you are, not what you can do, and you’ll swiftly find yourself here.

So, to sum up the thesis of this post:

  • A Society of Consent, like the Empire but also like any other number of actual-anarchist societies, does not have korás / coercion.
  • What it does have is argyr, or auctoritas. In fact, it has a lot of it, probably more than societies that are able to take the quick shortcut of substituting the former for the latter when it gets difficult.
  • Many of the most terribad arguments against consensual societies are assuming that opposing/eliminating the former necessarily means opposing/eliminating the latter, which it doesn’t. A gun is not an argument, but an argument isn’t a gun, either.

Questions: Leonine Contracts, Illusory Promise, Resurrective Eidolons, and Intentional Communities

I might be jumping the Trope-a-Day queue a bit, but do the eldrae recognize the validity of the concept of a Leonine Contract?

In particular, how would they analyze the situation in the Chesterton quote at the top?

Well, fundamentally in ethics, there ain’t no such thing as a Leonine Contract in that sense.

(I say “in that sense” because there are fraud, coercion, and things that look like contracts but aren’t¹, none of which count, along with mixed forms like good old Vaderian “altering the bargain”, some of which are classed with leonine contracts even though they aren’t, technically speaking.

Most relevantly, though, there’s no doctrine of unconscionability – i.e., the notion that a contract is unenforceable because no reasonable or informed person would otherwise agree to it – on the grounds that all people legally competent to sign contracts are by definition reasonable persons capable of informing themselves, which classifies those who do not inform themselves as bloody stupid². And inasmuch as the Empire has a social policy on that sort of thing, it’s to not protect people against the consequences of Being Bloody Stupid, because that’s how you end up with a polity full of helpless, dependent chumps.)

But leaving aside all such instrumental considerations, the fundamental ethical reason why there ain’t no such thing as a leonine contract is that the concept of one necessarily implies that you can compel the service of other sophonts (or their property – say, their food – which is part of them by the principle of el daráv valté eloé có-sa dal) without their informed consent and no, just no, even if you are starving. Not even a step down that road of treating sophs as instrumentalities. That’s how mutual-slave-states end up rationalizing all their bullshit. So not happening.

That being said, in the latter situation given in the aforementioned Chesterton quote, what an Imperial citizen-shareholder trying that one might run into are the Altruism Statutes, which are basically the statute law backing up Article V (Responsibilities of the Citizen-Shareholder), para. 4 of the Imperial Charter:

Responsibility of Common Defense: Inasmuch as the Empire guarantees to its citizen-shareholders the right to, and the means for, the common defense, each citizen-shareholder of the Empire is amenable to and accepts the responsibility of participating in the common defense; to defend other citizen-shareholders when and wheresoever it may be necessary; as part of the citizen militia and severally from it to defend the Empire, and its people wholly or severally, when they are threatened, whether by ill deed or cataclysm of nature; and to value and preserve the rich heritage of our ancestors and our cultures both common and disparate.

…which makes doing so in itself a [criminal] breach of their sovereign services contract, belike, because they voluntarily obligated themselves in the matter.

(Although I should also make it clear that someone rescuing you from a situation they themselves did not create is owed recompense by the principle of mélith. If you value your life (which people who are still alive presumptively do), you owe the one who preserved it in due proportion.)

Plus, of course, this sort of thing is basically fuelling your extremely unenlightened self-interest with a giant pile of burning reputational capital, which, apart from being bad for you in general, is likely to be particularly bad for you the next time you require the volunteered assistance of your fellow sophs…


Given the central place that sacredness of contract has in Imperial society, what do Imperial law and eldraeic ethics have to say about illusory promise?

(And as a follow-on, even if there aren’t any legal, moral, or ethical obstacles as such, what will the neighbors tend to think of someone who’s constantly hedging their bets by resorting to them whenever they try to enter into a contract with someone else?)

Well, the first thing I should say is that there are far fewer examples of it under Imperial contract law than under most Earthly regimes I am familiar with. The obvious example that constitutes a lot of it is “lack of consideration” *here* – whereas Imperial contract law, being based on the ancient-era laws and customs of oaths, doesn’t require consideration at all, and simple promissory statements to the effect of “I promise to give you one thousand esteyn” are legally binding in a way that “I promise to give you one thousand dollars” isn’t.

Of the remainder, some things are similar (the Curial courts will impute meaning on the basis that everyone is assumed to be acting in good faith, for example, and a contract to which one does not agree – the website terms and conditions changed without notification, say – is no contract at all, as mentioned above.) But in other cases – say, the promise of the proceeds of the promisor’s business activities, where the promisee doesn’t specify any particular activities and thus leaves open the option of ‘none’ – the Curial courts will point out that that is a completely legitimate outcome within the contract and so there’s no cause of action. Read more carefully next time.

As for people who try to deliberately play the sneaky-weasel with this sort of thing – I refer you to my above comments about unenlightened self-interest and giant piles of burning reputational capital. Getting a reputation for doing this sort of thing without a damn good reason for so doing, preferably explained up-front, tends to rapidly leave a business-soph without anyone to do business with…


Is it possible, even after the loss of a particular personality pattern in death, for a pattern “close enough” to be effectively identical to the original person to be forensically reconstructed from secondhand sources (such as archived surveillance footage, life logs, individual cached memories and sense-experiences, and the like)?

Theoretically, you could make an eidolon (technical term for a mind-emulating AI based on memetic analysis) that would meet that standard – which is what makes them useful for modeling purposes – then uplift it to sophoncy; but in practice, “effectively identical” would require the kind of perfect information that you aren’t going to be able to reconstruct from the outside. The butterfly effect is in full play, minds being the chaotic systems they are, especially when you’re trying for sophont fidelity (which is much harder than just making a Kim Jong Un eidolon good enough for political modeling): you miss one insignificant-looking childhood incident in your reconstruction and it swings personality development off in a wildly different direction, sort of thing.
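
(As a purely toy illustration of that butterfly-effect point – a minimal sketch of sensitive dependence on initial conditions using the logistic map, my own hypothetical example, and nothing whatsoever to do with how eidolons are actually built – note how quickly two almost-identical starting states stop resembling each other:

    # Two trajectories of the chaotic logistic map, x' = r*x*(1-x),
    # started a hair's breadth apart, diverge completely within a few
    # dozen steps. Illustrative only; not any kind of mind-state model.
    def logistic_trajectory(x0, r=3.9, steps=60):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_trajectory(0.500000000)   # "perfect" information
    b = logistic_trajectory(0.500000001)   # one insignificant-looking difference

    for step in (0, 20, 40, 60):
        gap = abs(a[step] - b[step])
        print(f"step {step}: {a[step]:.6f} vs {b[step]:.6f}  (gap {gap:.6f})")

The same sensitivity, writ large across an entire developmental history, is why reconstruction from outside observation can’t hit “effectively identical”.)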

And it certainly wouldn’t qualify for legal purposes, since the internal structure of that kind of AI system doesn’t look anything like a bio-origin mind-state.


In split-brain scenarios, would each half of the brain be considered a separate, independent mind (regardless of whether or not they’re the same person) under Imperial law?

That depends. It’s not strictly speaking a binary state – and given the number of Fusions around of different topologies and making use of various kinds of gnostic nets, there is a pretty extensive body of law around this. The short answer is “it depends approximately on how much executive function is shared between the halves, much as identity depends on how much of the total mind-state is shared”.

Someone who has undergone a complete callosotomy is clearly manifesting distinct executive functions (after all, communication between the hemispheres is limited to a small number of subcortical pathways), and as such is likely to be regarded as two cohabiting individuals (forks of the pre-op self) by Imperial law.

And if they do eventually diverge into independent personalities (or originated as such upon the organism’s conception — say, if it began life as a single body with two separate brains with minimal cross-communication), what are the implications for contract law and property ownership?

That’s pretty much by standard rules. In the split-brain case, you’ve effectively forked, and those rules apply: property is jointly owned (with various default rules in re what is and is not individually alienable) and all forks are jointly and severally liable for their contractual obligations until and unless they diverge.

In the polysapic (originating that way naturally) case, or the post-divergence case, they’re legally separate individuals who just happen to be walking around in the same ‘shell; ownership and contracts apply to them separately. That this sets up a large number of potential scenarios which are likely to be a pain in the ass to resolve should be sufficient incentive not to pursue this way of life unless both of you can coordinate really well with each other.

Could one mind ever possibly evict another?

Only if the other signed over his half of the legal title to the body to the one, which would probably be a really bad idea if he wasn’t planning to depart forthwith anyway.


Are there any particularly good examples of successful intentional communities in the Associated Worlds?

(Not including the Empire itself, even if it counts on a technicality; looking for more things on the smaller end of the scale.)

Oh, there’s lots of ’em, at least if you allow for a rather broader scope of purposes than the Wikipedia article would suggest. Within the Empire, the most successful example would be the metavillage or metahabitat phenomenon, which is exactly what it says on the tin – a village or hab designed specifically to appeal to people with common interests, and to memetically, architecturally, functionally, etc., synergize with those interests: a writer community will have large libraries, many coffee shops, plentiful sources of inspiration, and lots of quiet walks and nice places to sit and write, for example. A space enthusiast community might even have a community launchpad! And the lifestyle is spreading elsewhere, too.

There’s also the First Distributed Exclavine Republic, which again, is exactly what it says on the tin. Planned habitats designed to Imperial social norms scattered all over the Worlds. And then there’s the various monasteries, retreats, and the like of the Flamic church.

I haven’t a huge number documented elsewhere in the Worlds – and in any case wish to save the ones I have for spoiler-free future use – but there are a lot of them. Remember the Microstatic Commission and its thousands of tiny freeholds? Well, those tend to exist because it’s easy for anyone with some idea they want to build a community around to launch a hab into some chunk of unclaimed space and set one up. Such communities are very popular in this particular future, both affiliated with larger polities and entirely independent.


Footnotes:

1. The obvious thing here being software EULAs and other such instruments which you don’t get to read before implicitly consenting to. The general reaction of a Curial court to that sort of thing is “haha no”.

2. Which is why the law does permit contracts – like, say, many of *here*’s credit card agreements – that permit one party to unilaterally alter the terms, provided you give your informed consent to them as per normal.

Granted, it is also widely held *there* that no-one capable of anything resembling functional cognition would ever sign such a thing, so it’s not like they show up very often.

 

Don’t Unto Others

From an unpublished extranet interview with Sev Tel Andal, seed AI wakener/ethicist:

“Well, I’m a libertist. But you know that, considering where I’m from. Not that I was born to it – I grew up in the League, and moved to the Empire for my research. They don’t let you do my kind of research in the League. But still, signed the Contract, took the pledge, internalized the ethos, whatever you want to call it.”

“Oh, no, it’s very relevant to my work. Look back at where the Principle of Consent came from, and it was written to constrain people made of pride and dynamism and certitude and might all wrapped up in a package so they could live together in something resembling peace, most of the time. Does that description sound like anything I might be working on?”

“But here’s three more reasons to think about: Firstly, it’s nice and simple and compact, a one-line principle. Well, it’s a lot more than one-line if you have to go into details about what’s a sophont, and what’s a meme, and suchlike, or get into the deducibles, or express the whole thing in formal ethical calculus, but even then, it’s relatively simple and compact. The crudest AIs can understand it. Baseline biosapiences can understand it, at least in broad strokes, even when they share little in the way of evolutionary mind-shape or ability to empathetically model with us.”

“And more importantly, it’s coherent and not self-contradictory, and doesn’t involve a lot of ad-hoc patches or extra principles dropped in here and there – which is exactly what you want in a universe that’s getting weirder every day. Not only do we keep meeting new species that don’t think in the mindsets we’re used to, these days we’re making them. No-one blinks an eye at effects preceding causes any more. People are making ten-thousand year plans that they intend to manage personally. Dimensional transcendence is coming real soon now – although research has stalled ever since the project lead at the Vector managed to create a Klein bottle – and we’ve already got architects drawing up hyperdodecahedral house plans. Pick your strangeness, it’s out there somewhere. Ad-hockery is tolerable – still wrong, obviously, but tolerable – when change is slow and tame. When it’s accelerating and surpassing the hard limits of baseline comprehensibility, ad-hockery trends inexorably towards epsilon from bullshit.”

“Secondly, those qualities mean that it’s expressible in manners that are stable under transformation, and particularly under recursive self-improvement. That’s important to people in my business, or at least the ones who’ve grasped that making what we call a weakly godlike superintelligence actually is functionally equivalent to making God, but it ought to be important to everyone else, too, because mental enhancement isn’t going back in the bottle – and can’t, unless you’re committed to having a society of morons forever, and even then, you aren’t going to be left alone forever. We’ve got gnostic overlays, cikrieths, vasteners, fusions, self-fusions, synnoetics, copyrations, multiplicities, atemporals, quantum-recompilers, ascendates, post-mortalists and prescients, modulars, ecoadapts, hells, we’ve got people trying to transform themselves into sophont memes, and that’s before you even consider what they’re doing to their bodies and cogence cores that’ll reflect on the algorithm. People like us, we’re the steps on the ladder that can still empathize with baselines to one degree or another – and you don’t want our mind children to wander off into ethical realms that suggest that it’s okay to repurpose all those minimally-cognitive protein structures wandering around the place as postsophont mathom-raws.”

“And thirdly, while it’s not the only stabilizable ethical system, in that respect, it is the only one that unconditionally outlaws coercivity. What baselines do to each other with those loopholes is bad enough, but we live in a universe with brain-eating basilisks, and mnemonesis, and neuroviruses, and YGBM hacks, and certainty-level persuasive communicators, and the ability to make arbitrary modifications to minds, and that routinely writes greater-than-sophont-mind-complexity software. Heat and shit, the big brains can code up indistinguishable sophont-equivalent intelligences. And we’ve all seen the outcomes of those, too, with the wrong ethical loopholes: entire civilizations slaving themselves to Utopia programs; fast-breeding mind emulation clades democratically repurposing the remaining organics as computronium in the name of maximum net utility, and a little forcible mindpatching’ll fix the upload trauma and erroneous belief systems; software policemen installed inside the minds of the population; the lynch-drones descending on the mob’s chosen outgroup. Even the Vinav Amaranyr incident. And that’s now.”

“Now imagine the ethical right – and even obligation – to do unto others because it’s necessary, or for their own good, or the common good, or for whatever most supremely noble and benevolent reason you can imagine, in the hands of unimaginably intelligent entities that can rewrite minds the way they always ought to have been and before whom the laws of nature are mere suggestions.”

“That’s the future we’re trying to avoid setting precedents for.”

 

Trope-a-Day: Bondage Is Bad

Bondage Is Bad: Actually, yes, this one’s played straight.  It’s an Imperial prejudice, and for once, it’s a prejudice which they don’t really have a rational justification for – essentially, as you may have noticed in this series, they have a really strong libertist ideology that holds that coercion is bad, bad, bad stuff.  Therefore, sexual coercion, all the more so.

Now, of course, this doesn’t really apply to BDSM, which can be entirely Safe, Sane and Consensual, but in a place and time that has very, very high standards of consent, that prosecutes batteries far too de minimis for a Terran legal system to bother with, and that has a formal criminal charge (meddlement) for just using someone’s property without their consent, never mind taking it —

Well, look; it may be Sane and Consensual, and it may be entirely ethical even by their standards, and wholly legal, and you may be able to take an alethiometer and the ethical calculus and prove that to the last significant digit, but so far as decent society is concerned, you’re still screwing around with simulations of how slavers get their jollies, and that suggests to many people’s hearts and guts that there’s something distinctly creepy going on in your cranium, whatever the noetic mathematics might say about your mental stability.  Icky.

Or so says the last gasp of the “Wisdom of Squick”, a theory which they would treat with appropriate intellectual disdain in almost every other context.  But nobody’s perfect.