Don't Unto Others

From an unpublished extranet interview with Sev Tel Andal, seed AI wakener/ethicist:

“Well, I’m a libertist. But you know that, considering where I’m from. Not that I was born to it – I grew up in the League, and moved to the Empire for my research. They don’t let you do my kind of research in the League. But still, signed the Contract, took the pledge, internalized the ethos, whatever you want to call it.”

“Oh, no, it’s very relevant to my work. Look back at where the Principle of Consent came from, and it was written to constrain people made of pride and dynamism and certitude and might all wrapped up in a package so they could live together in something resembling peace, most of the time. Does that description sound like anything I might be working on?”

“But here are three more reasons to think about: Firstly, it’s nice and simple and compact, a one-line principle. Well, it’s a lot more than one line if you have to go into details about what’s a sophont, and what’s a meme, and suchlike, or get into the deducibles, or express the whole thing in formal ethical calculus, but even then, it’s relatively simple and compact. The crudest AIs can understand it. Baseline biosapients can understand it, at least in broad strokes, even when they share little in the way of evolutionary mind-shape or ability to empathetically model us.”

“And more importantly, it’s coherent and not self-contradictory, and doesn’t involve a lot of ad-hoc patches or extra principles dropped in here and there – which is exactly what you want in a universe that’s getting weirder every day. Not only do we keep meeting new species that don’t think in the mindsets we’re used to, but these days we’re making them. No-one blinks an eye at effects preceding causes any more. People are making ten-thousand-year plans that they intend to manage personally. Dimensional transcendence is coming real soon now – although research has stalled ever since the project lead at the Vector managed to create a Klein bottle – and we’ve already got architects drawing up hyperdodecahedral house plans. Pick your strangeness, it’s out there somewhere. Ad-hockery is tolerable – still wrong, obviously, but tolerable – when change is slow and tame. When it’s accelerating and surpassing the hard limits of baseline comprehensibility, ad-hockery trends inexorably towards epsilon from bullshit.”

“Secondly, those qualities mean that it’s expressible in manners that are stable under transformation, and particularly under recursive self-improvement. That’s important to people in my business, or at least the ones who’ve grasped that making what we call a weakly godlike superintelligence actually is functionally equivalent to making God, but it ought to be important to everyone else, too, because mental enhancement isn’t going back in the bottle – and can’t, unless you’re committed to having a society of morons forever, and even then, you aren’t going to be left alone forever. We’ve got gnostic overlays, cikrieths, vasteners, fusions, self-fusions, synnoetics, copyrations, multiplicities, atemporals, quantum-recompilers, ascendates, post-mortalists and prescients, modulars, ecoadapts, hells, we’ve got people trying to transform themselves into sophont memes, and that’s before you even consider what they’re doing to their bodies and cogence cores that’ll reflect on the algorithm. People like us, we’re the steps on the ladder that can still empathize with baselines to one degree or another – and you don’t want our mind children to wander off into ethical realms that suggest that it’s okay to repurpose all those minimally-cognitive protein structures wandering around the place as postsophont mathom-raws.”

“And thirdly, while it’s not the only stabilizable ethical system, it is, in that respect, the only one that unconditionally outlaws coercivity. What baselines do to each other with those loopholes is bad enough, but we live in a universe with brain-eating basilisks, and mnemonesis, and neuroviruses, and YGBM hacks, and certainty-level persuasive communicators, and the ability to make arbitrary modifications to minds, and that routinely writes greater-than-sophont-mind-complexity software. Heat and shit, the big brains can code up indistinguishable sophont-equivalent intelligences. And we’ve all seen the outcomes of those, too, with the wrong ethical loopholes: entire civilizations slaving themselves to Utopia programs; fast-breeding mind-emulation clades democratically repurposing the remaining organics as computronium in the name of maximum net utility, and a little forcible mindpatching’ll fix the upload trauma and erroneous belief systems; software policemen installed inside the minds of the population; the lynch-drones descending on the mob’s chosen outgroup. Even the Vinav Amaranyr incident. And that’s now.”

“Now imagine the ethical right – and even obligation – to do unto others because it’s necessary, or for their own good, or the common good, or for whatever most supremely noble and benevolent reason you can imagine, in the hands of unimaginably intelligent entities that can rewrite minds the way they always ought to have been and before whom the laws of nature are mere suggestions.”

“That’s the future we’re trying to avoid setting precedents for.”
