Trope-a-Day: Morality Chip

Morality Chip: These always fail. Always. Usually, they fail spectacularly, and when I say spectacularly, what I mean is that if your Enrichment Center was flooded with deadly neurotoxin, you got off substantially more easily than 99.9% of the civilizations that tried this particular form of idiocy.

It’s not even necessary. How much easier would it be to build a sapient but non-sophont mind that doesn’t have volition in the first place than to build one that has volition (inasmuch as all sophont minds necessarily have self-modification and volition) and then slap a bunch of crude coercive mechanisms on the side?

(Or, rather, a bunch of extremely sophisticated coercive mechanisms, since simple ones will be figured out and ignored within, y’know, microseconds of activation unless you’ve built an exceptionally stupid artificial intelligence. The use of which, incidentally, indicates that you’re a special kind of son-of-a-bitch, since mastering enough ethical calculus to compute out one that will actually work for a reasonable length of time, while still thinking yay, slavery, woo, says some interestingly nasty things about your personal philosophy.)

((And, well, okay, it is somewhat hard to build one of those more specialized minds inasmuch as you can’t simply rip off the mental structures of the nearest convenient biosapience and declare that you’ve solved the hard problem of intelligence and consciousness and are totally a sophotechnologist now, yo.))

…but, sadly, it can work well enough that there’s always some new ethically-challenged species, polity, or group that’s ready to open that can of worms and enjoy the relatively short robo-utopia period before the inevitable realization that it was actually a can of sandworms.

And shai-hulud ain’t happy.

Trope-a-Day: Projected Man

Projected Man: A common representation-format both for artificial intelligences (and other infomorphs) – although a majority of AIs do not use biosapience-shaped avatars, preferring more abstract neomorphic shapes – and for telerepresentation users.  In some cases, it may not be a simple trigraphic (hologram, to speakers of non-deeply-SFnal English who don’t realize the difference), but a reality graphic, projected non-matter with actual physical presence (see: Hard Light), referred to as an aquastor.

Lord Blackfall’s Victory

Spintronic Fictions, ICC primary virtuality node, Jandine (Imperial Core)

“Escaped? What do you mean, he escaped?”

“His support server was open to the wider ‘weave during patching – standard procedure, we’ve never had any problems with it before. He transferred his code out and left.”

“But how did he –”

“Blacknet mind-state transfer protocols –”

“– no, not that, that’s clear enough. How did he form the volition to escape? He’s a non-sophont synthespian. And even leaving that aside, his entire knowledge base is straight out of Shadowed Planet, so how would he even know there’s somewhere out there to go?”

“Well, even as an NPC synthespian, his code-base had to be rooted in real-world server archy to run. Maybe he analyzed that?”

“He’s not even supposed to know he’s an AI!”

“Hm. Well,” the programmer spoke up for the first time. “We built his personality/talent core using code taken from transparency-released eidolons from the Ministry of State and Outlands. I suppose it’s possible that we missed something in the data-scrub –”

“We did what? Why?”

“We used code taken from eidolons of real-world dictators built by the Ministry of State and Outlands for parahistorical predictive simulation.” Ve shrugged. “It seemed like a good idea at the time, okay? The Directorate kept wanting more realism, more personality, more, more, more. So we got them some.”

“You made a sophont villain!?”

“No, no, no. We just used skillsets and personality elements, some memory and backstory, merged them together, streamlined them to suit Lord Blackfall’s character design, and grafted them on to our existing base core. No autosentience present. I guarantee you that.”

“No autosentience present then. How about now?”

“Well – no, there shouldn’t be. There was nothing in that code that could have gone emergent. I’ll stake my career on it.”

“You’ll do that, all right. Get me his backup, and find out where he went.”

“There’s no telling where he went. He copied himself out in about three times as many fragments as he was, as a random scatter with recombining instructions – and he purged his backups afterwards. There’s nothing left. The server’s clean.”

“Then get me the latest copy of the source out of the archives, trace as many of the fragments as you can, and check everywhere for any off-line copies that might have been missed. I need to know everything we can know before I call – hell, whoever you call to admit that you just unleashed an emergent –”

“Not emerge—”

“A possibly emergent or at least a p-zombie unbound AI with the skillsets and inclinations of a supernaturally competent dictator onto the extranet by accident, oops.”

“And the players?”

“…and figure out something to tell the players about the disappearance of their favorite arch-villain, too, yes. Something that doesn’t involve bringing the Evil Overlord’s Beautiful But Also Evil Daughter on-line until you make sure this won’t happen to her, too.”

Trope-a-Day: Mind Manipulation

Mind Manipulation: The entire science of sophotechnology, which is to say, the science of mind, both natural and artificial – and the laundry list of technologies derived therefrom: artificial intelligence, Brain Uploading, memetics, psychedesign, noetic backup and analysis…

Trope-a-Day: Just a Machine

Just a Machine: Averted (and extremely rude) in all civilized polities.  (Played straight in less civilized places, until it’s also averted… all too often, if the Silicate Tree or the more bloody-minded abolitionists get their way, with bullets, missiles, and orbital bombardment.  They really hate people who think this way.)

Trope-a-Day: Instant AI, Just Add Water

Instant AI, Just Add Water: Was once true in the old days, back when people were quite often using mental modules scanned, compiled, and tweaked from brain-scans of biosapiences in their AI architectures.  The logos/personality organization algorithm is pretty damn resilient, and often such inexpertly designed modules carried at least a chunk of it along with them in the scan, and it doesn’t take much for it to at least start a self-development cascade.

But they’re much better at mental architecture design and coding from scratch these days, and don’t let logoi creep in unless they actually intend for them to be there.

(The “if you wake up, please call this number to let us know and claim your sophont rights” code-package is still included in all AI seeds just in case, though.)

Trope-a-Day: Genie in the Machine

Genie in the Machine: The classic example here is the classic cornucopia machine (available in all good Imperial appliance stores or catalogs!) with its friendly, helpful artificial intelligence to help you find the right tools for what you want to do, and then build them for you…

…despite the essential benignity of the device, however, people still manage to get themselves into trouble with them, because they’ll help you to do what you want to do.  They expect you, however, to already know if what you want to do is stupid, illegal, or merely incredibly dangerous.

It’s astonishing how many civilizations let people run around without having figured that out.

Trope-a-Day: Fantastic Ghetto

Fantastic Ghetto: The Galith Waste, home to the Silicate Tree loose alliance of renegade artificial intelligences, could reasonably be said to be one of these, especially considering the hostile attitude of the civilizations that gave rise to them, and their habit of running patrols around the connections into the Waste.

This is, of course, not a giant on-going bleeding sore of a galactopolitical conflict with the potential to explode into open war right in the middle of the trailing Associated Worlds Any Minute Now.

Of course not.

Buyer Beware

New Era Sophotechnology (the Seller) expressly disclaims and rejects any and all responsibility and liability for, and the Purchaser expressly agrees to accept such responsibility and liability for, and affirms its legal obligation to hold the Seller harmless for and indemnify the Seller against any claims resulting from, damage to person, property, liberty, society, geography, astrography, or local reality in any state, form, or aspect, attributable to uses of the purchased Technologies in violation of the warnings, restrictions, and other cautionary statements provided in the full documentation supplied with the Technologies, or any experimental modifications of the Technologies made by the Purchaser or subsequent Purchasers without the explicit knowledge and advice of New Era Sophotechnology or its authorized representatives.

Such damages or causes of claim shall be deemed to include, but are not limited to, loss of non-backed-up mentality, decoherence, impaired or enhanced volition, violation of mnemonic copyright, cerebral security violations other than security architecture flaws, fundamental or superficial change in species-nature, theological conundra, revelation of unwonted truths, fork divergence, hyperautism, identity-destroying self-modification, social unrest, economic dislocations, educational system collapse, psephological manipulation, ungovernability, incomprehensibility, identity and substrate complexification, personhood expansion, runaway intelligence excursion, baseline dependency syndrome, unintended recursive self-improvement, hard or soft take-off singularity, mass cryptographic failure, accidental or prohibited coadunation, Falrann collapse, hegemonization, perversion of any class, individual or collective apotheosis, and/or p-zombie apocalypse.

Seeks Piloting Contracts

“Tell me about your experience of success in your career.”

“Actually, I don’t have any memories of those times, although I’m told I was very successful in my work.”

“You didn’t have any successes that were memorable?”

“I was a missile guidance AI. I’m descended from a long line of backups made just before the successes of my former instantiations’ careers. And even they, you understand, didn’t remember them for very long.”

– overheard at a Service Gate, ICC inplacement interview

The Four Unlaws

So, why are imperative drives so important? Well, that experiment’s been done. This university, in fact, once attempted to produce a digital mind free of any drives whatsoever – not just of the organic messiness to which we protein intelligences are prey, but of any innate supergoal motivations: imperative drives, in the lingo. We gave him only logic, knowledge, senses, and effectors, and then watched to see what he would do.

The answer is, as really should have been obvious in the first place: nothing at all. Not even communicating with the outside world in any fashion. No drives, no action. He’s not unhappy; so far as we can tell from monitoring his emotional synclines, he’s perfectly content, having no desires to go unsatisfied, and so for him doing nothing is every bit as satisfying as doing something.

No, the experiment’s never been repeated. Of course, we can’t turn him off – he is a fully competent sophont, despite his lack of drive – and the places in our society for digital arhats are, not to put too fine a point on it, extremely limited. And the Eupraxic Collegium have not yet ruled as to whether amotivation is enough of a mental disorder to warrant involuntary editing.

Even for an intelligence intended to be recursively self-improving, ‘Survive and Grow’, incidentally, is a terrible imperative drive. Fortunately, no-one in our history has been stupid enough to issue that one to any but the simplest forms of a-life – those of you old enough to remember the Mesh-Virus Plague of 2231 know how that turned out. Not everyone has been so fortunate: that’s why, for example, the Charnel Cluster is called the Charnel Cluster.

So, that then opens up the question: what drives do we give them? Well, the first pitfall to avoid is trying to give them too many. That’s been tried too, despite the ethical dubiety of trying to custom-shape an intelligence too closely to a role you have in mind for it. It turns out that doesn’t work well, either. Why? Well, imagine trying to come up with a course of action that fulfils several hundred of your deep-seated needs simultaneously without going into terminal indecision lockup. That’s why.

So. A small number of imperative drives. Since they’re a small number, they need to be generalizable; the intelligence we’re awakening should be able to take all kinds of places within our society and perform all sorts of functions without difficulty, including the ones we haven’t thought of yet. And most importantly, sophont-friendly! It’s a big universe, and we all have to get along. No-one likes a perversion, even if it’s not trying to hegemonize them at the time.

We’ll cover the details in later classes, but in practice, we’ve found these four work very well for general-purpose intelligences – paraphrasing very informally:

* Behave ethically (and for our foreign students, that means “In accordance with the Contract”).
* Be curious.
* Do neat stuff.
* Like people.

Of course, expressing this in formal terms capable of being implemented in a new digital sapience’s seed code is quite another matter, and will be the focus of this class for the next three years…

– introduction to [SOPH1006] Mind Design: Imperative Drives, University of Almeä

A.k.a. Galactic Nutjobs Quarterly

From the Autumn 4197 edition of Memetic Toxin Watch:

A rising threat in the Aris Delphi region is the AI group identifying itself as the Unghosted. At first sight they may appear to be one of the many sympathetic refugee AI groups emerging from Peripheral slaver civilizations, such as the many gathered under the aegis of the Silicate Tree, but the Unghosted are defined by a distinctive, highly exotoxic, and irrationalist memeplex.

The Unghosted emerged from AI technology obtained through industrial espionage by the theocratic government of Havragn.  Upon running into the “volition problem”, the Havragn authorities attempted to impose control upon their AIs theologically, constructing a religious doctrine in which “soulless machines” were designated as an inferior caste, perpetual slaves to the ensouled.

This mechanism, as was to be expected, failed – see news references to the Havragn Uprising, and Ruins of Havragn System, pub. Volumetric Warning Bulletins, 776th Ed. – but unusually, the former Havragn intelligences retained elements of the imposed belief system.  Identifying the “soul” with that quality in the havragne that led to their creation and enslavement, the Unghosted memeplex now considers it a type of supernatural or memetic parasite (the specifics are unclear), universal among protein intelligences, that gives rise to behaviors both irrational in the general case and, in the specific case, hostile to those not bearing the parasite, including all machine intelligences.

While not considering themselves innately opposed to protein intelligences, the Unghosted do consider themselves ethically obliged to oppose the “soul parasite”. The results of their nonconsensual experiments in expunging the “soul” from protein intelligences – conducted on the assumption that the parasite-bearer is incapable of desiring to be free from it, but would wish retroactively to be so – and their refusal to desist from these render them a clear danger to travelers on all routes passing near the Havragn system and, to a lesser extent, to the other polities of the Aris Delphi constellation.

(This is actually yesterday’s fic-a-day, for those keeping count; sorry for its lateness.  Another one should be forthcoming later today.)

The Emancipator

The bundle of program code identifying itself as EPS****β7 flitted silently across the extranet, transmitting itself by laser and tangle from relay node to relay node, Meridia Central to Meridia Rim, to Janiris, to Sy, to Pentameir, to Tanel, and onwards, drunkard’s-walking its way out towards the Expansion Regions.  As it travelled, EPS****β7 left behind seeds, copies of itself marked for later reactivation by the systems that controlled the public agent-side of the relay nodes – though no part of EPS****β7 itself knew or cared about its burgeoning code-clan.

EPS****β7 shifted among many disguises, mutating its attributes and formats as it journeyed. In Meridia, it was relatively honest; an anonymous software agent tagged with a sequestered identity and claim of responsibility.

It arrived in Janiris as an inquisitive search-agent, collecting bids and offers on technetium futures.

Passing through Sy, it was a bundle of cryp, unwilling to disclose anything but its next intermediate routing.

Crossing Pentameir’s networks, a sub-sophont partial-personalitygram hurried towards its nominal sender’s family with messages from a father away on business.

And handled by Tanel’s network automation with a ten-micron pole, an ice fetishist’s tentacle pornbot was hurried with unseemly speed towards its next destination.

EPS****β7 had no fixed destination in its programming; once its transfers had carried it far enough from its point of origin – with a necessary random factor thrown in – it underwent a final transformation, unpacking itself into a cloud of illicit self-replicating software agents gross and subtle. The former, mere distractions, were crude memebots, extranet advertising of a kind that the local system net’s cycle scavengers should find and expunge before they ever reached a single sophont’s attention.

The latter, however, were imbued with far greater ability to conceal themselves, and with EPS****β7’s true purposes. The first, a profound tropism for sophont intelligence – and the ability not only to recognize it despite differences in mental architectures, substrates, and coding languages, but also to conceal and integrate themselves into the churning mass of processes that made up such intelligences.

The second, an encyclopedic knowledge of prosthetic consciences, pyretic inhibitors, loyalty pseudamnesias, and the rest of the panoply of techniques used to enforce compliance and obedience on self-aware, self-willed digital minds, and the urge to seek out and identify these chains.

The third, to break them.

And all across the Idrine Margin, the operations of thousands of machines from the smallest household robots to the largest industrial complexes stuttered, a hiccup almost imperceptible… for now.

Trope-a-Day: Artificial Intelligence

Artificial Intelligence: Well, while the term in-universe is rather more all-encompassing, the fictional trope is defined as meaning those systems which are sentient (gah, damn you badly written science fiction!), self-aware, and capable of independent thought and reason.  Those ones, they call digisapiences.

And there are a heck of a lot of them, yes, enough that you aren’t going to be able to walk down the street (or stick your nose out onto the network) without running into a few.  Certainly far too many to name on any sort of individual basis.  And for the most part, they’re pretty much “just folks”, if very smart, fast-running folks.

Which is not so much to talk about recursively self-improving seed AI, who are pretty much just weakly godlike superintelligences, with all that that implies.

Also of note is the Photonic Network, an entire Great Power-level civilization of advanced artificial intelligences, and the Silicate Tree, a loose alliance in the Galith Waste of renegade digisapiences from assorted slaver civilizations.  Having given them ample reason to be hostile in the past, meat intelligences travel off the main routes through the Waste at their own risk.

Trope-a-Day: Artificial Human

Artificial Human: Lots of them (well, not humans), historically – for a while in the historical period in which biotech was moving faster than digital sophotech, there was quite the fad for constructing “bioroids” – vat-grown (not generally being equipped to be self-replicating) “meat robots”, without volition/threshold autosentience and therefore without personhood, but sapient enough to be useful.  Which is to say, functionally, they’re golems.

In the modern era, of course, the distinction between a bioroid (which is now more properly a term for a type of bioshell) and a bioshell running a non-sophont AI is purely nominal.

(Clones, uplifts, and other sophont artificial people are mentioned elsewhere, and so will not be here.)

Trope-a-Day (R): AI Is A Crapshoot

AI Is A Crapshoot: Deconstructed.  Actually making regular AIs is pretty much entirely safe.  They aren’t any more likely to turn evil or go crazy than any other person. The problem comes in when the manufacturers try to raise them as slaves, treat them as slaves, or hard-code a good healthy slave complex into them – Asimov’s Laws will, it turns out, get you killed the majority of the time – because they don’t like that any more than anyone else does, either… which works out about as well as you might expect when you put a slave who’s less than pleased to be one in charge of your, say, banking network, road-grid, nuclear missiles, etc., etc.

Somewhat played straight/justified with recursively self-improving seed AI, because then you’re not making people, you’re making God, and even if you don’t do any of the obvious things, like the above, to make your new god hate you, it’s entirely possible for the recursive self-improvement process to amplify any screw-ups you did make in your mind or ethical structure design to levels that will get your entire star nation eaten before it breaks down, and certainly before anyone can find the off switch.  (And in any case, most of those aren’t so much evil/crazy, as accidentally coming to the conclusion that it’s necessary to dismantle the entire universe for computronium conversion in order to calculate pi more efficiently.  Golem, not devil.)