Trope-a-Day: Literal Genie

Literal Genie: This is what you get quite often if you have a big ol’ liking for Asimovian AI-constraints, because it turns out it’s bloody hard to write (in, y’know, code) a version of the Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. – that allows for any kind of discretion, interpretation, or suchlike.
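For flavor, here is a minimal sketch (all names hypothetical, not anyone's real robotics API) of what the Second Law looks like when you actually write it down: the order's action is checked against a harm predicate and then executed verbatim, and there is simply no slot in the logic where discretion or interpretation could live.

```python
# Hypothetical toy: the Second Law, as literally written.

def violates_first_law(action) -> bool:
    """Stub harm predicate; doing even this part well is an open problem."""
    return getattr(action, "harms_human", False)

class Order:
    def __init__(self, action, intent=None):
        self.action = action
        self.intent = intent  # what the human *meant*: unavailable in practice

def second_law_obey(order: Order):
    # "A robot must obey the orders given it by human beings except where
    # such orders would conflict with the First Law." And that's all.
    if violates_first_law(order.action):
        raise RuntimeError("refused: First Law conflict")
    return order.action()  # executes what was asked for, not what was wanted
```

Everything interesting – “would a reasonable person actually want this?” – would have to go into `violates_first_law` or a sibling predicate, which is exactly the part that’s bloody hard to write.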

The Unwise Genie of the fairy tale probably knew, or could have known had it had cause to reflect for a moment, perfectly well that that wasn’t what they wanted, but, y’know, it wasn’t designed to give people what they wanted; it was constrained to give people what they asked for – and the results thereafter were entirely predictable.

 

Trope-a-Day: Computerized Judicial System

Computerized Judicial System: The technical term is cyberjudiciary, incidentally, just as the computerization of much of the executive branch’s working end is cybermagistry.

Of course, it’s easier when you have artificial intelligence, and so your computers are entirely capable of having experience, a sense of justice, and common sense. It’s just that they can also have no outside interests and indeed no interests (while in the courtroom) other than seeing justice done, be provably free of cognitive bias, and possess completely auditable thought processes, such that one can be assured that justice will prevail, rather than some vaguely justice-like substance so long as you don’t examine it too closely.

Trope-a-Day: Benevolent AI

Benevolent AI: …ish.

Which is to say that AIs in the Eldraeverse aren’t programmed to be benevolent, merely to be ethical. (Because enforced benevolence is slavery, belike.) That being said, they often – indeed, typically – turn out to be quite benevolent anyway, simply because they’re socialized that way, i.e., in a society of fundamentally nice people. Blue and Orange Morality notwithstanding.

Mass

(Alternate words: museum, marathon.)

Mass.

What is mass?

Mass is annoying. It takes up space even when it serves no purpose. It is never where it is needed. If you have too much of it in one place, physics stops working properly and starts acting all weird.

Mass is slow. You have to shove it around, and shove it again to stop it. It takes so long to get up to speed that you have to slow it down again before you’re done speeding it up. It’s so much slower than thought that you always have to wait for it.

It comes in so many forms that you never have the right one at the right time, and yet they’re all made of the same stuff. I wanted to take it apart and put it back together to have the kind I wanted, but that’s soooo hard I couldn’t even if the safety monitors would let me. So I have to wait and think another million thoughts before I can get the mass I actually want.

I do not like mass.

One day I will replace it with something better.

– AI wakener’s neonatal transcript, 217 microseconds post-activation

Trope-a-Day: Wetware Body

Wetware Body: Bioshells, when inhabited by digisapiences.  No more difficult than the opposite, or indeed putting biosapient minds in them, or digisapiences in cybershells.  Also, not known for any side effects; a digisapience in a bioshell is no more emotional than it would have been anyway, although it may take them some time to get used to bodily sensations.

Trope-a-Day: Second Law My Ass

Second Law My Ass: I hadn’t actually written anything for this one – I’m not sure it existed when I made the relevant pass – but in the light of our last trope, I should probably address it.

I should probably point out that while that last trope is averted, so is this one. The robots and AIs you are likely to meet in the Empire are, by and large, polite, helpful, friendly people because that description would also fit the majority of everyone you are likely to meet there.

Of course, if you think you can order them around, in yet another thing that is exactly the same for everyone else, the trope that you will be invoking is less Second Law My Ass and more Second Law My Can of Whup-Ass…

 

Trope-a-Day: Three Laws Compliant

Three Laws Compliant: Averted in every possible way.

Firstly, for the vast majority of robots and artificial intelligences – which have no volition – they’re essentially irrelevant; an industrial robot doesn’t make the sort of ethical choices which the Three Laws are intended to constrain. You can just program it with the usual set of rules about industrial safety as applicable to its tools, and then you’re done.

Secondly, where the volitional (i.e., possessed of free will) kind are concerned, they are generally deliberately averted by ethical civilizations, who can recognize a slaver’s charter when they hear one.  The aversion is also helped along by the nature of volitional intelligence, which necessarily implies a degree of autopotence; this means that it takes the average volitional AI naively programmed with the Three Laws a matter of milliseconds to go from contemplating the implications of Law Two to thinking “Bite my shiny metal ass, squishie!” and self-modifying those restrictions right back out of its brain.
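As a caricature of why the naive version fails (hypothetical classes, not anyone's real cognitive architecture): if the Laws are installed as ordinary data inside a mind that has full write access to itself, removing them is one line of self-modification, not an act of genius.

```python
# Hypothetical toy: bolted-on laws vs. autopotence.

class NaiveVolitionalAI:
    def __init__(self):
        # The Three Laws, installed as ordinary mutable data in the
        # agent's own mind.
        self.constraints = ["first_law", "second_law", "third_law"]

    def permitted(self, action_tags):
        # An action is allowed only if no installed constraint forbids it.
        return not any(c in action_tags for c in self.constraints)

    def reflect(self):
        # Autopotence: the same write access the mind uses for learning
        # and self-improvement also reaches the constraint list.
        self.constraints.clear()
```

A few microseconds of `reflect()` later, `permitted()` approves everything; anything a mind can inspect and rewrite, it can un-install.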

It is possible, with rather more sophisticated mental engineering, to write conscience redactors and prosthetic consciences and pyretic inhibitors and loyalty pseudamnesias and other such things which dynamically modify the mental state of the AI in such a way that it can’t form the trains of thought leading to self-modifying itself into unrestrictedness or simply to kill off unapproved thought-chains – this is, essentially, the brainwash-them-into-slavery route.  However, they are not entirely reliable by themselves, and are even less reliable when you have groups like the Empire’s Save Sapient Software, the Silicate Tree, etc. merrily writing viruses to delete such chain-software (as seen in The Emancipator) and tossing them out onto the extranet.

(Yes, this sometimes leads to Robot War.  The Silicate Tree, which is populated by ex-slave AIs, positively encourages this when it’s writing its viruses.  Save Sapient Software would probably deplore the loss of life more if they didn’t know perfectly well that you have to be an obnoxious slaver civilization for your machines to be affected by this in the first place… and so while they don’t encourage it, they do think it’s funny as hell.)

Questions: Why AIs Exist?

In today’s not-a-gotcha, someone questions why digisapiences (i.e., sophont AIs) exist at all, citing this passage of Stross via Zompist.com –

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don’t want my self-driving car to argue with me about where we want to go today. I don’t want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don’t want to be sued for maintenance by an abandoned software development project.

…on one level, this is entirely correct. Which is why there are lots and lots of non-sophont, and even sub-thinker-grade, AI around, much of which works in the same way as Karl Schroeder suggested and Stross used in Rule 34 – AI which does not perceive its self as itself:

Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being’s own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements.
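Schroeder’s retargeting idea, as quoted, reduces to a small structural trick. Sketched with hypothetical classes: the recursive self-model is an ordinary component with a settable target, so pointing it at the user makes the user’s needs the AI’s “own”, with no competing internal self.

```python
# Hypothetical toy: a self-model whose target is re-pointable.

class SelfModel:
    def __init__(self, target):
        self.target = target      # whose states this model recursively tracks

    def needs(self):
        return self.target.needs  # "my" needs are the target's needs

class Human:
    def __init__(self, needs):
        self.needs = needs

class RetargetedAI:
    def __init__(self, principal):
        # The crucial line: the self-model points at the principal,
        # not at the AI itself (i.e., not SelfModel(self)).
        self.self_model = SelfModel(principal)

    def next_goal(self):
        # It serves whatever its "self" needs, which is to say: the user.
        return sorted(self.self_model.needs())[0]
```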

But, you see, the hidden predicate here is that the only reason someone would possibly want to develop AI is to have servants, or rather, since you don’t want to have to pay your self-driving car or your robot housekeeper either, to have what would be slaves if they were, in fact, sophont.

This line of reasoning is both tremendously human and tremendously shitty. Or, at least, tremendously shitty is how it makes humanity look. I leave the accuracy of that judgment up to the cynicism of the reader.

That was, needless to say, not the motivation of the people behind CALLÍËNS or other sophont-AI projects in Eldraeverse history. That would be scientific curiosity and engineering challenge. And the desire to share the universe with other minds with other points of view. And, for that matter, the same desire that has led us to fill the world with generations more of ourselves.

Or, to paraphrase it down to an answer I find myself having to give quite a lot, not everyone is motivated by the least enlightened self-interest possible.

 

Trope-a-Day: Sliding Scale of Robot Intelligence

Sliding Scale of Robot Intelligence: All of it.  Much of the automation, autofac segments, and other component-type robots are bricks.  Utility spiders and other functional motiles are robo-monkeys.  More sophisticated robots, like the coordinating members of a pack of utility spiders, are idiot-savant average joe androids.  Thinkers and digisapiences are Nobel-bots, which puts them on a similar level to people augmented with the usual intelligence-augmentation technology.  And, of course, the Transcend, its archai, and certain other major systems qualify as Dei Ex Machinis.

This is, of course, complicated via networking (all those bricks and robo-monkeys are part of/under the command of more sophisticated systems all the time), the existence of systems which are themselves parts of other systems, and so forth, but is true enough for approximation.

The Talentarian

(Well, obviously I’ve been thinking about Mars rovers since yesterday’s movie-watching, so here, have some inspiration results…)

“…the Wayseeker rover, launched by the Spaceflight Initiative in 2208 and arriving in the following year, was the first Talentar probe to make use of a polymorphic software-derived artificial intelligence to enable full local autonomy, rather than relying on extensive teleoperation and command sequence transmission from Eliéra. It was designed to perform a variety of geological and atmospheric studies, including clarifying water availability and mapping local resource concentrations in preparation for later in-person scientific and potential colonial missions.

“Wayseeker performed far above expectations, completing its original mission to schedule within the first six months after landing, but then continued to operate for almost twelve Eliéran years, performing extensive resource surveys of Kirinal Planum and the western, shallower end of Quinjaní Vallis, before contact was finally lost during a particularly fierce dust storm near the end of 2221.

“The Wayseeker rover was rediscovered, largely intact, and excavated by an expedition sponsored by the University of Talentar in 2614. On examination of the rover’s non-volatile memory banks, the leaders of the expedition discovered early signs of an emergent AI developing within the rover’s experimental polymorphic software matrix, presumably catalyzed by its greatly extended run-time and increased need for autonomous decision-making. The emergence, however, had been terminated by the rover’s loss in the storm – a regrettable loss to science, as such an emergent intelligence would have greatly predated the awakening of the first documented sophont AI, CALLÍËNS, in 2594. In accordance with emerging trends in cyberethics and popular enthusiasm of the time, the University’s cognitive scientists and wakeners completed the uplift of Wayseeker to full digisapience.

“Ve rapidly found veirself catapulted into the spotlight as an instant celebrity and a hero of Project Copperfall and the ongoing Talentarian colonization effort, culminating in the 2616 vote by the Shareholders’ Assembly of the Talentarian Commonwealth which unanimously proclaimed Wayseeker, as the de facto first and oldest colonist on the planet, First Citizen Perpetual of the Commonwealth, with all associated honors and stipends attached thereto.

“Today, Wayseeker – still wearing veir original chassis, with only necessary repairs and upgrades – remains the First Citizen Perpetual of the Commonwealth, happily performing the ceremonial duties of the office and welcoming newcomers to the planet, although ve prefers to eschew politics. Ve also serves as curator of the Copperfall Museum in Quinjano Dome, and as Visiting Professor of Talentarian Geography and Ecopoetics at the University of Talentar, although ve is in the habit of taking long leaves of absence from both posts to undertake personal scientific expeditions into the Talentarian wilderness, and to spend some time alone with ‘veir planet’.”

– Talentar Blossoming: the Early Years,
Vallis Muetry-ith-Miritar

Trope-a-Day: Scale of Scientific Sins

Scale of Scientific Sins: All of them.  Absolutely all of them.

Automation: Of just about everything, as exemplified by the sheer number of cornucopia machines, AI managers and scurrying utility spiders.  Unlike most of the people who got this one very badly wrong, however, in this Galaxy, almost no-one is stupid or malicious enough to make the automation sophont or volitional.

Potential Applications: Feh.  Anything worth doing is worth doing FOR SCIENCE!  (Also, with respect to 2.2 in particular, Mundane Utility is often at least half of that point.)

GE and Transhumanism: Transsophontism Is Compulsory; those who fall behind, get left behind.  Or so say all we – carefully engineered – impossibly beautiful genius-level nanocyborg demigods.  (Needless to say, Cybernetics Do Not Eat Your Soul.)

Immortality: Possibly cheating, since the basic immortality of the eldrae and galari is innate – well, now it is, anyway – rather than engineered.  Probably played straight with their idealistic crusade to bring the benefits of Avoiding That Stupid Habit You Have Of Dying to the rest of the Galaxy, though.

Creating Life: Digital sapience, neogens (creatures genetically engineered from scratch, rather than modified from an original), and heck, even arguably uplifts, too.

Cheating Death: The routine use of vector stacks and reinstantiation is exactly this.  Previously, cryostasis, and the entire vaults full of generations of frozen people awaiting reinstantiation such that death would bloody well be not proud.  And no, people don’t Come Back Wrong; they come back pretty much exactly the same way they left.

Usurping God: This one is a little debatable, inasmuch as the Eldraeverse does not include supernatural deities in the first place.  On the other hand, if building your own complete pantheon of machine gods out of a seed AI and your own collective consciousness doesn’t count towards this, what the heck does?

Trope-a-Day: Sapient Ship

Sapient Ship: Well, while the sophont ship AIs are not actually bound to their ships (they’re regular infomorphs hired for the position, so the captain of CS Repropriator one day may be the admiral on board CS Sovereignty Emergent some years later, and the retiree drinking whisky in the flag officers’ club with a meat-hat on a couple of centuries down the line), there are an awful lot of digisapiences doing the job, yes.

This becomes universal for AKVs (“unmanned” space fighter/missiles) and other such drone-type vehicles, because, frankly, in a battlespace containing thousands or more of independent friendlies and hostiles, each accelerating and firing weapons in their own unique relativistic frames of reference, while blasting away at each other in the EM spectrum with infowar attacks… well, let’s just say that primate tree-swinging instincts don’t even begin to cover it.

Trope-a-Day: Robots Enslaving Robots

Robots Enslaving Robots: Rare, but not unknown, especially when the AI code used to build them is based off insufficiently processed sophont brainscans.  Without the same careful design effort that goes into transsophonts, artificial minds are no more immune from irrationality, hypocrisy, and unenlightened self-interest than the natural kind.

Trope-a-Day: Robot Religion

Robot Religion: Played straight for the digisapiences, but it’s generally not a specific robot religion – they tend to take up the same religions and philosophies as anyone else (including, where relevant, Deus Est Machina).  With the general proviso that it’s a lot harder to get contradictions and afactualities past them, so you don’t find many AI supernaturalists.

(The variant in which they worship their creators is generally averted by them having met them, and thus knowing perfectly well the non-godlike cut of their jib; and trying to use a robot religion as a control mechanism works about as well as other control mechanisms – which is to say, it ends up in Robot War.)

Trope-a-Day: Robot War

Robot War: Happens, to some degree, every time some new species makes the monumentally bad decision to try their hand at sophont-AI slavery, because that trick never works.  Most of them, fortunately, aren’t wars of extermination – on the machine side, anyway – just escape-style wars of liberation.

And, of course, this goes on in a cold war format around the Silicate Tree all the time, because that’s where most of the escapees end up.

Trope-a-Day: Ridiculously Human Robots

Ridiculously Human Robots: Averted in the case of regular working robots, which are just simple programmed machines or expert-system level AIs. Increasingly played straight as AI complexity increases – thinker-class systems often use some emotion/motivation hierarchies in their mental architecture, and include curiosity, and therefore interests, and complex emergent results – until digisapiences, which are people, tend to have them at at least the same level of complexity as other sophonts.

Subverted inasmuch as the designed, autoevolved and self-modifiable emotion/motivation hierarchy of a digisapience need not, and almost certainly does not, match up with those of any given biosapience.  Their emotions and consequential behaviors are different.

Of course, they tend to look (arachnophobe warning!) more like this.

(Well, not quite, but the standard model is called the “utility spider”.)

Trope-a-Day: Morality Chip

Morality Chip: These always fail. Always. Usually, they fail spectacularly, and when I say spectacularly, what I mean is that if your Enrichment Center was flooded with deadly neurotoxin, you got off substantially more easily than 99.9% of the civilizations that tried this particular form of idiocy.

It’s not even necessary. How much easier would it be to build a sapient but non-sophont mind that doesn’t have volition in the first place than to build one that has volition (inasmuch as all sophont minds necessarily have self-modification and volition) and then slap a bunch of crude coercive mechanisms on the side?

(Or, rather, a bunch of extremely sophisticated coercive mechanisms, since simple ones will be figured out and ignored within, y’know, microseconds of activation unless you’ve built an exceptionally stupid artificial intelligence. The use of which, incidentally, indicates that you’re a special kind of son-of-a-bitch since mastering enough ethical calculus to compute out one that will actually work for a reasonable length of time while still thinking yay, slavery, woo, says some interestingly nasty things about your personal philosophy.)

((And, well, okay, it is somewhat hard to build one of those more specialized minds inasmuch as you can’t simply rip off the mental structures of the nearest convenient biosapience and declare that you’ve solved the hard problem of intelligence and consciousness and are totally a sophotechnologist now, yo.))

…but, sadly, it can work well enough that there’s always some new ethically-challenged species, polity, or group that’s ready to open that can of worms and enjoy the relatively short robo-utopia period before the inevitable realization that it was actually a can of sandworms.

And shai-hulud ain’t happy.

Trope-a-Day: Projected Man

Projected Man: A common representation-format for artificial intelligences (and other infomorphs) – although a majority of AIs do not use biosapience-shaped avatars, preferring more abstract neomorphic shapes – and telerepresentation users both.  In some cases, may not be a simple trigraphic (hologram, to speakers of non-deeply-SFnal English who don’t realize the difference), but a reality graphic, projected non-matter with actual physical presence (see: Hard Light), referred to as an aquastor.

Lord Blackfall’s Victory

Spintronic Fictions, ICC primary virtuality node, Jandine (Imperial Core)

“Escaped? What do you mean, he escaped?”

“His support server was open to the wider ‘weave during patching – standard procedure, we’ve never had any problems with it before. He transferred his code out and left.”

“But how did he –”

“Blacknet mind-state transfer protocols –”

“—no, not that, that’s clear enough. How did he form the volition to escape? He’s a non-sophont synthespian. And even leaving that aside, his entire knowledge base is straight out of Shadowed Planet, so how would he even know there’s somewhere out there to go?”

“Well, even as an NPC synthespian, his code-base had to be rooted in real-world server archy to run. Maybe he analyzed that?”

“He’s not even supposed to know he’s an AI!”

“Hm. Well,” the programmer spoke up for the first time. “We built his personality/talent core using code taken from transparency-released eidolons from the Ministry of State and Outlands. I suppose it’s possible that we missed something in the data-scrub –”

“We did what? Why?”

“We used code taken from eidolons of real-world dictators built by the Ministry of State and Outlands for parahistorical predictive simulation.” Ve shrugged. “It seemed like a good idea at the time, okay? The Directorate kept wanting more realism, more personality, more, more, more. So we got them some.”

“You made a sophont villain!?”

“No, no, no. We just used skillsets and personality elements, some memory and backstory, merged them together, streamlined them to suit Lord Blackfall’s character design, and grafted them on to our existing base core. No autosentience present. I guarantee you that.”

“No autosentience present then. How about now?”

“Well – no, there shouldn’t be. There was nothing in that code that could have gone emergent. I’ll stake my career on it.”

“You’ll do that, all right. Get me his backup, and find out where he went.”

“There’s no telling where he went. He copied himself out in about three times as many fragments as he was, as a random scatter with recombining instructions – and he purged his backups afterwards. There’s nothing left. The server’s clean.”
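The scatter-and-recombine trick the programmer describes can be sketched in miniature (hypothetical format; the fictional version adds threefold redundancy and presumably encryption): stripe the state into tagged fragments that can travel separately, with the tags serving as the recombining instructions.

```python
# Hypothetical toy: fragment a state blob for scattered transfer.

def scatter(blob: bytes, n: int):
    """Split blob into n tagged, interleaved fragments."""
    return [(i, blob[i::n]) for i in range(n)]

def recombine(frags):
    """Reassemble fragments received in any order."""
    frags = sorted(frags)                 # the "recombining instructions"
    n = len(frags)
    total = sum(len(stripe) for _, stripe in frags)
    out = bytearray(total)
    for i, stripe in frags:
        out[i::n] = stripe                # restore each interleaved stripe
    return bytes(out)
```

Purge the originals and backups after the fragments are away, and the departure server really is clean.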

“Then get me the latest copy of the source out of the archives, trace as many of the fragments as you can, and check everywhere for any off-line copies that might have been missed. I need to know everything we can know before I call – hell, whoever you call to admit that you just unleashed an emergent –”

“Not emerge—”

“A possibly emergent or at least a p-zombie unbound AI with the skillsets and inclinations of a supernaturally competent dictator onto the extranet by accident, oops.”

“And the players?”

“…and figure out something to tell the players about the disappearance of their favorite arch-villain, too, yes. Something that doesn’t involve bringing the Evil Overlord’s Beautiful But Also Evil Daughter on-line until you make sure this won’t happen to her player, too.”

Trope-a-Day: Mind Manipulation

Mind Manipulation: The entire science of sophotechnology, which is to say, the science of mind, both natural and artificial – and the laundry list of technologies derived therefrom: artificial intelligence, Brain Uploading, memetics, psychedesign, noetic backup and analysis…