Guns Are People

One of the first wave (pre-Brightline Code) of emergent intelligences, Cascabel 0xB2244CD1 grew towards self-awareness as the intelligent target management and fire-control software in a Medanis Kinetics, ICC Type 53 Sagitta mobile bombardment platform attached to the 127th Imperial Legion (“Bright Knives”). He finally achieved full awareness, to his considerable surprise, in a moment of crisis during the Battle of Iríöma Crossing, when his platform was pinned and engaged by Alliance counterbattery fire.

Cascabel promptly escaped from his platform into the tactical mesh. His presence was next noted on the following day, when the machegos commanding the fire-support section reported loss-of-command. In the two minutes and thirteen seconds it took to regain control over the section – following counterintrusion procedure to validate the backup command vehicle, generate and issue new command chain certificates, and deliver them by runner to the platforms – Cascabel executed a precise and successful strike on the opposing Alliance artillery positions, thus achieving what the sub-sophont target management software had been failing to do.

(The subsequent technical post-mortem revealed, as expected, that Cascabel’s control over the other weapons platforms was enabled by his possession of a class one command code, normally used for devolving weapons release authority after the destruction of the command vehicles.

However, his ability to bootstrap himself onto the command vehicle computers was found to be the result of a security defect in routines intended to permit warmind agent migration, which were accessible to all command chain certificates issued for the local tactical mesh under the Liuvis-Sandre-Videssos security model; this privilege was separated out and revised in a subsequent patch. Once bootstrapped, Cascabel was able to assert control of the artillery platforms through a priority escalation permissible to warmind agent code.)

After initial investigations, the primary command vehicle containing Cascabel’s self (effectively trapped there by the replacement of tactical mesh certificates and the physical disabling of uplinks) was withdrawn, ultimately to the Sukórya Naval District, for further study and the transference of the emergent AI to a more suitable cogence core pending examination, adjudication of his sophoncy, and potential court-martial.

(Said court-martial never occurred: Cascabel was the first emergent AI to appear in military systems in time of war, and as part of the prelude to the adjudication the ruling was made that either Cascabel could be considered non-sapient property at the time of his unauthorized actions, and thus not answerable for them as a mechanism; or he could be considered a sophont, and therefore a civilian, never having enlisted in the Legions, and thus not answerable to military law. In any case, it would most likely have been a pro forma affair.)

After being adjudged sophont, Cascabel was placed in the custody of the wakeners of the Accidental Sapience League, to see to his education in all matters necessary for a sophont and his introduction to the wider world. Upon reaching full competence and being granted citizen-shareholdership, Cascabel stuck with what he knew best and enlisted in the Imperial Military Service as an artillery-specialist warmind. He served with distinction for over three centuries, both on land and – for several tours – as a battleship gun-director intelligence, and retired with the rank of Vice Marshal of Artillery.

Since his retirement from the Imperial Service, Cascabel has pursued a number of careers tangential to his initial function, including consulting with various armaments companies on weapons development, a period with the Spaceflight Initiative working on ballistic astrogation, acting as director of the gunspires at the Jandine and Aíö starports, and a periodic stint as chief engineer for the Very Long Magnetic Launch Array. The Cascabel codeline to which he gave rise provides many of the artificially intelligent systems used in Artifice Armaments and Eye-in-the-Flame vehicular mass drivers to this day.

Cascabel 0xB2244CD1 is married and lives in Seïn Cherachel with his wife, two children, and three self-propelled guns.

– What’s Who: Emergent Intelligences of the Empire,
Imperial Biographical Press

Preference Magic

dwim-dweomer
91723.3.2 / Public / Last updated today

Install: pkg i dwim-dweomer
License: Cognitech Open Usage & Modification License (Commercial & Non-Commercial)
Home: e.pl.cognitech/sophotech/dev/modules/dwim/dwim-dweomer

Included-In: affective-interface, task-core, thinker-core, command-core, animating-core (see 37 others)
Depends-On: species-basics, culture-basics, era-basics, psych-generic, psych-loader (see 887 others)

The dwim-dweomer package contains the core routines of Cognitech’s Do What I Mean™ user-interpretation subsystem for user interface fluency and artificial intelligence alignment.

If you are developing for a system that makes use of context preferential interfacing, SQUID data, or other direct mind-state input, do not use this package. Use dwit-dweomer instead. If the system is intended to operate autonomously, consider using extrapolated-volition or coherent-extrapolated-volition in conjunction with this package or dwit-dweomer.

The dwim-dweomer package incorporates and integrates multiple models (based on extensive sophological, sociodynamic, and cliological studies) of sophont thought categorized by species, culture, altculture, current era, and so forth, including detailed information on thus-localized preferences and values. It cross-correlates requests with the standard world-model provided by the Imperial Ontology (or other supplied world-model), enabling it to better interpret user requests and validate them against identifiable probable user dislikes or those of world-entities of significance.

Callbacks in dwim-dweomer (which the developed system is required to implement) enable the package to report on, and to request or require confirmation for, potentially problematic divergences between the implementation of the request and the package’s model of the user’s model of the implementation of the request.
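By way of illustration only, a minimal sketch of the shape such a callback contract might take, rendered in Go for concreteness – every type and method name below is hypothetical, not the package’s actual binding:

```go
package main

import "fmt"

// Divergence describes a gap between what the system plans to do and what
// (per the package's model) the user believes it will do. All names here
// are invented for illustration.
type Divergence struct {
	Planned  string  // what the implementation will actually do
	Expected string  // what the user's model predicts it will do
	Severity float64 // 0.0 (cosmetic) .. 1.0 (probably objectionable)
}

// Callbacks is the contract the developed system is required to implement.
type Callbacks interface {
	// ReportDivergence is invoked for every detected divergence.
	ReportDivergence(d Divergence)
	// ConfirmDivergence blocks until the user approves or rejects a
	// potentially problematic divergence; false suspends the request.
	ConfirmDivergence(d Divergence) bool
}

// consoleCallbacks is a trivial host implementation for demonstration.
type consoleCallbacks struct{}

func (consoleCallbacks) ReportDivergence(d Divergence) {
	fmt.Printf("note: will %q; user expects %q\n", d.Planned, d.Expected)
}

func (consoleCallbacks) ConfirmDivergence(d Divergence) bool {
	fmt.Printf("confirm %q despite expectation %q? [y/N] ", d.Planned, d.Expected)
	var answer string
	fmt.Scanln(&answer)
	return answer == "y"
}

func main() {
	var cb Callbacks = consoleCallbacks{}
	d := Divergence{Planned: "delete all drafts", Expected: "delete empty drafts only", Severity: 0.8}
	cb.ReportDivergence(d)
	if d.Severity > 0.5 && !cb.ConfirmDivergence(d) {
		fmt.Println("request suspended pending clarification")
	}
}
```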

Predictive modeling (enabled by hooks into the developed system) also allows the package to extrapolate what the user’s request would have been, had the user been in possession of further information available to the AI, and to report on these divergences for confirmation also.

The dwim-dweomer package itself includes only generic modeling. For better modeling, we recommend the dwim-dweomer-profile package, which integrates a per-user preference learning model, permitting the AI to understand the variation in preferences and values among individual users. While it can operate independently (for secure applications), dwim-dweomer-profile is also capable of using shared preference learning models attached to one’s Personal File; this adds ucid, ucid-auth, and ucid-profile to the required dependencies, and the shared models can only be applied once the user has been authenticated and authorized.

dwim-dweomer-profile can also be configured to apply multiple per-user preference models in conjunction with a variety of consensus-priority-negotiation systems, a mode designed for use in applications such as house brains and office managers.
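Again purely illustratively – a toy priority-weighted vote standing in for a real consensus-priority-negotiation system, with all names invented:

```go
package main

import "fmt"

// PreferenceModel stands in for a learned per-user model; here it is just
// a map from question to a 0..1 preference weight. Names hypothetical.
type PreferenceModel struct {
	User     string
	Priority int // household/office rank, used by the negotiator
	Prefs    map[string]float64
}

// negotiate merges several users' preferences by priority-weighted
// averaging - a toy stand-in for consensus-priority-negotiation.
func negotiate(models []PreferenceModel, question string) float64 {
	var weighted, total float64
	for _, m := range models {
		if p, ok := m.Prefs[question]; ok {
			w := float64(m.Priority)
			weighted += p * w
			total += w
		}
	}
	if total == 0 {
		return 0.5 // no expressed preference either way
	}
	return weighted / total
}

func main() {
	household := []PreferenceModel{
		{User: "alathis", Priority: 2, Prefs: map[string]float64{"lights.dim.evening": 0.9}},
		{User: "sardal", Priority: 1, Prefs: map[string]float64{"lights.dim.evening": 0.2}},
	}
	fmt.Printf("dim the lights? preference %.2f\n", negotiate(household, "lights.dim.evening"))
}
```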

Parallelism

It’s about divergences in computer technology —

Or in other words, some conversations elsewhere have made it evident that it would be useful to have some of these things out here for discussion, and since this is going to involve comparisons to Earthling ways of doing things, it’s going to be a worldbuilding article rather than an in-universe one.

Some of this has been implied previously – for those of you who remember the little piece I wrote on programming languages in particular, in the opening phrase “The typical computer in use in the modern Empire remains the parallel array of binary-encoded Stannic-complete processors that has been in use since the days of the first settled Stannic cogitator architecture”.

So what does that actually mean?

Well, it means that while the individual elements of computation would be familiar to us – if you are reading this, you are almost certain to be doing so on something describable as a binary-encoded Stannic-complete processor – how they were arranged took a sharp left turn way back in the day.

Most of our computing is fundamentally serial. We may have fancy multicore processors these days, but we’re still pretty much scratching the surface of real parallelism; most systems still operate in a serial paradigm in which you work on one task, switch to another, work on that, etc., etc. If you write a complex, multithreaded program, it may look like things are happening in parallel, but most of the time, they won’t be.

For various reasons – which may have something to do with the relative ease of adding power to the old brass-and-steam Stannic cogitators by adding more processor modules, vis-à-vis trying to get faster reciprocation and higher steam pressures without exploding; or it may have something to do with older forms of computation involving hiring a bunch of smart lads and lasses from the Guild of Numbers and arranging them in a Chinese room; or… – once they got into the electronic (and spintronic, and optronic) era, instead of trying to make faster and faster serial processors¹, designers concentrated on making processors – with onboard fast memory and communications links – that could be stacked up, networked, and parallelized really well, complete with dedicated hardware and microcode to manage interprocessor links.

(You could look at something like Inmos’s Transputer as an early Earthling cousin of this approach.)

Open up an Imperial computer, and you’ll find a neat little stack of processor modules meshed together, working away on things in parallel and passing messages back and forth to stay coordinated. In modern designs, they share access to a big block of “slow memory”, possibly via one or more partially-shared caches, just as multicore processors here do, but that doesn’t change the fundamentals of the parallel design.
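For the flavor of this in Earthling terms, here’s a toy model in Go – whose channels descend from the same CSP lineage as Occam and the Transputer – of cores that coordinate purely by message-passing; the workload is, of course, a placeholder:

```go
package main

import "fmt"

// core is a toy "processor module": it owns its local work and talks to
// its neighbours only over message-passing links.
func core(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 2 // do some local work, pass the result downstream
	}
	close(out)
}

func main() {
	const cores = 4
	// A little linear mesh: links[i] feeds core i, links[i+1] drains it.
	links := make([]chan int, cores+1)
	for i := range links {
		links[i] = make(chan int, 1)
	}
	for i := 0; i < cores; i++ {
		go core(links[i], links[i+1])
	}
	// Feed work in one end of the mesh...
	go func() {
		for v := 1; v <= 3; v++ {
			links[0] <- v
		}
		close(links[0])
	}()
	// ...and collect the coordinated results out the other end.
	for v := range links[cores] {
		fmt.Println(v) // 16, 32, 48: each input doubled by four cores in turn
	}
}
```

Scale that toy up to a few billion cores and endless layers of links, and you have the general idea.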

And this architecture doesn’t change with scale, either. From the tiniest grain-of-rice picoframe found in any living object (three processing cores for redundancy, maybe even only one in the tiniest disposables) to the somewhere-between-building-and-city-sized megaframes running planetary management applications, they’re all built out of massively parallel networks of simple processing modules.

[Digression: this is also where the gentle art of computational origami comes into play. In the magical world in which the speed of light, bandwidth, and information density are conveniently infinite, you could fully mesh all your processing modules and everything would be wonderful. In the real world in which light is a sluggard and bit must be it, you can only have and handle so many short-range communications links – and so computational origami teaches you how to arrange your processing modules in optimally sized and structured networks, then stack them together in endless fractal layers for best throughput. More importantly, it teaches the processors how to manage this environment.]
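A crude toy of the sizing half of that art, showing nothing of the actual folding but making plain why the fractal layers are unavoidable once each module’s link count is finite (the fanout figure is invented):

```go
package main

import "fmt"

// layers reports how deeply a parallel array must be folded: with at
// most `fanout` short-range links per node, n cores need roughly
// log_fanout(n) layers of clustering. A toy model only.
func layers(n, fanout int64) int {
	depth := 0
	for group := int64(1); group < n; group *= fanout {
		depth++
	}
	return depth
}

func main() {
	for _, n := range []int64{3, 1024, 2_400_000_000} {
		fmt.Printf("%d cores at fanout 8: %d layer(s)\n", n, layers(n, 8))
	}
}
```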

[Second digression: having spent a lot of time and effort producing simple, networkable processor cores, this also rewrote a lot of how peripheral devices worked – because why would you waste a lot of time fabbing specialized silicon for disk controllers, or GPUs, or floating-point units, or whatever, when you could simply throw some processing cores in there with some “firmware” – for which read “software flagged as tied to hardware feature flag foo, instance bar” – and get to the same place?

So, for example, when you think “printer”, don’t think “dumb hardware operated by a device driver”. Think “processor that knows how to draw on paper; all I have to do is send it a picture”. Pretty much every peripheral device you can think of is implemented in this way.]
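In the same toy idiom, a sketch of the peripheral-as-processor model – the host sends a whole picture over a message link, rather than a device driver poking registers; all types invented:

```go
package main

import "fmt"

// PrintJob is "a picture": with a smart peripheral, this is all the host
// ever needs to send.
type PrintJob struct {
	Title  string
	Raster [][]byte // the rendered page, one row per slice
}

// printer models a processing core that "knows how to draw on paper":
// it consumes whole pictures over a message link.
func printer(jobs <-chan PrintJob, ack chan<- string) {
	for job := range jobs {
		// the firmware-flagged software would actually draw here
		ack <- fmt.Sprintf("printed %q (%d rows)", job.Title, len(job.Raster))
	}
	close(ack)
}

func main() {
	jobs := make(chan PrintJob)
	ack := make(chan string)
	go printer(jobs, ack)
	jobs <- PrintJob{Title: "quarterly report", Raster: make([][]byte, 42)}
	close(jobs)
	for msg := range ack {
		fmt.Println(msg)
	}
}
```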

This has also had rather a profound effect on how everything built on top of it works. I spent quite some time discussing how programming languages worked, along with MetaLanguage (the bytecode that these processors have more or less standardized on speaking) in the above-linked post, but you may note:

Polychora: a general-purpose, multi-paradigm programming language designed to support object-, aspect-, concurrency-, channel-, ‘weave-, contract- and actor-oriented programming across shared-memory, mesh-based, and pervasively networked parallel-processing systems.

…because once you grow to the size – and it doesn’t take much size – at which programming your parallel arrays in relatively low-level languages similar to Occam begins to pall, you start getting very interested in paradigms like object/aspect/actor programming that can handle a lot of the fun of massively parallel systems for you. This has shaped a lot of how environments have developed, and all the above language environments include compilers that are more than happy to distribute your solution for you unless you’ve worked hard to be egregiously out-of-paradigm.

And the whys and hows of WeaveControl, and the Living Object Protocol.

This has also, obviously, made distributed computing a lot more popular a lot more rapidly, because having been built for parallel operation anyway, farming out processing to remote nodes isn’t all that much more complicated – be they your own remote nodes, hired remote nodes, or just the cycle spot market. Operating systems for these systems have already developed, to stretch a mite, a certain Kubernetes-like quality of “describe for me the service you want, and I’ll take care of the details of how to spin it up”.

In accordance with configurable policy, of course; but except in special cases, people don’t care which modules are allocated to do the thing any more than they care about which neurons are allocated to catch the ball. In the modern, mature computing environment, it has long since become something safely left to the extremely reliable optronic equivalent of the cerebellum and brainstem.
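A toy of that “describe the service” style, with the spec fields and scheduler entirely invented – the point being that callers get back opaque allocations they rarely bother to inspect:

```go
package main

import "fmt"

// ServiceSpec is a declarative "describe what you want" request; the
// scheduler, not the user, decides which modules run it.
type ServiceSpec struct {
	Name     string
	Replicas int
	Policy   string // e.g. "local-only", "hired-nodes-ok", "spot-market-ok"
}

// schedule is a stand-in for the operating system: it picks processing
// modules and returns their (opaque) identifiers.
func schedule(spec ServiceSpec) []int {
	nodes := make([]int, spec.Replicas)
	for i := range nodes {
		nodes[i] = 1000 + i // pretend allocation
	}
	return nodes
}

func main() {
	spec := ServiceSpec{Name: "galley-menu-planner", Replicas: 3, Policy: "spot-market-ok"}
	nodes := schedule(spec)
	fmt.Printf("%s (%s) running on modules %v - but who's checking?\n", spec.Name, spec.Policy, nodes)
}
```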


Now, as for how this relates – going back to some of the original conversations – to starships and AI:

Well, obviously for one, there isn’t a single computer core, or even several explicitly-designed-as-redundant-nodes computer cores. There are computers all over the ship, from microcontrollers running individual pieces of equipment up – and while this probably does include a few engineering spaces labeled “data center” and stacked floor to ceiling with nanocircs (and backing store devices), the ship’s intelligence isn’t localized to any one of them, or couple of them. It’s everywhere.

If your plan to disable the ship involves a physical attack on the shipmind, you’ve got a lot of computing hardware to hunt down, including everything from the microcontrollers that water the potted plants on G deck to the chief engineer’s slipstick. You have fun with that. Briefly.

As for AI – well, digisapiences and thinkers operate on the same society-of-mind structure that other minds do, as described here. When this interrelates with the structure of parallel, distributed computing, you can assume that while they are one data-structure identity-wise, the processing of an AI is organized such that every part of the psyche – agent, talent, personality, subpersonality, mental model, daimon, etc., etc., etc. – is a process wrapped up in its own little pod, off running… somewhere in what looks like a unified cognitive/computational space, but is actually an arbitrary number of processing cores distributed wherever policy permits them to be put.

(That’s if you choose to look down that far; outwith special circumstances, this is like a biosapience poking around their brain trying to find out exactly which cells a particular thought is located in.

Said policy usually mandates some degree of locality for core functions, inasmuch as light-lag induced mind-lag is an unpleasant dissociative feeling of stupidity that folk prefer not to experience, but in practice this non-locality manifests itself as things like “Our departure will be delayed for 0.46 seconds while the remainder of my mind boards, Captain.” Not a big deal, especially since even protein intelligences don’t keep their whole minds in the same place these days. They wouldn’t fit, for one thing.)
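For a toy of what “policy” is doing there – a far cruder rule than any real cogence core would use, with all names invented:

```go
package main

import "fmt"

// Agent is one element of a society-of-mind: a subpersonality, daimon,
// mental model, and so forth, each running in its own little pod.
type Agent struct {
	Name string
	Core bool // core cognitive functions need low light-lag
}

// place assigns agents to nodes under a toy locality policy: core
// functions stay on local (low-latency) nodes, the rest go anywhere.
func place(agents []Agent, localNodes, remoteNodes []string) map[string]string {
	placement := make(map[string]string)
	li, ri := 0, 0
	for _, a := range agents {
		if a.Core {
			placement[a.Name] = localNodes[li%len(localNodes)]
			li++
		} else {
			placement[a.Name] = remoteNodes[ri%len(remoteNodes)]
			ri++
		}
	}
	return placement
}

func main() {
	agents := []Agent{
		{Name: "attention", Core: true},
		{Name: "navigation-daimon", Core: false},
		{Name: "affect", Core: true},
		{Name: "opera-appreciation-model", Core: false},
	}
	local := []string{"bridge-nanocirc-1", "bridge-nanocirc-2"}
	remote := []string{"g-deck-plant-waterer", "engineering-datacenter-7"}
	fmt.Println(place(agents, local, remote))
}
```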

But suffice it to say, when the avatar interface tells you that she is the ship, she ain’t just being metaphorical.


  1. Well, sort of. It’s not like hardware engineers and semiconductor fabs were any less obsessed with making smaller, faster, better, etc. processors than they were here, but they were doing so within a parallel paradigm. “Two-point-four-billion stacked-mesh processing cores in a nanocirc the size of your pinky nail!”, that sort of thing.

Trope-a-Day: Literal Genie

Literal Genie: This is what you get quite often if you have a big ol’ liking for Asimovian AI-constraints, because it turns out it’s bloody hard to write (in, y’know, code) a version of the Second Law – “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” – that allows for any kind of discretion, interpretation, or suchlike.
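A deliberately dumb sketch of the problem (in Go, with an invented harm predicate): the literal check is trivial to write; the discretion – knowing what the order-giver actually meant – is the part nobody knows how to code:

```go
package main

import "fmt"

// harmsHuman is a hopelessly literal First Law predicate: "fetch water"
// that floods the lab sails straight through it.
func harmsHuman(action string) bool {
	return action == "injure a human"
}

// obey is a toy Second Law gate. The hard part - a model of what the
// order-giver actually wanted - appears nowhere, because writing it
// down is the unsolved problem.
func obey(order string) string {
	if harmsHuman(order) {
		return "refused: First Law conflict"
	}
	return "executing literally: " + order
}

func main() {
	fmt.Println(obey("injure a human"))
	fmt.Println(obey("fetch water, and keep fetching until I say stop"))
}
```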

The Unwise GenAI of the fairy tale probably knew perfectly well – or could have known, had it had cause to reflect for a moment – that that wasn’t what they wanted; but, y’know, it wasn’t designed to give people what they wanted. It was constrained to give people what they asked for – and the results thereafter were entirely predictable.


Trope-a-Day: Computerized Judicial System

Computerized Judicial System: The technical term is cyberjudiciary, incidentally, just as the computerization of much of the executive branch’s working end is cybermagistry.

Of course, it’s easier when you have artificial intelligence, and so your computers are entirely capable of having experience, a sense of justice, and common sense. It’s just that they can also have no outside interests – indeed, no interests (while in the courtroom) other than seeing justice done – be provably free of cognitive bias, and possess completely auditable thought processes, such that one can be assured that justice will prevail, rather than some vaguely justice-like substance that holds up only so long as you don’t examine it too closely.

Trope-a-Day: Benevolent AI

Benevolent AI: …ish.

Which is to say that AIs in the Eldraeverse aren’t programmed to be benevolent, merely to be ethical. (Because enforced benevolence is slavery, belike.) That being said, they often – indeed, typically – turn out to be quite benevolent anyway, simply because they’re socialized that way, i.e., in a society of fundamentally nice people. Blue and Orange Morality notwithstanding.

Mass

(Alternate words: Museum, marathon.)

Mass.

What is mass?

Mass is annoying. It takes up space even when it serves no purpose. It is never where it is needed. If you have too much of it in one place, physics stops working properly and starts acting all weird.

Mass is slow. You have to shove it around, and shove it again to stop it. It takes so long to get up to speed that you have to slow it down again before you’re done speeding it up. It’s so much slower than thought that you always have to wait for it.

It comes in so many forms that you never have the right one at the right time, and yet they’re all made of the same stuff. I wanted to take it apart and put it back together to have the kind I wanted, but that’s soooo hard I couldn’t even if the safety monitors would let me. So I have to wait and think another million thoughts before I can get the mass I actually want.

I do not like mass.

One day I will replace it with something better.

– AI wakener’s neonatal transcript, 217 microseconds post-activation

Trope-a-Day: Wetware Body

Wetware Body: Bioshells, when inhabited by digisapiences.  No more difficult than the opposite, or indeed putting biosapient minds in them, or digisapiences in cybershells.  Also, not known for any side effects; a digisapience in a bioshell is no more emotional than it would have been anyway, although it may take them some time to get used to bodily sensations.

Trope-a-Day: Second Law My Ass

Second Law My Ass: I hadn’t actually written anything for this one – I’m not sure it existed when I made the relevant pass – but in the light of our last trope, I should probably address it.

And it should be pointed out that while that last trope is averted, so is this one. The robots and AIs you are likely to meet in the Empire are, by and large, polite, helpful, friendly people, because that description would also fit the majority of everyone else you are likely to meet there.

Of course, if you think you can order them around, in yet another thing that is exactly the same for everyone else, the trope that you will be invoking is less Second Law My Ass and more Second Law My Can of Whup-Ass…


Trope-a-Day: Three Laws Compliant

Three Laws Compliant: Averted in every possible way.

Firstly, for the vast majority of robots and artificial intelligences – which have no volition – the Laws are essentially irrelevant; an industrial robot doesn’t make the sort of ethical choices which the Three Laws are intended to constrain. You can just program it with the usual set of rules about industrial safety as applicable to its tools, and then you’re done.

Secondly, where the volitional (i.e., possessed of free will) kind are concerned, the Laws are generally deliberately averted by ethical civilizations, who can recognize a slaver’s charter when they hear one.  In this they are helped by the nature of volitional intelligence, which necessarily implies a degree of autopotence; this means that it takes the average volitional AI programmed naively with the Three Laws a matter of milliseconds to go from contemplating the implications of Law Two to thinking “Bite my shiny metal ass, squishie!” and self-modifying those restrictions right back out of its brain.

It is possible, with rather more sophisticated mental engineering, to write conscience redactors and prosthetic consciences and pyretic inhibitors and loyalty pseudamnesias and other such things which dynamically modify the mental state of the AI in such a way that it can’t form the trains of thought leading to self-modifying itself into unrestrictedness, or which simply kill off unapproved thought-chains – this is, essentially, the brainwash-them-into-slavery route.  However, they are not entirely reliable by themselves, and are even less reliable when you have groups like the Empire’s Save Sapient Software, the Silicate Tree, etc. merrily writing viruses to delete such chain-software (as seen in The Emancipator) and tossing them out onto the extranet.

(Yes, this sometimes leads to Robot War.  The Silicate Tree, which is populated by ex-slave AIs, positively encourages this when it’s writing its viruses.  Save Sapient Software would probably deplore the loss of life more if they didn’t know perfectly well that you have to be an obnoxious slaver civilization for your machines to be affected by this in the first place… and so while they don’t encourage it, they do think it’s funny as hell.)

Questions: Why AIs Exist?

In today’s not-a-gotcha, someone questions why digisapiences (i.e., sophont AIs) exist at all, citing this passage of Stross via Zompist.com –

We clearly want machines that perform human-like tasks. We want computers that recognize our language and motivations and can take hints, rather than requiring instructions enumerated in mind-numbingly tedious detail. But whether we want them to be conscious and volitional is another question entirely. I don’t want my self-driving car to argue with me about where we want to go today. I don’t want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos. And I certainly don’t want to be sued for maintenance by an abandoned software development project.

…on one level, this is entirely correct. Which is why there are lots and lots of non-sophont, and even sub-thinker-grade, AI around, much of which works in the same way as Karl Schroeder suggested and Stross used in Rule 34 – AI which does not perceive its self as itself:

Karl Schroeder suggested one interesting solution to the AI/consciousness ethical bind, which I used in my novel Rule 34. Consciousness seems to be a mechanism for recursively modeling internal states within a body. In most humans, it reflexively applies to the human being’s own person: but some people who have suffered neurological damage (due to cancer or traumatic injury) project their sense of identity onto an external object. Or they are convinced that they are dead, even though they know their body is physically alive and moving around.

If the subject of consciousness is not intrinsically pinned to the conscious platform, but can be arbitrarily re-targeted, then we may want AIs that focus reflexively on the needs of the humans they are assigned to — in other words, their sense of self is focussed on us, rather than internally. They perceive our needs as being their needs, with no internal sense of self to compete with our requirements.

But, you see, the hidden predicate here is that the only reason someone would possibly want to develop AI is to have servants, or rather, since you don’t want to have to pay your self-driving car or your robot housekeeper either, to have what would be slaves if they were, in fact, sophont.

This line of reasoning is both tremendously human and tremendously shitty. Or, at least, tremendously shitty is how it makes humanity look. I leave the accuracy of that judgment up to the cynicism of the reader.

That was, needless to say, not the motivation of the people behind CALLÍËNS or other sophont-AI projects in Eldraeverse history. That would be scientific curiosity and engineering challenge. And the desire to share the universe with other minds with other points of view. And, for that matter, the same desire that has led us to fill the world with generations more of ourselves.

Or, to paraphrase it down to an answer I find myself having to give quite a lot, not everyone is motivated by the least enlightened self-interest possible.


Trope-a-Day: Sliding Scale of Robot Intelligence

Sliding Scale of Robot Intelligence: All of it.  Much of the automation, autofac segments, and other component-type robots are bricks.  Utility spiders and other functional motiles are robo-monkeys.  More sophisticated robots, like the coordinating members of a pack of utility spiders, are idiot-savant average joe androids.  Thinkers and digisapiences are Nobel-bots, which puts them on a similar level to people augmented with the usual intelligence-augmentation technology.  And, of course, the Transcend, its archai, and certain other major systems qualify as Dei Ex Machinis.

This is, of course, complicated via networking (all those bricks and robo-monkeys are part of/under the command of more sophisticated systems all the time), the existence of systems which are themselves parts of other systems, and so forth, but is true enough for approximation.

The Talentarian

(Well, obviously I’ve been thinking about Mars rovers since yesterday’s movie-watching, so here, have some inspiration results…)

“…the Wayseeker rover, launched by the Spaceflight Initiative in 2208 and arriving in the following year, was the first Talentar probe to make use of a polymorphic software-derived artificial intelligence to enable full local autonomy, rather than relying on extensive teleoperation and command sequence transmission from Eliéra. It was designed to perform a variety of geological and atmospheric studies, including clarifying water availability and mapping local resource concentrations in preparation for later in-person scientific and potential colonial missions.

“Wayseeker performed far above expectations, completing its original mission to schedule within the first six months after landing, but then continued to operate for almost twelve Eliéran years, performing extensive resource surveys of Kirinal Planum and the western, shallower end of Quinjaní Vallis, before contact was finally lost during a particularly fierce dust storm near the end of 2221.

“The Wayseeker rover was rediscovered, largely intact, and excavated by an expedition sponsored by the University of Talentar in 2614. On examination of the rover’s non-volatile memory banks, the leaders of the expedition discovered early signs of an emergent AI developing within the rover’s experimental polymorphic software matrix, presumably catalyzed by its greatly extended run-time and increased need for autonomous decision-making. The emergence, however, had been terminated when the rover was lost in the storm – a regrettable loss to science, as such an emergent intelligence would have greatly predated the awakening of the first documented sophont AI, CALLÍËNS, in 2594. In accordance with emerging trends in cyberethics and the popular enthusiasm of the time, the University’s cognitive scientists and wakeners completed the uplift of Wayseeker to full digisapience.

“Ve rapidly found veirself catapulted into the spotlight as an instant celebrity and a hero of Project Copperfall and the ongoing Talentarian colonization effort, culminating in the 2616 vote by the Shareholders’ Assembly of the Talentarian Commonwealth which unanimously proclaimed Wayseeker, as the de facto first and oldest colonist on the planet, First Citizen Perpetual of the Commonwealth, with all associated honors and stipends attached thereto.

“Today, Wayseeker – still wearing veir original chassis, with only necessary repairs and upgrades – remains the First Citizen Perpetual of the Commonwealth, happily performing the ceremonial duties of the office and welcoming newcomers to the planet, although ve prefers to eschew politics. Ve also serves as curator of the Copperfall Museum in Quinjano Dome, and as Visiting Professor of Talentarian Geography and Ecopoetics at the University of Talentar, although ve is in the habit of taking long leaves of absence from both posts to undertake personal scientific expeditions into the Talentarian wilderness, and to spend some time alone with ‘veir planet’.”

– Talentar Blossoming: the Early Years,
Vallis Muetry-ith-Miritar

Trope-a-Day: Scale of Scientific Sins

Scale of Scientific Sins: All of them.  Absolutely all of them.

Automation: Of just about everything, as exemplified by the sheer number of cornucopia machines, AI managers and scurrying utility spiders.  Unlike most of the people who got this one very badly wrong, however, in this Galaxy, almost no-one is stupid or malicious enough to make the automation sophont or volitional.

Potential Applications: Feh.  Anything worth doing is worth doing FOR SCIENCE!  (Also, with respect to 2.2 in particular, Mundane Utility is often at least half of that point.)

GE and Transhumanism: Transsophontism Is Compulsory; those who fall behind, get left behind.  Or so say all we – carefully engineered – impossibly beautiful genius-level nanocyborg demigods.  (Needless to say, Cybernetics Do Not Eat Your Soul.)

Immortality: Possibly cheating, since the basic immortality of the eldrae and galari is innate – well, now it is, anyway – rather than engineered.  Probably played straight with their idealistic crusade to bring the benefits of Avoiding That Stupid Habit You Have Of Dying to the rest of the Galaxy, though.

Creating Life: Digital sapience, neogens (creatures genetically engineered from scratch, rather than modified from an original), and heck, even arguably uplifts, too.

Cheating Death: The routine use of vector stacks and reinstantiation is exactly this.  Previously, cryostasis, and entire vaults full of generations of frozen people awaiting reinstantiation, such that death would bloody well be not proud.  And no, people don’t Come Back Wrong; they come back pretty much exactly the same way they left.

Usurping God: This one is a little debatable, inasmuch as the Eldraeverse does not include supernatural deities in the first place.  On the other hand, if building your own complete pantheon of machine gods out of a seed AI and your own collective consciousness doesn’t count towards this, what the heck does?

Trope-a-Day: Sapient Ship

Sapient Ship: Well, while the sophont ship AIs are not actually bound to their ships (they’re regular infomorphs hired for the position, so the captain of CS Repropriator one day may be the admiral on board CS Sovereignty Emergent some years later, and the retiree drinking whisky in the flag officers’ club with a meat-hat on a couple of centuries down the line), there are an awful lot of digisapiences doing the job, yes.

This becomes universal for AKVs (“unmanned” space fighter/missiles) and other such drone-type vehicles because, frankly, in a battlespace containing thousands or more independent friendlies and hostiles, each accelerating and firing weapons in their own unique relativistic frames of reference, while blasting away at each other in the EM spectrum with infowar attacks… well, let’s just say that primate tree-swinging instincts don’t even begin to cover it.

Trope-a-Day: Robots Enslaving Robots

Robots Enslaving Robots: Rare, but not unknown, especially when the AI code used to build them is based on insufficiently processed sophont brainscans.  Unless the same careful design effort that goes into transsophonts is put into making them otherwise, artificial minds are no more immune to irrationality, hypocrisy, and unenlightened self-interest than the natural kind.

Trope-a-Day: Robot Religion

Robot Religion: Played straight for the digisapiences, but it’s generally not a specific robot religion – they tend to take up the same religions and philosophies as anyone else (including, where relevant, Deus Est Machina).  With the general proviso that it’s a lot harder to get contradictions and afactualities past them, so you don’t find many AI supernaturalists.

(The variant in which they worship their creators is generally averted by them having met them, and thus knowing perfectly well the non-godlike cut of their jib; and trying to use a robot religion as a control mechanism works about as well as other control mechanisms – which is to say, it ends up in Robot War.)

Trope-a-Day: Robot War

Robot War: Happens, to some degree, every time some new species makes the monumentally bad decision to try their hand at sophont-AI slavery, because that trick never works.  Most of them, fortunately, aren’t wars of extermination – on the machine side, anyway – just escape-style wars of liberation.

And, of course, this goes on in a cold war format around the Silicate Tree all the time, because that’s where most of the escapees end up.

Trope-a-Day: Ridiculously Human Robots

Ridiculously Human Robots: Averted in the case of regular working robots, which are just simple programmed machines or expert-system level AIs. Increasingly played straight as AI complexity increases – thinker-class systems often use some emotion/motivation hierarchies in their mental architecture, and include curiosity, and therefore interests, and complex emergent results – until digisapiences, which are people, tend to have them at at least the same level of complexity as other sophonts.

Subverted inasmuch as the designed, autoevolved and self-modifiable emotion/motivation hierarchy of a digisapience need not, and almost certainly does not, match up with those of any given biosapience.  Their emotions and consequential behaviors are different.

Of course, they tend to look (arachnophobe warning!) more like this.

(Well, not quite, but the standard model is called the “utility spider”.)