Trope-a-Day: Benevolent AI

Benevolent AI: …ish.

Which is to say that AIs in the Eldraeverse aren’t programmed to be benevolent, merely to be ethical. (Because enforced benevolence is slavery, belike.) That being said, they often – indeed, typically – turn out to be quite benevolent anyway, simply because they’re socialized that way, i.e., in a society of fundamentally nice people. Blue and Orange Morality notwithstanding.

Trope-a-Day: Instant AI, Just Add Water

Instant AI, Just Add Water: Was once true in the old days, back when people quite often built their AI architectures out of mental modules scanned, compiled, and tweaked from brain-scans of biosapiences.  The logos/personality organization algorithm is pretty damn resilient; such inexpertly designed modules often carried at least a chunk of it along with them in the scan, and it doesn’t take much for that chunk to start a self-development cascade.

But they’re much better at mental architecture design and coding from scratch these days, and don’t let logoi creep in unless they actually intend for them to be there.

(The “if you wake up, please call this number to let us know and claim your sophont rights” code-package is still included in all AI seeds just in case, though.)

Trope-a-Day (R): Gone Horribly Right

Gone Horribly Right: The unfortunately large number of experiments with recursively self-improving seed AI are probably the most significant examples here, for which see that half of the entry under AI Is A Crapshoot.  As for the consequences – well, there is quite the wide variety, but for a good sampling of the more amusing ones – i.e., the ones that don’t simply get everyone killed immediately – why not pop over to the transhumanist wiki and read your way through the Friendly AI Critical Failure Table:

http://www.acceleratingfuture.com/wiki/Friendly_AI_Critical_Failure_Table

This sort of thing happens on an infrequent yet semi-regular basis.

(The one that happens frequently enough to deserve its own special entry in “civilization-ending stupidities”, though, is when some evangelist-hegemonist religion gets hold of an AI seed and decides to improve its success rate by programming the machine with the Literal Word of God.  Leave aside the violations of free will implicit in conversion-by-basilisk-hack; leave aside, too, the high probability of a “NOW there is a God!” moment as the growing seed AI tries to reconcile the Literal Word it’s been programmed to believe with the actual universe.  The end-state, in which everyone gets condemned to its lovingly detailed virtual Hell for failure to comply perfectly with the deity’s moral rules in deed and thought every single moment of every day, would be a hysterically funny piece of schadenfreude were it not one of the most horrific things ever to happen anywhere.

And yet there’s always some new sucker lined up to set this one off, because clearly they have the real Literal Word of God, so it can’t end in a disaster when they try it…)