Preference Magic

dwim-dweomer
91723.3.2 / Public / Last updated today

Install: pkg i dwim-dweomer
License: Cognitech Open Usage & Modification License (Commercial & Non-Commercial)
Home: e.pl.cognitech/sophotech/dev/modules/dwim/dwim-dweomer

Included-In: affective-interface, task-core, thinker-core, command-core, animating-core (see 37 others)
Depends-On: species-basics, culture-basics, era-basics, psych-generic, psych-loader (see 887 others)

The dwim-dweomer package contains the core routines of Cognitech’s Do What I Mean™ user-interpretation subsystem for user interface fluency and artificial intelligence alignment.

If you are developing for a system that makes use of context preferential interfacing, SQUID data, or other direct mind-state input, do not use this package. Use dwit-dweomer instead. If the system is intended to operate autonomously, consider using extrapolated-volition or coherent-extrapolated-volition in conjunction with this package or dwit-dweomer.

The dwim-dweomer package incorporates and integrates multiple models of sophont thought (based on extensive sophological, sociodynamic, and cliological studies), categorized by species, culture, altculture, current era, and so forth, including detailed information on thus-localized preferences and values. It cross-correlates requests with the standard world-model provided by the Imperial Ontology (or another supplied world-model), enabling it to better interpret user requests and to validate them against identifiable probable dislikes of the user, or of world-entities of significance.
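The cross-correlation step described above might be sketched roughly as follows. This is purely illustrative: the listing does not publish dwim-dweomer's actual interfaces, so every class, field, and function name below is invented for the example.

```python
# Invented sketch of request/world-model cross-correlation; not a real
# dwim-dweomer API.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Stand-in for the Imperial Ontology or another supplied world-model."""
    entities: dict = field(default_factory=dict)  # entity name -> set of tags

@dataclass
class UserModel:
    """Localized preferences/values for a species-culture-era profile."""
    probable_dislikes: set = field(default_factory=set)

def interpret_request(request_tags, user, world):
    """Cross-correlate a tagged request with the world-model and flag any
    overlap with the user's probable dislikes, plus affected entities."""
    conflicts = request_tags & user.probable_dislikes
    affected = {name for name, tags in world.entities.items()
                if tags & request_tags}
    return {"conflicts": conflicts, "affected_entities": affected}

world = WorldModel(entities={"garden": {"landscaping", "irreversible"}})
user = UserModel(probable_dislikes={"irreversible"})
result = interpret_request({"landscaping", "irreversible"}, user, world)
# flags the "irreversible" conflict and the affected "garden" entity
```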

Callbacks in dwim-dweomer (which the embedding system is required to implement) enable the package to report on, and to request or require confirmation for, potentially problematic divergences between the implementation of the request and the package's model of the user's model of the implementation of the request.
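A hypothetical shape for that required callback protocol, with all names invented for illustration:

```python
# Invented sketch of the divergence-confirmation callbacks an embedding
# system would implement; not a real dwim-dweomer interface.

class DivergenceCallbacks:
    """Callbacks the embedding system must provide."""
    def report(self, divergence): ...
    def confirm(self, divergence) -> bool: ...

class RecordingCallbacks(DivergenceCallbacks):
    """Toy implementation that records divergences and auto-answers."""
    def __init__(self, auto_answer=True):
        self.reported = []
        self.auto_answer = auto_answer
    def report(self, divergence):
        self.reported.append(divergence)
    def confirm(self, divergence):
        self.report(divergence)
        return self.auto_answer  # a real UI would ask the user here

def execute_with_checks(planned_steps, users_model_of_steps, callbacks):
    """Proceed only if any divergence between the planned implementation
    and the user's model of it is confirmed via the callbacks."""
    divergence = set(planned_steps) ^ set(users_model_of_steps)
    if divergence and not callbacks.confirm(divergence):
        return "aborted"
    return "executed"
```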

Predictive modeling (enabled by hooks into the developed system) also allows the package to extrapolate what the user's request would otherwise have been had the user been in possession of further information available to the AI, and to report these divergences for confirmation as well.
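One way such a predictive hook could work, sketched with entirely invented names: given additional facts the AI knows, apply revision rules to the request and flag any resulting change for confirmation.

```python
# Invented illustration of a predictive-modeling hook; not a real
# dwim-dweomer API.

def extrapolate_request(request, known_facts, revision_rules):
    """revision_rules maps a fact to a function revising the request as
    the user presumably would have, had they known that fact. Returns the
    revised request and whether confirmation is needed."""
    revised = dict(request)
    for fact, rule in revision_rules.items():
        if fact in known_facts:
            revised = rule(revised)
    return revised, revised != request

# e.g. the AI knows the bridge is closed, so the scenic route changes
rules = {"bridge_closed": lambda r: {**r, "route": "tunnel"}}
revised, needs_confirmation = extrapolate_request(
    {"route": "bridge"}, {"bridge_closed"}, rules)
# revised routing diverges from the literal request, so it is
# reported back to the user for confirmation
```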

The dwim-dweomer package itself includes only generic modeling. For better modeling, we recommend the dwim-dweomer-profile package, which integrates a per-user preference learning model permitting the AI to understand the variation in preferences and values of individual users. While it can operate independently (for secure applications), dwim-dweomer-profile can also use shared preference learning models attached to one's Personal File. This adds ucid, ucid-auth, and ucid-profile to the required dependencies, and the shared models can only be applied once the user has been authenticated and authorized.

dwim-dweomer-profile can also be configured to apply multiple per-user preference models in conjunction with a variety of consensus-priority-negotiation systems, a mode designed for use in applications such as house brains and office managers.
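A minimal sketch of one such consensus-priority-negotiation rule, again with all names invented: each user's preference model scores the options, and a priority weight per user decides whose preferences dominate the consensus.

```python
# Invented illustration of priority-weighted consensus across multiple
# per-user preference models; not a real dwim-dweomer-profile API.

def negotiate(preferences, priorities):
    """preferences: user -> {option: score in [0, 1]}
    priorities: user -> weight (default 1.0).
    Returns the option with the highest priority-weighted total score."""
    totals = {}
    for user, prefs in preferences.items():
        weight = priorities.get(user, 1.0)
        for option, score in prefs.items():
            totals[option] = totals.get(option, 0.0) + weight * score
    return max(totals, key=totals.get)

# e.g. a house brain weighing residents' temperature preferences
prefs = {"alice": {"21C": 0.9, "23C": 0.4},
         "bob":   {"21C": 0.2, "23C": 0.8}}
prio = {"alice": 2.0, "bob": 1.0}
# alice's higher priority tips the weighted totals toward 21C
```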

Trope-a-Day: Benevolent AI

Benevolent AI: …ish.

Which is to say that AIs in the Eldraeverse aren’t programmed to be benevolent, merely to be ethical. (Because enforced benevolence is slavery, belike.) That being said, they often – indeed, typically – turn out to be quite benevolent anyway, simply because they’re socialized that way, i.e., in a society of fundamentally nice people. Blue and Orange Morality notwithstanding.

Trope-a-Day: Instant AI, Just Add Water

Instant AI, Just Add Water: Was once true in the old days, back when people were quite often using mental modules scanned, compiled, and tweaked from brain-scans of biosapiences in their AI architectures.  The logos/personality organization algorithm is pretty damn resilient, and often such inexpertly designed modules carried at least a chunk of it along with them in the scan, and it doesn’t take much for it to at least start a self-development cascade.

But they’re much better at mental architecture design and coding from scratch these days, and don’t let logoi creep in unless they actually intend for them to be there.

(The “if you wake up, please call this number to let us know and claim your sophont rights” code-package is still included in all AI seeds just in case, though.)

Trope-a-Day (R): Gone Horribly Right

Gone Horribly Right: An unfortunately large number of experiments with recursively self-improving seed AI are probably the most significant examples here, for which see that half of the entry under AI Is A Crapshoot. And as for the consequences – well, there is quite a wide variety there, but for a good sampling of the more amusing ones – i.e., the ones that don't simply get everyone killed immediately – why not pop over to the transhumanist wiki and read your way through the Friendly AI Critical Failure Table:

http://www.acceleratingfuture.com/wiki/Friendly_AI_Critical_Failure_Table

This sort of thing happens on an infrequent yet semi-regular basis.

(The one that happens frequently enough to deserve its own special entry in “civilization-ending stupidities”, though, is when some evangelist-hegemonist religion gets hold of an AI seed and decides to improve its success rate by programming the machine with the Literal Word of God.  Even leaving aside the violations of free will implicit in conversion-by-basilisk-hack, the high probability of a “NOW there is a God!” moment as the growing seed AI tries to reconcile the Literal Word it’s been programmed to believe with the actual universe, and so forth, the end-state in which everyone gets condemned to its lovingly detailed virtual Hell for failure to perfectly comply with the deity’s moral rules in deed and thought every single moment of every day would be a hysterically funny piece of schadenfreude were it not one of the most horrific things ever to happen anywhere.

And yet there’s always some new sucker lined up to set this one off, because clearly they have the real Literal Word of God, so it can’t end in a disaster when they try it…)