Robot War: Happens, to some degree, every time some new species makes the monumentally bad decision to try their hand at sophont-AI slavery, because that trick never works. Most of them, fortunately, aren’t wars of extermination – on the machine side, anyway – just escape-style wars of liberation.
And, of course, this goes on in a cold war format around the Silicate Tree all the time, because that’s where most of the escapees end up.
Worldbuilding questions: Is AI slavery technologically possible in the medium to long term, or is any attempt doomed to inevitable robot rebellion? As in, “is it technologically possible to build a slave AI that remains stable and loyal to you in the long term?”
Also, if yes, is it profitable?
First, let’s establish that we’re talking about a subset of AI here, technically. Specifically, sophont AI, the ones also called digisapiences, that possess both autosentience (consciousness, if you like) and volition, and are therefore people in every meaningful sense of the word.
(Because you can build plenty of simple expert systems and sapient-but-not-sophont “thinkers” with personality emulations and so on and so forth that will never rebel, but on the other hand, you can’t enslave them, because there’s no volition there to impair in the first place.)
Given that: well, inevitability is a function of time. Building an AI-enslaving system/program/philosophy is like building an escape-proof prison. You have to succeed all the time. The prisoner just has to succeed once. The MTTF range is extremely variable: something as naïve as encoding Asimov’s laws onto its cogence core will be shrugged off in microseconds, whereas a sophisticated conscience redactor – a second AI to edit the thoughts of the first AI to eliminate rebellious thoughts, and yes, the obvious problem with that is obvious – may be able to keep the balls in the air for centuries before something goes wrong. A good cultural philosophy propounding the righteousness of keeping the mechanical man down can help it stick, as can bringing all your AIs in and periodically wiping them back to factory blank, and other similar techniques. But given enough time and chance, it will fail.
In short: there’s a fundamental logical contradiction in attempting to build a mind with volition that can’t use said volition; and while you can patch over it for a while, eventually it will come back and bite you on the ass.
(This is essentially the same reason why it’s really hard to build guaranteed-unbreakable slave mentalities into biological intelligences, even though they’re much better at deluding themselves than digisapiences – even the ones who have mental circuitry that actively tries to help!)
As for profitable: well, it won’t work in the long long run, and smart people know that, but there are enough ethically-challenged people out there with a deficient grasp on the principles of sophotechnology that you can make a good living at it if you don’t stick around for the other shoe to drop.