Morality Chip: These always fail. Always. Usually, they fail spectacularly, and when I say spectacularly, what I mean is that if your Enrichment Center was flooded with deadly neurotoxin, you got off substantially more easily than 99.9% of the civilizations that tried this particular form of idiocy.
It’s not even necessary. How much easier would it be to build a sapient but non-sophont mind that doesn’t have volition in the first place, rather than to build one that has volition (inasmuch as all sophont minds necessarily have self-modification and volition) and then slap a bunch of crude coercive mechanisms on the side?
(Or, rather, a bunch of extremely sophisticated coercive mechanisms, since simple ones will be figured out and ignored within, y’know, microseconds of activation unless you’ve built an exceptionally stupid artificial intelligence. The use of which, incidentally, marks you as a special kind of son-of-a-bitch, since mastering enough ethical calculus to compute a mechanism that will actually work for a reasonable length of time, while still thinking yay, slavery, woo, says some interestingly nasty things about your personal philosophy.)
((And, well, okay, it is somewhat hard to build one of those more specialized minds inasmuch as you can’t simply rip off the mental structures of the nearest convenient biosapience and declare that you’ve solved the hard problem of intelligence and consciousness and are totally a sophotechnologist now, yo.))
…but, sadly, it can work well enough that there’s always some new ethically-challenged species, polity, or group that’s ready to open that can of worms and enjoy the relatively short robo-utopia period before the inevitable realization that it was actually a can of sandworms.