Questions: Persistent Memetic Weapons and Machine Learning

1.  Referring specifically to The Laws and Customs of War:  What exactly is the difference between persistent and non-persistent memetic / infoweapons?  It’s obvious that the big distinction is inherent in the name, but, to be more specific, how do the people in charge of using such weapons ensure that they are properly “infodegradable”?

Very carefully.

Or, slightly more seriously, this is one of those occasions on which I invoke the “I just write about it, I don’t actually have a complete science of memetics stored away” clause. But I can safely say that there are lots of very clever people engaged in threading the needle between “Oops, our economic sabotage meme-weapon got a little bit out of hand and caused the Great Depression” (acceptable collateral damage) and “Oops, our economic sabotage meme-weapon got entirely out of hand and now half a dozen systems have to put up with bloody Marxists for the next half-millennium” (very much not, and the hearings will go on forever).

(Infoweapons, by contrast, are analogous to computer viruses, etc., and as such it’s just a matter of making sure you’ve got your termination conditions and fail-safes set up right.)
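(If the software analogy is helpful, here’s a deliberately toy sketch of what “infodegradable by construction” might look like in ordinary code terms. Every name and interface in it is invented purely for illustration, nothing from the setting: the point is simply that the payload refuses to act unless a hard expiry, a propagation cap, and an external recall channel all agree it may, and it erases itself otherwise.)

```python
# Toy illustration only: a self-limiting payload wrapper in the spirit of a
# computer virus with explicit termination conditions. All names are invented.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable


@dataclass
class TerminationConditions:
    expires_at: datetime                # hard time limit: never activate after this
    max_propagations: int               # hard spread limit: stop copying past this
    abort_signal: Callable[[], bool]    # external fail-safe / recall channel


class DegradablePayload:
    def __init__(self, action: Callable[[], None], conditions: TerminationConditions):
        self.action = action
        self.conditions = conditions
        self.propagations = 0

    def _still_live(self) -> bool:
        c = self.conditions
        if datetime.now(timezone.utc) >= c.expires_at:
            return False                # past its shelf life: it has "degraded"
        if self.propagations >= c.max_propagations:
            return False                # exceeded its permitted spread
        if c.abort_signal():
            return False                # recalled by its operators
        return True

    def run(self) -> None:
        # Fail safe, not fail deadly: any doubt means do nothing and self-erase.
        if not self._still_live():
            self.self_destruct()
            return
        self.propagations += 1
        self.action()

    def self_destruct(self) -> None:
        self.action = lambda: None      # render the payload inert
```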

2.  Regarding Powers as Programs and Skilled But Naive:  On the one hand, part of me thinks that, if you’re able to trade the raw skill itself by mnemonesis, the same should apply to experience as well, since experience could conceivably be boiled down to the knowledge of “what works and what doesn’t” and the memories that knowledge is associated with, and that, given the setting’s information technology, these experiences wouldn’t be that much harder to swap than the raw skill knowledge itself.

(On the other hand, while typing that out, I came to realize that the idea that “to play as a virtuoso, you still need to practice like one” might still apply in practice even with that caveat I just mentioned:  In an almost evolutionary sense, the skills of yesterday’s virtuoso become the baseline for today’s practice, so that to be acknowledged as a virtuoso now you have to push out your skills even further than before.  Is that basically how it ends up working out in practice?)

The Doctor Who must be seeping into my brain, because the first thing I want to say here is:

“People assume that the mind is analogous to a computer with an attached database, but actually – it’s more like a big ball of wibbly-wobbly… thinky-winky… stuff.”

…in any case.

The first problem here is that experiences are really problematic to swap. That’s because a very large chunk of the mind (the “psyche”, or “incrementing memory string” in the local jargon) is your experiences and the way they have shaped it.

Remember, importing the incrementing memory string diffs is what you do to merge forks of yourself back together again. Importing a whole bunch of someone else’s experience-memories will change your identity – it’s hard enough to do this with your own without running into nasty cross-link problems – which at best will be enough to cross the legal threshold and turn into a fancy way to commit suicide and become someone new, and at worst is merely the fast track to committing suicide and becoming institutionalizably schizophrenic all in one move.
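(To make the fork-merge analogy concrete, here’s a deliberately crude sketch. All the names are invented and real psyches are emphatically not append-only lists, but it shows the shape of the problem: a fork’s diff merges cleanly because every new entry cross-links into history you already share, while someone else’s experience-memories arrive full of dangling references to a life you never lived.)

```python
# Crude illustration of why fork-merges work and wholesale imports don't.
# Each "experience" cross-links to earlier entries in the same memory string.
from dataclasses import dataclass, field


@dataclass
class Experience:
    id: str
    links: list[str] = field(default_factory=list)  # earlier experiences it builds on


def merge_fork_diff(own: list[Experience], fork_diff: list[Experience]) -> list[Experience]:
    """Merge a fork's new experiences; safe because they link back into shared history."""
    known = {e.id for e in own}
    merged = list(own)
    for exp in fork_diff:
        dangling = [link for link in exp.links if link not in known]
        if dangling:
            # With a fork of yourself this basically never happens: its history is yours.
            raise ValueError(f"{exp.id} references experiences you never had: {dangling}")
        merged.append(exp)
        known.add(exp.id)
    return merged


# A fork's diff merges cleanly...
mine = [Experience("learned-scales"), Experience("first-recital", ["learned-scales"])]
fork = [Experience("second-recital", ["first-recital"])]
merge_fork_diff(mine, fork)

# ...but someone else's experience-memories mostly point at a life you didn't live:
# merge_fork_diff(mine, [Experience("their-conservatory-years", ["their-first-teacher"])])
# -> ValueError: dangling cross-links
```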

(There is such a thing as exomemory technology, but while that lets you experience someone else’s memory from their point of view, it doesn’t actually patch it into your mind as if it were your own. You can only learn from those what you would learn from watching the violin prodigy, not from being the violin prodigy.)

The second and bigger one, touching directly on the thinky-winkyness, is that the mind is encoded in what we can call a holistic, associative manner. Everything is interconnected with everything else, and it’s those complex interconnections that (a) make it very hard to comprehend, and (b) make everything go smoothly.

It’s easy – for values of easy equal to ‘requiring extremely sophisticated cognitive science’ – to scribe raw data into the brain as fact-memory. It’s rather harder, but possible, to encode skill-memories, and it gets even harder when you’re talking about the need to go poking around in the cerebellum and all manner of other specialized areas to teach them what they need to know to go with the skill-memories; and that in turn becomes a dozen times more complicated when we get into how these interact with hormones and other glandular effects, and the fact that any given body will not respond in the same manner as any other, even before we start talking about neomorphic shapes. But it’s possible.

Where it gets impossibly hard is in editing in all the millions of little subtle connections to every other part of the contents of your brain that would have been there had you learnt it in the conventional manner. And without those – and this is a poor analogy – you’re in the situation of someone who tried to learn karate from a textbook, or of someone with an English degree but no practice trying to write poetry for the first time.

(I mean, you can still turn in an expert-level performance, since you have the skills, but that’s not the same thing as having them fully integrated into your self. Like the trope write-up says, it’s about integration and synthesis, about building all those connections that let you do things without having to try to do things.)

Now. That all being said – this is a technological restriction. If you have access to all the powers and power of a Power, in the Vingean sense, and thus are or have a friendly mind capable of comprehending yours in every single aspect and fine detail, then they can re-envision you as one possible person you would have been had you known these things all along and spool that straight to output. It’s easy for a Power to do that. They write software of greater-than-human-mind complexity every day of the week and twice on Nyxis.

But the gods are very busy, and have better things to do than come running every time someone wants to have learnt kung fu.

 

2 thoughts on “Questions: Persistent Memetic Weapons and Machine Learning”

  1. Fair enough. I figured this was likely the case, but I was wondering if there were any particular thresholds of duration or impact where you went from “sloppy but acceptable” to “YOU DUN GOOFED” if you crossed them.

    2. I’m guessing that this never stopped anyone from trying, though?

    • On the first, those depend heavily on what you’re trying to do. I mean, causing mass devastation is technically okay as long as you intended to cause mass devastation and didn’t break any of the other rules in the process. It’s when you get too sloppy in either time or space beyond what you intended that it becomes a violation.

      On the second, less so in the sophisticated transsophont polities outside the research lab, most of which have internalized – via spacer culture, if nothing else – the proper care when dealing with potent technologies and the concept that the people who printed those safety-and-appropriate-usage-warnings on there did so for an actual reason.

      Less determinedly rational cultures do, though, have the expected wards full of lunatics who thought they were too damn special for the laws of cognitive science to apply to them.
