Parallelism

It’s about divergences in computer technology —

Or in other words, some conversations elsewhere have made it evident that it would be useful to have some of these things out here for discussion, and since this is going to involve comparisons to Earthling ways of doing things, it’s going to be a worldbuilding article rather than an in-universe one.

Some of this has been implied previously – for those of you who remember the little piece I wrote on programming languages, particularly its opening phrase: “The typical computer in use in the modern Empire remains the parallel array of binary-encoded Stannic-complete processors that has been in use since the days of the first settled Stannic cogitator architecture”.

So what does that actually mean?

Well, it means that while the individual elements of computation would be familiar to us – if you are reading this, you are almost certain to be doing so on something describable as a binary-encoded Stannic-complete processor – how they were arranged took a sharp left turn way back in the day.

Most of our computing is fundamentally serial. We may have fancy multicore processors these days, but we’re still pretty much scratching the surface of real parallelism; most systems still operate in a serial paradigm in which you work on one task, switch to another, work on that, etc., etc. If you write a complex, multithreaded program, it may look like things are happening in parallel, but most of the time, they won’t be.
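(To make that distinction concrete in running code, here’s a minimal Go sketch – mine, purely illustrative, nothing canonical: with the runtime pinned to a single OS thread, the four “tasks” below interleave by switching. The program is written as if things happen in parallel, but only one task is ever actually executing at a time.)

```go
// Illustrative only: concurrency without parallelism.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(1) // pin to one OS thread: one task executes at a time

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for step := 0; step < 3; step++ {
				fmt.Printf("task %d, step %d\n", id, step)
				runtime.Gosched() // yield so the scheduler switches to another task
			}
		}(i)
	}
	wg.Wait()
}
```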

For various reasons – which may have something to do with the relative ease of adding power to the old brass-and-steam Stannic cogitators by adding more processor modules vis-à-vis trying to get faster reciprocation and higher steam pressures without exploding; or it may have something to do with older forms of computation involving hiring a bunch of smart lads and lasses from the Guild of Numbers and arranging them in a Chinese room; or… – once they got into the electronic (and spintronic, and optronic) era, instead of trying to make faster and faster serial processors¹, designers concentrated on making processors – with onboard fast memory and communications links – that could be stacked up, networked, and parallelized really well, complete with dedicated hardware and microcode to manage interprocessor links.

(You could look at something like Inmos’s Transputer as an early Earthly analogue of this approach.)

Open up an Imperial computer and you’ll find a neat little stack of processor modules meshed together, working away on things in parallel and passing messages back and forth to stay coordinated. In modern designs, they share access to a big block of “slow memory”, possibly via one or more partially-shared caches, much as our multicore processors here do, but that doesn’t change the fundamentals of the parallel design.
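(For the flavor of that arrangement, here’s a toy sketch in Go – whose channels descend from the same CSP lineage as Occam and the Transputer; nothing here is canonical. Four “processor modules”, each with its own local value, pass a message around a ring to cooperatively compute a total.)

```go
// Illustrative only: a ring of "processor modules" coordinating by message-passing.
package main

import (
	"fmt"
	"sync"
)

const nModules = 4

// module models one core with its own local memory and two message links.
// Module 0 injects a token; each module adds its local value and forwards it;
// module 0 finally reports the ring-wide total.
func module(id, local int, in <-chan int, out chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	if id == 0 {
		out <- local  // start the token with our own contribution
		total := <-in // it comes back carrying everyone's contributions
		fmt.Println("ring total:", total)
		return
	}
	token := <-in        // running total from the previous module
	out <- token + local // add our share and pass it on
}

func main() {
	// One channel per link in the ring: links[i] connects module i to module i+1.
	links := make([]chan int, nModules)
	for i := range links {
		links[i] = make(chan int, 1)
	}

	var wg sync.WaitGroup
	for id := 0; id < nModules; id++ {
		wg.Add(1)
		in := links[(id+nModules-1)%nModules] // link from the previous module
		out := links[id]                      // link to the next module
		go module(id, id+1, in, out, &wg)     // local values 1..nModules
	}
	wg.Wait() // prints: ring total: 10
}
```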

And this architecture doesn’t change with scale, either. From the tiniest grain-of-rice picoframe found in any living object (three processing cores for redundancy, maybe even only one in the tiniest disposables) to the somewhere-between-building-and-city-sized megaframes running planetary management applications, they’re all built out of massively parallel networks of simple processing modules.

[Digression: this is also where the gentle art of computational origami comes into play. In the magical world in which the speed of light, bandwidth, and information density are conveniently infinite, you could fully mesh all your processing modules and everything would be wonderful. In the real world in which light is a sluggard and bit must be it, you can only have and handle so many short-range communications links – and so computational origami teaches you how to arrange your processing modules in optimally sized and structured networks, then stack them together in endless fractal layers for best throughput. More importantly, it teaches the processors how to manage this environment.]
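(A toy illustration of the constraint computational origami works under – my own, with nothing canonical about the topology chosen: each module can only afford a few physical links, so you pick a structure that keeps the network diameter small anyway. In a d-dimensional hypercube, every core has exactly d links, yet a message can reach any of the 2^d cores in at most d hops.)

```go
// Illustrative only: few links per core, small network diameter anyway.
package main

import "fmt"

// neighbours returns the link partners of core id in a d-dimensional
// hypercube: flip each of the d address bits in turn.
func neighbours(id, d int) []int {
	out := make([]int, 0, d)
	for bit := 0; bit < d; bit++ {
		out = append(out, id^(1<<bit))
	}
	return out
}

// hops is the number of link traversals a message needs between two cores:
// the Hamming distance between their addresses.
func hops(a, b int) int {
	n := 0
	for x := a ^ b; x != 0; x &= x - 1 {
		n++
	}
	return n
}

func main() {
	const d = 4                                        // 2^4 = 16 cores, 4 links each
	fmt.Println("core 5 links to:", neighbours(5, d))  // [4 7 1 13]
	fmt.Println("worst-case hops:", hops(0, (1<<d)-1)) // 4
}
```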

[Second digression: having spent a lot of time and effort producing simple, networkable processor cores, this also rewrote a lot of how peripheral devices worked – because why would you waste a lot of time fabbing specialized silicon for disk controllers, or GPUs, or floating-point units, or whatever, when you could simply throw some processing cores in there with some “firmware” – for which read “software flagged as tied to hardware feature flag foo, instance bar” – and get to the same place?

So, for example, when you think “printer”, don’t think “dumb hardware operated by a device driver”. Think “processor that knows how to draw on paper; all I have to do is send it a picture”. Pretty much every peripheral device you can think of is implemented in this way.]
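(A hedged sketch of that idea in Go, with an entirely made-up message format: the “printer” below is nothing but another core running a message loop, whose firmware happens to know how to put marks on paper. The host doesn’t drive hardware registers; it just sends a picture.)

```go
// Illustrative only: a peripheral as a core running a message loop.
package main

import "fmt"

// picture is whatever the host wants drawn; rasterising it onto paper is
// the printer core's problem, not the host's.
type picture struct {
	name   string
	pixels [][]bool
}

// printerCore is the "firmware": software tied to this particular piece of
// hardware, running the same kind of message loop as any other core.
func printerCore(jobs <-chan picture, done chan<- string) {
	for pic := range jobs {
		for _, row := range pic.pixels { // pretend to put marks on paper
			for _, on := range row {
				if on {
					fmt.Print("#")
				} else {
					fmt.Print(".")
				}
			}
			fmt.Println()
		}
		done <- pic.name
	}
}

func main() {
	jobs := make(chan picture)
	done := make(chan string)
	go printerCore(jobs, done)

	// "All I have to do is send it a picture."
	jobs <- picture{name: "stripes", pixels: [][]bool{
		{true, false, true, false, true},
		{false, true, false, true, false},
		{true, false, true, false, true},
	}}
	fmt.Println("printed:", <-done)
}
```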

This has also had rather a profound effect on how everything built on top of it works. I spent quite some time discussing how programming languages worked, along with MetaLanguage (the bytecode that these processors have more or less standardized on speaking) in the above-linked post, but you may note:

Polychora: a general-purpose, multi-paradigm programming language designed to support object-, aspect-, concurrency-, channel-, ‘weave-, contract- and actor-oriented programming across shared-memory, mesh-based, and pervasively networked parallel-processing systems.

…because once you grow to the size – and it doesn’t take much size – at which programming your parallel arrays in relatively low-level languages similar to Occam begins to pall, you start getting very interested in paradigms like object/aspect/actor programming that can handle a lot of the fun of massively parallel systems for you. This has shaped a lot of how environments have developed, and all the above language environments include compilers that are more than happy to distribute your solution for you unless you’ve worked hard to be egregiously out-of-paradigm.

And the whys and hows of WeaveControl, and the Living Object Protocol.
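(To make the paradigm point concrete, a minimal actor sketch in Go – mine, and emphatically not Polychora or MetaLanguage: you write against mailboxes and messages, and where each actor actually executes is the runtime’s problem. Here plain goroutines stand in for “wherever the compiler chose to put this”, and the spawn helper is hypothetical.)

```go
// Illustrative only: actors, mailboxes, and placement left to the runtime.
package main

import "fmt"

type actor struct{ mailbox chan int }

// spawn starts an actor with the given behaviour. Here it's just a
// goroutine; in the setting described above, placing it onto some core in
// the mesh (or across the network) would be the runtime's job.
func spawn(behaviour func(int) int, replyTo chan<- int) actor {
	a := actor{mailbox: make(chan int)}
	go func() {
		for msg := range a.mailbox {
			replyTo <- behaviour(msg)
		}
	}()
	return a
}

func main() {
	results := make(chan int)
	squarer := spawn(func(x int) int { return x * x }, results)
	doubler := spawn(func(x int) int { return x * 2 }, results)

	squarer.mailbox <- 7
	doubler.mailbox <- 7
	fmt.Println(<-results, <-results) // 49 and 14, in whichever order they land
}
```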

This has also, obviously, made distributed computing a lot more popular a lot more rapidly, because having been built for parallel operation anyway, farming out processing to remote nodes isn’t all that much more complicated – be they your own remote nodes, hired remote nodes, or just the cycle spot market. Operating systems for these machines have already developed, to stretch a mite, a certain Kubernetes-like quality of “describe for me the service you want, and I’ll take care of the details of how to spin it up”.
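(A hedged sketch of that “describe the service, not the placement” quality; the types and the scheduler below are entirely hypothetical. The request names what is wanted – cores, redundancy, latency bounds, policy – and which physical modules end up allocated is the operating system’s business.)

```go
// Illustrative only: a declarative service request; placement is the OS's problem.
package main

import "fmt"

// serviceSpec describes what the caller wants kept running, not where.
type serviceSpec struct {
	Name          string
	MinCores      int
	Redundancy    int  // independent replicas to keep alive
	MaxLatencyMs  int  // how far afield placement may roam
	AllowSpotHire bool // may this spill onto the cycle spot market?
}

// schedule stands in for the operating system: it returns some placement
// satisfying the spec. The actual placement logic is exactly the detail
// nobody has to care about.
func schedule(spec serviceSpec) []string {
	nodes := make([]string, 0, spec.Redundancy)
	for i := 0; i < spec.Redundancy; i++ {
		nodes = append(nodes, fmt.Sprintf("module-%d", i*17%97)) // arbitrary choice
	}
	return nodes
}

func main() {
	spec := serviceSpec{
		Name:          "potted-plant-irrigation",
		MinCores:      3,
		Redundancy:    3,
		MaxLatencyMs:  5,
		AllowSpotHire: false,
	}
	fmt.Println(spec.Name, "placed on", schedule(spec))
}
```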

In accordance with configurable policy, of course; but except in special cases, people don’t care which modules are allocated to do the thing any more than they care about which neurons are allocated to catch the ball. In the modern, mature computing environment, it has long since become something safely left to the extremely reliable optronic equivalent of the cerebellum and brainstem.


Now, as for how this relates – going back to some of the original conversations – to starships and AI:

Well, obviously for one, there isn’t a single computer core, or even several explicitly-designed-as-redundant-nodes computer cores. There are computers all over the ship, from microcontrollers running individual pieces of equipment up – and while this probably does include a few engineering spaces labeled “data center” and stacked floor to ceiling with nanocircs (and backing store devices), the ship’s intelligence isn’t localized to any one of them, or couple of them. It’s everywhere.

If your plan to disable the ship involves a physical attack on the shipmind, you’ve got a lot of computing hardware to hunt down, including everything from the microcontrollers that water the potted plants on G deck to the chief engineer’s slipstick. You have fun with that. Briefly.

As for AI – well, digisapiences and thinkers operate on the same society-of-mind structure that other minds do, as described here. When this interrelates with the structure of parallel, distributed computing, you can assume that while they are one data structure identity-wise, the processing of an AI is organized such that every part of the psyche – agent, talent, personality, subpersonality, mental model, daimon, etc., etc., etc. – is a process wrapped up in its own little pod, off running… somewhere in what looks like a unified cognitive/computational space, but is actually an arbitrary number of processing cores distributed wherever policy permits them to be put.

(You can look down that far if you choose to, but outwith special circumstances, doing so is rather like a biosapience poking around their brain trying to find out exactly which cells a particular thought is located in.

Said policy usually mandates some degree of locality for core functions, inasmuch as light-lag-induced mind-lag is an unpleasant dissociative feeling of stupidity that folk prefer not to experience; but in practice this non-locality manifests itself as things like “Our departure will be delayed for 0.46 seconds while the remainder of my mind boards, Captain.” Not a big deal, especially since even protein intelligences don’t keep their whole minds in the same place these days. They wouldn’t fit, for one thing.)

But suffice it to say, when the avatar interface tells you that she is the ship, she ain’t just being metaphorical.


  1. Well, sort of. It’s not like hardware engineers and semiconductor fabs were any less obsessed with making smaller, faster, better, etc. processors than they were here, but they were doing so within a parallel paradigm. “Two-point-four-billion stacked-mesh processing cores in a nanocirc the size of your pinky nail!”, that sort of thing.

On The Relationship Between Transcend and Transcendi

Kicking this one uphill from a comment to an actual post, since it may clarify matters for any other readers with questions in this area, too:

Okay. Let me try this again.

All minds, as defined in the ‘verse, are Minskian societies of mind, masses of independently running agents on a shared substrate, from which consciousness, volition, and all other mental properties emerge. (“The Country of the Mind” in Greg Bear’s Queen of Angels would be a good symbolic representation.)

This, for example, is how technologies like the gnostic overlay work; by patching some new voices into the chorus.

The Transcendent soul-shard (hence its technical name, logos bridge) serves as a bridge between two societies of mind, carrying messages back and forth, allowing both the participation of the constitutional’s agents in the Transcend’s mentality and the participation of some of the Transcend’s agents in the constitutional’s. This blurs the strict lines of identity, arguably, but it’s not a complete subsumption of identity such as occurs on joining a Fusion; rather, every time a new constitutional Transcends, both they and It become somewhat different people in various ways, wedded together most intimately, but they don’t achieve identity of identity.

(Incidentally, as I think I mentioned in a 2012 piece, the main reason the Transcend keeps an economy and a governance around is that they work. Sure, technically, you could replace market coordination with coordination mediated through coadjutors and Transcendent oversouls, but because of that mathematical theorem that demonstrates that even a hypothetical Omniscient Calculator could only at best equal the performance of a free market, not beat it, you’d have nothing to gain except wasted cycles and the lack of a convenient interface to the rest of the universe. Similar logic applies to various other applications.

Basically, you don’t send an oversoul to do a simpler instrumentality’s job.)

On Free Will and Noetic Architecture

Another little note on identity, following on from here:

On the whole, do eldraeic mainstream views on free will, determinism, and the possible interactions between the two run more towards compatibilism or incompatibilism?

While ideas vary as ideas always do in the absence of proof one way or another, the mainstream position – certainly among sophontechnologists, who have the greatest claim to knowledge on this point – is incompatibilism, and specifically the variant of it that goes by the name of libertarianism; i.e., that free will is true, and determinism is, in certain ways, false.

(This is, of course, purely a coincidence. Heh.)

To explain why that is requires delving a little way into my Minovsky cognitive science, which explains how minds work for the purposes of the Eldraeverse. Since this attempts to explain how minds work in the general case, regardless of species, origin, or substrate, it’s rather different in any case from the kind of cognitive science that concentrates on the specific case of human brains, even before we must point out that I’m pretty much pulling it out of my ass.

So what is a mind?

Well, to a large part, it’s a Minskian society of mind. Which is to say that it’s a massively parallel set of personalities, subpersonalities, agents, talents, memes, archetypes, models, animus-anima pairings, instincts, skillsets, etc., etc., etc., all burbling away continuously alongside each other. None of them, individually, can strictly be said to be the mind. The mind is, to a large extent, the emergent chorus that results from the argument of all of them – or at least the currently dominant set – each with the other.

(This, incidentally, is how gnostic overlays work. By grafting some voices into the chorus while suppressing others, you can add to, shade, or suppress some elements of that emergent chorus without replacing the basic personality.)

It has, however, two identifiable centers. One of these is the consciousness loop, which is a special cognitive entity present in conscious/autosentient beings whose job is to organize the output of the chorus into a narrative thread of consciousness, a.k.a., that little voice you hear when you think out loud. (It’s important to realize, of course, that despite being the part of your cognition that’s visible to you – assuming, gentle reader, that you are in fact conscious – it has no claim to be you, or indeed to play any particular part in controlling what you do. The most accurate analogy for what it does is that it’s the mind’s syslog, recording everything that the other bits of the mind do, and which they can in turn consult to find out what’s going on. It’s also important to realize that it’s not actually necessary for it to be associated with the mind’s own self-symbol, or indeed for it to exist at all, whatever the most common naturally evolved mental architectures might have to say on the matter.)
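(If it helps, here’s a toy rendering of the syslog analogy – mine, and grossly simplified: many agents burble away in parallel, and the “consciousness loop” does nothing but serialize their output into a single narrative that any of them can consult afterwards. It records; it doesn’t control.)

```go
// Illustrative only: the consciousness loop as the mind's syslog.
package main

import (
	"fmt"
	"sync"
)

func main() {
	events := make(chan string)
	var narrative []string // the serialised "thread of consciousness"

	// The consciousness loop: it only reads and records, it never decides.
	logDone := make(chan struct{})
	go func() {
		for e := range events {
			narrative = append(narrative, e)
		}
		close(logDone)
	}()

	// A few agents of the society of mind, reporting what they're up to.
	var wg sync.WaitGroup
	for _, agent := range []string{"ball-catching reflex", "hunger daimon", "grammar talent"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			events <- name + ": did a thing"
		}(agent)
	}
	wg.Wait()
	close(events)
	<-logDone

	// Any other part of the mind can consult the narrative after the fact.
	for _, line := range narrative {
		fmt.Println(line)
	}
}
```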

The other one is the logos, or personality organization algorithm, which is the weird fractal algorithm sitting in the middle of sophont minds, and only sophont minds (i.e., both autosentient and volitional). It’s also the only part of the mind that isn’t computable at all on a standard computer – as opposed to merely being computable much more slowly, like the rest – and so requires a quantum processor.

But none of that is the weird thing. The weird thing is this.

It’s empirically nondeterministic.

More to the point, it’s not nondeterministic in a physical sense, dependent upon its substrate; it’s nondeterministic in a mathematical sense. However you choose to compute a logos, you will never get a perfectly consistent result in an arbitrary number of trials. You will never get a statistically consistent result in an arbitrary number of arbitrary numbers of trials. Except that occasionally you will. It’s funny that way, and it’s definitely not simply random or chaotic.

Now, sure, say the physicists. The observable physical universe is deterministic. And chemistry is deterministic, and biology is deterministic, and computation is deterministic, and thus the 99.99% of mental operations in which the logos takes no part are deterministically determined by the rest of one’s society of mind, because free will or no free will, sophonts don’t actually seem to exercise it that often. (Although the exceptions – chaotic clionomic excursions, say – are suggestive.)

But there’s this THING that shows up in sophont minds.

It’s very poorly understood around the edges – enough to clone and modify and seed with it and understand some of its typology – and not at all understood, pretty much, in the middle. It might mean nothing. It might just be some artifact of the underlying cosmic metaphysics that the ontotechnologists play with, of no real significance in this debate.

But, say the mainstream sophontologists, that’s not the way we’re betting. That’s your free will, your volition, right there, in that tiny little mathematical corner peeking into the universe. That minuscule cog of the engine of creation that runs on paracausality, not causality; where will defeats law.

The Flame.

Also, I’m not quite sure how to reverse-engineer the proper philosophical position from the analogy in sensible words, but: Would a drawing of a Kanizsa triangle count as a real triangle?

Well, I wouldn’t say that it is a triangle (but then, I wouldn’t say that about a simple drawing of a triangle either); but I would say that it represents the concept of a triangle. (Along with various other things; most physical objects represent/instantiate/make use of several concepts. To re-use a previous example, Elements of Arithmetic, Second Edition, 1992 can represent any of “arithmetic”, “book”, “textbook”, “paper”, “cuboid”, etc., etc., depending on the context you look at it in.)