Parallelism

It’s about divergences in computer technology —

Or in other words, some conversations elsewhere have made it evident that it would be useful to have some of these things out here for discussion, and since this is going to involve comparisons to Earthling ways of doing things, it’s going to be a worldbuilding article rather than an in-universe one.

Some of this has been implied previously – for those of you who remember the little piece I wrote on programming languages, in particular in its opening phrase: “The typical computer in use in the modern Empire remains the parallel array of binary-encoded Stannic-complete processors that has been in use since the days of the first settled Stannic cogitator architecture”.

So what does that actually mean?

Well, it means that while the individual elements of computation would be familiar to us – if you are reading this, you are almost certainly doing so on something describable as a binary-encoded Stannic-complete processor – how they were arranged took a sharp left turn way back in the day.

Most of our computing is fundamentally serial. We may have fancy multicore processors these days, but we’re still pretty much scratching the surface of real parallelism; most systems are still operating in a serial paradigm in which you work on one task, switch to another, work on that, and so on. If you write a complex, multithreaded program, it may look like things are happening in parallel, but most of the time, they won’t be.

For various reasons – which may have something to do with the relative ease of adding power to the old brass-and-steam Stannic cogitators by adding more processor modules vis-à-vis trying to get faster reciprocation and higher steam pressures without exploding; or it may have something to do with older forms of computation involving hiring a bunch of smart lads and lasses from the Guild of Numbers and arranging them in a Chinese room; or… – once they got into the electronic (and spintronic, and optronic) era, instead of trying to make faster and faster serial processors¹, designers concentrated on making processors – with onboard fast memory and communications links – that could be stacked up, networked, and parallelized really well, complete with dedicated hardware and microcode to manage interprocessor links.

(You could look at something like Inmos’s Transputer as an Earthly analog of early examples of this.)

Open up an Imperial computer and you’ll find a neat little stack of processor modules meshed together, working away on things in parallel and passing messages back and forth to stay coordinated. In modern designs, they share access to a big block of “slow memory”, possibly via one or more partially-shared caches, just as our multicore processors do here, but that doesn’t change the fundamentals of the parallel design.

And this architecture doesn’t change with scale, either. From the tiniest grain-of-rice picoframe found in any living object (three processing cores for redundancy, maybe even only one in the tiniest disposables) to the somewhere-between-building-and-city-sized megaframes running planetary management applications, they’re all built out of massively parallel networks of simple processing modules.
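If it helps to ground that in something runnable, here’s a toy sketch in Go (picked for its channel-based concurrency, which is about as close as Earthling languages get to the transputer-ish spirit of the thing; everything in it is my illustration, not canon): a handful of processing modules, each with private state, coordinating purely by passing messages over point-to-point links rather than by sharing memory.

```go
// Toy illustration only: a pipeline of "processing modules" (goroutines),
// each with its own private state, cooperating purely by message passing
// over point-to-point links (channels) rather than over shared memory.
package main

import "fmt"

func main() {
	const modules = 8
	// One channel per inter-module link: module i reads link i, writes link i+1.
	links := make([]chan int, modules+1)
	for i := range links {
		links[i] = make(chan int, 1)
	}

	for id := 0; id < modules; id++ {
		go func(id int) {
			work := <-links[id]      // receive from the upstream neighbour
			links[id+1] <- work + id // do a little local work, pass it on
		}(id)
	}

	links[0] <- 0                            // inject a job at one edge of the mesh
	fmt.Println("result:", <-links[modules]) // 0+1+...+7 = 28
}
```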

[Digression: this is also where the gentle art of computational origami comes into play. In the magical world in which the speed of light, bandwidth, and information density are conveniently infinite, you could fully mesh all your processing modules and everything would be wonderful. In the real world in which light is a sluggard and bit must be it, you can only have and handle so many short-range communications links – and so computational origami teaches you how to arrange your processing modules in optimally sized and structured networks, then stack them together in endless fractal layers for best throughput. More importantly, it teaches the processors how to manage this environment.]
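To make the origami point a bit more concrete, a back-of-the-envelope sketch (the fan-out of four is an arbitrary number picked for illustration, not a canonical constraint): if each node can only sustain a handful of short-range links, you cluster modules into small groups, groups into super-groups, and so on, and the number of layers you need grows only logarithmically with the number of cores.

```go
// Toy illustration: bounded fan-out clustering. Each layer merges up to
// fanOut nodes under one coordinating node; layers() reports how many
// such layers a network of n leaf modules needs.
package main

import "fmt"

func layers(n, fanOut int64) int {
	depth := 0
	for n > 1 {
		n = (n + fanOut - 1) / fanOut // ceiling division: merge fanOut nodes into one
		depth++
	}
	return depth
}

func main() {
	// Even 2.4 billion cores with a fan-out of 4 need only 16 layers.
	fmt.Println(layers(2_400_000_000, 4))
}
```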

[Second digression: having spent a lot of time and effort producing simple, networkable processor cores, designers also rewrote a lot of how peripheral devices worked – because why would you waste a lot of time fabbing specialized silicon for disk controllers, or GPUs, or floating-point units, or whatever, when you could simply throw some processing cores in there with some “firmware” – for which read “software flagged as tied to hardware feature flag foo, instance bar” – and get to the same place?

So, for example, when you think “printer”, don’t think “dumb hardware operated by a device driver”. Think “processor that knows how to draw on paper; all I have to do is send it a picture”. Pretty much every peripheral device you can think of is implemented in this way.]
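Purely as my own illustration (the types here are invented, not any actual in-world interface), the difference is between poking device registers and simply handing a self-contained job to another processor:

```go
// Toy illustration: the "printer" is just another processing core that
// accepts one high-level message, a picture, and handles the rest itself.
package main

import "fmt"

// printJob is a hypothetical high-level message sent to the printer's core.
type printJob struct {
	name   string
	pixels [][]byte // the picture; the printer core figures out the rest
}

func printerCore(jobs <-chan printJob, done chan<- string) {
	for job := range jobs {
		// ...rasterize, manage paper feed, watch consumable levels, etc....
		done <- fmt.Sprintf("printed %q (%d rows)", job.name, len(job.pixels))
	}
}

func main() {
	jobs := make(chan printJob)
	done := make(chan string)
	go printerCore(jobs, done)

	jobs <- printJob{name: "holiday card", pixels: make([][]byte, 600)}
	fmt.Println(<-done)
}
```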

This has also had rather a profound effect on how everything built on top of it works. I spent quite some time discussing how programming languages worked, along with MetaLanguage (the bytecode that these processors have more or less standardized on speaking) in the above-linked post, but you may note:

Polychora: a general-purpose, multi-paradigm programming language designed to support object-, aspect-, concurrency-, channel-, ‘weave-, contract- and actor-oriented programming across shared-memory, mesh-based, and pervasively networked parallel-processing systems.

…because once you grow to the size – and it doesn’t take much size – at which programming your parallel arrays in relatively low-level languages similar to Occam begins to pall, you start getting very interested in paradigms like object/aspect/actor programming that can handle a lot of the fun of massively parallel systems for you. This has shaped a lot of how environments have developed, and all the above language environments include compilers that are more than happy to distribute your solution for you unless you’ve worked hard to be egregiously out-of-paradigm.

Hence, too, the whys and hows of WeaveControl, and of the Living Object Protocol.
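To give the Earthling reader a taste of what “the compiler will happily distribute your solution for you” feels like in practice, here’s a toy sketch in Go (nothing here is Polychora, obviously; Go just happens to make the fan-out-and-gather pattern cheap): the programmer describes independent units of work, and which core runs each one is the runtime’s problem.

```go
// Toy illustration: the caller describes independent units of work; which
// core actually executes each one is the scheduler's problem, not the
// programmer's.
package main

import (
	"fmt"
	"sync"
)

func main() {
	work := []int{3, 1, 4, 1, 5, 9, 2, 6}
	results := make([]int, len(work))

	var wg sync.WaitGroup
	for i, w := range work {
		wg.Add(1)
		go func(i, w int) { // placed on whatever core happens to be free
			defer wg.Done()
			results[i] = w * w // some independent unit of computation
		}(i, w)
	}
	wg.Wait()
	fmt.Println(results) // [9 1 16 1 25 81 4 36]
}
```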

This has also, obviously, made distributed computing a lot more popular a lot more rapidly: since everything is built for parallel operation anyway, farming out processing to remote nodes isn’t all that much more complicated, be they your remote nodes, or hired remote nodes, or just the cycle spot market. Operating systems for these systems have already developed, to stretch the comparison a mite, a certain Kubernetes-like quality of “describe for me the service you want, and I’ll take care of the details of how to spin it up”.

In accordance with configurable policy, of course; but except in special cases, people don’t care which modules are allocated to do the thing any more than they care which neurons are allocated to catch the ball. In the modern, mature computing environment, it has long since become something safely left to the extremely reliable optronic equivalent of the cerebellum and brainstem.
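For the Earthling reader, the Kubernetes comparison can be made concrete with a sketch like this one, in which you state what you want and leave placement to the optronic cerebellum. The ServiceSpec type and its fields are entirely my invention, standing in for whatever the in-world operating systems actually accept.

```go
// Toy illustration: a declarative service request. It says what is wanted,
// not which processing modules should provide it.
package main

import "fmt"

type ServiceSpec struct {
	Name        string
	Replicas    int    // how many instances, not which modules
	Placement   string // a locality hint: "local", "system", "anywhere"
	CycleBudget int    // willingness to buy cycles on the spot market
}

func main() {
	spec := ServiceSpec{Name: "plant-watering", Replicas: 3, Placement: "local"}
	fmt.Printf("requesting %d replicas of %q (%s placement)\n",
		spec.Replicas, spec.Name, spec.Placement)
	// ...the scheduler decides which modules actually run it.
}
```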


Now, as for how this relates (going back to some of the original conversations) to starships and AI:

Well, obviously for one, there isn’t a single computer core, or even several explicitly-designed-as-redundant-nodes computer cores. There are computers all over the ship, from microcontrollers running individual pieces of equipment on up – and while this probably does include a few engineering spaces labeled “data center” and stacked floor to ceiling with nanocircs (and backing store devices), the ship’s intelligence isn’t localized to any one of them, or any couple of them. It’s everywhere.

If your plan to disable the ship involves a physical attack on the shipmind, you’ve got a lot of computing hardware to hunt down, including everything from the microcontrollers that water the potted plants on G deck to the chief engineer’s slipstick. You have fun with that. Briefly.

As for AI – well, digisapiences and thinkers operate on the same society-of-mind structure that other minds do, as described here. When this interrelates with the structure of parallel, distributed computing, you can assume that while they are one data structure identity-wise, the processing of an AI is organized such that every part of the psyche (agent, talent, personality, subpersonality, mental model, daimon, etc., etc., etc.) is a process wrapped up in its own little pod, off running… somewhere in what looks like a unified cognitive/computational space, but is actually an arbitrary number of processing cores distributed wherever policy permits them to be put.

(You can choose to look down that far, but outwith special circumstances, doing so is like a biosapience poking around their brain trying to find out exactly which cells a particular thought is located in.

Said policy usually mandates some degree of locality for core functions, inasmuch as light-lag induced mind-lag is an unpleasant dissociative feeling of stupidity that folk prefer not to experience, but in practice this non-locality manifests itself as things like “Our departure will be delayed for 0.46 seconds while the remainder of my mind boards, Captain.” Not a big deal, especially since even protein intelligences don’t keep their whole minds in the same place these days. They wouldn’t fit, for one thing.)
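To put a number-shaped sketch on “some degree of locality” (the nodes, lag figures, and budgets below are invented for illustration), the policy amounts to little more than a filter on where time-critical processes may be placed:

```go
// Toy illustration: core cognition is only placed on nodes whose round-trip
// lag to the rest of the mind stays under a budget; less time-critical
// processes can live anywhere.
package main

import "fmt"

type node struct {
	name      string
	lagMillis float64 // round-trip delay to the rest of the mind
}

// place returns the nodes eligible to host a process with the given lag budget.
func place(nodes []node, budgetMillis float64) []string {
	var eligible []string
	for _, n := range nodes {
		if n.lagMillis <= budgetMillis {
			eligible = append(eligible, n.name)
		}
	}
	return eligible
}

func main() {
	nodes := []node{
		{"bridge nanocirc stack", 0.02},
		{"engineering data center", 0.3},
		{"orbital relay", 460}, // ~0.46 s: fine for archival memory, bad for core cognition
	}
	fmt.Println("core cognition:", place(nodes, 1.0))
	fmt.Println("long-term memory:", place(nodes, 1000.0))
}
```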

But suffice it to say, when the avatar interface tells you that she is the ship, she ain’t just being metaphorical.


  1. Well, sort of. It’s not like hardware engineers and semiconductor fabs were any less obsessed with making smaller, faster, better, etc. processors than they were here, but they were doing so within a parallel paradigm. “Two-point-four-billion stacked-mesh processing cores in a nanocirc the size of your pinky nail!”, that sort of thing.

Trope-a-Day: Virtual Danger Denial

Virtual Danger Denial: Very strongly averted just about everywhere advanced, because this attitude, coupled with ubiquitous computing and mind-machine interfacing, as well as with Everything Is Online, is not survival-oriented, shall we say. (Yes, you can catch a fatal STI from cybersex.) In the modern world, you can safely assume that you are completely surrounded by computers which control just about everything going on in your vicinity, and anything that affects them will most definitely affect you.

And don’t even think about what an EMP would mean.

Trope-a-Day: Precrime Arrest

Precrime Arrest: Well, while this sort of thing is easy enough to do with behavioral analysis software and ubiquitous computing and AI monitoring and all the other appurtenances of Citizen Oversight, obviously you can’t arrest people before the crime, having a great and tremendous respect for free will and all. That would be very bad form indeed. (I mean, if they were certain to commit the crime, that would be fine: under Imperial notions of the legal causality of intent, you can arraign someone for murder even if they were stopped before proceeding. But if they haven’t committed to their mens rea yet, it’s not a crime even if it was very likely to be, and free-will choices in critical moments are awkward that way.)

But there ain’t no rule saying you can’t quietly park a UAV with a stunner, say, in the air over people who are very likely to be about to commit crimes, just in case, or quietly take other precautionary measures. If it turns out they don’t – well, no harm, no foul. That’s not an accusation; it’s just probability-based policing.

Living Object Protocol

So, a little while back I was having this discussion (scroll down) regarding starships, and where exactly the seat of their identity might be said to lie, with particular reference to the Ship of Theseus problem.

And, as it happens, Imperial technology already has a thing or two to say on this sort of question. Let me tell you about living objects, and about the Living Object Protocol.

Of course, the first thing to say there is that while, technically, a “living object” is just an object that implements the Living Object Protocol, it’s still something of an obsolete term. This is the modern age, after all, and you’d have to go to some barbarous outworld to encounter an object that didn’t implement the Living Object Protocol. Even shrubs and rocks, thanks to the nanoecology, implement the Living Object Protocol. In practice, therefore, they’re just called “objects”.

So what is it?

It’s ubiquitous computing, the Internet of Things at its apogee. LOP turns the objects it’s applied to into smart, meshed (wirelessly connected to the dataweave and to objects around them), self-aware, location-aware objects.

At a minimum (the “base subset”) this supports limited self-knowledge. Every object is aware of its own identity (both hard-coded, by type, and whatever its owner names it); it is aware of its creator (designer and manufacturer); it is aware of its owner (and ownership history); and it is aware of its location.

(Which, as recent fic implies, makes it very hard to steal things if the owner left the ackles at default or set ‘em even half-sensibly. In some cases even more so – you stole someone’s phone? That’s not going to let you call anyone but Emergency Response. Steal their gun, and… well, let’s say getting into a firefight with that would be a real bad idea.)
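As a data structure, the base subset is small enough to sketch in a few lines. The field names here are my own, but they track the four things every object knows about itself: identity, creator, ownership history, and location.

```go
// Toy illustration: the minimum self-knowledge of a living object,
// namely identity, creator, ownership history, and location.
package main

import "fmt"

type baseLOP struct {
	TypeID    string   // hard-coded identity, by type
	GivenName string   // whatever its owner has named it
	Creator   string   // designer and manufacturer
	Owners    []string // current owner last; the rest is ownership history
	Location  string   // wherever its localizer last placed it
}

func main() {
	mug := baseLOP{
		TypeID:    "ceramic-mug/0.4l",
		GivenName: "my favorite mug",
		Creator:   "a hypothetical kilnworks",
		Owners:    []string{"previous owner", "me"},
		Location:  "kitchen, second shelf",
	}
	fmt.Printf("%s is at: %s\n", mug.GivenName, mug.Location)
}
```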

Virtually every significant object – anything more than a bolt – comes with the “informational subset”, too, which in combination can tell you virtually everything about it: its user manual and other documentation explaining how to properly use and care for it; customer support information and links to object-centric memeweaves; specifications and product data; maintenance procedures and history; manufacturing origins and components/ingredients; fabber recipes for customization; the purchase invoice; and proper end-life procedures for recycling and/or disposal. Such documentation is self-updating, with information automatically appearing regarding product updates, recalls, and required service calls, along with geolinks to service centers or downloadable service packs.

Such objects are also readily searchable; it’s easy to track down your favorite mug with a simple query to your home dataweave for its location by name, or even for the locations of every object in your house identifying itself as a mug. Search engines can perform a similar task for objects in the broader world – at least, for objects that you own, or which are flagged for public accessibility.

With little effort, therefore, it’s easy to understand where and what anything is, when and where you got it, how much it cost, what it’s made from, where, and by whom, how to use it, how you should never use it, what other models are available, how it’s evolved from previous versions, how it might change in the future, what other users think about it and how they’ve tweaked it, what creative uses it’s been put to by heteroprax users, and how you might dispose of it safely.

More sophisticated objects also support the Interweave Command/Control Protocol (“WeaveControl”), enabling them to be controlled and commanded remotely, and providing access to both their internal diagnostics and any sensors with which they’re equipped: your bath can report its temperature and the current water level; your chairs know who’s sitting in them; your milk bottles can tell you if the milk they contain is fresh; and so forth.
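Sketched as an interface (the method names are mine, not the protocol’s), WeaveControl amounts to three capabilities: remote command, internal diagnostics, and access to whatever sensors the object carries.

```go
// Toy illustration: remote command, diagnostics, and sensor access
// as a single interface, with a bath as the example object.
package main

import "fmt"

type WeaveControl interface {
	Command(name string, args map[string]string) error
	Diagnostics() map[string]string
	Sensors() map[string]float64
}

// bath is a hypothetical LOP object implementing WeaveControl.
type bath struct{ tempC, levelPct float64 }

func (b *bath) Command(name string, args map[string]string) error {
	if name == "fill" {
		b.levelPct = 100
	}
	return nil
}

func (b *bath) Diagnostics() map[string]string {
	return map[string]string{"heater": "ok"}
}

func (b *bath) Sensors() map[string]float64 {
	return map[string]float64{"temperature_c": b.tempC, "level_pct": b.levelPct}
}

func main() {
	var tub WeaveControl = &bath{tempC: 40}
	_ = tub.Command("fill", nil)
	fmt.Println(tub.Sensors()) // map[level_pct:100 temperature_c:40]
}
```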

Objects which naturally come in groups support cooperative LOP and WeaveControl subsets, allowing them to be queried or commanded as a group, of which the most obvious example is LOP enumeration. A handful of LOP-compliant Imperial coins, for example, can be ordered to count themselves and report their total value.
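The coin example, reduced to a toy sketch with invented denominations:

```go
// Toy illustration: cooperative enumeration. Each coin knows its own
// denomination; the group query sums what the members report.
package main

import "fmt"

type coin struct{ denomination int }

// enumerate is the hypothetical group query: "count yourselves."
func enumerate(coins []coin) int {
	total := 0
	for _, c := range coins {
		total += c.denomination // each object reports its own value
	}
	return total
}

func main() {
	pocket := []coin{{1}, {1}, {5}, {12}, {25}}
	fmt.Println("total value:", enumerate(pocket)) // total value: 44
}
```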

Likewise, hierarchical objects automatically cooperate and pass information up and down the hierarchy, the superior controlling and coordinating its inferiors. A vehicle or building’s structural members can cooperatively use their localizers to validate the structure against its blueprint, or compute current stresses and strains in the structure.
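And the hierarchical case, in the same toy style: a superior object polls its inferiors recursively and aggregates what they report, here the total mass of an assembly. All types and numbers are illustrative only.

```go
// Toy illustration: a superior object querying its inferiors recursively
// and aggregating what they report.
package main

import "fmt"

type member struct {
	name     string
	massKg   float64
	children []*member
}

// totalMass walks the hierarchy, each superior summing its inferiors.
func (m *member) totalMass() float64 {
	total := m.massKg
	for _, c := range m.children {
		total += c.totalMass()
	}
	return total
}

func main() {
	frame := &member{name: "frame", massKg: 12, children: []*member{
		{name: "spar-a", massKg: 4},
		{name: "spar-b", massKg: 4, children: []*member{{name: "strut", massKg: 1.5}}},
	}}
	fmt.Println("total mass (kg):", frame.totalMass()) // total mass (kg): 21.5
}
```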

So – in relevant context: want to know whose starship that is, or what that module was part of before you got it, or the maintenance history of that booster, or the fuel status of that drone, or the details of the current consist?

Ask it.

You’ll get a valid answer. The LOP protocols will reject any invalid transfers, identities, or assemblies you try to push through them. So it will always know…

Trope-a-Day: Made of Phlebotinium

Made of Phlebotinium: Well, while there are several kinds of phlebotinium around (see: Applied Phlebotinium) of one grade or another, the deprivation of most of which would certainly make the universe substantially less pretty and/or efficient, the two big ones from a “made of” point of view would be the Absolutely Ubiquitous Computing, which would have much the same “rocks fall, almost everyone dies” effects were it to suddenly go away as electricity suddenly stopping working in Real Life¹, and the specific pieces of ontotechnology responsible for the creation of stargates and tangle channels, without which – and thus with all communications and transport restricted to sub-light speeds – the galactic community would look very different indeed.  In fact, if you delete the tangle channels (which allow real-time communication once you lob them at each other subluminally) as well as the stargates, there’s unlikely to be much of a galactic community, or much in the way of a “star nation” except very loose federations of subluminally-established colonies, bound together by information updates and data trade.

(1. ObVious reference example here: A Fire Upon The Deep, and the Countermeasure.)

Trope-a-Day: Friendly Fireproof

Friendly Fireproof: Your modern weapons, seeing as they contain fairly sophisticated software and personal-area network integration, tend to come with FFI (Friendly Fire Inhibition) to ensure that this is realized in real life; they just plain won’t fire at targets positively identified as friendly.

(Yes, of course there’s an override mode.  There’s also a fancier civilian model that prevents you from firing if the collateral damage you might do exceeds the amount your tort insurer would be willing to pay for.)

Of course, all guarantees are off when it comes to grenades or other area-effect weaponry…

The Most Fundamental Assistance

I watch the two of them move through the misty streets from my seat in the café, while I wait for the thin rain to stop.  The one, tall, pale and dark in the manner of the eseldrae – and clad in rather a nice set of blue-black formal robes, too – but with the very distinct no-expression and flickery eye-hand motions of someone very much occupied with their augmentality feeds, and yet somehow smoothly staying on course; the other, much more alert, trotting at his heels and weaving in and out of obstacles to stay close, golden fur obscured by panniers stuffed with…

Wait, is that a whisker laser reflecting off the droplets?  On a hunch, I call up the local network overlay.  Ah, dedicated high-bandwidth links.  Well, that’s nothing I’ve seen before.

A thinking-brain dog.