Parallelism

It’s about divergences in computer technology —

Or in other words, some conversations elsewhere have made it evident that it would be useful to have some of these things out here for discussion, and since this is going to involve comparisons to Earthling ways of doing things, it’s going to be a worldbuilding article rather than an in-universe one.

Some of this has been implied previously – for those of you who remember the little piece I wrote on programming languages in particular, in the opening phrase “The typical computer in use in the modern Empire remains the parallel array of binary-encoded Stannic-complete processors that has been in use since the days of the first settled Stannic cogitator architecture”.

So what does that actually mean?

Well, it means that while the individual elements of computation would be familiar to us – if you are reading this, you are almost certain to be doing so on something describable as a binary-encoded Stannic-complete processor – how they were arranged took a sharp left turn way back in the day.

Most of our computing is fundamentally serial. We may have fancy multicore processors these days, but we’re still pretty much scratching the surface of real parallelism; most systems still operate in a serial paradigm in which you work on one task, switch to another, work on that, etc., etc. If you write a complex, multithreaded program, it may look like things are happening in parallel, but most of the time, they won’t be.

For various reasons – which may have something to do with the relative ease of adding power to the old brass-and-steam Stannic cogitators by adding more processor modules vis-à-vis trying to get faster reciprocation and higher steam pressures without exploding; or it may have something to do with older forms of computation involving hiring a bunch of smart lads and lasses from the Guild of Numbers and arranging them in a Chinese room; or… – once they got into the electronic (and spintronic, and optronic) era instead of trying to make faster and faster serial processors¹, designers concentrated on making processors – with onboard fast memory and communications links – that could be stacked up, networked, and parallelized really well, complete with dedicated hardware and microcode to manage interprocessor links.

(You could look at something like Inmos’s Transputer as similar to early examples of this.)

Open up an Imperial computer and you’ll find a neat little stack of processor modules meshed together, working away on things in parallel and passing messages back and forth to stay coordinated. In modern designs, they share access to a big block of “slow memory”, possibly via one or more partially-shared caches, just as our multicore processors do here, but that doesn’t change the fundamentals of the parallel design.

And this architecture doesn’t change with scale, either. From the tiniest grain-of-rice picoframe found in any living object (three processing cores for redundancy, maybe even only one in the tiniest disposables) to the somewhere-between-building-and-city-sized megaframes running planetary management applications, they’re all built out of massively parallel networks of simple processing modules.
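For the curious, the flavor of this message-passing style can be sketched in Earthling Python – everything here (the names, the tree-reduction pattern) is my own illustration, not anything canonical:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Module:
    """One processing core with local fast memory and a link inbox."""
    ident: int
    value: int
    inbox: deque = field(default_factory=deque)

def distributed_sum(values):
    """Tree reduction, a staple pattern on message-passing hardware:
    modules pair up, each pair passes a partial sum over a link, and
    the active set halves every round.  No shared memory involved."""
    active = [Module(i, v) for i, v in enumerate(values)]
    while len(active) > 1:
        survivors = []
        for a, b in zip(active[::2], active[1::2]):
            b.inbox.append(a.value)        # a sends its partial sum over the link
            b.value += b.inbox.popleft()   # b folds it into local memory
            survivors.append(b)
        if len(active) % 2:                # odd module out survives the round
            survivors.append(active[-1])
        active = survivors
    return active[0].value

print(distributed_sum(range(16)))  # 120
```

The point of the exercise: the total never lives in any one shared place until the end – it is assembled entirely out of messages, which is why adding more modules adds capacity rather than contention.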

[Digression: this is also where the gentle art of computational origami comes into play. In the magical world in which the speed of light, bandwidth, and information density are conveniently infinite, you could fully mesh all your processing modules and everything would be wonderful. In the real world in which light is a sluggard and bit must be it, you can only have and handle so many short-range communications links – and so computational origami teaches you how to arrange your processing modules in optimally sized and structured networks, then stack them together in endless fractal layers for best throughput. More importantly, it teaches the processors how to manage this environment.]
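To make the origami problem concrete with an Earthling example: a hypercube topology gives each of 2^k modules exactly k links and a worst case of k hops – one classic answer to “you can only have so many short-range links”. A toy sketch (mine, not canon):

```python
def hypercube_neighbours(node, dim):
    """In a dim-dimensional hypercube of 2**dim modules, each module
    has exactly dim links: one per flippable address bit."""
    return [node ^ (1 << b) for b in range(dim)]

def route(src, dst):
    """Dimension-ordered routing: correct differing address bits one
    at a time.  Hop count equals the Hamming distance between the
    addresses, so it is at most dim hops."""
    path, node = [src], src
    diff, b = src ^ dst, 0
    while diff:
        if diff & 1:
            node ^= (1 << b)
            path.append(node)
        diff >>= 1
        b += 1
    return path

print(route(0b000, 0b101))  # [0, 1, 5]: two hops for two differing bits
```

Real network design (theirs or ours) then stacks such units into larger fabrics – the “endless fractal layers” above – but the budget being optimized is always the same: links per module versus hops per message.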

[Second digression: having spent a lot of time and effort producing simple, networkable processor cores, this also rewrote a lot of how peripheral devices worked – because why would you waste a lot of time fabbing specialized silicon for disk controllers, or GPUs, or floating-point units, or whatever, when you could simply throw some processing cores in there with some “firmware” – for which read “software flagged as tied to hardware feature flag foo, instance bar” – and get to the same place?

So, for example, when you think “printer”, don’t think “dumb hardware operated by a device driver”. Think “processor that knows how to draw on paper; all I have to do is send it a picture”. Pretty much every peripheral device you can think of is implemented in this way.]

This has also had rather a profound effect on how everything built on top of it works. I spent quite some time discussing how programming languages worked, along with MetaLanguage (the bytecode that these processors have more or less standardized on speaking) in the above-linked post, but you may note:

Polychora: a general-purpose, multi-paradigm programming language designed to support object-, aspect-, concurrency-, channel-, ‘weave-, contract- and actor-oriented programming across shared-memory, mesh-based, and pervasively networked parallel-processing systems.

…because once you grow to the size – and it doesn’t take much size – at which programming your parallel arrays in relatively low-level languages similar to Occam begins to pall, you start getting very interested in paradigms like object/aspect/actor programming that can handle a lot of the fun of massively parallel systems for you. This has shaped a lot of how environments have developed, and all the above language environments include compilers that are more than happy to distribute your solution for you unless you’ve worked hard to be egregiously out-of-paradigm.

And the whys and hows of WeaveControl, and the Living Object Protocol.
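For a taste of why actor-style paradigms help here, a minimal Earthling actor in Python – private state, a mailbox, a behaviour, its own thread. Where each actor physically runs is the runtime’s business, which is exactly the property a Polychora-style compiler would exploit when distributing your solution. (All names and structure below are my sketch, nothing canonical:)

```python
import queue
import threading

class Actor:
    """A minimal actor: private state, a mailbox, and a behaviour
    function run on its own thread.  A distributing runtime is free
    to place each actor on whichever core or node it likes."""
    def __init__(self, behaviour, state):
        self.mailbox = queue.Queue()
        self.state = state
        self._thread = threading.Thread(
            target=self._loop, args=(behaviour,), daemon=True)
        self._thread.start()

    def _loop(self, behaviour):
        while True:
            msg = self.mailbox.get()
            if msg is None:                    # poison pill: shut down
                break
            self.state = behaviour(self.state, msg)

    def send(self, msg):
        self.mailbox.put(msg)

    def join(self):
        """Stop the actor and return its final state."""
        self.mailbox.put(None)
        self._thread.join()
        return self.state

# A counter actor: state is an integer, messages are increments.
counter = Actor(lambda total, inc: total + inc, 0)
for n in (1, 2, 3):
    counter.send(n)
print(counter.join())  # 6
```

Because the only way in or out of an actor is its mailbox, nothing in the program cares whether two actors share a die, a rack, or a planet – which is the whole trick.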

This has also, obviously, made distributed computing a lot more popular a lot more rapidly, because having been built for parallel operation anyway, farming out processing to remote nodes isn’t all that much more complicated, be they your remote nodes, or hired remote nodes, or just the cycle spot market. Operating systems for such systems have already developed, to stretch a mite, a certain Kubernetes-like quality of “describe for me the service you want, and I’ll take care of the details of how to spin it up”.

In accordance with configurable policy, of course, but except in special cases, people don’t care which modules are allocated to do the thing any more than they care about which neurons are allocated to catch the ball. In the modern, mature computing environment, it has long since become something safely left to the extremely reliable optronic equivalent of the cerebellum and brainstem.
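The “describe the service, let the system place it” idea reduces to something like this toy scheduler – the field names and the spare-cores heuristic are invented purely for illustration:

```python
def schedule(spec, nodes):
    """Toy declarative scheduler: given a desired-state description of
    a service, pick enough nodes with spare capacity to satisfy it.
    All field names here are invented for illustration."""
    placed, need = [], spec["replicas"]
    # Greedily prefer the emptiest nodes (one policy among many).
    for name, free_cores in sorted(nodes.items(), key=lambda kv: -kv[1]):
        if need == 0:
            break
        if free_cores >= spec["cores_per_replica"]:
            placed.append(name)
            need -= 1
    if need:
        raise RuntimeError("insufficient capacity for service %r" % spec["name"])
    return placed

spec = {"name": "nav-plot", "replicas": 2, "cores_per_replica": 4}
nodes = {"engineering-3": 8, "g-deck-potplant": 1, "bridge-aux": 6}
print(schedule(spec, nodes))  # ['engineering-3', 'bridge-aux']
```

The user states *what* they want running; which modules end up doing it is the policy engine’s problem, per the cerebellum analogy above.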


Now, as for how this relates – going back to some of the original conversations – to starships and AI:

Well, obviously for one, there isn’t a single computer core, or even several explicitly-designed-as-redundant-nodes computer cores. There are computers all over the ship, from microcontrollers running individual pieces of equipment up – and while this probably does include a few engineering spaces labeled “data center” and stacked floor to ceiling with nanocircs (and backing store devices), the ship’s intelligence isn’t localized to any one of them, or couple of them. It’s everywhere.

If your plan to disable the ship involves a physical attack on the shipmind, you’ve got a lot of computing hardware to hunt down, including everything from the microcontrollers that water the potted plants on G deck to the chief engineer’s slipstick. You have fun with that. Briefly.

As for AI – well, digisapiences and thinkers operate on the same society-of-mind structure that other minds do, as described here. When this interrelates with the structure of parallel, distributed computing, you can assume that while they are one data-structure identity-wise, the processing of an AI is organized such that every part of the psyche – agent, personality, subpersonality, talent, mental model, daimon, etc., etc., etc. – is a process wrapped up in its own little pod, off running… somewhere in what looks like a unified cognitive/computational space, but is actually an arbitrary number of processing cores distributed wherever policy permits them to be put.

(If you choose to look down that far, that is; outwith special circumstances, this is like a biosapience poking around their brain trying to find out exactly which cells a particular thought is located in.

Said policy usually mandates some degree of locality for core functions, inasmuch as light-lag induced mind-lag is an unpleasant dissociative feeling of stupidity that folk prefer not to experience, but in practice this non-locality manifests itself as things like “Our departure will be delayed for 0.46 seconds while the remainder of my mind boards, Captain.” Not a big deal, especially since even protein intelligences don’t keep their whole minds in the same place these days. They wouldn’t fit, for one thing.)

But suffice it to say, when the avatar interface tells you that she is the ship, she ain’t just being metaphorical.


  1. Well, sort of. It’s not like hardware engineers and semiconductor fabs were any less obsessed with making smaller, faster, better, etc. processors than they were here, but they were doing so within a parallel paradigm. “Two-point-four-billion stacked-mesh processing cores in a nanocirc the size of your pinky nail!”, that sort of thing.

Big Iron

INERTIA WARNING

MAGNETIC DRUM STORAGE IN USE
ACCESS BY AUTHORIZED TECHNICIANS ONLY

Magnetic drums rotate at 36 rpm when in use. Do not enter drum room unless steam valve is closed and padlocked, drive clutch is disengaged and padlocked, and spin-down is complete. (Governor ball check is insufficient; spin gauge must read zero.)

Inform supervising administrator before entrance and after exit. All keys are to be retained by responsible technician until maintenance is complete.

Full manual rotation must be performed to check balance before spin-up. Use auxiliary engine only; do not attempt to manipulate drums by hand. Steam release from head positioning servos may occur during zeroing and read/write tests.

EMERGENCY SHUTDOWN CAN NOT BRAKE DRUMS
CONTACT WITH ROTATING DRUM WILL KILL YOU

INERTIA WARNING

How to Talk to Rocks

“The typical computer in use in the modern Empire remains the parallel array of binary-encoded Stannic-complete processors that has been in use since the days of the first settled Stannic cogitator architecture. This is the case at all scales, from the smallest picoframe microcontroller to the largest mega, with the principal exception being the rod-logic nanocomputers used to provide computing power to microbots and other tiny devices, for which the distinction between hardware and software becomes fuzzy.

“These processors naturally come in a variety of designs utilizing a number of different internal architectures, microcodes, and instruction sets – even word lengths, although 128-bit words (banquyts) are an industry standard. That being said, while bare-metal programming is still taught to inculcate the fundamentals of the profession, it is rarely practiced today.

“Rather, high-level languages are compiled down to MetaLanguage, or ML. ML serves as an intermediate language whose core set of instructions is implemented, directly or indirectly, on all processors; a number of optional feature subsets (for physical interfaces, quantum computing, cryptography, and so on and so forth) may be implemented by various processors, but are not required. Exotic or experimental processors which wish to make use of ML – the majority – may implement their own private subsets. Code objects, or assemblages of such objects, are either precompiled upon installation or just-in-time compiled to platform-specific instructions for the processors they serve.

“The high-level languages of choice, naturally, are a much wider selection. The long-term leaders, at the time of publication, are:

Polychora: a general-purpose, multi-paradigm programming language designed to support object-, aspect-, concurrency-, channel-, ‘weave-, contract- and actor-oriented programming across shared-memory, mesh-based, and pervasively networked parallel-processing systems.

Descant: More dynamic and less strict than Polychora in its approach, and optimized for just-in-time compilation, Descant is a general-purpose language which, while supporting similar functionality in most areas, is optimized to serve in an extensible, modular, readily-integratable system-scripting role. Where convenient, it shares operators and syntax with Polychora.

Silvar: A dynamic language for data-structure-oriented programming, metaprogramming, and self-modification, supporting full homoiconicity while maintaining interoperability with other ML-based languages.

“Additionally, there are many domain-specific languages in use. Common examples of these include Exapar (a language designed for convenient programming of nanoswarms and other massive-parallelism systems), eXchange (for expressing smart contracts), Imprimatura (used for declarative rights management systems), psylisp (an extended dialect of Silvar designed for optimal mind-state encoding and self-improving intelligent systems), and VIML (Virtual Interface Meta Language, used for virtuality design, along with specialized derivatives including IMF, the Interactive Modeling Format, and DObI, the Descriptive Object Interface).”

– Introduction to Computer Programming (Vol. 1.): Speaking To Minerals,
Imperial University Press
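As a worldbuilding aside on the feature-subset arrangement the textbook describes: an ML runtime would presumably probe what a processor advertises and dispatch accordingly. Something like this Python sketch – where the feature names and the stand-in “hash” operation are wholly invented:

```python
# Invented feature-subset tags; the real (fictional) ML subsets are
# things like physical interfaces, quantum computing, cryptography.
CORE = "core"
CRYPTO = "crypto"

def make_hash_op(processor_features):
    """Return the best available implementation of a (made-up)
    hashing instruction: hardware path if the crypto subset is
    advertised, portable core-subset fallback otherwise."""
    if CRYPTO in processor_features:
        def hash_op(data):            # would emit the native instruction
            return ("hw-hash", sum(data) & 0xFF)
    else:
        def hash_op(data):            # byte-at-a-time software fallback
            acc = 0
            for byte in data:
                acc = (acc + byte) & 0xFF
            return ("sw-hash", acc)
    return hash_op

fancy = make_hash_op({CORE, CRYPTO})
plain = make_hash_op({CORE})
print(fancy(b"abc")[0], plain(b"abc")[0])  # hw-hash sw-hash
```

Both paths must produce identical results – that is what makes “precompiled upon installation or just-in-time compiled” safe across wildly different processors.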

Trope-a-Day: Data Pad

Data Pad: The slate, a once ubiquitous accessory before the advent of the fob terminal, the wearable, and the neural lace, and still quite common because it’s useful to pass data around (of course, smart paper does just as well for this, but…). Also, people kind of like working with their hands.

Also note that this is a way to share one or more documents on one person’s device among multiple people having a coffee together, not a stack of one-document-per-device data pads ending up in in-trays, this not being Star Trek:

[Image: picard-padds]

Seriously, this is the dumbest thing ever committed in television SF. Except possibly the baryon sweep, or the temperatures a couple of hundred degrees below absolute zero, or the cutting of holes in an event horizon, or… well, okay, those were all bad science. This is just total fail of common sense.

Trope-a-Day: Virtual Danger Denial

Virtual Danger Denial: Very strongly averted just about everywhere advanced, because this attitude coupled with ubiquitous computing and mind-machine interfacing, as well as when Everything is Online, is not survival-oriented, shall we say. (Yes, you can catch a fatal STI from cybersex.) In the modern world, you can safely assume that you are completely surrounded by computers which control just about everything going on in your vicinity, and anything that affects them will most definitely affect you.

And don’t even think about what an EMP would mean.

Service Pack

“Back off the toggle reader,” Myrian Vitremarvis bellowed through his bullhorn. “Unclutch the address drive from the operatin’ counter.”

A series of metallic bangs punctuated by less-metallic blasphemies from the floor above accompanied the execution of this order.

“Okay, now get the donkey strapped up. Advance address counter to 12,732. 12,732, you hear?” He turned to belabor the crew behind him. “Now lower away on the bit winch. Get the shackle down to the reader level –”

A clangor cut him off, as the operating-code shaft spun and the great master toggle chain clattered down into the depth of its well.

“Okay, 12,732? Give me the next eight toggles in sequence.”

“Up, up, down, down, up, up, up, down, boss,” a yell came down from the reader balcony, “and the edging lines are right.”

“Clamp it and cut it. Cut it above, remember, we’re losing link 12,731.” He turned again. “Lower away on the bit winch, get us space. First chain!” A gesture with a wrench ushered in a half-dozen junior operators bearing another length of toggle chain on their shoulders. “Give me the leader.”

Myrian scrutinized the pattern of lines etched into the first link of the new chain. “It’s valid. Get it up to the reader walk.” He raised his bullhorn again. “Got it? Weld it. Then run the address counter forward until the loose end’s up at the reader walk. And haul away on the bit winch, get the shackle back at par – then hook that and weld it, too.”

A flurry of acknowledgements came back.

“Good. Now run the chain back seven hundred sixty-eight places, an’ get the reader in position. Rig for a test read-and-compare off the donkey. Seventeen more chains to patch and only a day and a half left in the maintenance window – so snap it up, you code dogs!”

Meat Machines

CS Drachensvard
holding position 120,000 miles from uncharted drift
Corfeth (Vanlir Edge) System

The sound of retching broke the silence on the bridge: Midshipman Lochran-ith-Lanth, currently manning the tactical/payload position. He’d already clamped his hand over his mouth by the time I glanced over at him, though, and got his reflexes shut down in only a second more. Good man, well trained.

Not that anyone could be blamed for throwing up, seeing this for the first time. Clavíë at Data Ops had penetrated the station’s network without breathing hard, and the images coming back from the internal sensors were enough to turn anyone’s stomach.

Slavery persists in backwater parts of the Periphery, and even the Expansion Regions, much to our embarrassment. But then, we’re the Imperial Navy, not Éjavóné Herself. We can’t vaporize everyone who deserves it all at once.

And everyone knows the reasons: sophont servants, flesh toys, test subjects, cannon fodder, pet victims, and so forth. This, though – this was a very distinct perversion, characteristic of where high technology met low.

After all, it takes a relatively high – and expensive – technology to weave the topological braids of a hard-state neural net processor, or to program an effective software emulation of all of its subtleties. It takes an advanced biotechnology to grow and educate a cortexture that can perform advanced cognitive tasks. But while it takes a firm grasp of sophotechnology to learn how to repurpose an existing neural network…

…it turns out that any transistor-stringing moron can actually do it.

Take a sophont. Preferably an intelligent one, and young and strong enough to survive the process for a long time. “Simplify” them – by which they mean remove any inconvenient limbs, or hair, or anything else not needed in their new role. Dose them up with catacinin, or some other mind-killer drug, and neural plasticizers, then saw off the top of their brain-case, insert the interface electrodes, and seal the hole with sterile plastic. Hook up the life-support system, and box them up. ‘No user serviceable parts inside.’ A week or so of imprinting, and you have a neural-net processor – worth ten-thousand gPt, maybe twenty-five kgAu in one of these backwaters. It’ll last maybe ten years before the flesh gives out, and it’s an order of magnitude cheaper than less ethically defective hardware, unfortunately.

“Communications from the station, Skipper. They – ah, they protest our unprovoked attack, and wish to offer surrender.”

“One response, Máris: ‘Dármódan xalakhassár hál!’ Mr. Lanth, load the primary with AMSM warhead.”

“Captain?”

“You heard me, Mr. Lanth.” At his shocked look, I continued. “There’s nothing that can be done for the ‘cargo’, son. Everyone over there to rescue’s had their brain pithed with a dull knife. The best we can do for them is make sure the ones who did this don’t do it to anyone else. Now: load primary with AMSM.”

“Aye, sir. I mean – aye-aye, sir.”

I tapped the view-mode switch, and watched as the exterior of the slaver station replaced the pitiful sight on the for’ard viewer.

“Primary loaded and standing by, sir,” he reported.

“Fire.”

Author’s Note: Hey, Y’All, Watch This!

For those wondering about some of the technical background:

The chief obstacles to using “normal” computers in space are heat generation (given the average spacecraft’s limited heat budget – disposing of heat in vacuum is hard), cooling (because in microgravity, convection doesn’t work – there go heat-sinks without a lot of active coolant-movement devices), the ability to work in low air pressure and/or vacuum if something goes wrong, and the prevalence of ionizing and other EM radiation, which tends to muck up delicate electronics.  For a large part of history, this was handled by many of the same compromises we made – reduced transistor density, specially hardened chips and designs, magnetic core memory, and so forth.

(Fun fact: this problem was particularly bad back in the Apollo-era equivalents of Projects Phoenix, Oculus, and Silverfall, because they were using Orion-style nuclear pulse drives.  Which is to say, during atmospheric ascent, a crapload of EMP happening right near the flight computers.  Back then, they were using “electron plumbing” machines, because despite their space program being relatively later in their technological timeline and thus having better ICs available, they still were by no means EMP-immune.  “Electron plumbing” is a technological path we didn’t take – essentially, evolved thermionic valves/vacuum tubes to higher orders of complexity.  Never widely used, because ICs were still a better technology overall, but for this specific use, excellent.)

But in the modern era of spaceflight, they can use standard commercial computers, because those use optronic nanocircs.  Those run cool (no need to wiggle significant electrons about; photons are much easier to handle) inherently, and care much, much less about passing ionizing and other EM radiation.  Also, all but the most cut-down “standard” ML runtimes or hardprocs (a processor that implements the ML runtime directly in hardware) incorporate all the real-time and safety-critical features that you’d need for spaceflight applications, because those features are also used in general automation and robotics and other applications that are pretty close to ubiquitous downside as well.  And so does the standard IIP networking protocol, and so forth, and for much the same reasons.

As for WeaveControl, its more formal name is Interweave Command/Control Protocol; for reasons of technological evolution, plus much more prevalent hackerish tendencies in the population, just about every device manufactured – cars, lightbulbs, drink-makers, ovens, coins – comes with an IIP interface and a WeaveControl endpoint, which lets you run all the functions of the device from an external command source.  (It’s become such a ubiquitous open standard that there’s no reason not to spend the couple of micros it takes to install it.)  You really can script just about anything to do anything, or hook it up to interfaces of your choosing on any device you have that can run them.  Things as simple as programming your alarm clock to tell the appropriate devices to make your morning cuppa, lay out suitable clothes according to the weather and the style of the day, cook your breakfast, fetch and program your paper with the morning’s news, order a car to come take you to work, and program its music system with a playlist suitable for your mood are downright commonplace.
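Since WeaveControl is fiction, the alarm-clock scenario can only be modeled, not implemented; here it is with plain Python objects standing in for endpoints – every device name and function name below is invented for illustration:

```python
class Endpoint:
    """Stand-in for a WeaveControl endpoint: a device exposing all of
    its functions to any authorised external command source."""
    def __init__(self, name):
        self.name, self.log = name, []

    def invoke(self, function, **params):
        """Record and acknowledge a remote function invocation."""
        self.log.append((function, params))
        return "ok"

def morning_routine(home, weather):
    """The alarm clock's script: drive every other device in the house."""
    home["kettle"].invoke("brew", style="esklav")
    home["wardrobe"].invoke("lay_out", suited_to=weather)
    home["stove"].invoke("cook", meal="breakfast")
    home["paper"].invoke("load", edition="morning")
    home["car"].invoke("summon", playlist="commute-calm")

home = {n: Endpoint(n) for n in ("kettle", "wardrobe", "stove", "paper", "car")}
morning_routine(home, weather="light rain")
print(home["wardrobe"].log)  # [('lay_out', {'suited_to': 'light rain'})]
```

The structural point survives the fiction: when every device speaks one protocol and exposes every function, “scripting your morning” is five remote calls, not five vendor apps.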

But they’re serious about anything/anything compatibility.  You can program your bath from your car, drive your car from your PDA, operate an industrial 3D printer from seat 36B on the sub-ballistic – hell, run your building elevator from your pocket-watch if you can think of any reason why that might be something you’d want to do.

Some of these applications are, ah, less advisable than others!

Out of Order Transmission

…as every child learns, computing as we know it today originated with the invention of the Stannic cogitator.  Stane Vitremarvis-ith-Vidumarvis of Azikhan, working in the family business of manufacturing mechanical calculators and automata cores, was the first to make the conceptual breakthrough that in addition to accepting fixed programming, such automated devices could store and indeed dynamically modify programs in the same manner as they did data.  Thus were the first general-purpose computers built, ushering in the transition between the Low Steam Age and the later High Steam Age with the use of miniature Stannic cogitators to provide the required control mechanisms for the first true steam clanks (pre-electronic robots), and earning a second fortune for House Vitremarvis in the process.

This, however, is the history of networking.  The ability of computers to interconnect and communicate exponentially expands their capacity and usefulness, something which was clear from the earliest days of the field, but nonetheless, the development of networking had to wait until the availability of a suitable transmission medium.

While some short-range experiments were carried out in the early days using chains, shafts, belts, dedicated multi-mass ball-bearing races, and other mechanical interconnects between pairs of Stannic cogitators located close to each other, some with remarkable success, none of these mechanical means could be made to function reliably, or indeed at all, across distance.  Communication between distant devices required shipping the data using conventional transportation, in a frozen form – most commonly a stack of punched cards (stiff paper cards in which holes in specific locations represent the information, readable using a pin matrix), or a toggle chain (a standardized length and gauge of chain in which each link contains a two-position mechanical toggle, whose positions read from end to end represent a data string).  Some progress was also made in transmitting the contents of these media using automated heliography (although manual transcription was required at the receiving end; experiments in fully automated heliography were not carried out until near the end of the High Steam Age).
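The toggle chain is, of course, just binary serialized onto hardware. A toy encoder/decoder in Python – the conventions here (up = 1, eight links per character, most significant link first) are my guesses for illustration, not anything stated in canon:

```python
def to_toggles(text):
    """Freeze a string onto a toggle chain: eight links per character,
    most significant link first, up = 1 and down = 0 (assumed)."""
    return ["up" if (ord(ch) >> bit) & 1 else "down"
            for ch in text
            for bit in range(7, -1, -1)]

def from_toggles(links):
    """Read a toggle chain back into text, eight links at a time."""
    chars = []
    for i in range(0, len(links), 8):
        value = 0
        for link in links[i:i + 8]:
            value = (value << 1) | (1 if link == "up" else 0)
        chars.append(chr(value))
    return "".join(chars)

chain = to_toggles("Hi")
print(chain[:8])            # the eight toggle positions encoding 'H'
print(from_toggles(chain))  # Hi
```

(Readers of the “Service Pack” vignette above will recognize the work crew calling out exactly this sort of sequence – “Up, up, down, down…” – while patching a chain by hand.)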

The first true networks did not appear until the first relay-based computers came into use.  With the harnessing of electricity, it finally became possible for one machine to produce a signal, readily transmissible over long distances, which could automatically be read by a receiving machine.

While first used as dedicated machine-to-machine connections, a team working under Parváné Camriad-ith-Sereda devised what we know today as the forerunner of IIPv1, a set of protocols implemented in these early machines by dedicated hardware, which permitted multiple machines to share a single line and transmit any-to-any, with only the intended recipient receiving any given message; and also to break up messages in such a way that a long message would be transmitted in segments, such that other machine pairs could still partially utilize the communications line.  Later, his team added to this a mechanical interchange such that messages could be forwarded from one line to another by an intermediate hub, allowing messages to be passed over long distances without requiring all the machines in each location to be connected to a single communications line; the first true packet-switched network.
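Parváné’s two tricks – addressed any-to-any delivery on a shared line, and segmentation so long messages don’t monopolize it – miniaturize nicely in Python (the field names are invented; the real IIPv1 frame layout is never specified):

```python
def segment(message, mtu, src, dst):
    """Break a message into numbered segments small enough for the
    shared line, each carrying its own addressing."""
    return [{"src": src, "dst": dst, "seq": i // mtu, "data": message[i:i + mtu]}
            for i in range(0, len(message), mtu)]

def deliver(line, station):
    """Each machine reads every segment on the shared line but keeps
    only those addressed to it, reassembling in sequence order."""
    mine = sorted((p for p in line if p["dst"] == station),
                  key=lambda p: p["seq"])
    return "".join(p["data"] for p in mine)

# Two machine pairs interleave their traffic on one line.
line = (segment("MEET AT DAWN", 4, src="A", dst="B")
        + segment("SPIN UP CORES", 4, src="C", dst="D"))
print(deliver(line, "B"))  # MEET AT DAWN
print(deliver(line, "D"))  # SPIN UP CORES
```

Add a hub that forwards segments from one line to another by destination address and you have the text’s “first true packet-switched network”.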

Parváné Camriad-ith-Sereda offered his demonstration to a number of entrepreneurs of the time, a few of whom saw the potential in his shared-line system.  These went on to found Empire Telegnosis and Mechanical Messaging (a corporate forerunner of the modern Bright Shadow, ICC), which used Parváné’s shared-line system as the basis of a long-distance communication network to bind together many of the Empire’s major cities, and thus offer a versatile system to interconnect many of the commercial, scientific and governmental computers then in use.

It is a matter of some historical interest, unusual when technological development sequences are compared, that Eliéra developed the data network so early in its history; this can probably be attributed to the also-unusual early advancements in metallurgy and clockwork engineering that permitted the successful invention of the Stannic cogitator.  On most worlds, electronic computers are the first to be successfully constructed, and data networks tend to follow the invention of telegraphy and telephony.

By contrast, telegraphy on Eliéra was the product of various local initiatives (Cestia Lightning Mail, Azikhan Electromessaging, Roquentius & Co. Telescriptorium, et al.) purchasing simple computers, little more than a cypherwheel and an interface, and having them interconnected by ET&MM for the dedicated purpose of sending and receiving sophont-to-sophont messages at high speed.  Likewise, telephony was a latecomer to the Eliéran scene – reaching many regions after most homes already contained their own “telegraphic terminal” – based on dedicated voice lines using the existing data network as an out-of-band control channel.

IIPv1 itself was a product of…

– IIP Elucidated, Volume I: Perspectives