Themes: Paracausality

So, stepping out of the ‘verse for a moment, why does paracausality exist?

Thematically speaking, the existence of paracausality says something very important about the nature of the universe. It means that it’s impossible to deny the existence of free will. (Or, rather, you can, but it’s about as useful as standing on a planet’s surface and denying the existence of gravity.)

You make choices, and your choices make you, and the universe you exist within. Create or destroy, heal or harm, save or damn, it’s all down to choice.

And either way, it’s your fault. No-one made you do it, not without rooting your brain and turning you into a non-volitional tool. Not society, not your parents, not circumstance, not culture, not memes, not instincts, not your friends, not your enemies, and certainly not the deterministic unfolding of acyclic causal graphs. Just you.

You chose, and the world responded. You did it. And the consequences are yours to own and to live with, forever and a day.

This gives the world a rather vital quality, especially in fiction: meaningfulness.

13 thoughts on “Themes: Paracausality”

      • Certainly, in terms of paracausal physical effects in general, it serves as a plot device, figuratively and occasionally literally.

        But even in this real world, I maintain that there isn’t a coherent metaphysical position that denies the existence of free will.

        • Fictionally, sure, it’s a pretty interesting plot device that I’d actually love to see explored at greater length at some point in your work.

          As for the real world and the denial of real (i.e., existing in the territory, not merely on the map) free will: how do you substantiate your claim that there isn’t a coherent position like that? Of course, I don’t know your particular position here (libertarian (the philosophical kind), compatibilist, or some other), so I risk running into unfounded assumptions.

          • On the latter, I’d be a physicalist libertarian in metaphysical terms (at least, insofar as we accept arguendo that information dualism isn’t metaphysical dualism; i.e., that the mind is not the brain in the same sense as the program isn’t the electrons, and the equilateral triangle isn’t the plastic widget in my desk drawer). I take no position on possible mechanisms, except that indeterminacy alone is insufficient.

            As for my claim, I substantiate it principally by reductio ad absurdum, inasmuch as I find the other positions to be self-contradictory because the very acts of believing something, of advancing an argument, and so forth, implicitly require choice, and thus the capacity to choose; and the universe isn’t self-inconsistent.

            So, to be consistent, either I can exercise my free will by choosing to believe that I have it, or I can be a deterministic automaton. I therefore conclude that I have free will, because in all the cases in which I’m wrong, I couldn’t conclude otherwise anyway, if I even exist in any meaningful sense.

            As for the other cases: if I choose to believe that I don’t have free will, I contradict myself and am therefore wrong; and for an automaton to believe in anything is impossible.

          • One possible position that might be at least internally consistent (although I’d argue for the stronger case of external consistency as well, i.e., vis-à-vis the currently available corpus of observational and experimental data) is that free will does not exist in the biological-chemical-physical territory but does exist in our cognitive map of that territory.

            To wit: we do (obviously!) possess a basic mechanism for action selection (likely partly implemented in the SNc/VTA) and a mechanism for forming lower-resolution “snapshots” of our own internal state (possibly via thalamocortical loops and a lossy hierarchical structure of information processing, essentially a very complex and messy version of a CNN or HTM, the latter actually being strongly inspired by the architecture of the cortex).

            These two mechanisms allow us to make deterministic or probabilistic choices based on input data (in much the same way as an immensely complex if/then statement in a piece of code might) and to let knowledge of that fact propagate throughout the brain, although, and this is crucial, at a resolution low enough that we register the act of a choice being made but not the full chain of causality behind the act (or ahead of it, which sets an early bound on our capacity to predict the consequences of our own actions).

            In such a situation, we “fill in the gaps” by way of eager pattern-matching. Just as we are apt to see mind-like patterns where there are likely none and to invest agency in external processes that we don’t observe in full causal detail (hence the universal belief in gods, spirits, magic, and all sorts of ontologically basic mental entities), we likewise tend to invest additional agency in our own mental faculties, which we also don’t fully observe. From this stems the equally universal belief in souls and, I might argue, in a libertarian free will (i.e., one that is in some way meaningfully different from a big if/then statement).

            Since a “belief” is essentially a somewhat separable part of our cognitive map of the territory, I believe [sic] having beliefs does not in any way contradict determinacy or the lack of a libertarian free will, as long as we are able to form lower-resolution snapshots of the thing we observe, internally or externally. A hypothetical omniscient entity, on the other hand, in such a picture would likely not have beliefs in a conventional sense.

            Of course, I don’t presume to know with any certainty any of these things but, were I to argue against free will, my position would probably be similar to this one.
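As a toy illustration of the analogy above, the “big if/then statement plus lossy self-snapshot” picture might be sketched roughly as follows (all names and thresholds here are purely illustrative; real neural action selection is vastly messier):

```python
# Toy sketch of the "deterministic chooser + lossy self-snapshot" picture.
# Every name and threshold here is illustrative, not a model of real cognition.

def select_action(temperature: float, hunger: float) -> str:
    """A 'big if/then statement': fully deterministic action selection."""
    if hunger > 0.7:
        return "eat"
    elif temperature < 15.0:
        return "seek warmth"
    else:
        return "rest"

def snapshot(action: str) -> dict:
    """A lossy, low-resolution record of internal state: it registers THAT
    a choice was made, but not the causal chain behind it. The inputs
    (temperature, hunger) and the branch taken are absent -- the 'gap'
    described above, which eager pattern-matching then fills in."""
    return {"a_choice_was_made": True, "chosen": action}

action = select_action(temperature=10.0, hunger=0.2)
report = snapshot(action)
print(report)  # the system registers that it chose, but not why
```

The point of the sketch is only that a system can deterministically select actions and hold an accurate but low-resolution record of having done so, without that record containing the causal story.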

          • Apologies, first, for the belated reply.

            I think I may have been unclear; English is a somewhat imprecise language to wax philosophical in. The problem, as I see it, is choice, and so, to set some definitions, I would draw a distinction between “knowing”, simply having a given datum in a cognitive territory-map, and “believing”, choosing to accept that datum into said territory-map, with the implication that one could have done otherwise. Likewise, between “to act”, simply to perform some action, and “to do”, which for the purposes of this argument requires choosing an action with the possibility of not doing so.

            So, for example, my home automation system can know the current temperature, and it can act upon it by turning on the heating, but it cannot believe or do, because it is purely deterministic and cannot choose anything.

            What I would say with regard to the position you present above is that while it is certainly possible for free will to not exist in the territory but to exist in the map, this isn’t any different from other map/territory mismatches: it’s a sign of cognitive error. Such an entity may, using the above definitions, know that they believe, but they can’t actually believe it any more than the automation system can.

            They’ve anthropomorphised themselves in the same way, as you point out, that we tend to anthropomorphize all sufficiently complex/obscure systems, but reality is in the territory, not the map.

            Now, this is definitely an internally consistent position, and one consistent with that corpus of data. (And, indeed, it is Occamically preferable on those grounds.)

            But metaphysically speaking, it’s impossible to believe in, because (per the definitions I’m using) it forecloses the possibility of belief. At best, I can know it; at worst, I’m not only an automaton, I’m a deluded automaton that couldn’t have been anything other than deluded.

            Given that, and inasmuch as I do possess the internal experience of choice and thus belief, I choose to believe in the option (free will in the territory, not just the map) that makes my existence meaningful, knowing that if it isn’t true, there’s nothing I could have done, can do, or will be able to do about it anyway.

  1. Arets,

    My experience of myself in the world convinces me that I’m sumthin’ other than a Roomba or a domino (an event). No, I’m an agent, a cause. Why this is so, I can’t say. I only know it ‘is’ so.

    Event causality is a real, dominant, feature of Reality and it would seem to negate the possibility of agent-causality, but: here I am, here ‘you’ are.

    So: cause & effect has a loophole, or, there’s sumthin’ peculiar about human beings (and mebbe other higher-order life) that allows for a violation of cause & effect, or, proponents of agent causation are just plain wrong.

    Me: I go with the second option.

    • See my reply to Alistair above. I have tried answering some of these questions there, including the apparent (but, I argue, not real) discrepancy between our experience of agency and the assumed causal nature of our cognition.

        • Something can be both a cause and an effect and, I’d argue, most things are. As to the rest – I disagree but I also don’t have much to add, apart from what I’ve already said. Of course, none of that is axiomatic, I’m open to new evidence and ready to follow it.
