It is clear, from a rigorously scientific point of view, that the universe contains this very odd thing that we might call ‘irreducible uncertainty’. We might ask what exactly ‘irreducible uncertainty’ is, but this of course would be rather ridiculous – how can we say what uncertainty is? We can, however, define the concept in negative terms, by saying that it represents a limit to what we can know. In a universe without irreducible uncertainty it would be possible to make measurements that are 100% reliable; in the universe we actually live in, this can never be done. We can be pretty sure about things sometimes – we might even be able to be very certain – but we can never be 100% certain. A degree of ‘fuzziness’ can never be entirely eliminated from the picture, and sometimes it even takes over completely!
This is the basis of both chaos theory and complexity theory – no matter how carefully we try to control some particular setup, there will always be random fluctuations present, and these fluctuations can sometimes be hugely amplified to create large-scale effects. Randomness can never be wholly eradicated in the real world; if we ever did succeed in completely eradicating it, this would simply mean that we were no longer in the real world. It would mean that we were living in a model or simulation of the world, which is just another way of saying that we were living in a fantasy. When there is a base level of irreducible randomness in the world, there is always the possibility that positive feedback cycles will come into being that amplify these little pockets of randomness into wholly unpredictable fluctuations on the macro-scale. ‘Wholly unpredictable fluctuations on the macro-scale’ is more commonly known as chaos. Chaos – needless to say – is where there is no control; it is where no rules are being followed and no ‘hard data’ can be obtained.
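This ‘amplification of tiny fluctuations’ can be illustrated with the standard textbook example of chaos, the logistic map (a minimal sketch, not part of the original text; the function name and the particular numbers are illustrative choices):

```python
# Sensitive dependence on initial conditions, sketched with the
# logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4).
# A tiny perturbation of one part in a billion - standing in for an
# 'irreducible random fluctuation' - is amplified step by step until
# the two trajectories bear no resemblance to each other.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # a tiny, 'unmeasurably small' fluctuation

# Early on, the trajectories are indistinguishable...
print(abs(a[5] - b[5]))    # still negligibly small
# ...but further along, the difference has grown to the macro-scale.
print(abs(a[40] - b[40]))
```

No matter how many decimal places of the starting condition we pin down, some residual uncertainty remains, and the exponential stretching of the map eventually promotes it to the largest scale available – which is exactly the ‘hugely amplified fluctuation’ described above.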
Complexity theory and chaos theory are so closely related that they can’t be separated. Complexity theory describes how processes can change in an unpredictable way at certain points, when the energy level of the process increases beyond what the old ‘dynamic regime’ can handle. A dynamic regime essentially comes down to the set of rules governing how the processes involved operate. At a certain energy level, or a certain level of throughput (for example, how quickly water is coming out of the tap), the old regime ceases to be the most efficient way of organising things (or perhaps ceases to be tenable at all) and an instability phase is entered. At this point a number of different possibilities become feasible, but there is no way of telling in advance which of these possibilities is going to be actualised. This is a junction marked by pure undiluted chance and nothing else – randomness is the ‘engine’ here. The system may go one way or it may go another, and it is the amplification of random fluctuations that pushes the system into the new mode of organisation (whatever that might be). The process itself doesn’t ‘know where it’s going’, so to speak. Normally we would say that the order in any logical system comes from the rules that govern that system, so that there has to be a set of instructions for ‘what is to happen’ already in place; the new dynamic regime, however, isn’t ‘coded for in advance’, but it nevertheless still comes from somewhere. There is still order in it, possibly of a very high degree. This order is just ‘handed to us’, so to speak, and it is what complexity pioneer Stuart Kauffman calls ‘order for free’. The phenomenon is generally known as ‘emergence’.
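The idea of a ‘dynamic regime’ giving way to a qualitatively new one as throughput rises can also be sketched with the logistic map, whose control parameter r stands in for the energy level or throughput (again a supporting sketch; the function name and thresholds are illustrative, not from the original text):

```python
# As the control parameter r (a stand-in for 'energy level' or
# 'throughput') rises, the logistic map's old regime becomes untenable
# and the system settles into a qualitatively new mode of organisation:
# a single steady state, then an oscillation between two states, then chaos.

def settled_states(r, x0=0.5, transient=1000, keep=8):
    """Iterate past the transient, then return the distinct states visited."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    seen = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        seen.append(round(x, 4))
    return sorted(set(seen))

print(settled_states(2.8))   # one value: a stable fixed point
print(settled_states(3.2))   # two values: the period-2 regime
print(settled_states(3.9))   # many values: the chaotic regime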
The key point about emergence (or ‘order for free’) is that it comes as a complete surprise; it comes as a complete surprise because we haven’t come up with it ourselves. In a purely mechanistic universe (which, as we have said, doesn’t exist) nothing can ever happen as a surprise, and any change that is to happen always has to be written into the universe in the form of pre-existing conditions, pre-existing circumstances. Causality rules, in other words. Whatever happens in the future is always determined by the past; ‘the future’ and ‘the past’ are therefore all one – there are no breaks, no discontinuities, no junctions that are ruled by chance. In the real, non-mechanical universe there are breaks or gaps, and so there isn’t just this one big logical continuity, this one big logical structure. There are breaks in linear causality every so often, and what these ‘breaks in linear causality’ represent is pure, unregulated uncertainty. They represent an absolute limit to what we can know – a limit we can’t jump over, no matter how good at jumping we might be…
Another aspect of the ‘irreducible uncertainty’ that we are talking about here is quantum uncertainty – quantum mechanics was the first and greatest blow to land upon the apparently impregnable edifice of classical Newtonian mechanics. The advent of quantum theory wasn’t just a little ‘tap’ either – it didn’t just dent the edifice of Newtonian mechanics, it cracked it wide open; it cracked it in a fundamental way – a way that can never be undone. Newtonian mechanics still exists, and is still important, but it is now understood to account for a particular range of phenomena, not the whole range, and this is a very big difference. The paradigm of Newtonian mechanics doesn’t hold sway in the domain of the very small – the domain of the very small is where we encounter quantum uncertainty in a big way! According to the very commonsensical mechanistic worldview it is entirely possible to measure things in a completely accurate way – we can in theory (if we have the correct instrumentation) know exactly where a particle such as an electron is to be found. It is the possibility of having such unambiguous, black-and-white knowledge that supports the edifice of classical mechanics – linear mechanics cannot be built upon the shifting sands of irreducible uncertainty, after all!
The possibility of obtaining ‘absolutely accurate measurements’ is precisely what quantum theory undermines, however – in a fundamental way, we can never ascertain the exact position of the particle; we can’t generate hard data in relation to the particle’s location (no matter how advanced our instrumentation) because we have run headfirst into chance, into uncertainty. It was Albert Einstein’s aversion to this prospect that led him to come out with his famous line “God doesn’t play dice”. Aristotle’s ‘law of the excluded middle’ states that the answer to a question such as ‘Is the cat on the mat?’ must always be either yes or no; quantum theory, on the other hand, says that it can be both yes and no at the same time. What we’re coming up against here is the highly illogical principle of ‘non-locality’ – non-locality means that any (so-called) ‘particle’ that we want to talk about can – when the system hasn’t been collapsed into a black-and-white, or ‘certain’, state – be said to be ‘everywhere and at the same time nowhere’, which means of course that the very notion of ‘a particle’ is now highly inaccurate, to say the least. If a particle’s position in space cannot ever be pinpointed with any degree of accuracy, no matter how hard we try, then how can we talk about ‘a particle’? Particles are only particles because they are located.
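The limit being described here has, in standard physics, a precise quantitative form: Heisenberg’s uncertainty relation (given below as a supporting sketch in conventional notation, not as part of the original argument):

```latex
% Heisenberg's uncertainty relation: the spread in a particle's position
% (\sigma_x) and the spread in its momentum (\sigma_p) cannot both be
% made arbitrarily small; \hbar is the reduced Planck constant.
\sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2}
```

The sharper we make our knowledge of where the particle is, the blurrier our knowledge of its momentum becomes, and vice versa – no instrumentation, however advanced, can push the product of the two uncertainties below this floor.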
Quantum theory allows us to envisage (in a kind of a way, at least) a state of being that is entirely non-local (and therefore non-physical) – it enables us to envisage a ‘symmetrical state’ in which ‘up’ is the same as ‘down’, ‘in’ is the same as ‘out’, and ‘yes’ is the same as ‘no’. There is no ‘logic’ to the situation, in other words, because ‘being INSIDE the logical category’ is the very same thing as ‘being OUTSIDE of that category’. Dissymmetry is where ‘yes’ isn’t the same as ‘no’, where ‘in’ isn’t the same as ‘out’, and so symmetry is where this fundamental logical orientation has been lost, leaving not even the slightest trace behind. When we talk about ‘irreducible uncertainty’, therefore, what we are really talking about is non-locality, and ‘non-locality’ is the loss, complete and utter, of any ‘logical framework’. [We could also say that it is the loss of that very familiar framework known as the space-time continuum.] If this ‘lack of definition’ (and, what is more, the lack of any possibility of definition) exists on the very small scale of things, then these ‘cracks in certainty’ (so to speak) in the realm of the very small mean that irreducible uncertainty is going to permeate, in an inescapable way, the macroscopic world too, in the way that we have just discussed with regard to complexity and chaos theory. Radical uncertainty isn’t contained in the domain of the very small, therefore (even though we might like to think this); it also reaches out into the world of everyday life, which is what we think of as ‘the real world’.
The final point that we are going to come to in this discussion has to do with irreducible uncertainty as it relates to the realm of the very large. Is the Big Picture of Everything ‘certain’ or ‘uncertain’? Can it be known or can it not be known? We will have to part company with science at this point; science can have nothing meaningful to say about the Big Picture of Everything, or Wholeness. Neither science nor logic can be of any help to us here. We can be very confident in saying this – when we are talking about Wholeness (or the Big Picture of Everything) there are no ‘measurements’ to be made, no facts to be obtained! All measurement – which is what gives rise to ‘positive knowledge’ – relies on there being two things:

1) A ‘standard’ (or framework) we can absolutely trust, and
2) A ‘reality’ that can be compared to this framework in order to generate positive knowledge (i.e. ‘hard data’).

The ‘Big Picture of Everything’ is a Whole, however, and this means that there isn’t anything outside of it to which it can be compared. If there were something outside of the Whole then the Whole wouldn’t be the Whole, obviously enough! We could try to argue that ‘rules’ or ‘frameworks’ can somehow exist outside of the Whole as abstractions from that Whole, but the big problem with this argument is precisely that rules are abstractions, and abstractions can’t say anything meaningful about the Whole from which they have been abstracted. We can’t ‘reverse-engineer’ reality from a description of reality; if we could do that then ‘reality’ would be the same thing as ‘the description of it’, which would make reality itself entirely redundant. When ‘the descriptor’ is equal to ‘that which is being described’ then what we have on our hands is Jean Baudrillard’s ‘hyperreality’. Hyperreality is fatuous reality; it isn’t worth the paper that it’s printed on.
It corresponds, we might say, to the Eastern idea of Samsara, which is a ‘theatre’ or ‘illusion show’, as Alan Watts says.
Things start to come together at this point in the discussion. At the smallest possible scale, irreducible uncertainty is king (which means of course that there is no ‘scale’); at the order of things where we are concerning ourselves with the Undivided Wholeness of everything there is also this very same irreducible uncertainty (or ‘frameworklessness’); and in between the two (between ‘Top’ and ‘Bottom’, so to speak) there is the macroscopic world of everyday objects and events, a world which the same irreducible uncertainty permeates, albeit in a way that we can easily get away with ignoring. Putting things like this means of course that we can see the grand scope of radical uncertainty. Within the classical mechanistic paradigm uncertainty is always mere error (i.e. something that needs to be corrected or eliminated); within the new paradigm that has emerged over the last hundred years (the paradigm that reveals the very large to be the same thing as the very small) we can very plainly see that uncertainty is the Supreme Principle of Reality Itself, and not ‘something to be corrected’. Where would we be if we ‘eliminated’ or ‘corrected’ the Supreme Principle of Reality, after all?