In a recent conversation on free will and determinism, some confusion (disagreement) arose over the contention that describing systems and their behaviors as “deterministic” and “random” covers the full set of possibilities. This is not the case. Emergent behavior is one way to conceptualize the missing set of possibilities.
Emergent behavior has lately been described in two ways, “strong” and “weak” emergence. The claimed distinction is that in weak emergence the behavioral patterns are derivable (or at least strongly suggested) from the local interactions. Two examples of this might be Brownian motion and the ideal gas law. Brownian motion describes the motion of large objects in a bath of small particles. These large particles “dance” and move about. Their speed and travel are determined by the temperature and the relative dimensions and densities of the particles in question. Considered as an aggregate, the distance traveled in a set time and the distribution of those distances are quite regular (hence determined). However, the exact position and travel of any given particle are indeterminate. This hierarchy of regimes is a feature of all systems described as having emergent behavior. Similarly, the ideal gas law, which we all recall from high school, PV = nRT, where P is pressure, V is volume, n is the amount of gas (in moles), R is the gas constant, and T is the absolute temperature. This can be derived using the methods of equilibrium statistical physics. It is deterministic, but the motion of the atoms in the bath it describes cannot be described deterministically. Here, at one level of the hierarchy you have randomness and at another, determinism.
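To make that weak-emergence point concrete, here is a small sketch (my own toy model, not drawn from the conversation) of Brownian-like motion as an unbiased random walk: any single particle’s endpoint is unpredictable, while the ensemble-averaged squared displacement grows in a perfectly regular way with the number of steps. The step size, particle count, and function names are illustrative choices.

```python
# Toy illustration of weak emergence in Brownian-like motion:
# individual trajectories are unpredictable, but the aggregate
# statistic (mean squared displacement) follows a regular law.
import math
import random

def random_walk_2d(n_steps, step_size=1.0, rng=random):
    """Return the final (x, y) position of an unbiased 2D random walk."""
    x = y = 0.0
    for _ in range(n_steps):
        angle = rng.uniform(0.0, 2.0 * math.pi)  # each kick has a random direction
        x += step_size * math.cos(angle)
        y += step_size * math.sin(angle)
    return x, y

def mean_squared_displacement(n_particles, n_steps):
    """Average squared distance from the origin over many independent walkers."""
    total = 0.0
    for _ in range(n_particles):
        x, y = random_walk_2d(n_steps)
        total += x * x + y * y
    return total / n_particles

if __name__ == "__main__":
    random.seed(0)
    # Any single particle's endpoint carries no regularity...
    print("one particle ends at:", random_walk_2d(1000))
    # ...but the ensemble-averaged squared displacement grows linearly
    # with the number of steps: the regular, "determined" macroscopic law
    # sitting on top of the random microscopic motion.
    for steps in (250, 500, 1000):
        msd = mean_squared_displacement(n_particles=5000, n_steps=steps)
        print(f"steps={steps:5d}  mean squared displacement ≈ {msd:.1f}")
```

With unit step length the mean squared displacement comes out close to the step count, the textbook diffusive scaling, while the single-particle endpoint shows no such regularity.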
Strong emergent behavior:
Laughlin belongs in this camp. In his book, he explains that for many-particle systems nothing can be calculated exactly from the microscopic equations, and that macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology and have properties that do not depend on many microscopic details. This does not mean that the microscopic interactions are irrelevant, but simply that you do not see them anymore – you only see a renormalized effect of them. Laughlin is a pragmatic theoretical physicist: if you cannot, possibly ever, calculate the broken-symmetry macroscopic properties from the microscopic equations, then what is the point of talking about reducibility?
Two examples of this might be, from biology, the behavior of termites (drawn from Gazzaniga’s book on the brain and free will), in which the local rules driving individual termites, once the population and health of a colony pass a certain threshold, suddenly shift the colony’s behavior from underground living to building the large cemented clay towers seen in southern Africa. Another example might be the schooling behavior of fish and flocking by birds: simple local rules governing speed and direction, once a group-size threshold is reached, suddenly change the behavior from individually driven motion to schools or flocks. And while (as with Brownian motion) some general characteristics of the school or flock might be imagined to be derivable, the direction and course of that flock are not (which is akin to not being able to predict the direction and distance that an individual large particle travels in a set time period).
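As a rough illustration of how such local rules produce a flock whose ordered state is predictable but whose heading is not, here is a minimal Vicsek-style alignment model (a standard toy model of flocking; the parameter values and function name are my own assumptions, not taken from Gazzaniga or the conversation).

```python
# Vicsek-style toy flock: each agent steers toward the average heading of its
# neighbors plus a little noise. The group reliably becomes ordered (an emergent,
# predictable property), yet the direction it ends up heading differs run to run.
import math
import random

def simulate_flock(n_agents=100, steps=200, box=10.0, radius=1.0,
                   speed=0.05, noise=0.2, seed=None):
    rng = random.Random(seed)
    xs = [rng.uniform(0, box) for _ in range(n_agents)]
    ys = [rng.uniform(0, box) for _ in range(n_agents)]
    headings = [rng.uniform(-math.pi, math.pi) for _ in range(n_agents)]

    for _ in range(steps):
        new_headings = []
        for i in range(n_agents):
            # Local rule: average the headings of agents within `radius`
            # (periodic boundaries, minimum-image distance).
            sin_sum = cos_sum = 0.0
            for j in range(n_agents):
                dx = (xs[j] - xs[i]) % box
                dy = (ys[j] - ys[i]) % box
                dx = min(dx, box - dx)
                dy = min(dy, box - dy)
                if dx * dx + dy * dy <= radius * radius:
                    sin_sum += math.sin(headings[j])
                    cos_sum += math.cos(headings[j])
            avg = math.atan2(sin_sum, cos_sum)
            new_headings.append(avg + rng.uniform(-noise, noise))
        headings = new_headings
        for i in range(n_agents):
            xs[i] = (xs[i] + speed * math.cos(headings[i])) % box
            ys[i] = (ys[i] + speed * math.sin(headings[i])) % box

    # Order parameter: 1.0 means perfectly aligned, near 0 means random headings.
    vx = sum(math.cos(h) for h in headings) / n_agents
    vy = sum(math.sin(h) for h in headings) / n_agents
    order = math.hypot(vx, vy)
    direction = math.degrees(math.atan2(vy, vx))
    return order, direction

if __name__ == "__main__":
    for run in range(3):
        order, direction = simulate_flock(seed=run)
        print(f"run {run}: alignment={order:.2f}, flock heading={direction:+.0f} degrees")
```

Across runs the alignment value reliably climbs toward 1, but the heading the flock settles on varies from run to run, mirroring the point that the flock itself is derivable from the local rules while its course is not.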
So in general we see a hierarchy of regimes in which a lower-level random bath can give rise to very regular behavior at a larger scale. When that emergent behavior is a computational network, or, as in the brain, a large collection of such networks… then things can get interesting, and at that point you are well into this unknown not-deterministic/not-random world.