Free Will … Continued

Jim Rapp, blogger profile here, left an extended comment on one of my prior conversations on free will, which I’m going to promote here to the top in the hope that it elicits comments. It’s quite long, so out of habit I’m going to give you the first few paragraphs and then “crop” it with a “more” tag, i.e., the dreaded “below the fold” notion. It’s really interesting, but I’m going to have to take a bit to digest it. In the meantime, though, it’s helpful to share, so …

Mark wrote – “If that same system also turned on itself (i.e., an aesthetic random seeded rule based engine looking at the aesthetics of its own rules) then is that enough for creativity and free will.”

Ah. I think you’re onto something. I need to think about this.

You’re wanting “randomness” as internal agency (your post above to Anne). Something like a genetic algorithm with agentive internal randomness in classical mechanics really isn’t far-fetched or inconceivable. I’m sorry I missed this focus on internal agentive randomness when I switched to selection pressures in an overall ecology. Mea culpa.

I need to pause a moment and get my head around this.

I need to put a quick bookmark in your notion of internal randomness and come back to it momentarily.

The following thoughts are not argumentative: just housekeeping notes to see if I’m getting your concept.

When JewishAthiest (JA) mused earlier – “free will is a fundamentally incoherent concept” – then maybe he’s doing what I did, namely, missing the concept of internal agentive randomness. JA’s note on free will as “a fundamentally incoherent concept” deserves pause, because when we think of coherence/incoherence across scales of human interactions (putting aside classical mechanics for a moment), and especially when we think of human interactions as having a measure of “free will” and want these interactions to be “coherent,” then my bias is that the coherency criteria are themselves incoherent (technically “ill-defined”) until human agentive purposes are adequately specified. This means that coherency criteria like additivity of probabilities, transitivity, and consistency are not really “coherent” until human agentive purposes are specified, as Sen has pointed out for economic exchanges (see A. Sen, “Internal consistency of choice,” Econometrica (1993)).

We could use non-agentive definitions of coherency, like the kinds of coherency governing the formally precise rules of mathematics (say, Euclid’s axioms), or perhaps we could conceive of coherency according to the formal rules of modal logic used to attempt a formally precise metaphysics (say, the Stanford Metaphysical Project); but the criteria for coherency under these models aren’t expressly agentive, and these other criteria for coherence don’t seem, intuitively, to apply to agentive actions as we would expect for notions of “free will.” And so, in agentive relationships (like human interactions presupposing “free will”), we intuitively or elaborately think of the criteria defining coherency, along with Sen, as things like additivity, transitivity, and consistency, plus some specification of a purpose.

For example, if my Superman purpose is to leap a tall building in a single bound or a series of bounds, then my additive bounds must add to the probability of my jumping over the building, plus my transitive effort must be “greater than” the sum of the forces holding me below that purpose, plus my jump direction must be consistent with an over-the-building trajectory, and not face-down toward the ground – all to form a coherency with my agentive purpose. Accordingly, no matter how much “free will” I have for other “purposes,” if I’m not free enough (by these coherency criteria) to make my jump all the way over the building, then I’m not free – not for that agentive “purpose.”

In short, I think that JA and I mistakenly get hung up on “coherency” in the concept of “free will” because we have an intuitive (or elaborated, à la Sen) sense that agentive “purposes” must be specified before a concept like free will is “coherent.”

Yeah, I know I’m confusing aesthetics with pragmatic measures for coherency, or with coherency in economics and in other human agentive interactions; but, this confusion is my point, because that’s how we often think. And, this is just housekeeping.

The gist is that your model of an aesthetic internal randomness cuts across our basic intuition and our elaborate conceptions of “coherency.” Your concept of internal randomness aggravates our human use of agentive “purpose” as a unifying concept (in free will) that otherwise makes coherency coherent for us.

Your aesthetic model doesn’t require this specification of “purpose” as a necessary criterion of coherency – right?

And in my case (I can’t speak for JA), even when I focused on aesthetic “rules” (your word), my bias in looking for coherency criteria caused me to stall out by looking too quickly for other coherency rules, like optimality rules, in your algorithm. Duh! In the end, I resorted to external selection pressures as a default because of the kind of complexity that I intuitively anticipated would derive from internal randomness rules: I just didn’t see much intuitive aesthetic appeal in looking for patterns in the snow on the TV screen! Change the channel! Mea culpa, again!

Now, back to classical mechanics. The use of the term “purpose” enters into descriptions here, too. First, JA may share in the common misconception that random systems aren’t predictable (coherent) because there are no “rules” in randomness. This is wrong, because classical randomness could derive from a ubiquity of simple rules/laws (rules, rules, and more rules, not the non-existence of rules) for which we otherwise can’t specify decision criteria. But random rules nonetheless. One particle of “snow” on my television screen could blink periodically every three seconds in a constant green, while another particle could blink every two seconds in a constant red. And only if I were trying to drive my car through this maze of red and green lights would this randomness of simple “rules” be disorderly against my agentive purpose. But we normally still think of “free will” as agentive, and as agency toward some purpose (Sen). So, when we switch over to describe “purpose” in a natural system (e.g., thermal elegance, or selection pressures), or even in your algorithm for “free will” operating in classical mechanics, then what we’re really doing is using “purpose” as a proxy shorthand for the more complicated description of mechanism. Coherency criteria based on agentive purpose aren’t valid (not yet), so our intuitive sense of purpose gets mapped to purpose as a proxy for mechanics, so long as we remember that “design” is gratuitous (please forestall discussions of ID).
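The blinking-particles picture can be made concrete with a toy sketch (entirely my own illustration, not anything from the algorithm under discussion): each “particle” follows a trivially simple periodic rule, yet the merged stream of blinks looks patternless to an observer who doesn’t know the individual rules.

```python
# Toy model: apparent randomness from an aggregate of simple periodic rules.
# Each "particle" blinks deterministically with its own period and color,
# but the combined timeline looks disorderly without knowing the rules.

def blinks(period, color, duration):
    """Times (in seconds) at which one rule-governed particle blinks."""
    return [(t, color) for t in range(0, duration, period)]

# One particle blinks green every three seconds, another red every two.
green = blinks(3, "green", 12)
red = blinks(2, "red", 12)

# Merge the two rule-governed streams into one timeline of events.
timeline = sorted(green + red)
print(timeline)
```

The point of the sketch is only that the “disorder” lives in the merged view, not in either rule taken alone.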

So, what you want to do is generate something like an algorithm using a “simple rules” model (randomness in classical mechanics), and then make the algorithm genetic by hypothesizing “expert” rules for aesthetics, and these expert rules can actually opportunistically take advantage of random noise in order to create a larger synthesis (something like a ratchet?) inside the system? – and our normal biases about “coherence” via agentive “purpose” don’t yet enter?
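One hedged way to picture that ratchet is a minimal genetic-style sketch: random noise supplies candidate variations, and a hypothetical “expert” aesthetic rule keeps only the candidates that don’t lower the aesthetic score. The `aesthetic_score` function here (a preference for symmetry) is purely illustrative, a stand-in for whatever aesthetic the real system would encode.

```python
import random

def aesthetic_score(pattern):
    """Hypothetical 'expert rule': counts mirror-symmetric positions.
    Purely a stand-in aesthetic, not the one from the discussion."""
    return sum(1 for a, b in zip(pattern, reversed(pattern)) if a == b)

def ratchet(pattern, steps=200, seed=0):
    """Opportunistically exploit random noise: mutate one position at
    random, and keep the mutation only if the score doesn't drop."""
    rng = random.Random(seed)
    best = list(pattern)
    for _ in range(steps):
        candidate = list(best)
        i = rng.randrange(len(candidate))
        candidate[i] = rng.choice("abc")  # random noise as raw material
        if aesthetic_score(candidate) >= aesthetic_score(best):
            best = candidate  # the ratchet: never slide backwards
    return "".join(best)

result = ratchet("abcabc")
print(result, aesthetic_score(result))
```

By construction the score never decreases, which is the ratchet-like synthesis: noise proposes, the aesthetic rule disposes.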

If this is how you see it, then yes! Yes, if. Yes – if – you could generate this kind of algorithm, then you should be able to give a description of both “free” (free as a simple “rule” in a classically random state) and “will” (will as the power to create an effect, cause->effect, again at the classical level).

The notion of aesthetics isn’t really a trivial artifact in your system because the “expert rule” could also be a random and simple “rule” in a classical description.

This is pretty fascinating.

Okay – my invocation of selection pressures isn’t really relevant. Sorry for that. Selection doesn’t enter in (yet) because your algorithm would generate variation. Selection pressures would only act on the generated “variation” as filters, to produce an overall effect of net “variety.” But variation in this scheme is theoretically a greater numerical concept (even in a simple Thomistic list) than is variety (after selection pressures). Is this right?
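That counting claim can be sketched directly (with an illustrative filter of my own, not anything from the original discussion): the variety that survives a selection filter is a subset of the generated variation, so its count can never exceed it.

```python
import random

def generate_variation(n, seed=1):
    """Internal randomness generating candidate variants."""
    rng = random.Random(seed)
    return [tuple(rng.choice("xyz") for _ in range(4)) for _ in range(n)]

def selection_filter(variants):
    """External selection pressure acting as a filter on variation.
    The survival criterion (contains an 'x') is purely illustrative."""
    return [v for v in variants if "x" in v]

variation = generate_variation(50)
variety = selection_filter(variation)

# Variation is always at least as numerous as post-selection variety.
print(len(variation), len(variety))
```

Whatever the filter, `len(variety) <= len(variation)` holds by construction, which is the numerical relationship the paragraph above asserts.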

I’m trying to get my head around this without re-introducing the concept of “purpose” (Sen, above). It’s possible that humans are so dad-gummed ontogenetically biased and socially programmed for coherency patterns based on agentive “purpose” (interminably so in discussions of “free will”) that it’s hard to dispense with purpose as a concept.

But, your aesthetics could be free of “purpose” as we normally conceive it, right?

I’m sorry if this ramble is convoluted. It’s mostly housekeeping, really. So many cobwebs to clean out.

Just wondering – would an aesthetic algorithm (the algorithm, not the generated results!) need to satisfy Occam-like aims? Or is there something about aesthetics that would not require this consideration? Is elegance beauty?

