The Problem of the Notion of a [Diversity] Science

Replace the “diversity” tag above with your favorite tag, be it Black, Green, Gay, Feminist or whatever. For much of science the problems are clear. What is “gay” number theory, or “feminist” crystallography, besides nonsense?

A friend pointed me at this essay on First Things entitled The Myth of Scientific Objectivity. There is much to unpack there, but I thought I’d offer a few thoughts. To put my cards on the table, my notions of the philosophy of science and of how science works are much influenced by Michael Polanyi, especially this book, Personal Knowledge. I think some of the insights from that book would serve to criticize and quiet problems such as the one in Mr Wilson’s example, a ‘feminist’ sociologist examining the “good” features of divorce, which requires ignoring much of the obvious. Mr Polanyi points out that much of science is, contrary to popular notions, a process which we can’t explain but have to learn for ourselves. One of the features of this explanation of how science works is that there is an essential step which Mr Wilson doesn’t mention.

Mr Wilson points out that the scientific process is not the abstract inductive or deductive process, but a collection of personal insights, for which the advocate of an insight then gathers supporting data and convinces others that he/she is correct. I think the part missing here is that the person who has this insight has become, through years of work, skilled in the ways of thinking and the methods of solving problems in their particular field of research, so that their insight is not uninformed but instead grounded in a personal history and knowledge of that same field. The aesthetic of what comprises good science in any particular field is taught and learned, and is an essential feature of the progress of science.

Diversity in and of itself has impact on fields of science, as you would expect, only as much as the social aspects of human life are within the scope of inquiry in that branch of science. If you are studying how flagella propel microorganisms in fluids, then your notions of gender and race are exactly irrelevant. But fields like sociology and psychology arguably might gain contributions from other social points of view. Yet the insights gleaned in those fields are likely as impermanent as the social conditions in which they are implanted. On the other hand, inquiry into the nature of elliptic curves over the rational numbers … not so much. The insights gleaned will not fade as social conditions change, nor will the truths discovered be dependent on any features and facets of human society.

I might note that there is a good counter-argument to Mr Polanyi’s ineffable nature of scientific knowledge, in that computer science and programming may be an answer to what is and what is not ineffable. See, for example, this text. If you can teach a computer to do the thing you are trying to explain how to do, then you understand it at a level which is no longer ineffable. Your program is the explanation.
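To make that concrete with a toy of my own choosing (not one from the text): Euclid’s algorithm is a two-thousand-year-old piece of know-how that, once written as a program, is no longer ineffable — the program is the explanation.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: the program *is* the explanation
    of how to find the greatest common divisor."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b) until b vanishes
    return a

print(gcd(1071, 462))  # -> 21
```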

Too Funny For Me

OK. Yesterday a friend pointed out xkcd’s “What if” feature (in the top left links corner). My daughter and I were reading through some of them, and found many quite funny. Then. Well. We got to this one, which I couldn’t read out loud. By the point of “The mole planet is now a giant sphere of meat …” the tears from laughing so hard were obstructing my view so that I couldn’t read; well, to be honest, I was having difficulty breathing I was laughing so hard.

Wow. That has to be the funniest thing I’ve read in a long time.

Which just goes to show there is no accounting for taste, eh?

On the Ada Lovelace Thing

Recently there’s been a bit about Ada Lovelace and noting “important women” in science. Why Ada and not another woman? Some ask, if not Ada, who? I say, not Ada. The only rational choice is Emmy Noether. There was nobody like her. Ever. This started as a comment on today’s link thread where this was noted. But it grew into post size, so I’ve promoted it.

The point I’m trying to make is that if you had to name the top 5 most influential people in 20th century physics, Emmy Noether would be a top candidate for that list … or possibly even for the top 3. The Ada Lovelace thing is for “famous women scientists”. Other names are suggested but … none of them has that stature. The big question is why people don’t recognize her. Is it sexism or anti-Semitism? Is that a factor? Einstein was a Jew … and it didn’t diminish him … but it’s a possibility I raised, especially noting that in the 30s and 40s anti-Semitism was far more common than it is now.

One other possibility is that it was territorial, i.e., Noether wasn’t a physicist. One might think it embarrassing (for physicists) that one of the biggest theoretical discoveries in your field was made by someone who just stopped in, looked at the maths in your playground for a bit, and said, you know, “I had this little idea, so I wrote it up.” And subsequently this little paper becomes the cornerstone of your whole science for the next century and counting. In part this is why I find the “Ada Lovelace” kind of thing questionable: there isn’t any question of who the most important woman thinker/scientist of the last N years has been, where N is a number larger than 100 (1000? or 10000?). There’s only one candidate, and the other question might be whether there was anyone, male or female, who was more influential … perhaps there’s a short short list. Not a single one of the other women suggested dominated two separate fields of study and wrenched them both around in such a fundamental way. For what men might you make the same claim, what male scientist revolutionized two separate scientific fields? If you think there is a better candidate, put that name out there … link or comment … your choice.

So, was it scientific jealousy? Anti-Semitism? Or sexism? Or something else?

My commenter (this started as a comment response) noted he watches Discovery/Cosmos-type shows. So, in the nature of a quick “Cosmos” style précis, where does Ms Noether’s work fit? (That explanation goes below the cut.)

Heat and Climate: Some Basics

In a recent discussion, heat and its transport became a point of contention. The heat of a thing (the ground, or you in a sleeping bag … or more distantly the temperature of your coffee in that thermos) depends on a few parameters. At equilibrium (not your coffee cup any more) heat transfer in equals heat transfer out. The earth, irradiated by the sun, is (basically) at a time-averaged equilibrium. The claim of the global climate warming crowd is that additional insulating effects raise the temperature. How does this work if the energy in still equals the energy out? Well, to first order, the energy flowing out depends on two factors: the difference in temperature between the two regions, and a factor dependent on the geometry of the interface and the heat conductivity of the interface. If you add insulation (reduce the heat conductivity of the interface) then to have the same amount of energy flowing out the heat differential has to be larger, i.e., in bed when you add blankets you warm up (the heat differential between outside the bed and snug in the covers rises).
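A minimal numerical sketch of that first-order argument (toy numbers of my own invention, not measured values):

```python
# Steady state: heat_in == heat_out == Q, with Q = U * A * dT, where U is
# the heat conductivity (conductance) of the interface and A its area.
# If Q and A are fixed, then dT = Q / (U * A): add insulation (smaller U)
# and the temperature difference must grow to push the same energy through.

Q = 100.0   # watts flowing out at equilibrium (fixed)
A = 2.0     # interface area, m^2

for U in (10.0, 5.0, 2.5):   # W/(m^2 K); smaller U = better insulation
    dT = Q / (U * A)
    print(f"U = {U:5.1f} -> dT = {dT:5.1f} K")
# Halving U doubles dT: more blankets, warmer bed.
```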

It has been claimed recently (and this needs substantiation) that wind farms change the turbulence of the air in the region around them, decreasing the efficient mixing of air between low and high altitudes, i.e., decreasing the effectiveness of that heat conduction. Hence the delta T (the ground temperature) rises in that region. This change in conductivity is what drives the temperature change at the ground. The suggestion is that if wind farming becomes a non-trivial fraction of the earth’s cover, this is just the same problem as adding greenhouse gases: the result is increased global average temperatures. The same people who think global warming is problematic should be concerned about this possibility for the same reasons. Those who are not concerned, of course, should not use this as an objection against wind farming.

Three Books

Last night, I noted a book I’d been reading (The Instant Economist). Two more might be noted. From the same place where the I/E was mentioned, another econ text was noted: Economics 2.0 (available as an ebook in various formats). What is ec 2.0? This book is a lightning overview of current research topics and results from (according to the authors) the forefront of research, development, and analysis in the economics world … in layman’s terms. Each chapter ends with a short list of 12-15 references to the papers and books that give the non-layman’s version on which the section was based. It is readable and recommended.

Also recommended, although I haven’t read much of it yet, is a book that has a much closer personal connection. A Passion for Discovery collects backstories and personal anecdotes of the leading men and women in Physics from the last century. This book was authored by Peter Freund … my thesis advisor when I was in grad school in the late 80s. Professor Freund always had lots of stories to tell; well, now he’s telling them to a larger audience. Oddly enough this book was cheaper by a factor of 3 in the Amazon eBook format than from Google … and the Sony store didn’t have it at all. At least that was the case last night.

Free Will In A Non-Magical Universe

What might free will look like? Suppose you had an intelligent black box and wondered if it had free will, or, in the horribly imprecise terms of recent discussion, whether it was “deterministic or random.” Again suppose you can copy this black box. You pose a question to your black box(es) and repeat for many iterations, posing the same question/problem each time. What then would you expect if this box had free will?

It seems to me that if the box had free will, then the box would give a distribution of answers, all of which “make sense” from a logical, creative, or emotionally reactive point of view. It also seems, given our understanding of the human brain and of the limits our universe sets on preparing the “same” initial conditions, both from a complexity standpoint and (if need be) a quantum one, that this result is exactly what we would expect from brains like ours.
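A toy version of the proposed test might look like the following (the “boxes” are made up, and of course nothing this crude settles the metaphysics; it only illustrates what the answer distributions would look like):

```python
import random
from collections import Counter

def deterministic_box(question):
    return "answer-A"                 # identical copies give identical answers

def random_box(question):
    # pure noise: the answers need not make any sense
    return random.choice(["answer-A", "gibberish-1", "gibberish-2"])

def freewill_like_box(question):
    # a spread of answers, every one of which "makes sense"
    return random.choices(["answer-A", "answer-B", "answer-C"],
                          weights=[5, 3, 2])[0]

for box in (deterministic_box, random_box, freewill_like_box):
    tally = Counter(box("same question") for _ in range(1000))
    print(box.__name__, dict(tally))
```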

What other sorts of “Turing-like” tests might you pose to an array of black boxes to better settle the question of whether free will might or might not make sense? Suggestions?

Deterministic, Random and the Other Choice

In a recent conversation on free will and determinism some confusion (disagreement) arose over the contention that “deterministic” and “random” between them completely cover the possible descriptions of systems and their behaviors. This is not the case. Emergent behavior is one way to conceptualize the missing possibilities.

Emergent behavior lately has been described in two ways, “strong” and “weak” emergence. The claimed distinction is that weakly emergent behavioral patterns are derivable (or at least highly suggestible) from local interactions. Two examples of that might be Brownian motion and the ideal gas law. Brownian motion describes the motion of large objects in a bath of small particles. These large particles “dance” and move about. Their speed and travel are determined by the temperature and the relative dimensions and densities of the particles in question. Considered in aggregate, the distance traveled in a set time and the distribution of those distances are quite regular (hence determined). However, the exact details of the actual position and travel of a given particle are indeterminate. This hierarchy of regimes is a feature of all systems described as having emergent behavior. Similarly the ideal gas law, which we all recall from high school, PV = nRT, where P is pressure, V is volume, n is the number of moles, R is the gas constant, and T is the temperature (on an absolute scale), can be derived using equilibrium statistical physics methods. It is deterministic, but the motion of the atoms in the bath which it describes cannot be described deterministically. Here at one level of the hierarchy you have randomness and at another layer, determinism.
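The Brownian case is easy to see in a few lines of simulation (a sketch, with arbitrary units): no individual walker’s endpoint is predictable, yet the ensemble mean-squared displacement comes out on theory every time.

```python
import random

STEPS, WALKERS = 1000, 5000
finals = []
for _ in range(WALKERS):
    x = 0.0
    for _ in range(STEPS):
        x += random.gauss(0.0, 1.0)   # one random thermal kick per step
    finals.append(x)

msd = sum(x * x for x in finals) / WALKERS
print(f"mean-squared displacement ~ {msd:.0f} (theory: {STEPS})")
print(f"one particular walker ended at {finals[0]:+.1f} -- not predictable")
```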

Strong emergent behavior:

Laughlin belongs to a). In his book, he explains that for many particle systems, nothing can be calculated exactly from the microscopic equations, and that macroscopic systems are characterised by broken symmetry: the symmetry present in the microscopic equations is not present in the macroscopic system, due to phase transitions. As a result, these macroscopic systems are described in their own terminology, and have properties that do not depend on many microscopic details. This does not mean that the microscopic interactions are irrelevant, but simply that you do not see them anymore – you only see a renormalized effect of them. Laughlin is a pragmatic theoretical physicist: if you cannot, possibly ever, calculate the broken symmetry macroscopic properties from the microscopic equations, then what is the point of talking about reducibility?

Two examples of this might be, from biology, the behavior of termites (drawn from Gazzaniga’s book on the brain and free will), in which local rules drive the actions of termites; when the population and health of a colony pass a certain threshold, the underground colonies suddenly alter their behavior and build the large cemented clay towers seen in southern Africa. Another example might be the schooling behavior of fish and flocking by birds. Simple local rules governing speed and direction, when a size threshold is reached, suddenly change the behavior from individually driven motion to schools or flocks. And while (as with Brownian motion) some general characteristics of the school/flock might be imagined to be derivable, the direction and course of that flock is not (which is akin to not being able to predict the direction and distance that an individual large particle travels in a set time period).
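For the flocking case, here is a bare-bones Vicsek-style sketch (my simplification, not drawn from Gazzaniga): each agent steers toward the average heading of a few neighbors plus some noise, and global alignment emerges even though nothing in the local rule mentions a flock.

```python
import cmath, random

N, NEIGHBORS, NOISE, ROUNDS = 200, 10, 0.3, 100
theta = [random.uniform(-3.14, 3.14) for _ in range(N)]   # random headings

for _ in range(ROUNDS):
    new = []
    for i in range(N):
        nbrs = random.sample(range(N), NEIGHBORS)          # crude neighborhood
        avg = sum(cmath.exp(1j * theta[j]) for j in nbrs)  # mean direction
        new.append(cmath.phase(avg) + random.uniform(-NOISE, NOISE))
    theta = new

# order parameter: ~0 for random headings, ~1 for a fully aligned flock
order = abs(sum(cmath.exp(1j * t) for t in theta)) / N
print(f"alignment after {ROUNDS} rounds: {order:.2f}")
```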

So in general we see a hierarchy of regimes in which a lower random bath can give rise to very regular behavior at a larger level. When that emergent behavior is a computational network, or, like the brain, a large collection of such networks … then things can get interesting, and at that point you are well into this unknown non-deterministic/non-random world.

Them Four Elements

An odd thought occurred to me the other day. The Greeks famously held that there were four elements: fire, water, air, and earth. The modern man, with all the advantages of a public school system, knows this to be incorrect: instead there are 92 naturally occurring elements, with a couple dozen artificially produced ones added to the list.

However, what occurred to me is that the dismissal of those four elements is a hermeneutic error. We know the definitions of words change. What they meant by “element” is closer to what we call “states of matter.” In point of fact, there are four phases of matter which oddly enough very closely align with those Greek four. Solid, liquid, gas, and plasma match quite well with earth, water, air, and fire, don’t they?

So when reading those accounts of those four ‘Elements’ just replace the word ‘Elements’ with ‘States’ and see what that does for you.

Musing About Evolution and the ID Criticism

I’m kind of an outsider on the Evolution/ID debate and don’t follow it closely, because I don’t think evolution is anywhere near as important a science/issue as it is made out to be; e.g., it is not a cornerstone of, or lens through which, the hows and whys of biological data need to be seen. From my point of view (a view oddly enough shared by at least one NOVA program), the ID critique of the evolutionary model proposed by the genetic error/adaptive selection model is one of time. The ID critique from that point of view is that the changes seen should take longer than they have, absent other mechanisms. The standard GE/AS models have no substantial riposte to that because neither side has a predictive methodology. Questions like: Given an isolated flightless population, what is the expectation value for the duration you’d have to wait before flight would be developed by that population? Or: Given an isolated population with no light, what is the expectation value for the time to lose all sight organs and functions? Or: Given an isolated population with an excess of right-handed sugars, what is the time to develop digestion of the same? Neither ID nor GE/AS has any clue/method for calculating an answer to these.

In that mode, it seems that an interesting tack for experimentation would be to develop data points: stress populations and figure out how long it takes the population to develop a response. That is, develop data points and methods to begin building a heuristic model to answer the above questions. It seems to me that small table-top populations of organisms could be created which in the main have very fast generational times and consequently fast adaptive responses. This could in turn give some data points for developing descriptive formulae on which a theory describing them might be hung.
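In that spirit, here is a toy Wright-Fisher-style sketch of the kind of number one would want to measure (every parameter is invented; it illustrates the question, not an answer): how many generations pass before a beneficial mutation appears and takes over the population?

```python
import random

def generations_until_fixed(pop_size=1000, mut_rate=1e-5, advantage=0.05,
                            rng=random.Random(1)):
    """Generations until a beneficial mutation appears and fixes."""
    carriers, gen = 0, 0
    while carriers < pop_size:
        gen += 1
        if carriers == 0:
            # still waiting for the mutation to show up at all
            if rng.random() < mut_rate * pop_size:
                carriers = 1
        else:
            # selection plus drift: resample the population, with carriers
            # enjoying a small fitness edge
            p = carriers * (1 + advantage) / (pop_size + carriers * advantage)
            carriers = sum(rng.random() < p for _ in range(pop_size))
    return gen

trials = [generations_until_fixed() for _ in range(20)]
print(f"mean waiting time: {sum(trials) / len(trials):.0f} generations")
```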

Feedback and Micro-Evolution

Almost a year ago, our family got our first pet. I never grew up with a dog, and this was a new experience for my wife and me (not to mention our two daughters). Sophie is now a 1-year-old terrier mix, half Yorkie and half Cairn. She is fully grown at a year, weighing in at about 11-12 pounds. She is very fast: in a fenced-in yard my daughters took more than 20 minutes to catch her when she’d grabbed a small thing they thought she shouldn’t have.

What does this have to do with evolutionary feedback? Well, one of the most amazing things she does is, I suppose, the result of it. When we go for a walk or a fast jog, she often runs alongside me at 8-10 mph for a block or two. While she is doing this her nose is sniffing for scents continually, with her snozzle running along the sidewalk what looks like 1/2 inch from the concrete. What I find hard to imagine is how, over irregular ground, an animal weighing in at 11 pounds and measuring less than a foot at the shoulder can run that fast with her nose (!) a centimeter above rough concrete. I mean, if you misjudge that distance and your nose collides with the ground, that’s going to hurt (and send you flipping in somersaults). Yet it has never occurred.

Animals that can’t scan the ground well don’t find food. Those that misjudge the distance scrape their snozzles and get really embarrassed doing inadvertent flips. This isn’t a skill practiced and learned. It is innate. Evolution in action.

The March (Or Not) of Science

Thomas Kuhn, in 1962, in his book The Structure of Scientific Revolutions, offered a now widely accepted view of how change, or paradigm shifts, occur in science. However, just a few years earlier, in 1958, Michael Polanyi wrote a book, Personal Knowledge: Towards a Post-Critical Philosophy, which covers the same topics (that is, scientific progress and broad themes in the philosophy of science). In the more popular political realm, Thomas Hobbes famously introduced the notion of the “social contract”, which, with modifications by Mr Locke, came to be highly influential in the political thinking of our founders and, as a result, of people today, at least in the US. The problem with Mr Kuhn’s paradigm shift, as compared and contrasted with Mr Polanyi’s different view of the progress of science, might be compared to a modern critique of the idea of the social contract. French political philosopher/theorist Bertrand de Jouvenel (an excellent introduction can be found in Bertrand De Jouvenel by Daniel Mahoney) realized that the essential problem with the social contract is that, although a neat self-contained theory of politics, it doesn’t hold anthropological water. Humans don’t “enter into” social contracts with societies and form groups that way. What they actually do is follow influential leaders. Likewise the idea of the paradigm shift is clean and compact and self-contained, but doesn’t hold anthropological water. That is, while it might describe things from afar, it’s not how scientists as humans operate.

My local industrious (underpaid) commenter Boonton offers this quick summary of how he views the progress of a paradigm shift (I’ve altered the format and corrected some spelling but left it essentially untouched):

Let’s go to Kuhn’s theory of scientific revolutions and paradigms. I didn’t actually read him, only about him, so I make no excuse for botching his theory. The cycle begins with

  1. A paradigm “appears”
  2. Fleshing it out, scientists apply the paradigm to various issues. Often it works, other times it doesn’t; these are anomalies.
  3. There are two possibilities to explain them: first, the paradigm works to explain them, it’s just that the how hasn’t been figured out yet; second, they represent an area where the paradigm is wrong or missing something essential.
  4. Early on in the life cycle anomalies build up with the working assumption that they are just ‘hard problems’ that haven’t been fully gotten to yet. As time goes by, there are fewer ‘easy problems’ left, and the left-over anomalies remain and start adding up. Scientific talent addresses them by solving them within the paradigm, but many are left over and they start to stand out more and more.
  5. It becomes less believable, as more and more minds try and fail to solve the remaining issues under the paradigm, that the problem is simply one of ‘not figuring it out yet’. The idea that something is wrong with the paradigm builds. New ones are presented and resisted, and then a breakthrough occurs, which brings you back to #1.

This looks neat and clean. Science (and for that matter any human endeavor) alas isn’t. Mr Polanyi’s (much simplified) explanation goes more like:

  1. A person seeks to go into science.
  2. He apprentices to a master and learns the methods (largely ineffable) of the current practices in that chosen branch of science.
  3. After achieving sufficient mastery he locates a “new thing” he believes to be right.
  4. He then works to convince and persuade the others in that field of the correctness of that new idea. If successful he returns to step #3. If unsuccessful, he faces the choice of continuing or returning to step 3 with a new idea.

How does this differ from non-scientific fields? The key is in step 2. The methods of the field are real. The skills you learn have real-world application and are repeatable. It is the real nature of the skills learned (and shared by the particular field’s community) that forms the basis for the rigor of science.

Mr Polanyi begins his book, oddly enough, with a number of examples which neatly demolish the orderly progress of science as envisioned in steps like those described above by Mr Boonton. The first few chapters are well worth the price of admission alone. I recommend looking into it.

Science and Passion

The scientific method is taught and portrayed as a dispassionate rational dialectic between theory and experiment. Theories are proposed; data is collected which forces refinement of theory; and that continues. Occasionally, à la Mr Kuhn, a revolution occurs in which a major paradigm shift takes over and a radically new theory becomes ascendant.

Alas, this has little to no relation to what actually occurs within science. Scientists are not dispassionate men judging between different competing theories, analyzing experimental data to that end. They are instead emotional advocates of a particular theory, one they espouse because they find it, well, beautiful (for a variety of reasons). Now, the reason we have success and progress in science is that the training and process of learning their particular specialty has programmed their emotional responses to align their aesthetic principles with the rigors of their discipline.

to be continued … 

Considering the TSA and the Anti-Martyr Problem

Well, the TSA objective of making transportation safe is back on the front-burner. Now, the TSA screening is a poor sieve. It is a largely static target and is very costly; the largest cost of course is in the lost time that travellers endure in negotiating long security lines. Furthermore, it is likely that much of their efforts are counter-productive. For example, making box-cutters freely available and common on flights would make it harder, not easier, for a terrorist or terrorists to hijack a flight. The “rules” of engagement have changed: those who would interfere with the operation and direction of an airplane do not get time to negotiate or to make their “demands” known like they might have in the 20th century. Once a person is identified as hostile (a prospective anti-martyr) that person is quickly neutralized by his fellow passengers. The age of passive passengers passed with the events of 9/11.

However the TSA has a purpose. It is visible and reactive. It can take the appearance of being the primary and front-line defence in a strategy to identify and interdict prospective anti-martyrs. War and espionage (to which this anti-martyr interdiction campaign is related) are in part exercises in misdirection. To that end, the TSA screeners take a very public and obvious role. They might be the public and obvious strategy which is a counterfeit. If indeed the TSA plays such a role, we as the voting public will not know it, for as soon as it is common and public knowledge that the TSA is a large noisy feint … then there will be an outcry to remove it and an alternate deception will be harder to enact.

Considering AGW … Scattered Showers Thoughts

A few thoughts, in the form of the dread bullet list, on AGW and climate in general.

  • If one uses a variable τ to represent the time-scale on which one is considering making predictions (about the weather), then we can identify a number of regimes for τ. If τ is in minutes or even seconds, we often have difficulty predicting weather, as gusts of wind move, from our local point of view, very unpredictably. If τ is in the day/week regime, again weather forecasters have difficulty predicting temperature, wind, and precipitation with great accuracy more than 24 hours out. Apparently, if the recent almost decade-long downturn which was … unpredicted is any indication, when τ is on the time-scale of a year or a dozen years climate scientists again fail to predict accurately into the future. Yet they would have it that if τ is in the quarter-century to century regime … there, and apparently only there, the science is easy and they have it figured out. And they’ve both made their prediction and want us to stake the farm on their result. Now suppose in the above conversation one moved from weather to the market, which is similarly unpredictable at any number of time scales, and I suggested that, based on my computer model of past events, I’ve got it nailed down: I have a good model for quarter-century market movement. Furthermore I am now trying to convince you to bet your entire life savings on my model. You’d rightfully point out that several million other yokels have used past markets to predict future market behaviour and have all failed … and that such models are worth not a whole lot more than a bucket of spit. So the question is why you’d think one chaotic dynamical system is so far different from another (a toy illustration follows this list). If you’re going to make the claim that your methods and models work well when τ is a half-century, then that claim needs to be tested, and it will be proven … in about 200-300 years … if you can correctly predict trends now and watch the climate track your predictions. 30 years ago climate science was warning of impending ice ages. Today, it’s warming. Tomorrow?
  • Those who would call the sceptics of AGW “climate deniers” are quick to label the objectors anti-science Luddites. Yet that claim doesn’t really fly. There are indeed some anti-science people on the right, and others can argue about the numbers or percentages and compare cricket race results. But there is a problem, which is people like myself. There is another problem. There are strong social and ideological reasons why those on the left are receptive to AGW where they should perhaps (see the prior remarks) be more sceptical. The left is conditioned to find fault with America and the corporate culture and behold, AGW fits right into that idea. Given then that there are secondary (and perhaps in many cases dominant) non-scientific reasons why many would be receptive to AGW … that strikes me as problematic.
  • Computer modeling has also been described as computer-aided story telling. Computer modelling has been used successfully these days as a shortcut in design and engineering, in automotive and aerospace work. Yet, consider for a few moments that these applications are backed up by many decades of engineering, wind tunnel testing, materials/structural testing and so on. That level of testing and detail, frankly, has no way of having been matched by climate scientists. Furthermore AGW proponents desire the results of their work to have a large and costly public impact. So, are their data sets, algorithms and methods clearly and publicly accessible? Consider the deletion of files and emails in an illegal response to an FOI request. See this post for remarks on how open the AGW people have been.
  • Finally, I’m embarrassed to admit another reason that I’m sceptical about AGW … is that I was trained as a physicist. In physics the best and brightest move, especially in theory, to the “hot” topics. In programming (see The Mythical Man-Month) there exist orders of magnitude differences in productivity between the very best programmers and the average (and the poor). This is true in physics as well … at least in theoretical physics. And here’s where the bias (or perhaps bigotry) which I will admit to holding comes in: I don’t think climate or meteorology is a hot topic, and as a result I’m of the mindset that climate scientists are, well, second rate. This is perhaps not a good reason, but I suspect for me it remains a factor.
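On the τ question in the first bullet, here is the toy that keeps me honest (the logistic map is a stand-in of my choosing, not a climate model): trajectories are unpredictable past a short horizon, yet long-run statistics can still be perfectly stable. Whether climate is more like the statistics or more like the trajectories is exactly the claim that needs testing.

```python
def orbit(x, n, r=4.0):
    for _ in range(n):
        x = r * x * (1.0 - x)          # fully chaotic logistic map
    return x

x0, eps = 0.2, 1e-12                   # two "identical" initial conditions
print("after 50 steps:", orbit(x0, 50), "vs", orbit(x0 + eps, 50))

# yet the long-run mean is stable and predictable (~0.5 for r = 4),
# regardless of where you start
def long_run_mean(x, n=100_000, r=4.0):
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += x
    return total / n

print("long-run means:", long_run_mean(0.2), long_run_mean(0.7))
```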

On Science and Climate-gate

The CRU mini-scandal has gotten a lot of press, at least in the slice of the blogs regularly read by myself; two examples here and here are not unrepresentative. There are two facets of this little kerfuffle that might be noted.

The first matter is to look at this event in the light of Michael Polanyi’s book Personal Knowledge: Towards a Post-Critical Philosophy. In this book, Mr Polanyi examines the traditional (taught) understanding of how science is done and compares that to the reality of how scientific pursuit is actually done. His conclusion is that the scientist’s passion and belief is the primary driver in the scientific process. The notion of experiment, hypothesis, and dispassionate evaluation of fact is actually hogwash. That isn’t to say science is some Derridean relativistic mish-mash. Training, itself an art-filled process which is not as well understood as normally imagined, first must be mastered by the participants. Then trained individuals convince themselves of a hypothesis which fits their intuitions. Then that person attempts to convince the larger community that he/she is correct, that experiments can be and are done which support this hypothesis, and that data which contradicts the hypothesis is flawed, misunderstood, or not relevant. An example of that last part that he points at early in the book is that throughout the 19th century no credible member of the scientific community gave any credence to the notion that meteorites were real … in the face of contradictory evidence, actual meteorites falling to earth. It was just that belief in that evidence would discredit the prevailing ideas of the nature of extra-solar stuff … so the data was off scope, so to speak.

With this in mind, revisit the revelations of events at CRU (recalling also the maxim that one should never assign to mischief what might instead be attributed to incompetence). Ultimately the shocking revelation behind this data is that scientists who are proponents of global warming, like those at CRU, are advocates of their point of view (AGW) prior to examining the data. Yet this is exactly what Mr Polanyi claims occurs in the scientific process. So there should be no surprise that this is what is actually going on under the covers, so to speak.

Secondly, it is often assumed that the left/right divide on the AGW issue is based on a perceived animosity on the right toward science in general, especially as compared to the left. This, however, fails to explain persons like Mr Motls … certainly not someone one could reasonably suspect of any dislike of science. This isn’t to say that there are not those on the right who are in fact distrustful of the scientific community; it’s just that there is likely another factor at play which is at least as important. While there are those on the right who distrust science, there are at least as many if not more on the left who feel that American corporations and industry mostly do harm to the environment. The trope of corporate malfeasance and disregard for ethics in the pursuit of profit is almost universal on the left. So when AGW arrives as a suggestion it fits right in with the preconception that industrial pursuits are harmful. (Norm has some thoughts on this as well.)

With CRU and climate in general, the question remains whether the hypothesis and the advocacy for it are driven more by intuitions developed within the discipline or by how it fits with preconceptions formed outside that discipline, i.e., notions of corporate/human harm to the environment.

(non) Archimedean Dreams

In the past, I’ve ventured to consider the hypothesis (ansatz) that a noetic realm, a rough analogue of the Platonic realm of Ideals, has a real existence, in a parallel universe of sorts to our own. Part of this ansatz is that these two universes are not completely disconnected, and that the human intellectual machinery glimpses this realm; it is through this mechanism that our brain’s machinery accomplishes the semiotic scaffold and bridges the gap between pattern and synapse to thought, meaning, and intention.

What sorts of features might we imagine a noetic realm to have?

  • Would it have any notion of time evolution? I suggest that, while objects in the noetic realm were held by Plato to be “eternal”, this might be wrong. That is, in a noetic realm in which objects or “points of space” are conceptual monadic points, there can be some sort of movement or time evolution. An analogy for how that might work is the flip-flop. A flip-flop is a metastable two-state simple electric circuit, and without going into the technical details, the analogy I’m bringing to the table is that noetic ideas can demonstrate similar features. Consider some of the simple logic paradoxes, like the statement about the veracity of Cretans made by a Cretan, with the postulate that Cretans always lie. The point is that there exist logical complexes (ideas) that are metastable like the flip-flop, i.e., paradoxes. In a noetic realm, this might be seen as motion. With motion … time evolution. Gödel’s incompleteness theorem suggests that all (sufficiently rich) logical axiomatic systems (noetic complexes) have unprovable statements, not all as simple as the always dishonest Cretans. Might this be seen as a statement that all noetic complexes exhibit “movement” when looked at carefully enough?
  • If we take points in a noetic space to be monadic concepts, more complicated ideas will be structures or linkages in this space … which are themselves also points in that space. This suggests that if we were to give a metric to this space it might be a non-Archimedean or ultrametric space. Tree structures are ultrametric if the distance between elements of a tree is counted by how many generations one traverses up the tree before a common ancestor is found. It seems natural that concepts also have a distance relationship that is akin to a tree.
  • Consider for a moment that this realm has a physics: a series of natural laws that, given time evolution, describe a dynamical relationship between objects and motion in this space. Imagine too, then, that it contains life … and further, intelligent life. It is hard to imagine what existence, perception, and other notions which are clearly definable in our universe might be like in a realm such as the one I dimly describe above. Maths (a UK term for mathematics that I find attractive, which is my excuse for using it) is often argued to be a purely noetic art: it doesn’t depend on science or perception but is purely an intellectual (noetic) exercise. Concepts like integer, line, and point, from which we derive maths, are argued to be universal. If we met technologically advanced aliens … we would be able to communicate because we would share a common mathematical technology. In a prior post, I argued that this is not necessarily the case, that our mathematical concepts are aligned with our commonly held perceptions of the universe. A creature dwelling in the noetic universe might have perceptions sufficiently distinct from our own to render this assumption false (and my argument in that prior post valid).
  • In maths, a common arithmetic construction which yields a natural ultrametric space is based on the p-adic numbers (a small numerical check follows this list). One of the ideas lurking in the p-adic analytic realm is the adele ring, which is an infinite vector of p-adic fields with a point at infinity added … the point at infinity being naturally seen as the real numbers. Might an adelic analogy be seen linking noetic realities to ours, with ours as the Archimedean point at infinity?
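For the curious, the 2-adic distance makes the ultrametric (“non-Archimedean”) inequality d(x, z) ≤ max(d(x, y), d(y, z)) easy to check numerically (a sketch over the integers only, for simplicity):

```python
import random

def v2(n: int) -> int:
    """2-adic valuation: the number of times 2 divides n."""
    if n == 0:
        return 10**9               # stand-in for "infinity", so d(x, x) = 0
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

def d2(x: int, y: int) -> float:
    """2-adic distance: 'close' means differing by a high power of 2."""
    return 0.0 if x == y else 2.0 ** (-v2(x - y))

rng = random.Random(0)
for _ in range(1000):
    x, y, z = (rng.randrange(-500, 500) for _ in range(3))
    assert d2(x, z) <= max(d2(x, y), d2(y, z))
print("ultrametric inequality held on 1000 random triples")
```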

And if you think that time evolution or changes in connections is impossible, consider what you know of the number 2 and the other simple counting numbers, and from them the integers. Then read this … (or for more fun … get this book: Surreal Numbers, which combines the numbers noted in that prior wiki link with an entertaining story that has narrative parallels with Genesis 1).

Exploring Nuclear Power Again … The Waste Question

No country with nuclear power today has solved the waste disposal problem. The preferred solution being sought today is to disperse the waste in repositories hundreds of meters below the earth’s surface. The (perceived) absence of success in this area is one of the dominant obstacles that the nuclear industry faces. Last Friday, after a discussion of nuclear energy started with a lot of half-remembered data on my side, and in order to stop that feature of the conversation, I dug up on the net an authoritative report on the “future of nuclear energy.” These papers are in pdf form:

  1. The full document is here. This is a study by a group of MIT professors on the status of Nuclear power in the US and the world.
  2. The summary is here. This is a summary of the findings in the prior document.
  3. Finally, in 2009 (the original documents were written in 2003) an update of the current situation given the economic and political conditions is given here.

In the discussion last night (on this post) waste seemed the dominant topic. As noted, that post last night was a summary (of a summary). So I’m going to delve into the report’s waste chapter for more grist.

Nuclear Energy: Some Data for Discussions

Last Friday, after a discussion of nuclear energy started with a lot of half-remembered data on my side, and in order to stop that feature of the conversation, I dug up on the net an authoritative report on the “future of nuclear energy.” These papers are in pdf form:

  1. The full document is here. This is a study by a group of MIT professors on the status of Nuclear power in the US and the world.
  2. The summary is here. This is a summary of the findings in the prior document.
  3. Finally, in 2009 (the original documents were written in 2003) an update of the current situation given the economic and political conditions is given here.

Anyhow, I’m going to attempt to summarize the summary. Please bring up any points on which further elaboration would be useful.

These reports are an attempt to analyze what would be required in order to retain nuclear power as a (the?) significant option for meeting our growing electrical power needs in light of a demand to reduce greenhouse gas emissions. Putting this in concrete terms, the authors put forth recommendations on how best to boost nuclear power by a factor of 3, to 1000 gigawatts, by 2050. For the carbon enthusiast this would save 1.8 billion tonnes of carbon, about a 25% reduction from a scenario in which nuclear power is not increased.

Basic findings:

  • Cost — In deregulated markets, nuclear power is not now cost-competitive with coal and natural gas. Plausible reductions in capital costs, operational costs, and construction time could cut into the gap. In their model, with a 40-year plant life, the current cost of nuclear is 6.7; reductions bring that down to 4.2 (which assumes commercial risks come down to the level of coal and gas, an assumption that accounts for 0.9 of the reduction). For comparison coal is at 4.2 and gas ranges between 3.8 and 5.6 depending on the market price of (natural) gas. The units are cents per kWh. Carbon credits of course could give nuclear an advantage. It should be noted that the costs of running gas and coal plants are fuel-dominated, whereas nuclear is not.
  • Safety — Modern reactor designs have very low risks of serious accidents, but ‘best practices’ in construction and operations need to be followed. Less is known about the safety of the overall fuel cycle(s). (There are many variants of fuel and fuel cycle/recycle, which is one reason this is so complex.)
  • Waste — Tied to the above complexity of fuel cycles and safety is the waste question. There are a lot of choices. Geological disposal is feasible … but more research is needed. The authors feel that geologic disposal in deep bore holes is the most attractive alternative and recommend moving R&D in that direction. One reason for this is that the drilling site can be coterminous with the reactor facility, and transport of waste is then not necessary.
  • Proliferation — Europe, Japan, and Russia use a plutonium reprocessing (breeder) system for fuel which carries unwarranted proliferation risks.

The study authors recommend, in light of safety, proliferation, and their evaluation of the waste situation, that for at least the next 50 years a once-through fuel cycle provides the best possible combination of pros and cons. Once-through fuel cycles take more raw ore as a resource and have more need for long-term storage of waste, but gain on the economics, proliferation, and fuel-cycle safety fronts. Using the once-through fuel cycle at a 1000 GWatt power level would require a repository on the scale of Yucca Mountain to be created somewhere every three to four years. This is what prompts the interest in the more advanced, more complicated, and more expensive closed fuel cycles. These schemes recover the actinides from the waste, reducing the thermal load of the waste on the repository, increasing its capacity and shortening the time it needs to be isolated from the biosphere.

They also note that public education is necessary, for the public at large does not presently see nuclear as an option for the energy (and greenhouse gas) needs for the future. Their specific recommendations for US policy include:

  • Focus its R&D on the once-through fuel cycle;
  • Establish a Nuclear System Modeling project to carry out the analysis, research, simulation, and collection/collation of data to evaluate all fuel cycles from the viewpoints of: cost, safety, waste management, and proliferation resistance;
  • undertake an evaluation of uranium deposits as a resource;
  • broaden its waste management R&D program;
  • and support R&D to reduce the cost of LWR construction and to develop the HTGR as an alternative.

For the Weekend

This weekend I’m going to read these documents prior to a post on nuclear power. Any and all are invited to read them too, so that our discussion might be more informed.

  1. The full document is here. This is a study by a group of MIT professors on the status of Nuclear power in the US and the world.
  2. The summary is here. This is a summary of the findings in the prior document.
  3. Finally, in 2009 (the original documents were written in 2003) an update of the current situation given the economic and political conditions is given here.

The summary begins:

At least for the next few decades, there are only a few realistic options for reducing carbon dioxide emissions from electricity generation:

  • increase efficiency in electricity generation and use;
  • expand use of renewable energy sources such as wind, solar, biomass, and geothermal;
  • capture carbon dioxide emissions at fossil-fueled (especially coal) electric generating plants and permanently sequester the carbon; and
  • increase use of nuclear power.

The goal of this interdisciplinary MIT study is not to predict which of these options will prevail or to argue for their comparative advantages. In our view, it is likely that we shall need all of these options and accordingly it would be a mistake at this time to exclude any of these four options from an overall carbon emissions management strategy. Rather we seek to explore and evaluate actions that could be taken to maintain nuclear power as one of the significant options for meeting future world energy needs at low cost and in an environmentally acceptable manner.

Taking Nuclear Seriously as a Carbon Fix

Argonne has a short paper out outlining a “green” energy solution that looks more plausible than any I’ve seen for a while. If you take “carbon” seriously (I don’t but I’m in something of a minority on that) you should read this. If you don’t, however, and do take peak oil or oil independence seriously then you should still read it.

For Green Freedom the basic idea is that you take a nuclear power plant for its supply of electricity and steam. With that you capture CO2 from the air (via a potassium carbonate compound), generate hydrogen from water via electrolysis, and combine the two in a process that produces methanol, which is in turn further processed into a synthetic gasoline. Basically the nuclear energy drives a reaction reclaiming carbon from the air to form that gas, which is then burned in cars, re-releasing that carbon back to the atmosphere in a completely carbon-neutral process. It is not of course energy-lite, but that isn’t the point here.
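Schematically (my gloss of the standard chemistry; the paper’s own process details differ, especially in the capture step), the energy-hungry steps are:

```latex
% Electrolysis, driven by nuclear electricity:   2 H2O -> 2 H2 + O2
% Methanol synthesis from captured CO2:          CO2 + 3 H2 -> CH3OH + H2O
% Methanol-to-gasoline, net (schematic):         n CH3OH -> (CH2)_n + n H2O
\begin{align*}
2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \\
\mathrm{CO_2} + 3\,\mathrm{H_2} &\longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \\
n\,\mathrm{CH_3OH} &\longrightarrow (\mathrm{CH_2})_n + n\,\mathrm{H_2O}
\end{align*}
```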

The paper suggests some economics, but basically a price point for gasoline right about where it is now makes installation of new plants feasible.

Of course the anti-nuclear stance of the left is a religious position, data on Gen III and Gen IV nuclear power generation will be of no interest or use in discussions.

Science and Religion: A Historical Review

This is the “long” or expanded version of the faith/science paper for our Church newsletter. It was 4 times longer than requested. I’m posting it here for comments (and a link to the same is provided for interested readers of the newsletter article). The short version which was “submitted for publication” can be read here on-line.

Science and religion

Because the terms science and religion are enormously broad they need to be restricted. In this discussion science will refer to the study of the elementary forces and makeup of nature, which before the modern era was known as natural science and which today is called physics. Religion in this discussion will be limited to Christian theology and will focus on how that theology interacts with natural science.

Natural science has gone through three major stages since the study of such matters became systematic and a subject which today would be considered a science. In what follows these stages will be discussed in turn and the relationship of religion with science examined.

Stage 1: A Geometric understanding of Nature.

From the time of the Greek golden age through the 16th century the foundations of our concept of nature and its underlying principles were very different from today’s. Throughout that period the understanding of nature and its conceptual foundations was based on pure geometry. Study of Euclid and the Elements was crucial not just for mathematical pedagogical reasons, but because the understanding of geometry was seen as key to understanding how nature was constructed. Aristotelian cosmology and Pythagorean mysticism are two examples of how this view of nature expressed itself. Writings from this period commonly allude to geometrical and numerical proportions as significant data. It is a common modern error to deride this view of nature as not being driven at all by experiment and observation. For example, Aristotle taught that an object naturally gravitated to its “natural” motion: terrestrial objects were naturally at rest and astronomical bodies were naturally in motion. Today we view this as wrong, e.g., Newton’s law that “objects in motion tend to stay in motion and those at rest stay at rest.” Yet the Aristotelian view corresponds with observation: terrestrial objects (baseballs) set in motion do in fact come to rest, e.g., throw a baseball and you observe that it comes to rest, while the planetary bodies (planets and moons) are observed to remain in motion.

It was during the first four centuries after Christ that orthodox Christian theology arrived at a basic understanding of the relationships between God, man, and the world, an understanding which was made explicit and holds with some minor variations to this day. The apostolic practices handed down from the first century were explained in philosophical and concrete terms and placed into the contextual understanding of the world that existed at that time. Origen, an Alexandrian patristic theologian, explicitly tied his theology to philosophy during an age when philosophy and natural philosophy were not separate fields of study. Consider that Plotinus, a leading neo-Platonic scholar, was a contemporary of Origen in Alexandria. Origen and Plotinus and their students interacted directly, attending each other’s talks and reading each other’s published works. This was possible because the theological views of nature and of God’s relationship with the universe were consonant with the natural philosophy of the time.

Stage 2: An Analytic view of Nature.

Between the time of Galileo and Newton the geometrical conception of nature shifted to an analytic one. The description of how the motion of objects is governed moved to one of formulae for objects and the forces between them, e.g., Newton’s three laws of motion or, later, the Maxwell equations describing electromagnetic behavior. René Descartes laid essential foundations for replacing compass-and-ruler geometrical methods with analytic ones, i.e., using algebraic descriptions and manipulations to describe and prove geometrical concepts. This inspired a general movement of mathematical techniques and ways of thinking from the constructive geometric view to an analytic one. By the time Newton published the Principia the revolution was complete. With his development of calculus and the later work of men like Carl Friedrich Gauss the analytical and mathematical approaches were immensely successful in describing the natural world.

In this time period Christian theology (in the West) also underwent something of a revolution. It was in this time that the theological turmoil of Reformation and counter-Reformation occurred. Erasmus, Luther, Calvin, and other Protestants as well as Loyola, Theresa of Avila, John of the Cross, and other Roman Catholics redefined what Christianity meant for the West. The currents and cross-currents of theological polemics between these parties honed and sharpened (hardened?) the particular theological tenets of both Protestant and Roman Christians. During this time, as well, the relationship between natural philosophy and theological thought changed to one of separation. There was a parting of ways. Less and less was the Origen/Plotinus relationship the norm. While Christian clergy, such as Mendel and Priestley, contributed to science, it became more and more rare for mainstream theology to confront or interact with modern natural science. Furthermore the creation accounts in Genesis (based in part on a Babylonian cosmology) led some theologians to oppose and confront scientific views of cosmology, a practice which continues apace today. In general theological accounts dealing with nature had less and less real connection with the scientific understandings of the day.

Stage 3: Symmetry Governs Natural Law.

In the 20th century mathematical developments laid the groundwork for another major shift in our basic understanding of the principles by which the universe is constructed. The inventive mathematical work of Emmy Noether, William Hamilton, and Bernhard Riemann yielded a revolution in our understanding of the universe. These connections were first exploited by Einstein, Kaluza, and Klein, who expounded and made clear the principles on which we now base our understanding of the universe. Geometry and a mathematical concept known as symmetry [see below for a very abbreviated summary of symmetry and its connection to modern physics] today provide the conceptual framework on which natural science finds its grounding. In 1954 Chen Ning Yang and Robert Mills defined a non-Abelian gauge theory which became the framework of the Standard Model. The Standard Model is the current best description of the basic particles and, well, actually three of the four known forces in nature. In some ways this may be regarded as the return (revenge?) of the much earlier geometric worldview, because it is based on symmetry. Geometry then has returned and again today drives our understanding of nature.

A second striking development has also occurred in our physical understanding of nature, that is, the quantum understanding of nature. In quantum mechanics concrete things like particles and electromagnetic waves are replaced by things called probability amplitudes and S-matrices. The remarkable success of quantum mechanics has caused something of a crisis in the philosophy of science. There is, currently, no satisfactory explanation for how a quantum understanding of nature can be viewed as a real view of nature. Many physicists duck approaching a realistic concrete description of nature with an approach described as positivism. In this view a natural scientist (physicist) is not undertaking to describe reality but instead is only engaged in the prediction of experimental results; an example of a proponent of this view is Stephen Hawking, but he is certainly not alone. This is a massive retreat from what natural science had undertaken at the outset 3000 years previously.

Yet theology has not advanced into the epistemic vacuum left by this retreat of physics (and the sciences in general). In part this is a symptom of a general trend. An underlying cause for this trend may be that generalists today are more and more rare. As the body of work comprising every discipline has grown, it has become harder and harder for people to do significant work in more than one field, because mastering just one discipline takes significant effort. In fact, sub-field specialization has become the norm, at a time when cross-fertilization between fields of science, the arts, and theology becomes more and more important. Theology and physics have both been subject to this trend.

20th and 21st century theology has not (as yet) really found natural science a subject with which it needs to engage. With a few exceptions like John Polkinghorne, who was an important theoretical physicist and is now an Anglican priest and theologian, little theological thought is being put into trying to reunite and reconcile natural science with theology (this is something of an exaggeration, as Fr. Polkinghorne did chair a conference on that topic and clearly somebody besides him attended). But this is certainly not a leading problem from the point of view of the theological community today. Yet this problem is precisely the problem that confronts the so-called “division” between faith and science today.
Some Final Thoughts

Natural science over the past 3000 years has gone the distance: from a geometrically motivated view of the universe it traversed through an analytic approach and subsequently returned to a more subtle but nevertheless distinctively geometrically motivated view. In the first period there was no tension between theology and science. During the analytic period a separation occurred which continues through today. Additionally the scope of what natural science recognizes as within its purview has shrunk. At the same time, the complexity and scope of what natural science (physics) does understand regarding the large and small scale structure of space-time and the natural order is far greater than it was in the 3rd century. The development of an understanding of where and how the Trinitarian God stands in relationship to man and His universe, one congruent and in accord with modern ideas of how space-time is framed, should be regarded as an important and incomplete problem for theology today.
A Short Note on Symmetry

Symmetry is a simple mathematical notion. In short a symmetry is a transformation of a geometrical object which leaves it unchanged. Rotating a square 90 degrees is a symmetry transformation; after the rotation the square is unchanged. Space or space+time symmetry transformations are changes such as rotations, translations, and the like. Emmy Noether proved mathematically that, for “sensible” theories of motion, every continuous symmetry gives rise to a corresponding conserved quantity. Translational symmetry of space-time, by Ms Noether’s theorem, thus gives rise to conservation of momentum. Translational symmetry here just means the laws of physics remain unchanged if the origin of your coordinate system is shifted. Rotational symmetry yields conservation of angular momentum. Rotational symmetry means that the laws of nature are unchanged if one spins your coordinates. This is the essential point. To restate, every continuous symmetry (any transformation leaving space and laws unchanged) is connected to a conservation law.
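
To see the simplest instance of this (a minimal sketch in my own notation, not Ms Noether’s general machinery): a particle’s motion follows from its Lagrangian via the Euler-Lagrange equation,

$$ L = \tfrac{1}{2} m \dot{q}^2 - V(q), \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} = \frac{\partial L}{\partial q}. $$

If shifting the coordinate, $q \to q + \epsilon$, leaves $L$ unchanged (that is, $V$ does not depend on $q$, so $\partial L / \partial q = 0$), the equation reads $\frac{d}{dt}(m\dot{q}) = 0$: the momentum $p = m\dot{q}$ is conserved. Noether’s theorem is this observation made general.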

Oscar Klein and Theodor Kaluza considered what it would mean if at each point in space-time an additional “small” unseen dimension were added, specifically a tiny circle. One can then posit that this enlarged space carries a new corresponding symmetry. That is to say, in this new 5 dimensional space-time (3 dimensions of space + the new circle + time makes 5), this symmetry asserts that the choice of coordinates one uses in each (little) circle does not affect the equations describing physical laws, i.e., there is a symmetry in the “circle” direction. This condition gives rise to general constraint equations on the motion in this space-time and to a conserved quantity. What made this interesting was that the resultant constraint equations were identical to Maxwell’s equations, which describe electromagnetism. Because of that equivalence to Maxwell’s equations the natural interpretation of the conserved quantity becomes conservation of electric (and magnetic) charges.
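
The bookkeeping, very compressed (the standard textbook form, up to sign and scaling conventions): the 5 dimensional metric is split so that the components along the circle direction carry a vector field $A_\mu$,

$$ \hat{g}_{MN} \sim \begin{pmatrix} g_{\mu\nu} + \phi^2 A_\mu A_\nu & \phi^2 A_\mu \\ \phi^2 A_\nu & \phi^2 \end{pmatrix}, $$

and re-choosing the circle coordinate, $y \to y + \lambda(x)$, shifts $A_\mu \to A_\mu - \partial_\mu \lambda$, which is exactly the gauge freedom of electromagnetism. The 5 dimensional Einstein equations then contain Maxwell’s equations for the field strength $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$.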

When Yang and Mills defined the gauge theory that grew into the Standard Model describing three of the four forces, they did so by replacing the Kaluza-Klein circle at each point with a much more complicated space and then demanding the analogous symmetry relationship. This yielded analogous conservation laws as well as constraint equations which in turn (by picking the “right” structure for the “complicated space”) are found to describe three of the four fundamental forces of Nature and to yield conservation laws for their respective charges. The four forces of nature are the strong force, the weak force, the electromagnetic force, and gravity. Gravity is not reconciled with the Standard Model and this reconciliation remains an area of active research. General Relativity, the geometrical model that describes gravity, is based on exactly these sorts of methods.
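
In compressed form (conventions vary from text to text): where Maxwell’s field strength is $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, the Yang-Mills generalization lets $A_\mu$ take values in the symmetry group of the “complicated space” and picks up a commutator term,

$$ F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + i g\,[A_\mu, A_\nu], $$

which vanishes for the circle (an Abelian group) and does not vanish for the Standard Model’s group, $SU(3) \times SU(2) \times U(1)$. That commutator is what “non-Abelian” means in practice.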

Further Reading

Michael Polanyi by Mark T. Mitchell (ISI Books)

Mr Tompkins in Paperback by George Gamow

On Science and Religion

Over the next week or so I have to write a short essay for our parish newsletter on the topic “Science and Religion.” I’m going to do the work online here “in public” as it were and see if the comment process can get me a better essay. Anyhow … to start, the dread bullet list, i.e., ideas and brainstorming about things I might discuss.

  • It might be interesting to mention the two tensions that have historically, especially in the West, influenced some of the reflections of religious thought on science. St. Augustine, as noted by Mr Polanyi, had an overall negative effect on science. Mr Polanyi notes that this was because of some statements by St. Augustine that science should restrict itself to those studies which bring us closer to God. Yet St. Augustine writes as well in his Confessions that Nature itself worships the Creator through our understanding of its workings, intricacies, and beauty. It may be that the former statement took a wrong turn because the latter sentiment was forgotten or misplaced.
  • Three major revolutions have marked our deepest physical understanding of how to view the underlying nature of the material world. Until sometime between the Galilean/Copernican era and Newton’s Principia, the older notion of a geometrical order to the universe was dominant; this was the Pythagorean philosophy of science, dominated by geometrical concepts. It was replaced by an algebraic, interaction view, with Newton and Leibniz making that explicit with the development of calculus. In the early part of the 20th century this too was replaced in turn by the idea that symmetries (gauge theories) shape the structure of physical interactions and relations. Patristic theology arose in the context of a Pythagorean view of nature. Did and does that theology depend at all on our conception of the underlying structure of nature? How might it have to adapt and change as our notions of the universe change?
  • Physical theories of the Universe give us a notion of the large scale structure of space-time, especially dynamical aspects for how to make sense of it. Mathematicians have solved the Poincaré conjecture, giving us a classification of all the possible ways in which our three (apparent) spatial dimensions might be constructed. Additionally quantum mechanics yields notions of free-will or indeterminacy at the atomic level. Yet theological discussions, as far as I’m aware, haven’t really confronted the implications of a God existing out of time and what that means with respect to a quantum mechanical relativistic space-time.
  • Eugene Wigner penned a paper on the unreasonable nature of the success of mathematics in describing the universe. It isn’t just that we can use math to describe things we already know; it’s that math so used is unreasonably successful. The mathematical ansatze (guesses) that Newton used to describe planetary motion work without change in regimes many orders of magnitude in precision and scale beyond the data that supported them (see the little numerical check following this list). Mr Wigner did not connect the unreasonable success of mathematics to theology, Scripture, or God. However, that connection is an easy one to make. Genesis 1 with its ontological ordering of nature suggests that nature itself is comprehensible by the mind of man. That nature is unreasonably well described by mathematics, which in turn is an essential part of the mind of man, might suggest that this is not unintentional.
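
As a toy illustration of that reach (a sketch; the planetary values below are rough modern figures, not Newton’s data): Kepler’s third law, which falls out of Newton’s inverse-square ansatz, says $T^2 = a^3$ in units of years and AU, and holds from Mercury out to Neptune.

```python
# Kepler's third law, T^2 = a^3 (periods in years, semi-major axes in AU),
# checked against rough modern values. Newton's inverse-square ansatz was
# fit to far cruder data than this, yet the law holds without modification
# across two orders of magnitude in distance.
planets = {
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Jupiter": (5.203, 11.86),
    "Neptune": (30.07, 164.8),
}
for name, (a_au, T_yr) in planets.items():
    print(f"{name:8s}  T^2 / a^3 = {T_yr**2 / a_au**3:.3f}")
# Every ratio comes out at ~1.00.
```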

On the Laplace Fallacy (continued)

I noted recently, following my reading of Polanyi’s Personal Knowledge, that between the Galilean/Copernican period and Newton’s Principia no new scientific data (no facts) arose to distinguish between the Copernican and Ptolemaic theories. Yet by the time of Newton’s writing of the Principia the dispute was settled. It was settled not by facts but by a process that has more in common with religious conversion than with the popular notions of what comprises scientific method.

Physics has seen three major revolutions. Following the Greeks’ conception of what we in this “late modernity” [aside: more on that later] period call Physics, the overriding principles underpinning reality were driven by a belief that the world and cosmic bodies followed geometric and numeric patterns. Observation and insight were interpreted within this framework. During the period noted above, a conversion began to occur: a mechanical, arithmetic model of constraints replaced the old. This held until the latter part of the 19th to the early 20th century, when it too was replaced. Currently the view of how best to understand the universe is one driven by mathematical invariances (symmetries). Data and experiment are not and have not been the driving force in moving persons and communities from one underlying model of how to perceive nature to another. Passion and persuasion and conversion are better descriptions of what occurred.

Yesterday I began to unwind what Polanyi was driving at with his attack on the mechanistic view of nature. He principally objected to the idea that all kinds of experience can be understood in terms of atomic data. This is more than just a rejection of reductionist methods of scientific advancement. And it is not something which today is abandoned with the discovery of quantum uncertainty, i.e., the free willed electron. Scientific metaphors have a way of becoming dominant metaphors applied outside their realm of application. Consider how uncertainty, relativity, and evolution are examples of scientific ideas that have been abused when used as metaphor in the social arenas. The scientific community using those ideas has given a strange credence to their application in other arenas. So too has the notion that man and his society are ultimately just collections of clockwork apparatus. It is in those abuses of the conception of a comprehensible, mechanistic, deterministic universe, applied to social studies (econ and politics) and the life sciences, that the chief dangers lie.

Consider the following abbreviated example, which I hope to elaborate on later. Viewing man in a mechanistic way enables one to set aside models of human dignity in favor of man as a consumer. Hedonistic consumerism can replace a more, well, frankly human (and realistic) view of man in society.

Laplace

Laplace, some years ago, came up with a notion: if one could determine the positions and momenta of all the particles in the universe at a given time, then the time evolution of the universe would fix all future events. This notion is one which persists at some level today, the notion that all kinds of experience can be understood in terms of atomic data. It is an impossible scenario, yet it persists.
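
A minimal sketch of the Laplacian picture (my toy, not Laplace’s): for a classical system, the state (position, momentum) at one instant fixes every later state.

```python
# Laplace's notion in miniature: a mass on a spring, integrated forward by
# a leapfrog scheme. Given the positions and momenta now, the whole future
# is fixed; run it twice from the same state and you get the same future.
def evolve(x, p, m=1.0, k=1.0, dt=0.001, steps=10_000):
    for _ in range(steps):
        p -= 0.5 * dt * k * x   # half kick from the spring force -k*x
        x += dt * p / m         # drift
        p -= 0.5 * dt * k * x   # half kick
    return x, p

print(evolve(1.0, 0.0))  # identical output both times,
print(evolve(1.0, 0.0))  # down to the last bit
```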

Polanyi writes (pg 141) in his book Personal Knowledge:

Yet the spell of the Laplacian delusion remains unbroken to this day. The ideal of strictly objective knowledge, paradigmatically formulated by Laplace, continues to sustain a universal tendency to enhance the observational accuracy and systematic precision of science, at the expense of its bearing on its subject matter. […] I mention it here only as an intermediate stage in a wider intellectual disorder: namely the menace to all cultural values including those of science, by an acceptance of a conception of man derived from a Laplacian ideal of knowledge and by the conduct of human affairs in the light of such a conception.

There are two threats Polanyi envisions arising from such a notion. One would be a systematic, sweeping cultural rejection of science as a perversion of truth. Polanyi wrote this in the 50s; today these currents are becoming perhaps more relevant. The root cause of the modern rejections of science is the corruption of science itself by the errant (and dominant) Laplacian error. The second threat is the peril to science from the very acceptance of a scientific outlook based on the Laplacian fallacy being used to guide human affairs.

I’d planned to get further on this today … but it’s after ten and I have to turn the pedals some more. I’ll get back to this.

On Science and Method

The Copernican and the Ptolemaic views of the solar system lay in dispute for the roughly 150 years between the publication of Copernicus’ De Revolutionibus and Newton’s Principia. In the period of time between these events, with the possible admission of Kepler’s third law, there were no facts to distinguish these theories. In fact, glancing far to the future, the negative result of the Michelson-Morley experiment, read naively as demonstrating that the Earth was at rest, would have been a point for the Ptolemaic, not the Copernican, view. The scientific (heuristic) passions of the proponents of the Copernican view are what drove the outlook of astronomers to the point where, at the publishing of the Principia, the Copernican viewpoint was dominant. Attached to the front of Copernicus’ text was a foreword by Osiander expressing the point that this view was not necessarily “true” but instead was a “fruitful” way of approaching astronomy. This is a red herring. Ptolemaic astronomy was a fruitful source of inquiry for thousands of years. Astrology has been fruitful employment for 2500 years; Marxism was (and remains, alas) a fruitful mechanism for obtaining political power. Fruitful by itself is not sufficient. Theories are fruitful in that they are believed to be fruitful mechanisms for getting to the truth of reality.

In 1914 T.W. Richards was awarded the Nobel prize for extremely accurate measurements of atomic weights. Fifteen years later this result was completely scorned, for as those measurements made no allowance for isotopic ratios, those painstaking results were found to have no correspondence to any feature of nature. Accuracy qua accuracy is of no value. One misconception about science is that it is experiment that drives progress. Yet it is theory that is required, before experiment, to provide the basis for how experimental data is interpreted and in fact for what experimental data is deemed to have any value at all.

New visions and insights drive theoretical breakthroughs. Yet the history of science is littered with far more failures than successes. This is not limited to “lesser scientists.” Einstein’s vision, following Mach, imagined Relativity and, against Mach, solved Brownian motion. Yet that same vision rejected quantum randomness. Major theoretical breakthroughs in science require a major reworking of our view of nature, a replacing of an older view with a newer one. Proponents of the new, driven by their heuristic passionate belief in the correctness of their vision, must persuade on the basis of future intimations of the fruitfulness of their vision in the search for truth. In doing so, they also must invalidate the older vision. This process of invalidation is often rancorous and ugly. This “feature” is common and perhaps not easily escapable.

This then suggests some striking things about the scientific process. Theory precedes experiment and both validates and interprets it. Major theoretical breakthroughs require persuasion. The passion of scientific discovery must be transformed into the passion of persuasion, persuasion that the new vision of the truth has intimations that it might be fruitful for further deepening our understanding of nature. Yet a problem remains. Is there anything left? What differentiates the project of chasing the structure of matter at CERN and Fermilab from astrology? Why was it right for the Copernican view to supplant the Ptolemaic in the period between Copernicus and Newton? There are good answers to these questions but they will have to wait until a later essay.

The first parts of this essay draw heavily on Michael Polanyi’s Personal Knowledge, an epistemological inquiry looking toward a “post-critical” epistemic framework. It might also be noted that this book predated Thomas Kuhn’s The Structure of Scientific Revolutions. Critical realism here is the idea that our physical theories accurately represent reality. This is in contrast to the Positivist view (which is not, as far as I can tell, the same as Logical Positivism). That view, espoused for example by Stephen Hawking, suggests that the question of whether the underlying reality matches the theory is irrelevant and that physics (or theory in general) is merely a mechanism for predicting experimental results.

Is Pi Real?

From a short dialog today in my combox, as an aside to our discussion of Nature’s lack of determinism and any consequences for discussions of free will.

So you think the universe is not continuous because irrational numbers are not real? Do you think that differentiability is a useful concept but doesn’t really apply to reality? Why then Wigner’s “unreasonable success of mathematics” if there is no underlying reality to those mathematical concepts (like pi).

I wasn’t clear. Pi does not exist in the real world. It’s not that we can’t measure pi exactly, but that it’s literally impossible for it to exist, exactly. How could you have a circle in the real world whose radius or circumference is an irrational number? You couldn’t. So pi, and math generally, is just an elegant approximation of reality.

This is worth a little elaboration. Continuity, mathematically speaking, is all “about” that dense uncountable set of irrational numbers. Differentiability likewise requires not just continuity but that the manifold in question be “smooth.” Pi, as was noted in a following reply, is not limited to the ratio of circumference and diameter but crops up in a myriad of places. My interlocutor JA offers that, just like that ratio, all these other appearances of pi are “idealizations” and don’t reflect any reality.
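
To make “a myriad of places” concrete (a quick sketch; note that none of these three computations mentions a circle):

```python
import math

# Basel problem: 1 + 1/4 + 1/9 + ... converges to pi^2 / 6.
basel = sum(1 / n**2 for n in range(1, 200_000))
print(math.sqrt(6 * basel))                    # ~3.14159...

# Wallis product: the product of 4n^2 / (4n^2 - 1) converges to pi / 2.
wallis = 1.0
for n in range(1, 200_000):
    wallis *= 4 * n * n / (4 * n * n - 1)
print(2 * wallis)                              # ~3.14159...

# Gregory/Leibniz series: 1 - 1/3 + 1/5 - ... converges to pi / 4.
leibniz = sum((-1) ** k / (2 * k + 1) for k in range(200_000))
print(4 * leibniz)                             # ~3.14159...
```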

When we make mathematical models of the Universe in Physics, the common way of approaching these models is to assume that our measurements are inexact and that many of these models are closer to what is “really” being measured than our inexact measurements. When pi appears in descriptions of electron orbits, we think that this value pi is “real” and that the measurements of electron energy levels, which depend on fundamental constants like pi and Planck’s constant and the electron mass, are approximate. Someday we expect that we will arrive at a theory in which Planck’s constant and the electron mass, like pi, fall out as consequences of a mathematical understanding, so that just like circumference/diameter all these numbers will be arrived at via fundamental relationships.

Or take the continuity/differentiability matter, which by the by depends, as noted above, on the irrational numbers as well. Early astronomers like Galileo and Kepler took very imprecise measurements and deduced some relationships to describe motion. Newton and a host of later mathematicians went to work with this, erecting an elaborate and very beautiful framework known today as the Hamiltonian and Lagrangian descriptions of classical mechanics. These equations can then be pressed into service many, many orders of magnitude past their original measurements without requiring modification, and allow, for example, cis-lunar docking of spacecraft. These descriptions as well drive our methods and intuitions in the quantum (very short distance or high energy) regimes and the relativistic ones. One suggestion as to why the mathematics of continuous differentiable manifolds is so important and successful at describing nature is that this description of nature (as continuous and differentiable) is accurate, that is, it reflects reality.
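
For the curious, the Hamiltonian form in its simplest one-particle instance (a sketch; conventions as in any mechanics text) is

$$ H(q, p) = \frac{p^2}{2m} + V(q), \qquad \dot{q} = \frac{\partial H}{\partial p}, \quad \dot{p} = -\frac{\partial H}{\partial q}, $$

and notice that differentiability is baked into the very statement: the equations are built from derivatives of $H$ and of the trajectory.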

Current Physics understands a number of fundamental particles to be “point-like,” that is to say that their best description, physically speaking, is as a “point.” A point in space is commonly thought to be an idealized mathematical concept; there is no “such thing” as a real “point.” Small dots or specks of dust are used as learning aids to illustrate for the imagination what something approaching a point might be. However quarks and electrons, for example (and setting aside String theory for now), are described in the theory which we use today that best describes nature, the Standard Model, as point-like objects. Our best description of these (real) things is as a point (and it might be added that protons, neutrons, and baseballs are not point-like in our best descriptions). My eldest daughter recoiled when she heard my description of an electron as “point-like.” The principal problem for her was that electrons could not be point-like and massive. Yet mass is just a property. Like spin and charge, mass is just a numerical value assigned to that point-like object which affects how it interacts with other objects.

That being said, which is more real, the inexact measurement values or the theoretical value which they approach? If the things you see with your eyes and other perceptive senses are things which you believe to be real, then I offer that these concepts, pi, continuity, and point-like electrons, represent our best understanding of what that reality “really” is. They are as real as the chair you sit upon, for they are fundamental pieces of our understanding of how that chair is best described. If the chair is real then there are only two possibilities. Either our current best description of that said chair (the Standard Model) reflects reality (in which case pi, irrational numbers, and so on are also real), or there exists a future theoretical model (consistent with our current measurements) which will replace it. If properties like continuity and constants (some irrational, like pi) arise naturally in that future (correct) theory, then … aren’t irrational numbers therefore real? How could it not be so?

 

Free Will and the Universe: Part 2 (the Theorem)

As I mentioned Friday, I’m going to begin a short discussion about this paper on some consequences of special relativity and quantum mechanics for our view of determinism and the Universe. The authors, John H. Conway and Simon Kochen, lay down three “axioms” (and a “paradox”) and from these statements establish consequences with wide ranging implications. All of the measurements in the following discussion regard the behavior of a spin 1 massive particle. Spin 1 massive particles can have three possible measured values of quantum mechanical spin, namely -1, 0, or 1. Part 1, in which the axioms (and the Kochen-Specker paradox) are discussed, can be found here.

In this installment of my discussion of this paper (which will have at least one and perhaps two more parts) I will examine the theorem at its heart. Blog neighbor Jim Anderson, noting my “homework assignment,” finds the third paragraph daunting. The statement of the (strong) Free Will theorem is:

The Free Will Theorem. The axioms SPIN, TWIN and MIN imply that the response of a spin 1 particle to a triple experiment is free—that is to say, is not a function of properties of that part of the universe that is earlier than this response with respect to any given inertial frame.

Conway and Kochen prove this theorem by contradiction; that is, they assume the theorem is not true and show that this leads to a problem, in this case a contradiction in the form of the Kochen-Specker paradox.

The basic form of the proof is to take two TWIN particles subjected to the SPIN measurement and to follow the consequences of assuming that these particles are “not free.” What is meant by free? This takes a particular meaning. If this measurement is free, it means that the result of this measurement is not the consequence of (a function of) anything which has occurred earlier in any reference frame.

So, the authors express this measurement in terms of a collection of parameters denoted as alpha. In brief, the method employed in the proof is to pare down those unconstrained parameter sets (axes or other prior settings), via group arithmetic and MIN (one of the axioms from yesterday), until the measurement can finally be expressed as a function which is recognizable as the same function which, by the Kochen-Specker paradox, cannot exist. Since the function cannot exist, the prior constraints on the particle’s measurement cannot exist either.

The paragraph quoted by Mr Anderson as less than transparent to the world’s most competent reader is there largely, I think, to bring these results to bear on a more recent proposal (called GRW and rGRW in the paper) which attempts to remove, by stochastic arguments, the “measurement/collapse” of quantum wave functions, which is, philosophically speaking, uhm, difficult. I have not read any of the rGRW papers or any discussions of them, so I will leave that for another time.

Mr Anderson (and his commenter) remark that this paper perhaps goes too far, offering

From what I can tell, it’s an attempt to demonstrate free will by noting that at least one property of elementary particles is nondeterministic. This still doesn’t prove the philosophical idea of free will, however. It appears only to impute it to an object, with a lot of anthropomorphizing to make it all work.

I don’t think that’s the case at all, however. The notions of free will which they think this offering lacks, “intentionality,” “responsibility,” and so on, are not being discussed here. In discussions of free will and compatibilism (see for example the wiki or the Stanford Encyclopedia entries), there is indeed a lot of debate over whether determinism and free will can co-exist. Yet the universe in which we live is not deterministic. So the compatibility problem shifts. It is not a question of whether free will and determinism can co-exist but of how free will arises in a fundamentally non-deterministic universe. The usage of the term “free will” for the theorem is to point out that the freedom of the elementary particle to choose its “101” (squared) spin statistics result is equivalent to and indistinguishable from the experimenter’s free will to determine the axes by which the measurement will be taken. No, the axis of measurement (and the particle’s choice of 101, 011, or 110) is not a moral choice, obviously. But glancing through the compatibilism articles cited above, little space seems granted to considering the consequences of a non-deterministic universe … or to running incompatibilism the other way, i.e., “free will is true, therefore determinism is not” … and since determinism is not true, might free will be a possibility?

The point is that much discussion within the philosophical community grounds itself on the question of whether or not determinism is true, i.e., whether the universe really is or really is not deterministic. Physics insists that there is an answer to that part of the question: the universe is not deterministic. So however you argue about free will, that part of the argument should be settled.

Examining Free Will and the Universe: Part 1 (Axioms)

As I mentioned Friday, I’m going to begin a short discussion about this paper on some consequences of special relativity and quantum mechanics for our view of determinism and the Universe. The authors, John H. Conway and Simon Kochen, lay down three “axioms” (and a “paradox”) and from these statements establish consequences with wide ranging implications. All of the measurements in the following discussion regard the behavior of a spin 1 massive particle. Spin 1 massive particles can have three possible measured values of quantum mechanical spin, namely -1, 0, or 1.

The first of these axioms is a consequence of spin statistics, known in this paper for reference as the SPIN axiom. If we take spin measurements along three mutually orthogonal axes and square each value, the only possible outcome consistent with quantum mechanics is that two of those squared spin values are 1 and one is 0 (or “101” in the paper, for brevity). This leads to a paradox, named the Kochen-Specker Paradox, which arises as follows.
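
One can check the quantum mechanical content of SPIN directly from the standard spin 1 matrices (a sketch, with ħ set to 1; numpy is doing the arithmetic, nothing more):

```python
import numpy as np

s = 1 / np.sqrt(2)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.array([[1, 0, 0], [0, 0, 0], [0, 0, -1]], dtype=complex)

squares = [S @ S for S in (Sx, Sy, Sz)]

# The squared components along orthogonal axes sum to s(s+1) = 2 times
# the identity ...
print(np.allclose(squares[0] + squares[1] + squares[2], 2 * np.eye(3)))

# ... and (peculiar to spin 1) the squares commute pairwise, so all three
# can be measured together in one "triple experiment."
print(all(np.allclose(A @ B, B @ A) for A in squares for B in squares))

# Each square has eigenvalues 0 and 1, so the three joint results must be
# some permutation of (1, 1, 0): the "101" of the SPIN axiom.
for S2 in squares:
    print(np.round(np.linalg.eigvalsh(S2), 10))
```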

Set aside, for a moment, the more troubling aspect (from a classical viewpoint) of quantum mechanics and imagine that the values of the possible spin measurements were known before the measurements were taken. Now examine the set of directions generated by 45 degree rotations of the original orthogonal axes about any and all possible axes, take a particular subset of 33 of these directions, and attempt to assign “1” and “0” values to these axis points spread about the sphere. If the measurement values were known ahead of time, then a value should be pre-assignable via some function to these nodal points, consistent with the “101” rule on every orthogonal triple. But it turns out that no such assignment is possible; it cannot be made consistently across all 33 directions, so no such function can exist. Yet quantum mechanics, which is extremely well established experimentally, requires the “101” outcome every time. The only conclusion is that the values of those measurements are not preassigned.

The next quantum mechanical consequence used is called the TWIN axiom by the authors. This is the basis of quantum entanglement. If we create two particles “twinned,” as by a particle anti-particle pairing, their squared orthogonal SPIN measurements will be the same if the measurements of the two particles are taken along the same axes.

Finally the last axiom (MIN) isolates a particular peculiarity of special relativity and brings it into the context of this discussion. In special relativity simultaneity is not the clear cut matter it was in a Newtonian system. An “event” in a relativistic setting is an occurrence, like the (idealized) snap of a finger, which occurs at a singular point in space and time for any given observer. In special relativity two events separated in space can be seen to occur in opposite orders in different inertial frames. That is, one observer moving past (an inertial frame means the observer is not accelerating) might observe event “A” to occur before “B” while another observer, moving in a different direction, might observe “B” to have occurred before “A.” The MIN axiom basically asserts that our two experimenters, measuring the SPIN of two entangled spin 1 particles, can independently and freely choose the axes along which they measure their particles.
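
The time-ordering peculiarity is one line of special relativity (a sketch, in standard notation): under a boost of speed $v$,

$$ \Delta t' = \gamma \left( \Delta t - \frac{v\,\Delta x}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, $$

so if two events are far enough apart in space that $|\Delta x| > c\,|\Delta t|$, there is an allowed $|v| < c$ for which $\Delta t'$ has the opposite sign of $\Delta t$: the two observers disagree about which event came first.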

Man, Society, and Science

Carl Olson of the Insight Scoop notes an article on a term which he predicts will be in our future: scientific authoritarianism. The cited article notes:

Scientific authoritarianism, as I am using it here, holds that political decisions should be compelled by the political preferences of scientists. It is a very strong form of the ‘linear model’ of science and decision-making that I discuss in my book, The Honest Broker. Hansen believes that the advice of experts, and specifically his advice alone, should compel certain political outcomes.

There are just a few matters to take into account here.

  • First off, it is my experience that there are two features found in many of the first rank scientists in our midst. The best and brightest scientists in various fields don’t have the slightest interest in giving advice to politicians, and in fact when they do offer political advice they offer very bad advice. I might add that theologians and religious leaders, for the most part, are also very horrible when they enter into the political world. There are some good reasons for this. Skills are involved in politics: the ability to read people, to judge motivations, and to have an estimate of the possible are political skills. To become talented and to rise to the top of a scientific discipline requires three things: talent or genius, a love for inquiry, and a concentration on that field virtually to the exclusion of all else in life. Those people who are at the first rank usually have no talent for, or frankly no desire to spend any time, exercising authority. For them, their life is wholly given to the chase for the truths hidden by and in nature. To make an analogy with popular culture from cinema: while we might hope for our scientific authorities to rise from the Mozarts in our midst, we’re going to get the Salieris, who are the ones who will sully themselves with such matters.
  • Second, those scientists who are not blinded by the possibility of exercise of political authority, i.e., those who are honest with themselves, are aware of the vast gulf between what we know and what is out there to be known. To put it baldly, any scientist who assures you that we “know” the best policy is a liar or a fool. We “know” so very little about ourselves, our universe, and how it is put together.
  • Michael Polanyi in Personal Knowledge offers for us a glimpse at how much we deceive ourselves regarding the epistemological certainties in science. I cannot recommend this book enough (although I’ll ruefully admit I really do need to carve out the time to finish it).

A joke which is part of the culture of Physics and the pursuit of knowledge in that discipline.

A policeman encounters a drunk one night, who is on his hands and knees searching for something in the night beneath a street light. The policeman asks him, “What are you looking for?”
The drunk replies, “My keys.”
“Where did you drop them?” asks the bobby.
“Over there,” the drunk points down the block.
“Why are you looking here then?”
“I can’t see over there, because the light is here,” replies the drunk.

Our search for the mysteries of the universe and ourselves is a lot like that. We search under the light. Our keys … our understanding is to be found, so often, elsewhere down the road … in the dark.

So much of physics and our physical understanding of the universe assumes linearity. The mathematical behaviour, and our understanding, of linear PDEs versus non-linear ones is much like comparing oranges to not-oranges. We look at and search for understanding under the light of just a few lamps. Gradually we uncover and begin to use a few more. But we are just beginning. To pretend otherwise is foolishness. My advice would be to spurn those offering us scientific authority who are assured in their results and their knowledge, and who don’t first show evidence of humility and uncertainty and demonstrate they possess a firm grasp of the magnitude of our ignorance.
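
What linearity buys, in one line (a sketch): a linear operator satisfies

$$ \mathcal{L}[a u + b v] = a\,\mathcal{L}[u] + b\,\mathcal{L}[v], $$

so solutions superpose and can be assembled from simple pieces (Fourier modes, Green’s functions). A nonlinear equation, say Burgers’ equation $u_t + u\,u_x = \nu\,u_{xx}$, grants no such ladder; that is the lamp-light under which most of our methods stand.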

Contra The Germ Theory of Disease

In part my statement, “I don’t believe in the germ theory of disease,” is meant to be provocative, for my position is somewhat, err, nuanced. Consider the following two points:

  • 100 people are all exposed to a serious pathogen. Five get sick.
  • A number of you have a stressful situation at work, requiring serious overtime. For a week or so, along with the stress, you all work long hours and average four or fewer hours of sleep a night. Many come down with illnesses toward the end of and after this time. This is unsurprising.

The problem with the germ theory of disease is the notion that germs cause disease. Germs do not cause disease. Germs are virtually omnipresent. Clean rooms, however, show that when germs are not present disease does not occur. That is, germs are a necessary condition for disease … and because of their omnipresence, when the real cause of disease occurs, people ordinarily get sick.

The real cause of disease is the failure of your immune system to prevent illness. There are a lot of reasons for that. Mental state, mental and physical stress levels, and nutrition all enter into keeping, or failing to keep, your immune system working as it should. Recently I’ve been involved in a discussion in which notions of witchcraft were discounted as relevant in combating disease. However, if we put together the realization that there exist communities in which witchcraft is given credence with the fact that mental stress and state contribute to the effectiveness of the immune system, one must conclude that witchcraft as a cause of disease makes perfect sense.

Oddly enough we tend to ignore our immune system. Regarding physical and mental fitness, we have regimens and advice on how to increase, measure, and keep our mental and physical faculties in tip top condition. An athlete can measure his performance metrics precisely. Cyclists, for example, can measure and track VO2MAX, watts/kg, and peak wattage to estimate their progress and current fitness. In a few months, as another example, the NFL will migrate to Indianapolis to put draft prospects through a battery of tests of their fitness to succeed in the football arena. However doctors have no such metrics to measure the fitness of someone’s immune system. There are no “training regimens” to exercise your immune system and get it working at optimal levels. In part that is because of the popularity of the “germ theory” of disease. If that catchphrase were replaced by the “immune system breakdown” theory of disease (the one to which I subscribe), then one would expect research priorities to be realigned.

More Alien Math

Imagining aliens and alien ways of thought is a fascinating exercise. Recently we’ve had some discussions centering on whether our mathematical concepts and an alien’s mathematics might be a common ground for understanding. My contention is that this is less likely than we imagine. That is, an alien’s notions of mathematics depend in large part on their common ways of viewing and explaining the universe, which we may or may not share.

Consider first an alien race which dwells in a three dimensional plasma or magnetic fluid environment, say in a solar chromosphere. Nothing in their world has fixed size or boundary, so integers, if and when developed, are an abstract, unusual concept. Furthermore imagine that their basic communication is via the transformation of shapes of magnetic field flux. The fabled Gordian knot might be to them a blank slate ready for manipulation and a stage for communication. What sorts of maths might a people without integers, and with a fundamental intuitive understanding of graphs and knots, develop?

Consider what you know about math. Take away numbers. Take away paper and your familiar methods of abstraction. Abstraction instead has to take place within an algebra of graphs and geometry. Not a lot of common ground is left. There is, however, mathematics that remains.

My interlocutors in the previous essays have insisted that pi is a fundamental concept which would be easy and accessible, and that it or an analogue might be recovered readily by any alien community of mathematicians. But without shared methods of abstraction that becomes less likely, and if the integers themselves are a difficult topic of rarefied mathematical abstraction, how then to express pi in that context?

The point is that our expression of mathematics is a product of our physical perceptions and environment as much as or more than it is “a pure logical” exercise (if such an object exists). We and the objects we deal with have permanence and boundary; integers therefore become a simple concept. Word and text became common tools, so symbol and abstraction, algebra and symbolic representation, became natural.

If one is a creature for whom permanence and boundary are unusual or an abstraction, then the expressions of common mathematical objects will be different. If the common tools of abstraction were held by other means, then that symbolic substrate would be reflected in the mathematical technology.

The point is, very much of our shared mathematics is going to be a result of our shared perception of the universe. Pi arises first in Euclidean geometrical constructions. If Euclidean geometry is an unnatural beast, then pi is going to take longer to arise. Inasmuch as we share the same physical universe, with a quantum description of the four (?) basic forces and the families of leptons and quarks that we have discovered, we think mathematical descriptions of their behavior might coincide at that level. Celestial mechanics, we hope, might similarly provide a common ground. And that is just a reinforcement of the idea that it is our shared perceptions of the universe which give us our shared concepts in mathematics … not the other way around.

That Little Thing Called Race

Apparently the left and progressives, as noted recently, find that race and its consequences are the most important historical axis/issue on which to judge American history. On Monday I had asked:

Is this what the left believes, that “race is the single most important and consequential issue in all of American history.” Really? Wow.

There are a number of arguments against this. Here is the first one. What is the most important issue, the most important factor to track, when viewing American history and indeed the larger international history?

Math. Specifically, the history and development of the body of Mathematical knowledge.

Consider first the following. Imagine for a moment American history without race. No civil war, no civil rights movement, and so on. Possibly without a civil war America would have been in a different place regarding the power of the central government, and perhaps in that light a weaker America might have reshaped the outcome of the brewing European conflicts.

But … picture instead a world history without technology, without the advances in power such as steam, oil, and electricity; without the transistor and the printed circuit; without automation and industrialization. Picture America in a world in which technology was still at the level of the Roman era. Wars would still be fought with spear, sword, and javelin. There would be no airplanes; instead galleys and sailing vessels would still plow the seas.


Math, Science, and Knowing

This paper by Eugene Wigner entitled “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” gets too little play in the faith/science discussions. He begins:

THERE IS A story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”

Perhaps a little note to preface this is appropriate. Wigner is adamantly not an uncredentialed crackpot, far from it. Of him, and a select few others, a science historian might write a paper on the “unreasonable effectiveness of Hungarian mathematicians” in 20th century physics and mathematics … and Mr Wigner would be a prime example.
   
Symmetry is a key principle in our modern physical understanding of nature, and it is often closely connected to beauty in artistic settings as well. Two of the key insights driving the usefulness of symmetry are the prevalence of gauge theories to explain physical phenomena and the “deep theorem” of Emmy Noether’s which in a fundamental way connects continuous symmetries with conserved quantities.

In the late 18th and early 19th centuries, the mathematicians Lagrange and Hamilton formalized and restructured Newton’s equations of motion. These methods recast the equations of motion for systems of particles and forces into elegant mathematical forms, structured so that all the modern geometrical methods and tools can be applied to them. Noether’s theorem applies to any generic problem which can be cast in the form Lagrange and Hamilton developed. Her theorem states that any continuous symmetry those descriptions of a system possess means that a related (conjugate) “current” is conserved. In layman’s terms this means, for example, that because the equations of motion describing the motion of particles are the same where you’re reading this as where I wrote it, momentum is conserved. Because those equations of motion are the same today as next week, energy is conserved.
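
One can watch Ms Noether’s bookkeeping hold in a toy system (a sketch; the spring interaction below is my choice, picked only because it depends solely on the particles’ separation and is therefore translation invariant):

```python
# Two particles coupled by a force depending only on their separation.
# Translation invariance (the physics is the same here as down the block)
# should, per Noether, conserve total momentum; we check numerically.
def force(x1, x2, k=1.0):
    return k * (x2 - x1)   # a simple attractive spring

m1, m2 = 1.0, 3.0
x1, x2 = 0.0, 1.0
p1, p2 = 0.4, -0.1         # total momentum starts at 0.3
dt = 1e-3

for _ in range(100_000):
    f = force(x1, x2)
    p1 += dt * f
    p2 -= dt * f           # equal and opposite, as the symmetry demands
    x1 += dt * p1 / m1
    x2 += dt * p2 / m2

print(p1 + p2)             # still 0.3, up to floating point
```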

Mr Wigner’s essential observation is that, starting from a number of relatively imprecise measurements, a great mathematical structure (Lagrangian and Hamiltonian mechanics) is built. Ms Noether’s theorem is but one elegant and precise result that falls out of that mathematical structure. The quantity of results and their precision far exceed the precision and quality of the experimental data going into the formation of those theories. Or as Mr Wigner suggests:

A possible explanation of the physicist’s use of mathematics to formulate his laws of nature is that he is a somewhat irresponsible person. As a result, when he finds a connection between two quantities which resembles a connection well-known from mathematics, he will jump at the conclusion that the connection is that discussed in mathematics simply because he does not know of any other similar connection. It is not the intention of the present discussion to refute the charge that the physicist is a somewhat irresponsible person. Perhaps he is. However, it is important to point out that the mathematical formulation of the physicist’s often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena. This shows that the mathematical language has more to commend it than being the only language which we can speak; it shows that it is, in a very real sense, the correct language.