Thursday, 27 November 2014

Reichenbach on Non-Euclidean Geometry and Space

As far as I understand it, Reichenbach's main argument in this paper is that in order to get anywhere with geometry we need to impose a methodological rule stipulating that there are no universal forces. Only when such a rule is established can we attempt to determine a geometrical framework that is not just an arbitrarily chosen system, or one tailored to fit a preferred geometry by the compensation of universal forces. Of course, the stipulation of the rule itself makes geometric determination conventional from the outset.

Using his "hump" diagram (also explained in the slides), we can better understand the problem of universal force in a geometry. Imagine that people inhabit the curved surface G: Reichenbach supposes that they would be able to determine the curved shape by geometrical measurements; "noting the differences between their measurements and two-dimensional Euclidean geometry." In other words, people on G could determine their curved geometry from Euclidean by means of comparison. What is important is that they would measure the distance from A-B and B-C as identical/equal. However, if the measuring rods cast shadows onto the flat E plane below, A-B and B-C should be unequal. Now suppose that on E "a mysterious force varies the length of all measuring rods moved about in that plane, so that they are always equal in length to the corresponding shadows" projected from G. Consequently, the people on E would not be able to perceive the change, and they would obtain an equal measurement (a curved geometry) that is respective of the G measurement. 

So, Reichenbach's question is: how can the force be detected if the nature of the geometry may not be used as an indicator? 
If the force is heat, for instance, E would be distinguishable from G, since heat affects different materials differently. Heat is thus a "differential force," and is not especially problematic for the determination of a geometry.
Now suppose instead that the force is universal: it affects everything in the same way and cannot be insulated against. In this case, E might be indistinguishable from G - so long as we can find a physical description of objects to fit a given geometrical framework (such as positing a universal force within a curved, "humped" geometry), any deficiency in our account can be compensated for.
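A toy calculation (my numbers, not Reichenbach's) makes the contrast vivid: a differential force like heat betrays itself because different materials respond differently, whereas a universal force, by definition, leaves no discrepancy to observe.

```python
# Linear thermal expansion: L = L0 * (1 + alpha * dT).
# Heat is a *differential* force: the coefficient alpha depends on the material,
# so two rods that agreed before heating will disagree afterwards, detectably.
alpha = {"copper": 17e-6, "invar": 1.2e-6}   # approximate coefficients, per kelvin
L0, dT = 1.0, 100.0                          # two 1 m rods, each heated by 100 K

for material, a in alpha.items():
    print(material, L0 * (1 + a * dT))       # 1.0017 m vs 1.00012 m

# A universal force would rescale both rods (and every other instrument)
# identically, leaving no comparison by which to detect it.
```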

Hence for Reichenbach we must stipulate a rule that sets all universal forces to zero; otherwise any geometrical system could be made to fit by adjusting the physical account, and vice versa. This makes geometric determination a matter of convention: having freely stipulated universal forces away, we cannot be right or wrong. Furthermore, the definitions of physical objects that we use in a particular framework will need to be "coordinative": we say that "this concept" is coordinated to "this particular thing," and that concepts are "interconnected by testable relations" (so our words coordinate with real things in the world rather than with other words or concepts).

At this point he introduces the idea of relativity: "the word 'relativity' is intended to express the fact that the results of the measurements depend upon the choice of the coordinative definitions." So we can define a solid body as a physical body that can be pointed out, and a rigid body as one that is unaffected by differential forces (universal forces being set aside). Our physical definitions are thus stipulative, subject to choice; in short, they are themselves conventions. Using the new definition of a rigid body, we can stop worrying about the indeterminacy of a geometry due to universal forces, since by our coordinative definition those forces do not affect the rigid bodies in our framework. I therefore take him to conclude that universal forces are stipulated away in virtue of the fact that, according to our coordinative definition, rigid bodies in a geometry will be unaffected by them.

He spends the 5th part of the chapter explaining in more detail what it means to be a rigid body, but I will skip over most of his arguments (details from page 20 of his paper onwards). To summarize, he suggests that certain restrictions must be placed on the definition of a rigid body, such as "the demand that the obtained metric retain certain older physical results, especially those of the 'physics of daily life,'" or that any definition of a rigid body will depend on a prior definition of a "closed system." He concludes, however, that there is no perfectly strict way of restricting the definition: "this strictness is not possible." Thus "with a different definition physical laws would generally change" ... "the laws are true relative to the definition of congruence by means of rigid bodies."

In the 6th part he describes ways in which universal forces might be detectable, yet at the same time indistinguishable from geometry. Below is his suggested hypothetical method for this (sorry for the potato quality of the picture; I took it with my phone from the chapter itself due to the lack of images on Google...).
The line E-D would expand when heated and overtake the point A. However, similar models for the detectability of universal forces could be constructed in other geometries, so there is no single method that could demonstrate the nature of our own geometry to us beyond all doubt. Furthermore, universal forces would make the expansion of the line E-D undetectable by "destroying" the "coincidences": "all objects are assumed to be deformed in such a way that the spatial relations of adjacent bodies remain unchanged" (more on this on p. 25). The basic point is that we would not be able to tell whether or not a force was universal in any given geometry. Conclusion: although universal forces would be indistinguishable within any given geometry, they should not be regarded with suspicion, since they are nonetheless detectable in principle. Having postulated their existence, we must stipulate them to zero before beginning any determination of a geometrical framework.

My thoughts:
1. I am inclined to agree with some of the comments raised by Norton that were mentioned in the lecture: in trying to correct for certain deficiencies in an account of geometry, we often find correlations and agreement on a particular one, e.g. curved, variable space in our case. So although things seem to be a matter of convention, they are only so in a narrow sense; the selection of a geometry is not arbitrary but the result of well-thought-out and coherent reasoning.
2. To add to the first point, the methods of measurement that give us such coherent reasons for selecting a geometry are varied and large in number, which to my mind strengthens the justification of a geometrical determination.
3. I also agree with Norton that the introduction of a concept of universal forces is enough to render any system or quantity conventional; thus the introduction of such a concept here does nothing of particular significance, nor does it give us an interesting difference between fact and convention, to the extent that we might question or revise our methods of determining a geometry. I'm trying to think of a good example of a system of measurement that could be rendered conventional by the introduction of universal forces... any ideas?

Monday, 24 November 2014

On the consistency of Euclidean geometry

In class today I pointed out that non-Euclidean geometries are logically consistent if the Euclidean one is, and it sure seems to be.  Jon asked whether Hilbert hadn't in fact proven the consistency of Euclidean geometry.  Not having the facts loaded up in my head, I gave a hand-waving argument that he did not:  consistency proofs are always relative to some background theory that is assumed consistent, so at most Hilbert could have given a relative consistency proof, like those of non-Euclidean geometry relative to Euclidean, perhaps taking set theory as the background.  Here is a more concrete answer from the Stanford Encyclopedia:

"For the axioms of geometry, [Hilbert proved consistency] by providing an interpretation of the system in the real plane, and thus, the consistency of geometry is reduced to the consistency of analysis. The foundation of analysis, of course, itself requires an axiomatisation and a consistency proof. Hilbert provided such an axiomatisation in (1900b), but it became clear very quickly that the consistency of analysis faced significant difficulties, in particular because the favoured way of providing a foundation for analysis in Dedekind's work relied on dubious assumptions akin to to those that lead to the paradoxes of set theory and Russell's Paradox in Frege's foundation of arithmetic."  (http://plato.stanford.edu/entries/hilbert-program)

So rather than giving a set-theoretic model, as I suggested he might have, Hilbert gave a model in analysis, but the point is the same.
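For concreteness, here is the usual shape of such an interpretation, in broad strokes (a standard sketch; Hilbert's own construction in the Grundlagen was more refined, using a particular subfield of the reals):

```latex
% Interpret the geometric primitives inside analysis:
\text{point} \;\mapsto\; (x, y) \in \mathbb{R}^{2}
\qquad
\text{line} \;\mapsto\; \{ (x, y) : ax + by + c = 0 \}, \quad (a, b) \neq (0, 0)

% Congruence of segments via the Euclidean metric:
AB \cong CD \;\iff\; d(A, B) = d(C, D), \qquad
d(P, Q) = \sqrt{(x_P - x_Q)^2 + (y_P - y_Q)^2}
```

Under this dictionary every geometric axiom becomes a theorem of analysis, so a contradiction derivable in the geometry would yield a contradiction in analysis; that is the sense in which the proof is relative.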

One might still take issue with my blanket statement, 'Consistency proofs are always relative.'  In fact, it is sometimes possible to give a purely syntactic consistency proof, as Hilbert later tried to do -- a proof, that is, that some particular system of symbolic expressions and transformation rules can never lead to an expression of the form 'P and not P'.  This is easily done, for example, if neither the axioms nor the rules of inference contain the expression 'not'!  But such a proof is not very meaningful unless the formal system of logic is complete, i.e., unless all logical consequences of a theory can be derived from it syntactically.  (This is usually cashed out in model-theoretic terms:  P is a logical consequence of T if P is true in all models where T is true.)  
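In symbols, the standard definitions I'm leaning on here:

```latex
% Semantic consequence:
T \models P \;\iff\; \text{every model of } T \text{ is a model of } P

% Completeness of the proof calculus (Goedel, for first-order logic):
T \vdash P \;\iff\; T \models P
```

Only given the second equivalence does "no derivation of P and not-P" guarantee that no contradiction follows semantically from the theory.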

But even given such a syntactic consistency proof in a complete logic, we might still say that the proof is relative, for we have then reduced the question of the consistency of our theory to that of an informal theory of formalisms, i.e., of symbols and transformations of symbol strings.  As Hilbert thought, such a theory might be so simple and clear as to escape any serious worry of inconsistency.  But this brings us full circle, for as I intended to suggest, Euclidean geometry is already beyond any serious worry of inconsistency.  It would seem that the most one can do is reduce it to other theories that seem even less questionable.

Friday, 21 November 2014

Putnam on time and physical geometry

Hello all.

Since it's Friday night and we don't yet have a blog post on Putnam, let me just ask you all to share your thoughts.

Putnam argues that the relativity of simultaneity in special relativity resolves one of the major problems of time.  It shows, he argues, that the future is just as real as the past and present.  Here's a short, rough version:  There are no privileged observers.  So if being real is determined by some spacetime relation, it must be a relation not to me but to "an observer".  Now suppose that relation is: being in the present or past of the observer.  There are observers in my present and past, so they are real.  Events in their present and past are also real, because such events are in the present or past of "an observer".  But some of those events are in my future!  So events in my future are real.  And by chains of observers, all events in my future are real.
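To put a number on the crucial simultaneity step, here is a tiny sketch using the standard special-relativistic relation (the velocity and distance are my own illustrative choices): an observer moving at velocity v, passing me here and now, judges an event at distance x to be simultaneous with our meeting if it occurs at my coordinate time t = vx/c².

```python
# How far into my future can "an observer's present" reach?  Illustrative numbers only.
c = 299_792_458.0          # speed of light, m/s
v = 0.5 * c                # the passing observer's velocity relative to me
x = 4.0e16                 # an event roughly 4.2 light-years away, in metres

t = v * x / c**2           # my coordinate time of the event the observer calls "now"
print(t / 86_400, "days")  # ~772 days into my future, yet present for the observer
```

So an event more than two years into my future lies in that observer's present; chaining through further observers extends this to any future event.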

In the lecture I briefly reviewed Sklar's and Stein's alternative views.  In short, Sklar suggests that 'real' might be relative and intransitive, while Stein separates 'real' from 'present' -- in fact, he proves that if 'real at' *is* transitive, and something in the past light cone of an event is real at that event, then all events in that past light cone are real at that event.  So Stein's (noncommittal) suggestion is that all and only those events in the past light cone of an event are real at that event, and the simultaneity hyperplane doesn't enter into it.
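For reference, the light-cone condition in Stein's suggestion is easy to state operationally; here is a minimal 1+1-dimensional sketch of my own (not Stein's formalism):

```python
def in_past_light_cone(event, here_now, c=1.0):
    """True iff `event` lies in or on the past light cone of `here_now`.
    Events are (t, x) pairs; units chosen so that c = 1 by default."""
    t, x = event
    t0, x0 = here_now
    return t <= t0 and c * (t0 - t) >= abs(x - x0)

# On Stein's (noncommittal) suggestion, these are exactly the events "real at" (1, 0):
print(in_past_light_cone((0.0, 0.5), (1.0, 0.0)))   # True: a signal could have reached us
print(in_past_light_cone((0.0, 2.0), (1.0, 0.0)))   # False: spacelike separated
```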

So what do you think?  Does Putnam's argument hold up?  Where exactly does it fail?



Wednesday, 5 November 2014

Werndl’s new implication of chaos for unpredictability


In this paper, `What Are the New Implications of Chaos for Unpredictability?', Werndl proposes a new definition of chaos, defends it, and uses it to show that chaotic systems (now precisely defined) have a particular unpredictability property---the new implication of chaos for unpredictability.

Werndl spends the majority of the paper defending her proposed definition: a system is chaotic just when it exhibits mixing (aka strong mixing), which is precisely defined in section 3.2, equation (4). It’s easiest to see what mixing means once you’ve made the move of equating a physical measure defined on a phase space with the probability that an arbitrary system will be in a given region of phase space (given an appropriate normalisation of the measure). Werndl makes this move without arguing for it, except to say that `it is quite natural under certain assumptions’. Nevertheless, with this in hand we can gloss what it means for a system to exhibit mixing: given two regions of the phase space A and B, let TA be the region resulting from letting the points in A evolve under the dynamics of the system for a very long time; then the probability that an arbitrary system will be in both TA and B is equal to the product of the probabilities that a system is in A and in B respectively. Another picture of this is that after large amounts of time-evolution, any bundle of initial conditions becomes spread out evenly over the whole phase space (although becoming highly filamentous in order to preserve its initial measure).
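For reference, here is the standard definition of strong mixing for a measure-preserving system, which I take to be what her equation (4) says (I'm quoting the textbook form, not her exact notation):

```latex
\lim_{t \to \infty} \mu\,(T_{t}A \cap B) \;=\; \mu(A)\,\mu(B)
\qquad \text{for all measurable sets } A, B .
```

Reading μ as a probability, the left-hand side is the chance of having started in A and ended up in B after a long time t; mixing says this factorises, i.e. the two events become independent in the limit.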

Remembering, from elementary probability theory, that the probability of two independent events occurring is the product of the probabilities of each occurring in isolation, one is quickly led to Werndl’s new implication for unpredictability. Suppose you knew that a system started out in a particular region of the phase space, and that it has evolved for a large amount of time. Suppose, also, that you want to know the likelihood of its being in some other region after the time-evolution. When the system is mixing, knowing the initial conditions is no help whatsoever for finding where the system will end up. Since the initial phase-space bundle has been spread all around the phase space, the probability of its ending up in some region is just the same as the probability of an arbitrary system being in that region. This (along with her defence of mixing as a definition of chaos) justifies her claim that: `a general new implication of chaos to unpredictability is that for predicting any event … all sufficiently past events are approximately probabilistically irrelevant.’
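A quick numerical sketch of this irrelevance claim, using Arnold's cat map on the unit torus, a textbook example of a strongly mixing, measure-preserving system (the example and numbers are mine, not Werndl's):

```python
# "Sufficiently past events are approximately probabilistically irrelevant":
# condition on a past event (starting in A), evolve for a long time, and see
# that only the stationary measure of the target region B matters.
import numpy as np

rng = np.random.default_rng(0)
n_pts = 1_000_000

# Condition on the "past event": every trajectory starts in the small square A.
x = rng.uniform(0.0, 0.1, n_pts)             # mu(A) = 0.1 * 0.1 = 0.01
y = rng.uniform(0.0, 0.1, n_pts)

# Evolve for a long time under the cat map (x, y) -> (x + y, x + 2y) mod 1.
for _ in range(25):
    x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0

# Probability of now being in B = [0.3, 0.5) x [0.2, 0.6), with mu(B) = 0.08:
in_B = np.mean((0.3 <= x) & (x < 0.5) & (0.2 <= y) & (y < 0.6))
print(in_B)   # ~0.08: conditioning on "started in A" adds nothing beyond mu(B)
```

Whatever small region A you condition on, the answer settles to μ(B): the memory of the initial region has washed out, which is exactly the claimed irrelevance.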

Finally, a note on Werndl’s project of defining chaos:

In this week’s seminar we argued about what worth there is in defining chaos, or, given only a vague definition of chaos, what worth there is in the concept at all. Werndl defends the adequacy of her definition using the criteria due to Brin, Stuck, and Devaney that `(i) [a proposed definition] captures the main pretheoretic intuitions about chaos, and (ii) it is extensionally correct’ [her emphasis]. I think she successfully shows that her definition meets both of these criteria, but this doesn’t provide support for redefining chaos in the first place. I say redefining because, I submit, chaos is already defined (albeit vaguely) by its use.

Werndl’s motivation for this redefinition seems to be to answer the question `What are the new implications of chaos for unpredictability?’, which, in the abstract, she suggests ought already to have been answerable given the commonplace views of chaos theorists (of all flavours). To my mind this isn’t sufficient, as it would be perfectly permissible for the new implication to be exemplified by only some chaotic systems (under a more permissive definition), or by some chaotic systems and some systems that aren’t chaotic (under a more restrictive definition).

The new implication could equally well (if not better) be thought of as a `new implication of strongly mixing, measure-preserving dynamical systems for unpredictability’ rather than as an implication of chaos. It seems to me that the real motivation for the redefinition is political, in that chaos is a sexier name with which to describe one’s field---after all, Jurassic Park didn’t have a theorist of strongly mixing, measure-preserving dynamical systems, but rather a chaos theorist.