Thursday, 30 October 2014

Defining Chaos

Hi guys,

Sorry for the late post. Batterman’s paper was quite dense for me, since it was full of concepts I hadn’t encountered before, so it took me a while to digest them.

In this paper Batterman criticises Stone and Ford separately. In this post I am going to mention some points which came to my mind while reading his critique of Stone (I may post another note on Batterman’s critique of Ford in the next couple of days) and comparing it with the lecture slides.

1- Stone introduces three criteria for a deterministic system: (a) there exists an algorithm which maps a state of the system at any given time to a state at any other time (not probabilistically), (b) a given state is always followed by the same history of state transitions, (c) any state of the system can be described with arbitrarily small nonzero error.

Batterman says condition (c), which is known as well-posedness, is equivalent to the notion of continuous dependence, which says: a solution depends continuously on the initial data if convergence of the initial data entails convergence of the solutions. It is not transparent to me that these two are equivalent. To make this clear, we first need to find out what Stone means by the vague term “describe” in condition (c). We may want to say that by this term he means 'a description of the states of the system which uses the algorithm and some other states'. So we may reconstruct condition (c) as: 'any state of the system can be described, with arbitrarily small nonzero error, using the algorithm and some other state of the system'.

Let’s compare this reconstruction with condition (d), which Stone introduces to define the notion of predictability: "any state of the system can be generated from the algorithm with arbitrarily small nonzero error from any other state of the system”. These two sentences seem very similar except for their second quantifiers.

However, it seems that condition (d) and the reconstruction of condition (c) are far from the precise mathematical definitions of continuous dependence and non-sensitive dependence, which, in the paper, are respectively taken to be equivalent to (c) and (d). The mathematical definition of continuous dependence is in the slides; the mathematical definition of non-sensitive dependence is the following:

∀δ > 0, ∃ε > 0, ∃x0, ∀x1, ∀t:
dist(x0, x1) < ε → dist(φ(x0, t), φ(x1, t)) < δ

One can see that condition (d) is far from this definition, at least in terms of its details: condition (d) makes no mention of the time variable. I don’t know why Batterman doesn’t criticise Stone for this.
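To see why the time variable matters, here is a small numerical sketch (my own illustration, not from Stone or Batterman): in the chaotic logistic map, two nearby initial states can stay close for a short time and yet be driven far apart later, so how accurately one state can be generated from another depends essentially on t.

```python
# Illustrative sketch (not from the paper): in the logistic map at r = 4,
# whether two nearby initial states remain close depends on the time t,
# which is why a predictability condition that omits t is underspecified.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def separation(x0, x1, t):
    """Distance between the two orbits after t iterations."""
    a, b = x0, x1
    for _ in range(t):
        a, b = logistic(a), logistic(b)
    return abs(a - b)

x0, x1 = 0.3, 0.3 + 1e-9    # initial error of 10^-9
print(separation(x0, x1, 5))    # after a few steps: error still tiny
print(separation(x0, x1, 50))   # after many steps: error amplified to macroscopic size
```

The initial conditions and the parameter r = 4 here are arbitrary choices; any chaotic parameter value would show the same qualitative behaviour.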

2- Stone claims that determinism is a necessary condition for predictability. Batterman provides an example of an indeterministic system which is predictable, to falsify Stone’s claim: "one can make decent predictions about quantum systems”. My question is: can one make decent non-probabilistic predictions about quantum systems? If not, then I doubt this counts as a counterexample to Stone’s claim, since I think what Stone means by prediction is non-probabilistic prediction.

3- Stone says "systems which do not admit closed form solutions have this property that the error present in the specification of the initial state can be amplified exponentially". The problem is that Stone doesn’t provide any argument for this claim, and I couldn’t find any criticism of it in Batterman's paper.

4- I pondered for a while on the difference between discontinuous dependence and sensitive dependence, and finally reached the following conclusion. With the (ε, δ)-definition of the mathematical notion of limit in mind, one can say that the difference between discontinuous and sensitive dependence is analogous to the difference between (1) a situation in which a discontinuous function f(x) has no limit as x approaches some number c, and (2) a situation in which the limit of a function g(x) goes to infinity as x approaches c.
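The two limit behaviours in this analogy can be illustrated numerically (the functions f and g below are my own hypothetical examples): f(x) = sin(1/x) has no limit as x → 0, while g(x) = 1/|x| goes to infinity.

```python
import math

# Hypothetical illustrations of the two situations in the analogy:
# f has no limit as x -> 0 (its values keep oscillating in [-1, 1]),
# while g diverges to infinity as x -> 0.

def f(x):
    return math.sin(1.0 / x)   # no limit at 0: keeps oscillating

def g(x):
    return 1.0 / abs(x)        # limit is +infinity as x -> 0

xs = [10.0 ** (-k) for k in range(1, 8)]   # points approaching c = 0
print([round(f(x), 3) for x in xs])  # oscillates, does not settle down
print([g(x) for x in xs])            # grows without bound
```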

5- In the slides it was said that exponentially unstable systems may have continuous dependence, but Batterman says they may have non-sensitive dependence. I was wondering which of these claims is true, and how one can prove either of them using the mathematical definition of exponential instability: dist(φ(x0, t), φ(x1, t)) ≥ dist(x0, x1) · e^(λt)
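One way to see that exponential instability is compatible with continuous dependence is the textbook flow φ(x0, t) = x0 · e^(λt) of dx/dt = λx (a sketch of my own, not from the paper): trajectories separate exponentially in t, yet at any fixed t the flow is continuous in the initial condition.

```python
import math

# Sketch: the flow of dx/dt = lambda * x is phi(x0, t) = x0 * exp(lambda * t).
# The separation of two trajectories is |x0 - x1| * exp(lambda * t), which
# grows exponentially in t; but at any *fixed* t the flow is continuous
# (indeed Lipschitz) in x0, so exponential instability does not by itself
# rule out continuous dependence on initial data.

LAM = 1.0   # an arbitrary positive Lyapunov exponent

def phi(x0, t):
    return x0 * math.exp(LAM * t)

t = 10.0
for eps in [1e-3, 1e-6, 1e-9]:
    gap = abs(phi(eps, t) - phi(0.0, t))
    print(eps, gap)   # gap = eps * e^(lambda * t): shrinks to 0 as eps -> 0
```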

P.S.: I realised that in the paper Batterman does not assert that Stone's condition (d) is equivalent to non-sensitive dependence. So I was wrong in saying that these two are taken to be equivalent throughout the paper (however, he does explicitly claim that condition (c) and continuous dependence are equivalent). He also doesn't say anything about the relation between this condition and continuous dependence. He only claims that requirement (d) can be satisfied by exponentially unstable systems. So I was also wrong in claiming that Batterman says exponentially unstable systems may have non-sensitive dependence.

5 comments:

  1. Hey Somayeh :)

    thanks for the post. I'm going to say outright that I don't understand any of this...

    Now that I've cleared the air (!) what I did seem to notice when I was reading the papers was the problem you mention in number 2. I think the claim that we can make non-probabilistic predictions about quantum systems is quite a strong one, and one that therefore requires significant backing, or a description of what such prediction would entail, even if the claim is generally regarded as correct.

    That's it!

    Emma

  2. I just updated my post by adding a postscript as well as making some minor changes in numbers (4) and (5), one of which contained a typo!

  3. RE deterministic predictions on quantum systems

    Indeed one can make deterministic predictions on quantum systems in certain situations, but in general predictions will be probabilistic.

    To give an example: Suppose you measure the momentum of a free particle at some time and find it to have momentum p; then you wait for some time without making any measurements; then you measure the particle's energy. The resulting energy will be E=p^2/(2m) with certainty.

    A cartoon explanation of why this is:
    The first measurement of momentum forces the particle's state to be one with an exact value of momentum corresponding to the measured value, p (this is the `collapse of the wavefunction') as opposed to a state with multiple values of momentum (ie a superposition of states).
    Momentum is a conserved quantity of free-particle systems, so the state stays in a state with fixed momentum for any length of time, as long as you don't make further measurements. (A full explanation of why momentum is conserved would be quite formal, but basically the conserved quantities in quantum mechanics are the same as the conserved quantities of an analogous system in classical mechanics [with the meaning of the terms `quantities' and `conserved' adjusted to be appropriate for quantum systems]; conservation of momentum is a well-known fact in classical mechanics, and an analogue of it holds in quantum mechanics.)
    Energy is a `compatible observable' with momentum, which means that the system's having a definite value for momentum means it also has a definite value for energy (roughly, this is because the energy of a free particle depends only on its momentum).
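A numerical sketch of this point (my own illustration, with hbar = m = 1 on an assumed periodic grid, not part of the original comment): a plane wave with momentum p is an eigenstate of the free Hamiltonian, so an energy measurement yields p²/(2m) with zero variance.

```python
import numpy as np

# Sketch (my own illustration, hbar = m = 1): on a periodic grid, the plane
# wave exp(i*p*x) is an eigenstate of the free Hamiltonian H = p_hat^2 / 2,
# so measuring energy after a momentum measurement gives E = p^2 / 2 with
# certainty (zero variance).

N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # momenta representable on the grid

p = 3.0                                # momentum found by the first measurement
psi = np.exp(1j * p * x) / np.sqrt(N)  # normalised momentum eigenstate

# Apply H = p_hat^2 / 2 in Fourier space, then transform back.
H_psi = np.fft.ifft((k ** 2 / 2.0) * np.fft.fft(psi))

E = np.vdot(psi, H_psi).real                # expectation value <psi|H|psi>
var = np.vdot(H_psi, H_psi).real - E ** 2   # variance of the energy

print(E, p ** 2 / 2.0)   # both 4.5: the definite energy p^2 / (2m)
print(var)               # ~0: no spread, the prediction is certain
```

The grid size and the value p = 3 are arbitrary choices; p just has to be one of the momenta representable on the periodic grid.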

  4. A thought on Stone's conditions (c) and (d).

    They cannot be the same, for (c) is one of the conditions for determinism, while (d) is the condition of predictability, and those two notions are independent of each other: (c) can hold while (d) fails. Indeed, I think (c) is quite different from (d) and your definition, in that it is just a requirement that a single state be arbitrarily accurately specified. That does not involve a second state, as the use of the algorithm would, and as is the case in the notions of dependence on initial conditions. It is in fact about the possibility of stating these very initial conditions precisely. The reason for this condition is the two limitations we have in describing initial states that Batterman describes.

    However, it is indeed a bit confusing that he takes (c) to be analogous to well-posedness. For the latter does involve two states, an initial one and the one a solution describes. In fact, this sounds much more like (d).

  5. Somayeh, thanks for taking on a difficult paper. Some remarks:

    1. I think you’re quite right that (c) is far from a mathematical definition of continuous dependence. In fact, I think Batterman is mistaken to suggest that Stone intends it that way. It looks to me like (c) is just an attempt to specify that the physical variables have exact values, as Mitch suggests in his comment. Stone introduces (c) as follows:

    “[W]hile we lack perfect measuring instruments, and we lack a way of finitely stating every real number, we can nonetheless say that in a deterministic system, our representation of the state space of the system can be made arbitrarily accurate. For deterministic systems, the accuracy of a state description is infinitely refinable, even though any given state description will contain some error. By contrast, a nondeterministic system will have an upper bound of accuracy. Thus in quantum mechanical systems, there is a limit beyond which we cannot extend the accuracy of the description of, say, both the position and momentum of an electron.”

    Nice job, by the way, expressing the negation of SDIC.

    2. You’re right that Stone means non-probabilistic prediction, but I think Batterman concedes that point.

    3. "systems which do not admit closed form solutions have this property that the error present in the specification of the initial state can be amplified exponentially”— I don’t think Stone actually says that. He does say that chaotic systems have that property, and he quotes an authority on this. Exponential divergence has often been used as a criterion of chaos, e.g., by Batterman, but Werndl argues well against that criterion.

    4. There may be something right in your analogy, but it is also potentially misleading, since a function that approaches infinity at a finite value c is considered discontinuous at c. I think a better analogy would be this: Consider a function f(x) that approaches a finite value a as x approaches c from the left but approaches a different finite value b as x approaches c from the right. This f(x) is literally discontinuous, and its graph looks like a broken curve. Now suppose we round off the tips of the curve at the break and reconnect the curve with a very steep, nearly vertical line. This is somewhat analogous to sensitive dependence. The value of the function depends very sharply on x, but it is not literally discontinuous.
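A numerical version of this analogy might look like the following (the functions `step` and `steep`, and the ramp width, are my own hypothetical choices): `step` jumps from a to b at c and is literally discontinuous, while `steep` connects a to b with a very steep but continuous ramp.

```python
# Sketch of the "rounded tips" analogy: `step` is literally discontinuous at c,
# while `steep` changes very sharply (over a ramp of width 1e-6) yet continuously.

A, B, C = 0.0, 1.0, 0.5
WIDTH = 1e-6   # width of the near-vertical ramp replacing the break

def step(x):
    return A if x < C else B

def steep(x):
    if x < C - WIDTH / 2:
        return A
    if x > C + WIDTH / 2:
        return B
    return A + (B - A) * (x - (C - WIDTH / 2)) / WIDTH   # linear ramp

for h in [1e-7, 1e-8, 1e-9]:
    print(step(C + h) - step(C - h), steep(C + h) - steep(C - h))
# step's gap stays 1.0 however small h gets; steep's gap (= 2h / WIDTH)
# shrinks to 0, which is exactly what continuity requires.
```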

    BUT, that said, remember two other important features of sensitive dependence: (1) the instability (orbits that separate by delta) occurs everywhere, i.e., in every small neighbourhood of phase space, and (2) the separation does not have to occur by any specific time, just by *some* time, i.e., eventually.

    5. I think you’ve cleared this up correctly in your P.S.

    Emma and Jon, thanks for your comments on prediction in QM. We can come back to the details of QM next term, but I think it’s worth asking: Given that you can make *some* non-probabilistic predictions in QM, does that make them predictable in Stone’s sense? Does it mean they satisfy (d)?
