Sunday, 15 March 2015

The Gibbs Approach

I will give an overview of the Gibbs approach as described by Frigg, and of the foundational issues that arise in it. I will end with some questions for discussion.
 
This approach differs quite fundamentally both from the Boltzmannian approach and from everyday physics in that it does not describe and analyse the single system of interest. Instead, SM now deals with an uncountably infinite collection of systems, an "ensemble". All these systems share the same Hamiltonian, but they are distributed over different states (I guess, all possible states in phase space).
This ensemble as a whole is now the object of study. Its state is described by combining the states of all systems (i.e. the point in phase space of each) into a function ρ over phase space. This function is treated as a probability density, reflecting the probability of finding a randomly chosen system from the ensemble in a state within a certain region of phase space.
This approach allows physical magnitudes observed in experiments to be treated as expectation values: such a magnitude is expressed by a function on phase space, and together with the density function its expected value can be calculated. This "phase average" is the formalism's prediction for the outcome of the experiment.
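To make the idea of a phase average concrete, here is a minimal numerical sketch. The one-dimensional "phase space", the Gaussian density and the observable f are toy choices of my own, not from Frigg's text; the point is only that the prediction is the integral of the observable weighted by the density.

```python
import numpy as np

# Toy one-dimensional "phase space", discretised on a grid
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

# A (Gaussian) probability density rho over this space, normalised so that
# the discrete approximation of the integral of rho equals 1
rho = np.exp(-x**2 / 2)
rho /= rho.sum() * dx

# An observable: a function f on phase space (here simply x squared)
f = x**2

# The phase average <f> = integral of f * rho over phase space —
# the formalism's prediction for the measured value of the magnitude
phase_average = (f * rho).sum() * dx
print(phase_average)   # ≈ 1 for this Gaussian density
```

Any other observable would be handled the same way: only the function f changes, the density ρ stays fixed by the ensemble.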
Now, a necessary condition for equilibrium is the stationarity of the distribution: if the macro-system is in equilibrium, the density function of the ensemble that describes that macro-state does not change over time. An important kind of stationary distribution is the one that maximises the Gibbs entropy S_G(ρ), given assumptions constraining the energy and the number of particles. Depending on those assumptions, three important distributions result: the micro-canonical, the canonical and the grand-canonical distribution.
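The maximisation claim can be illustrated numerically. In this sketch I use a discrete three-level toy system with energy levels and an inverse temperature chosen by me (not from Frigg's text): the canonical distribution p_i ∝ exp(-βE_i) maximises the Gibbs entropy among all distributions with the same normalisation and the same mean energy, so perturbing it in a direction that preserves both constraints can only lower the entropy.

```python
import numpy as np

E = np.array([0.0, 1.0, 2.0])      # toy energy levels (assumed values)
beta = 1.0                          # assumed inverse temperature

# Canonical distribution: p_i proportional to exp(-beta * E_i)
p = np.exp(-beta * E)
p /= p.sum()

def gibbs_entropy(q):
    """Discrete Gibbs entropy S_G = -sum q_i ln q_i (Boltzmann constant set to 1)."""
    return -np.sum(q * np.log(q))

# A perturbation direction that preserves both constraints:
# v sums to 0 (normalisation kept) and v . E = 0 (mean energy kept)
v = np.array([1.0, -2.0, 1.0])

S_canonical = gibbs_entropy(p)
S_perturbed = gibbs_entropy(p + 0.01 * v)

print(S_canonical > S_perturbed)    # True: the canonical p has higher entropy
```

The same recipe with different constraints (fixed energy exactly, or energy and particle number only on average) yields the micro-canonical and grand-canonical cases.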


Frigg discusses three foundational problems for this approach. The first is the analysis of single macro-states in terms of ensembles. How can a theory make true statements and successful predictions about something that it doesn't describe? The typical textbook solution, which combines ergodicity and time averages, fails, because the idea of infinite time averages is untenable. One solution, by Malament and Zabell, makes use only of ergodicity and drops the time averages. Not all relevant systems are ergodic, but perhaps they are ϵ-ergodic. Another approach restricts the theory to systems with a large number of degrees of freedom and restricts the functions to so-called sum functions. These two restrictions guarantee that a system behaves as if it were ergodic. Both of these approaches are problematic and unresolved, according to Frigg.
The second problem is the problem of interpreting probability that we know from quantum mechanics. The three options on the market are frequentism, time averages and an epistemic interpretation. All three are difficult and controversial (also a familiar situation from quantum mechanics).
The third problem is that the Gibbs approach does not work for non-equilibrium states. The formalism implies a constant Gibbs entropy (the fine-grained entropy is conserved under Hamiltonian time evolution, by Liouville's theorem), which conflicts with the increase of thermodynamic entropy. Furthermore, there cannot be a change from stationary distributions to non-stationary ones or vice versa. Frigg lists a number of approaches that try to address this problem.

A couple of questions to discuss are:
  • When describing a problem for frequentism, Frigg claims that it is problematic to see an ensemble as an urn from which a system can be drawn. I don't see why that is. It does not seem absurd to think of this abstractly as randomly choosing one of all the systems in the ensemble, just like taking a ball from an urn.
  • I wonder how different interpretations of probability could lead to different formalisms, as Frigg claims in (2008).
  • Also, I am not quite sure how exactly the approach solves the main problems of recurrence and reversibility that the Boltzmann approach faces. Does the formalism in terms of ensembles not have these problems?

1 comment:

  1. RE frequentism and balls-from-urns analogies:

    Frigg says (field guide):
    ``A common way of looking at ensembles is to think about them in analogy
    with urns, but rather than containing balls of different colours they contain
    systems in different micro-states ... this point of view naturally lends itself to a frequentist analysis of probabilities.''

    I am unclear as to why balls-from-urns style probability questions naturally lend themselves to frequentist analysis. Rather, I would have thought that they most readily lend themselves to a subjective probability analysis. Why does Frigg make this link?
