This ensemble as a whole is now the object of study. Its state is described by collecting the states of all systems (i.e. the point in phase space of each) into a function ρ over phase space. This function is treated as a probability density: integrated over a region of phase space, it gives the probability of finding a randomly chosen system from the ensemble in a state within that region.
This approach allows physical magnitudes observed in experiments to be treated as expectation values: such a magnitude is expressed as a function on phase space, and its expected value with respect to the density function, the “phase average”, is the formalism's prediction for the outcome of the experiment.
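As a minimal sketch in standard notation (not Frigg's own), with f the phase function representing the observed magnitude and Γ the phase space, the phase average is

$$
\langle f \rangle_\rho \;=\; \int_\Gamma f(x)\,\rho(x)\,\mathrm{d}x ,
\qquad \int_\Gamma \rho(x)\,\mathrm{d}x = 1 .
$$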
Now a necessary condition for equilibrium is the stationarity of the distribution: if the macro-system is in equilibrium, the density function of the ensemble describing that macro-state does not change over time. An important kind of stationary distribution is the one that maximises the Gibbs entropy, given some assumption constraining energy and the number of particles. Depending on those constraints, three important distributions result: the micro-canonical, the canonical and the grand-canonical distribution.
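Again in standard notation rather than Frigg's, the Gibbs entropy of a distribution ρ and the canonical distribution that maximises it at fixed mean energy are

$$
S_G[\rho] = -k_B \int_\Gamma \rho(x)\,\ln \rho(x)\,\mathrm{d}x ,
\qquad
\rho_{\text{can}}(x) = \frac{e^{-\beta H(x)}}{Z}, \quad
Z = \int_\Gamma e^{-\beta H(x)}\,\mathrm{d}x ,
$$

where H is the Hamiltonian and β is fixed by the mean-energy constraint. The micro-canonical distribution results when the energy itself is fixed exactly, the grand-canonical one when the particle number is also only constrained on average.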
This whole procedure is empirically very successful but conceptually puzzling. There are essentially three problems with it.
The first is the analysis of single macro-states in terms of ensembles. How can a theory make true statements and successful predictions about something that it does not describe? The typical textbook solution, which combines ergodicity and time averages, fails, because the idea of infinite time averages is untenable. One solution, by Malament and Zabell, makes use only of ergodicity and drops the time averages. Not all relevant systems are ergodic, but perhaps they are ε-ergodic. Another approach restricts the theory to systems with many degrees of freedom and the relevant functions to so-called sum functions; these two restrictions guarantee that a system behaves as if it were ergodic. Both of these approaches are problematic and unresolved, according to Frigg.
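For reference, the ergodic hypothesis behind the textbook solution equates infinite time averages with phase averages (a sketch in standard notation, not a quote from Frigg):

$$
\lim_{T\to\infty}\frac{1}{T}\int_0^T f\big(\phi_t(x_0)\big)\,\mathrm{d}t
\;=\; \int_\Gamma f(x)\,\rho(x)\,\mathrm{d}x
$$

for almost all initial states x₀, where φ_t is the Hamiltonian flow. ε-ergodicity weakens this: the system is ergodic on all of phase space except a set of measure at most ε.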
The second problem is the problem of interpreting probability that we know from quantum mechanics. The three options on the market are frequentism, time averages and an epistemic interpretation. All three are difficult and controversial (also a familiar insight from quantum mechanics).

The third problem is that the Gibbs approach does not work for non-equilibrium states, for the formalism implies a constant Gibbs entropy, which conflicts with the behaviour of thermodynamic entropy. Furthermore, there can be no change from stationary distributions to non-stationary ones or vice versa. Frigg lists a number of approaches that try to address this problem.
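The constancy of the Gibbs entropy follows, in the standard presentation (a sketch, not taken from the text), from Liouville's theorem: the density is conserved along the Hamiltonian flow, so the fine-grained entropy cannot change over time:

$$
\frac{\partial \rho}{\partial t} + \{\rho, H\} = 0
\;\;\Longrightarrow\;\;
\frac{\mathrm{d}}{\mathrm{d}t} S_G[\rho_t]
= -k_B\,\frac{\mathrm{d}}{\mathrm{d}t}\int_\Gamma \rho_t \ln \rho_t \,\mathrm{d}x = 0 .
$$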
A couple of questions to discuss are:
- When describing a problem of frequentism, Frigg claims that it is problematic to see an ensemble as an urn from which one system can be drawn. I don't see why that is. It does not seem absurd to think of this abstractly as randomly choosing one of the systems in the ensemble, just like taking a ball from an urn.
- I wonder how different interpretations of probability could lead to different formalisms, as Frigg (2008) claims.
- Also I am not quite sure how exactly the Gibbs approach solves the main problems of recurrence and reversibility that the Boltzmann approach faces. Does the formalism in terms of ensembles not have these problems?