A previous post briefly reviewed convex analysis. Here I’ll review the application of convexity in basic thermodynamics.

**Equilibrium states**

The concept of thermodynamic equilibrium is a generalization of mechanical equilibrium, where all forces and torques cancel each other. Informally, the idea is that a system in thermodynamic equilibrium has stable, unchanging macroscopic properties, which may be characterized by an $n$-tuple of extensive variables. How many and which variables depends on the system at hand and is, as far as I know, an empirical matter. For simple fluids, which consist of only one kind of particle, the extensive variables are the internal energy, the volume, and the number of particles. More complicated systems require additional variables. In general, the extensive variables may be collected in a vector $X = (X_0, X_1, \ldots, X_{n-1})$, with the first variable being the internal energy $X_0 = U$. Some of the extensive variables may be fixed by external constraints, while others are free to vary.

Identifying the $n$-tuple $X$ with an equilibrium state, the first assumption is:

Postulate 1: The manifold of equilibrium states is a convex set.

For simplicity, this convex set will be taken to be a convex subset $\mathcal{M} \subseteq \mathbb{R}^n$.

Postulate 2: There is a function $S(X)$, called *entropy*, of the extensive variables of a system. At thermodynamic equilibrium, the extensive variables take values that maximize the entropy subject to the external constraints.

Before the entropy function can play a useful role in the theory, it is necessary to know some of its properties:

Postulate 3: The entropy is (i) additive over subsystems, (ii) homogeneous in the sense that $S(\lambda X) = \lambda S(X)$ (with $\lambda > 0$), and (iii) a strictly monotonically increasing function of the internal energy $U$.
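As a concrete sanity check, the properties in Postulate 3 can be illustrated with a toy entropy function in the style of an ideal monatomic gas (all constants absorbed and $k_{\text{B}} = 1$); the functional form here is an illustrative assumption, not something implied by the postulates:

```python
import math

# Toy entropy of a simple fluid, S(U, V, N), in the style of an ideal
# monatomic gas with constants absorbed and k_B = 1 (an illustrative
# model, not part of the postulates themselves).
def S(U, V, N):
    return N * (math.log(V / N) + 1.5 * math.log(U / N))

# (ii) Homogeneity: S(l * X) = l * S(X) for l > 0.
l = 2.5
X = (4.0, 3.0, 2.0)
assert math.isclose(S(*(l * x for x in X)), l * S(*X))

# (iii) Strict monotonicity in the internal energy U.
assert S(4.0, 3.0, 2.0) < S(4.1, 3.0, 2.0)
```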

When two systems with equilibrium states $X \in \mathcal{M}_1$ and $Y \in \mathcal{M}_2$ are considered subsystems of a larger system, the equilibrium states of the joint system can be taken to be the convex set $\mathcal{M}_1 \times \mathcal{M}_2$. For convenience, the manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$ are assumed to be equal. This can always be achieved by adding to the manifolds fictional coordinates $X_i$ that are considered subject to constraints $X_i = 0$. These constraints are lifted when the two subsystems are able to interact by exchanging the quantity represented by $X_i$. Consider, for example, two rigid vessels containing hydrogen molecules and oxygen molecules, respectively. When isolated, the systems are characterized by their internal energies, volumes and number of particles (hydrogen molecules and oxygen molecules, respectively). Thus the extensive variables of the first system could be taken to be $(U, V, N_{\mathrm{H_2}})$, but it is convenient to let $X = (U, V, N_{\mathrm{H_2}}, N_{\mathrm{O_2}}, N_{\mathrm{H_2O}})$ and consider $N_{\mathrm{O_2}} = N_{\mathrm{H_2O}} = 0$ as additional external constraints on the system, since no oxygen or water molecules can enter the isolated system. Since the system is isolated and the vessel is rigid, the variables are also subject to the constraints $U = \text{const}$ and $V = \text{const}$. The variables for the vessel containing oxygen molecules are chosen and constrained analogously. When the two vessels are brought into contact and their volumes are connected, the joint system is characterized by $(X, Y)$. Letting $Z = X + Y$ denote the sum of extensive variables for the subsystems, the relevant constraints are now:

- $U_1 + U_2 = \text{const}$ (constant total energy),
- $V_1 + V_2 = \text{const}$ (constant total volume),
- $2 N_{\mathrm{H_2}} + 2 N_{\mathrm{H_2O}} = \text{const}$ (constant number of hydrogen atoms), and
- $2 N_{\mathrm{O_2}} + N_{\mathrm{H_2O}} = \text{const}$ (constant number of oxygen atoms).

Notice that the joint system has a new macroscopic degree of freedom, $N_{\mathrm{H_2O}}$, that is not a real degree of freedom in either of the subsystems before they are brought into contact.

Keeping the above discussion in mind, the meaning of Postulate 3(i) is that the entropy of the joint system can be decomposed into entropies of the subsystems,

$S_{\text{joint}}(X, Y) = S_1(X) + S_2(Y),$

where the entropy of a subsystem is a function of only the extensive parameters of that subsystem. Here, $S_1$ and $S_2$ are defined on the same domain and are the same function.

**Concavity of the entropy function**

Postulate 2 asserts that the joint equilibrium state of two interacting subsystems is the solution to the optimization problem

$\max_{X, Y} \; \left[ S_1(X) + S_2(Y) \right] \quad \text{subject to} \quad A (X + Y) = c.$

Here the matrix $A$ defines which linear combinations of variables are constrained. Often, but not always, the constraint is simply that $Z = X + Y$ is constant. Note that there were two constraints of a more general form in the above example with the vessels of hydrogen molecules and oxygen molecules. Defining the function

$S_{12}(Z) = \sup \left\{ S_1(X) + S_2(Y) : X + Y = Z \right\},$

it is now possible to reexpress the optimization problem as

$\max_{Z} \; S_{12}(Z) \quad \text{subject to} \quad A Z = c.$

Now temporarily assume that $A$ is invertible, so that $Z$ is uniquely determined by the constraints on the joint system. In that case $S_{12}(Z)$ may be considered the entropy of a joint system formed by bringing into contact two isolated systems, initially in states $X_0$ and $Y_0$ with $Z = X_0 + Y_0$. Furthermore, the function $S_{12}$ may be identified with the subsystem entropy functions $S_1$ and $S_2$, because it is defined on the same domain and it represents the same physical quantity. Writing $S = S_1 = S_2 = S_{12}$, it now follows that when two systems are brought into contact, with resulting changes of states from $X_0$ to $X$ and from $Y_0$ to $Y$, the entropy of the joint system is

$S(X_0 + Y_0) = \sup_{X + Y = X_0 + Y_0} \left[ S(X) + S(Y) \right] \geq S(X_0) + S(Y_0),$

where the inequality follows from the expression for $S$ in terms of a supremum, and the expression as a whole holds for all states $X_0$ and $Y_0$. The last inequality holds in full generality, and together with extensivity (Postulate 3(ii)) it implies that the entropy is a concave function, i.e.

$S(\lambda X + (1 - \lambda) Y) \geq S(\lambda X) + S((1 - \lambda) Y) = \lambda S(X) + (1 - \lambda) S(Y),$

for all $X$, $Y$ and all $\lambda \in [0, 1]$.

Turning to the somewhat artificial case when $A$ is not invertible, so that the joint system is not constrained to states of constant $Z$ but can relax its state further (perhaps by being able to exchange particles with its environment), the above discussion remains valid with the following modification:

$\sup_{A Z = c} S_{12}(Z) \geq S_{12}(X_0 + Y_0) \geq S(X_0) + S(Y_0).$

The last inequality remains unchanged and concavity follows as before.

Conclusion: The entropy is a concave function of the extensive variables.
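Both superadditivity and concavity are easy to verify numerically for a toy concave entropy. The ideal-gas-like model below (with $k_{\text{B}} = 1$) is an illustrative assumption chosen only because it satisfies the postulates:

```python
import math

# Toy concave, homogeneous entropy (illustrative ideal-gas-like model).
def S(U, V, N):
    return N * (math.log(V / N) + 1.5 * math.log(U / N))

X = (2.0, 1.0, 1.0)   # state of subsystem 1: (U, V, N)
Y = (6.0, 5.0, 3.0)   # state of subsystem 2

# Superadditivity: bringing systems into contact cannot lower entropy.
Z = tuple(x + y for x, y in zip(X, Y))
assert S(*Z) >= S(*X) + S(*Y)

# Concavity: S(lam*X + (1-lam)*Y) >= lam*S(X) + (1-lam)*S(Y).
lam = 0.3
mix = tuple(lam * x + (1 - lam) * y for x, y in zip(X, Y))
assert S(*mix) >= lam * S(*X) + (1 - lam) * S(*Y)
```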

From concavity it follows that the entropy has no interior minima (minima can occur only at the boundary of the manifold of equilibrium states), no saddle points, and that all local maxima are also global maxima. When the entropy function is differentiable, the state of maximum entropy may therefore be determined by seeking stationary points of the entropy (or, more precisely, of a Lagrangian taking the constraints into account).
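For instance, for two subsystems exchanging only energy, the constrained maximum can be found by a direct scan. With the illustrative entropies $S_i(U_i) = \tfrac{3}{2} N_i \ln(U_i / N_i)$ (volumes and particle numbers held fixed; all numbers below are assumptions for the sketch), the concave objective has a single interior maximum, at which the energy per particle, and hence the temperature, is equal in both subsystems:

```python
import math

# Two subsystems exchanging only energy; illustrative entropies
# S_i(U_i) = 1.5 * N_i * ln(U_i / N_i), volumes and particle numbers fixed.
N1, N2, U_total = 1.0, 3.0, 8.0

def S_total(U1):
    U2 = U_total - U1
    return 1.5 * N1 * math.log(U1 / N1) + 1.5 * N2 * math.log(U2 / N2)

# Scan the constraint set; by concavity there is a single interior maximum.
grid = [U_total * k / 10000 for k in range(1, 10000)]
U1_star = max(grid, key=S_total)

# At the stationary point the energy per particle (hence the temperature)
# is equal in both subsystems: U1/N1 = U2/N2.
assert math.isclose(U1_star / N1, (U_total - U1_star) / N2, rel_tol=1e-2)
```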

**Energy representation**

At this point it is useful to change the notation slightly. In order to clearly distinguish functions from function arguments, the entropy function and the internal energy function will in this section be denoted by the calligraphic symbols $\mathcal{S}$ and $\mathcal{U}$, respectively. The fact that $\mathcal{S}$ is a strictly increasing function of the internal energy (see Postulate 3(iii)) enables the internal energy function to be defined implicitly through the equation

$\mathcal{S}\big( \mathcal{U}(S, X_1, \ldots, X_{n-1}), X_1, \ldots, X_{n-1} \big) = S.$

Strict monotonicity of the entropy function guarantees that this equation has a unique solution. From the concavity of $\mathcal{S}$ it now follows that for the convex combinations $\bar{S} = \lambda S^{(1)} + (1 - \lambda) S^{(2)}$ and $\bar{X} = \lambda X^{(1)} + (1 - \lambda) X^{(2)}$, with $\lambda \in [0, 1]$ and $X^{(i)}$ denoting the non-energy extensive variables of state $i$,

$\mathcal{S}\big( \lambda\, \mathcal{U}(S^{(1)}, X^{(1)}) + (1 - \lambda)\, \mathcal{U}(S^{(2)}, X^{(2)}),\; \bar{X} \big) \geq \lambda S^{(1)} + (1 - \lambda) S^{(2)} = \bar{S}.$

Using the monotonicity of the entropy function to “invert” this relation now yields

$\mathcal{U}(\bar{S}, \bar{X}) \leq \lambda\, \mathcal{U}(S^{(1)}, X^{(1)}) + (1 - \lambda)\, \mathcal{U}(S^{(2)}, X^{(2)}).$

Thus, the internal energy is a convex function. Instead of maximizing the entropy subject to the constraint that the sum of all subsystem energies is constant (and other constraints not involving energy), one may equivalently minimize the internal energy subject to the constraint that the sum of all subsystem entropies is constant (and other constraints unchanged).

Conclusion: Energy-constrained maximization of entropy is equivalent to entropy-constrained minimization of internal energy. Both methods yield the equilibrium state of a thermodynamic system.
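A minimal numerical sketch of this equivalence, using an illustrative model in the energy representation: inverting $S(U) = \tfrac{3}{2} N \ln(U/N)$ gives $\mathcal{U}(S) = N \exp\big(S / (\tfrac{3}{2} N)\big)$, and minimizing total energy at fixed total entropy again equalizes the energy per particle (all numbers below are assumptions for the sketch):

```python
import math

# Energy representation of the toy entropy S(U) = 1.5*N*ln(U/N):
# inverting gives U(S) = N * exp(S / (1.5 * N)). All numbers illustrative.
N1, N2, S_total = 1.0, 3.0, 2.0

def U_of_S(S, N):
    return N * math.exp(S / (1.5 * N))

def E(S1):
    # Total energy when subsystem 1 carries entropy S1.
    return U_of_S(S1, N1) + U_of_S(S_total - S1, N2)

# Minimize total energy at fixed total entropy by a direct scan.
grid = [-5 + 10 * k / 20000 for k in range(20001)]
S1_star = min(grid, key=E)

U1, U2 = U_of_S(S1_star, N1), U_of_S(S_total - S1_star, N2)
# The minimizer again equalizes energy per particle (equal temperatures),
# matching the state found by entropy maximization.
assert math.isclose(U1 / N1, U2 / N2, rel_tol=1e-2)
```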

From this principle one recovers, as a special case, the mechanical condition for equilibrium. Mechanical equilibrium is attained when the potential energy is minimized. At zero temperature the internal energy coincides with the potential energy, and the thermodynamic and mechanical equilibrium conditions are equivalent.

**Intensive variables**

When the internal energy is differentiable, one may define an intensive variable for each extensive variable,

$T = \frac{\partial \mathcal{U}}{\partial S},$

$p_k = \frac{\partial \mathcal{U}}{\partial X_k}, \quad k = 1, \ldots, n - 1.$

The derivative w.r.t. volume yields the (negative) pressure, the derivative w.r.t. a particle number yields the corresponding chemical potential, the derivative w.r.t. an external electric field yields the polarization, the derivative w.r.t. an external magnetic field yields the magnetization, derivatives w.r.t. strain deformations yield the stress tensor, and so on. Most intensive quantities are familiar from other branches of physics. The temperature (derivative w.r.t. entropy) is special in that it has no analogue in other branches of physics. Intensive variables provide a convenient way to express equilibrium conditions. For example, two fully interacting subsystems are in equilibrium when all their intensive variables are equal.
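As a sketch of these definitions, the intensive variables of a toy ideal-gas-like energy function (an illustrative assumption, with $k_{\text{B}} = 1$) can be obtained by numerical differentiation; for this model they reproduce the familiar equipartition and ideal-gas relations:

```python
import math

# Energy representation of an ideal-gas-like entropy (illustrative model):
# U(S, V, N) = N * exp((S/N - ln(V/N)) / 1.5), with k_B = 1.
def U(S, V, N):
    return N * math.exp((S / N - math.log(V / N)) / 1.5)

def ddx(f, x, h=1e-6):
    # Central finite difference.
    return (f(x + h) - f(x - h)) / (2 * h)

S0, V0, N0 = 2.0, 3.0, 1.5
T = ddx(lambda s: U(s, V0, N0), S0)    # temperature,  dU/dS
p = -ddx(lambda v: U(S0, v, N0), V0)   # pressure,    -dU/dV
U0 = U(S0, V0, N0)

# For this model the familiar relations hold:
assert math.isclose(U0, 1.5 * N0 * T, rel_tol=1e-5)   # U = (3/2) N T
assert math.isclose(p * V0, N0 * T, rel_tol=1e-5)     # p V = N T
```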

If the internal energy is not differentiable, one may introduce intensive variables as the new variables that are introduced when the internal energy is Legendre-Fenchel transformed (see the previous post on convex analysis).

**Thermodynamic potentials and Massieu functions**

Thermodynamic potentials are partial Legendre-Fenchel transforms of the internal energy. Transforming the internal energy w.r.t. the entropy yields

$F(T, X_1, \ldots, X_{n-1}) = \inf_{S} \left[ \mathcal{U}(S, X_1, \ldots, X_{n-1}) - T S \right].$

The new variable $T$ can be identified with the temperature, and for a differentiable internal energy function it will coincide with the definition in terms of a derivative. A partial Legendre-Fenchel transform flips the convexity/concavity property of a function: the internal energy is convex in all extensive variables, while the transformed function $F$ is concave in the temperature and convex in the remaining (extensive) variables. In general, the Legendre-Fenchel transforms are concave in the intensive variables and convex in the extensive variables.

The Legendre-Fenchel transformed functions, called *thermodynamic potentials*, are primarily useful in situations when the thermodynamic system of interest is in equilibrium with an environment with known intensive parameters. For example, if the system is in equilibrium with an environment with known temperature and pressure, it is very useful to perform two Legendre-Fenchel transforms that replace entropy and volume by temperature and pressure. The resulting thermodynamic potential is called the *(Gibbs) free energy*, and the equilibrium properties of the system follow from holding temperature and pressure fixed while minimizing the free energy w.r.t. the remaining variables.
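A minimal numerical sketch of such a transform, for a one-variable toy energy $\mathcal{U}(S) = a\, e^{S/b}$ with $V$ and $N$ held fixed (the constants $a$, $b$ are illustrative assumptions): the infimum can be scanned on a grid, compared with the analytic result, and checked to be concave in $T$:

```python
import math

# F(T) = inf_S [ U(S) - T*S ] for a toy energy U(S) = a*exp(S/b)
# (V and N held fixed; a, b are illustrative constants).
a, b = 1.0, 1.5
S_GRID = [s / 1000 for s in range(-5000, 5001)]

def F(T):
    return min(a * math.exp(S / b) - T * S for S in S_GRID)

# Analytic infimum: S* = b*ln(b*T/a), so F(T) = b*T*(1 - ln(b*T/a)).
T = 2.0
assert math.isclose(F(T), b * T * (1 - math.log(b * T / a)), rel_tol=1e-3)

# The transform flips convexity: F is concave in T (midpoint test).
T1, T2 = 1.0, 3.0
assert F(0.5 * (T1 + T2)) >= 0.5 * F(T1) + 0.5 * F(T2)
```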

*Massieu functions* are analogous to thermodynamic potentials, but they arise from Legendre-Fenchel transforms of the (negative) entropy rather than of the internal energy.

**When postulates fail: monotonicity and spin chains**

Spin chains in external magnetic fields provide a simple example of how the postulates above can fail. Consider a spin chain consisting of $N$ spins that interact with an external magnetic field $B$. Readers not familiar with the quantum mechanical concept of spin may think of microscopic magnets having the peculiar property that they are either parallel or anti-parallel to the magnetic field. Letting $N_1$ and $N_0$ denote the number of spins that are parallel and anti-parallel to the magnetic field, respectively, the internal energy of the system is given by

$U = -\mu B (N_1 - N_0),$

where $\mu$ is a constant. In statistical mechanics, the Boltzmann entropy is defined as the logarithm of the number of microstates consistent with a given thermodynamic equilibrium state. The number of microstates consistent with a magnetization $\mu (N_1 - N_0)$ is given by

$\Omega = \binom{N}{N_1} = \frac{N!}{N_1! \, N_0!},$

and the Boltzmann entropy is

$S_{\text{B}} = k_{\text{B}} \ln \Omega.$

By varying the number of parallel spins $N_1$ over the interval $[0, N]$ one obtains a discrete set of points that define the Boltzmann entropy as a function of the internal energy. A complication arises here, since the notions of convexity and concavity used above are only defined for functions on continuous domains, while this entropy is only defined on a discrete set of energy values. However, extending the Boltzmann entropy function through interpolation circumvents this problem. (Alternatively, a fully quantum mechanical treatment using the von Neumann entropy also circumvents this problem, without resolving the underlying failure of the thermodynamic postulates.) The underlying problem is instead that the Boltzmann entropy $S_{\text{B}}(U)$ is not an increasing function of the internal energy, in contradiction with Postulate 3(iii). Plotting the points $(U, S_{\text{B}})$, as done on the left in the figure below for a particular number of spins $N$, reveals that the Boltzmann entropy reaches a maximum at zero internal energy and decreases as the internal energy is increased further.
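The failure of monotonicity is easy to verify numerically; here with $N = 100$ spins and $\mu B = 1$ (both values chosen only for illustration, $k_{\text{B}} = 1$):

```python
import math

# Boltzmann entropy of the spin chain (k_B = 1): S_B = ln C(N, N1).
# N = 100 and mu*B = 1 are illustrative choices.
N = 100

def S_B(N1):
    # ln of the binomial coefficient, computed via log-gamma.
    return math.lgamma(N + 1) - math.lgamma(N1 + 1) - math.lgamma(N - N1 + 1)

def U(N1):
    return -(N1 - (N - N1))  # U = -mu*B*(N1 - N0) with mu*B = 1

# Entropy is maximal at U = 0 (half the spins parallel) ...
assert max(range(N + 1), key=S_B) == N // 2
assert U(N // 2) == 0

# ... and decreases as U increases past zero: Postulate 3(iii) fails.
assert U(10) > U(25) > 0 and S_B(10) < S_B(25)
```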

The slope at different points on the entropy curve is the *coldness* of the system, defined as the inverse temperature, $\beta = \partial S_{\text{B}} / \partial U = 1/T$. At the maximum of the entropy, the coldness passes through zero and the temperature tends to infinity. When the maximum is approached from the left, the temperature tends towards $+\infty$ and the spin chain becomes infinitely hot! For positive values of the internal energy the temperature is negative, and as the maximum is approached from the right the temperature tends towards $-\infty$. It is easier to think of this in terms of coldness, which decreases monotonically with increasing internal energy. Thus, a higher internal energy also corresponds to a lower coldness, i.e. a hotter system. In this sense, $T \to 0^-$ is the hottest of all temperature limits, and $T = -\infty$ is a colder limit that nevertheless is hotter than any positive temperature. Heat spontaneously flows from a system with negative temperature to any system with positive temperature, and the negative temperature states that occur for $U > 0$ are perhaps best thought of as some kind of pseudo-equilibrium states, rather than true equilibrium states.
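The monotone behaviour of the coldness can be confirmed with finite differences on a toy spin chain (again $N = 100$ and $\mu B = 1$, purely illustrative values, $k_{\text{B}} = 1$):

```python
import math

# Discrete coldness beta = dS/dU for the spin chain via finite
# differences; N = 100 and mu*B = 1 are illustrative choices.
N = 100

def S_B(N1):
    return math.lgamma(N + 1) - math.lgamma(N1 + 1) - math.lgamma(N - N1 + 1)

# U = -(2*N1 - N) decreases as N1 increases, so order the states by
# increasing U: N1 = N, N-1, ..., 0.
Us = [-(2 * n1 - N) for n1 in range(N, -1, -1)]
Ss = [S_B(n1) for n1 in range(N, -1, -1)]
betas = [(Ss[i + 1] - Ss[i]) / (Us[i + 1] - Us[i]) for i in range(N)]

# Coldness decreases monotonically with U and changes sign at U = 0.
assert all(b2 < b1 for b1, b2 in zip(betas, betas[1:]))
assert betas[0] > 0 and betas[-1] < 0
```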

The non-monotonicity of the Boltzmann entropy for the spin chain also means that the maximum entropy principle cannot be equivalently reexpressed as a minimum internal energy principle.