
cards on the table


It occurs to me that it may be useful to say the following. For definiteness and simplicity, I have in mind an isolated body. Whether in thermodynamic equilibrium or not, its whole-system instantaneous microstate can reach some region R of phase space. Just seeing the whole-system instantaneous microstate at some point in R does not tell us whether the macrocondition is equilibrium or non-equilibrium.

To find that out just from knowledge of the whole-system instantaneous microstate, we need to follow its trajectory for a good length of time, even for a rather long time. Overwhelmingly, not remotely as long as the Poincaré recurrence time, but still much longer than the time needed to make a measurement of, say, local temperature or wall pressure. To verify thermodynamic equilibrium or non-equilibrium, we need time to make very many measurements well separated in time.

Equilibrium is characterized by all measurements of every particular supposed state variable hovering around their respective means. The whole-system instantaneous microstate shows no drift over time, however long, practically 'covering', but not necessarily 'filling', the whole of R practically uniformly over time. Thermodynamic entropy gives a precise measurement of how the practically uniform 'covering' of R actually 'fills' it over infinite time, a sort of time-averaged logarithmic-density × d(area) integral. Such an integration is a job for mathematicians. They have an arsenal of definitions of various entropies. Our IP mathematician friend is expert in this, and thinks it is the underlying basis of the general concept of 'entropy'; he has a good case.

Statistical mechanics provides a sort of Monte Carlo procedure to estimate that integral, using ergodic theorems.
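As an illustrative aside (a toy sketch of mine, not part of the discussion): the 'Monte Carlo procedure' can be mimicked in a few lines of Python by sampling microstates of a small spin system uniformly and estimating the log of the number of microstates in a chosen macrostate, i.e. a Boltzmann-style entropy S = ln W with k_B = 1. The system size and macrostate window are arbitrary choices.

```python
import math
import random

# Toy "phase space": all 2^N configurations of N two-state spins.
# Macrostate: configurations whose number of up-spins k lies in [9, 11].
N = 20
K_LO, K_HI = 9, 11

# Exact count W and entropy S = ln W (Boltzmann's formula, k_B = 1).
W_exact = sum(math.comb(N, k) for k in range(K_LO, K_HI + 1))
S_exact = math.log(W_exact)

# Monte Carlo estimate: sample microstates uniformly at random,
# count how often they land in the macrostate.
random.seed(0)
samples = 100_000
hits = sum(K_LO <= sum(random.getrandbits(1) for _ in range(N)) <= K_HI
           for _ in range(samples))
W_est = (hits / samples) * 2 ** N
S_est = math.log(W_est)

print(f"exact S = {S_exact:.3f}, Monte Carlo S = {S_est:.3f}")
```

For a real Hamiltonian system the sampling would run along a single long trajectory and lean on an ergodic theorem, as the comment says; uniform random sampling stands in for that here.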

Non-equilibrium is characterized by some sequence of measurements drifting a significant 'distance' through phase space. The drift may involve repeated distinct visits of the whole-system instantaneous microstate to some region of phase space, but it must be evident that they are repeated distinct and separate visits, not just little excursions in a permanent and persistent hovering pattern. In general, for a non-equilibrium trajectory through the phase space of whole-system instantaneous microstates, over some long observation time interval T, the trajectory will drift from some region R1 to some other region R2, with negligible overlap R1 ∩ R2. Thermodynamic entropy does not apply here. Other so-called 'entropies' may be defined ad lib, but they refer to some kind of 'time rate of entropy production'. Chjoaygame (talk) 20:09, 19 December 2020 (UTC)

I think of it this way: It is an *assumption* that every trajectory will visit any neighborhood in phase space with a probability proportional to the "volume" of that neighborhood. This is just another way of saying that each microstate is equally probable. Phase space may be divided up into a large number of macrostates, each with its own information entropy. For systems with a large number of particles, the microstates corresponding to the equilibrium macrostate hugely outnumber all the nonequilibrium microstates combined. It follows that, starting from a nonequilibrium microstate, the trajectory will wander into the equilibrium macrostate region and practically never leave. Observationally, that is the signature of an equilibrium state - the macrostate is unchanging. Since the information entropy of a macrostate (and, by Boltzmann's equation, the thermodynamic entropy) is proportional to the log of the phase space volume occupied by that macrostate, the information entropy of the equilibrium macrostate is the largest. A trajectory from a non-equilibrium microstate does not "drift" in any particular direction any more than a trajectory from an equilibrium microstate does. A sort of random walk from any point in phase space will almost certainly walk you into an equilibrium microstate, and almost certainly not walk you into a non-equilibrium microstate, no matter what kind of state you started from. In phase space, trajectories do not "hover" around equilibrium microstates. The macrostate variables do "hover" around their means, however. PAR (talk) 21:17, 19 December 2020 (UTC)
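The counting argument in the comment above can be checked directly on a small example (my own toy model, not anyone's posted calculation): for N particles that may sit in either half of a box, the half-and-half macrostate has the most microstates, hence the largest ln W, and the microstates close to it dominate the whole phase space.

```python
import math

# N distinguishable particles, each in the left or right half of a box:
# 2^N equally probable microstates.  Macrostate = count in the left half.
N = 1000

# W(n) = number of microstates with exactly n particles on the left.
W = [math.comb(N, n) for n in range(N + 1)]

# The even-split macrostate maximizes the Boltzmann entropy ln W ...
assert max(W) == W[N // 2]

# ... and microstates within 5% of the even split make up nearly all
# of phase space, so a wandering trajectory is almost always there.
near_eq = sum(W[n] for n in range(450, 551)) / 2 ** N
print(f"fraction of phase space within 5% of equilibrium: {near_eq:.4f}")
```

The printed fraction is already close to 1 at N = 1000; for anything of laboratory size the dominance is overwhelming, which is the point of the comment.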
We are putting our cards on the table. I have some problems with your just above comment.
Your point of view is that of statistical mechanics. Statistical mechanics is a clever, indeed brilliant and even masterly, and handy mathematical procedure for a sort of Monte Carlo integration, using a concept of random walking, relying on ergodic assumptions. Statistical mechanics is a highly sophisticated topic, taught after several years of advanced education in physics. I don't see it as obvious that it is suitable for novices who are uneducated in physics.
The notions of 'an equilibrium microstate' and of 'a non-equilibrium microstate' belong specifically to statistical mechanics.
A physical trajectory as conceived by Boltzmann is not a random walk, but is generated by Newton's laws of motion. Mathematicians today try to deal with such trajectories as such. Thermodynamic equilibrium and non-equilibrium are characterized by trajectories, not by isolated points. Every point on an equilibrium trajectory has an equal status, as, if you like, 'an equilibrium microstate'. No point on an equilibrium trajectory is 'a non-equilibrium microstate'. Every point on a non-equilibrium trajectory has, if you like, an equal status as 'a non-equilibrium microstate'. No point on a non-equilibrium trajectory is 'an equilibrium microstate'. Boltzmann continued the work of Maxwell and others, using the statistical mechanical procedure, but that does not actually make a Newtonian trajectory into an actual random walk.
For a system in thermodynamic equilibrium, a fluctuation is a fluctuation; a fluctuation doth not a departure from thermodynamic equilibrium make.
It might be said that the switch from Newtonian trajectory to random walk is made by a mind projection fallacy. It is not obvious that we should impose that fallacy on novices who are not expected to be trained in academic physics. Chjoaygame (talk) 23:35, 19 December 2020 (UTC)
I have to revise the above.
That the overlap R1 ∩ R2 should be negligible certainly gives a non-equilibrium trajectory. But such is well and truly and thoroughly non-equilibrium. More tightly, non-equilibrium just needs R1 ≠ R2, though that doesn't exactly settle things, because I haven't tightly said what I mean by the trajectory being in a region R at a time t. What sort of region is R?
For thermodynamic equilibrium, the condition R1 = R2 is necessary and sufficient if at least one of { R1 ⊆ R2, R2 ⊆ R1 } holds.
We may consider a thermodynamic process that starts when two equilibrium systems A and B, that separately occupy regions R_A and R_B, are exposed to each other, and ends when a thermodynamic operation isolates the final joint system, so that its initial instantaneous microstate obeys the conditions x_A ∈ R_A and x_B ∈ R_B and x = (x_A, x_B) with R_AB in an obvious notation for the final thermodynamic equilibrium. (To be strict, even this doesn't really do the trick.) The second law requires something such as R_A × R_B ⊂ R_AB, in a suitable notation, with ⊂ denoting a proper subset relation. The second law requires more, making a strict statement about entropies.
A non-equilibrium process is not so simple to define microscopically in general terms. But surely it requires at least definite initial and final conditions? And that they belong to different regions, R1 ≠ R2 in some sense. But it doesn't require such strict separation as makes negligible the overlap R1 ∩ R2. Chjoaygame (talk) 03:15, 20 December 2020 (UTC)
I did not mean to imply that a trajectory in phase space was a random walk. I used the phrase "sort of a random walk". I agree, using classical physics, the trajectory is determinate, and not truly random. However, it is an *assumption* that every (determinate) trajectory will eventually enter, and then leave, ANY given neighborhood of phase space if you wait long enough, and, after waiting a long time (many Poincaré recurrence times) the probability that the system will be in a given neighborhood is equal to the volume of that neighborhood divided by the volume of the entire phase space. The ergodic assumption is that if you take every trajectory as equally probable, you will arrive at the same conclusion.
I disagree with your statement "For a system in thermodynamic equilibrium, a fluctuation is a fluctuation; a fluctuation doth not a departure from thermodynamic equilibrium make." Thermodynamics ONLY deals with systems in the thermodynamic limit. The equivalent of the thermodynamic limit in stat mech is the limit of an infinite number of particles. In that limit there are no fluctuations, or more exactly, systems which are "fluctuated" away from equilibrium have a measure zero. The states fluctuated away from equilibrium exist, but the probability that they are visited is zero, in the thermodynamic limit. Only then does the second law hold rigorously.
For a finite system, there will be fluctuations, and we are, strictly speaking, outside of thermodynamics. The line between an equilibrium state and a non-equilibrium state becomes blurred. The number of microstates which represent "exact" equilibrium is actually very small, and every other microstate can be seen as a fluctuation away from that exact equilibrium, and therefore a non-equilibrium state. The second law does not rigorously hold. Or, we can draw an arbitrary line which says "microstates representing fluctuations in macrostate properties less than such-and-such will be declared equilibrium microstates, all others are declared non-equilibrium microstates". With such a declaration, for a large but not infinite system, you can pick a dividing line in which the second law is fairly rigorous, and the difference between a fluctuation and a non-equilibrium state is clear. However, where that dividing line is drawn is subjective, not carved in stone.
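The blurring described here can be quantified: for an N-particle toy system the relative size of macrostate fluctuations falls off like 1/√N, which is why a dividing line works well for large N and why fluctuations vanish altogether in the thermodynamic limit. A small sketch (model and parameters are my own illustration):

```python
import random
import statistics

random.seed(1)

def relative_fluctuation(n_particles, trials=2000):
    """Relative standard deviation (std / mean) of the left-half
    occupancy of n_particles independent coin-flip particles."""
    counts = [sum(random.getrandbits(1) for _ in range(n_particles))
              for _ in range(trials)]
    return statistics.pstdev(counts) / statistics.mean(counts)

# Relative fluctuations shrink like 1/sqrt(N): noticeable for a handful
# of particles, negligible as N grows toward the thermodynamic limit.
for n in (10, 100, 1000):
    print(f"N = {n:4d}: relative fluctuation ~ {relative_fluctuation(n):.4f}")
```

The expected values are roughly 0.32, 0.10, and 0.03; extrapolating the 1/√N law to laboratory particle numbers gives fluctuations far too small to detect, though not exactly zero.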
I disagree with your statement "Thermodynamic equilibrium and non-equilibrium are characterized by trajectories, not by isolated points." For a system in the thermodynamic limit, or a finite system with a line drawn somewhere, a given microstate is either an equilibrium microstate or it is not. It does not matter how it got there. There is no such thing as an "equilibrium trajectory". There are just trajectories. Because the set of equilibrium microstates so dominates phase space, any trajectory in a large system will, after time, most likely be found in an equilibrium microstate and for a system in the thermodynamic limit, it will "almost certainly" (i.e. with measure 1) be found in an equilibrium microstate.
The second (revised) part of your statement, using R, confuses me. What does R represent, a region of phase space? If so, I find the first statement "That the overlap ..." totally confusing. In the thermodynamic limit, when two systems are brought into contact, they form a single system in a non-equilibrium microstate. The ensuing trajectory will certainly take the system to an equilibrium microstate.
Thank you for your careful response. We are indeed putting our cards on the table.
One question is about your sentence "I agree, using classical physics, the trajectory is determinate, and not truly random." I agree with you there, but others might not, and might strongly disagree. I would like to continue to work with you, taking that as agreed.
Another question is about your "thermodynamic limit". I don't agree about that. I am reading you that you want to work in a limit in which fluctuations are utterly infinitesimal and of practically zero magnitude. I am guessing that you go that way because you like to bear in mind a statistical mechanical background in which such a limit is convenient; if you have a different reason, please tell me. I like to think of a finite body. For that, fluctuations are thinkable in such cases as critical states; one can practically see fluctuations in them. Einstein considered systems that to me seem to be thermodynamic systems with fluctuating entropy. I suppose such a system to interact with a thermal reservoir through a diathermal wall. Earlier in this conversation, I have set out my thoughts on this topic, but I think those thoughts have now been archived. In summary, theoretically possible fluctuations occur in the conjugate variable of a state variable that is fixed constant by the definition of the system's state variables and walls. For example, a system defined by (S, V) can suffer fluctuations in T and in P, but neither in S nor in V. Yes, such fluctuations will usually be on the fractional order of perhaps 10⁻²⁰, and too small to detect. But I am happy to talk about them and to try to think about them. I see no reason to make them unthinkable by going to the limit of an infinitely massive system. I am happy to think of them as so small as to be usually practically negligible. So I disagree with your sentence “For a finite system, there will be fluctuations, and we are, strictly speaking, outside of thermodynamics.” Instead, I would just say that fluctuations are negligibly small.
In a deterministic finite system, a trajectory of whole-system instantaneous microstates could (let me say 'can' for a small enough system) in principle be defined. It will almost certainly be deterministically chaotic, and will explore its phase space. In the above example, every point in the trajectory will have the fixed constant S and V, because the walls are perfectly rigid, smooth, and elastic. But locally and occasionally T and P will be determined as suitable space–time averages. This sets a formidable, indeed a practically forbidding, mathematical or computing problem. But to me it makes physical sense. For me, the physics wins. I would be perfectly happy to consider a system of 100 or even 10 finite sized molecules. Then the Poincaré recurrence time might even be accessible. Locally and occasionally, there will be detectable fluctuations of T and P. I guess that you would say that such fluctuations are departures from thermodynamic equilibrium, and are composed of non-equilibrium whole-system instantaneous microstates. I would say that they are par for the course, and in perfect accord with the thermodynamic equilibrium, because they belong to the thermodynamic equilibrium trajectory. I would object that your criteria of non-equilibriarity were arbitrary, unenlightening, and confusing. So I hold to my views that a fluctuation doth not a departure from thermodynamic equilibrium make, and that every point on an equilibrium trajectory has its equal claim to be, if you like, an equilibrium point, and that no point on the equilibrium trajectory is a non-equilibrium point, etc..
For me, an equilibrium trajectory so defined will explore and define its proper region in whole-system instantaneous microstate phase space. Geometers will measure the entropy of the trajectory. Yes, my regions such as R are regions in that phase space, defined by their respective trajectories. If that is accepted, I think (modulo some details that may be fixable) that what I wrote above makes sense.
I suppose that a numerical computation of such a finitely defined trajectory would thoroughly comply with your "However, it is an *assumption* that every (determinate) trajectory will eventually enter, and then leave, ANY given neighborhood of phase space if you wait long enough, and, after waiting a long time (many Poincare recurrence times) the probability that the system will be in a given neighborhood is equal to the volume of that neighborhood divided by the volume of the entire phase space. The ergodic assumption is that if you take every trajectory as equally probable, you will arrive at the same conclusion." I guess that a mathematician might prove it without recourse to numerical computation. We agree that it is ok to watch the proceedings from our seats for times measured on Poincaré clocks; we will be supplied with popcorn ad lib. But I think it is a mathematical stratagem to make your assumptions, and is not obvious from a naïve or novice physical viewpoint. I think those assumptions were discovered by strokes of genius on the parts of Bernoulli, Herapath, Waterston, Clausius, Maxwell, Boltzmann, and their ilk. If they get into the article, they should be presented and celebrated explicitly as such, that is to say, not as obvious physical facts, but as brilliant mathematical stratagems.
If I can prevail upon you to consider things from the point of view that I have just set out, I hope that you may allow that my idea of 'equilibrium trajectories' will do instead of your ideas such as of 'non-equilibrium points' in an equilibrium state. I think the point of view that I have just set out is physically intuitive, simple, and logically valid. I think that if we assume it, we can write a simpler and more naïvely comprehensible article. I think that the point of view that I have just set out is the one taken by pure mathematicians. I accept that it is unfamiliar to academically trained physicists, who perhaps may even find it idiosyncratic or out there. An advantage of this point of view is that the thermodynamic entropy of an equilibrium state of an isolated system is a fixed constant, and so that the second law is true without probabilistic modification. Chjoaygame (talk) 09:33, 20 December 2020 (UTC)
Ok, let's do this step by step to find out the point of disagreement.
  • Do you agree that the second law of thermodynamics in effect states that entropy will never decrease?
  • Do you agree that for an isolated finite system, entropy fluctuations cannot be eliminated? (Due to Poincaré recurrence). If you disagree, please outline a practical finite system in which entropy is without fluctuation.
  • Do you agree that for a system with entropy fluctuations, some will constitute a decrease in entropy and will therefore be, strictly speaking, in violation of the second law?
  • You state: "every point in the trajectory will have the fixed constant S and V." I assume by "point" you mean microstate. If that is correct, can you outline a method of calculating the entropy of a microstate?
PAR (talk) 15:44, 20 December 2020 (UTC)
Ok, this procedure may help. I may observe that in a galaxy far away, the empire's wagons appear to be circling.
  • Do you agree that the second law of thermodynamics in effect states that entropy will never decrease?
I accept that such statements are dearly beloved of those who make them. I think they are slick and shoehorned into brevity, while leaving the nature of entropy and the second law mysterious and baffling. Indeed, their slickness may gratuitously contribute to the bafflement. One person I know is puzzled as to why the subjective concept of knowledge comes into the explanation of the objective fact expressed by the second law. In a nutshell, it is because probability in this context is a superfluous concept, as observed by Guggenheim in 1949. In the language of Jaynes, it comes in via a mind projection fallacy, not from the physics. To deflect concerns that may arise here, I will say that the second law says that when bodies of matter and radiation are brought together so as to interact intimately, then their total entropy increases. The 'never decreases' wording, as I have mentioned here before, implicitly (and distractingly) allows for the convenient purely theoretical concept of 'reversible thermodynamic processes'.
  • Do you agree that for an isolated finite system, entropy fluctuations cannot be eliminated? (Due to Poincare recurrence). If you disagree, please outline a practical finite system in which entropy is without fluctuation.
No, I do not agree. Poincaré recurrence does not signal entropy fluctuation. It just follows from the laws of motion.
Requested outline: A molecule or billiard ball moves elastically in a rigid enclosure. The enclosure is so shaped that the particle never exactly retraces its trajectory. The particle goes nearly everywhere in the enclosure. It often visits every finite small region of the enclosure. It traces out a possibly or nearly spacefilling trajectory. The entropy of the thermodynamic system is a property of the state of thermodynamic equilibrium, and is so defined. It is not a property of an instantaneous point on the trajectory. The nearly spacefilling trajectory, taken as a whole, defines the entropy of the thermodynamic equilibrium state. It is time invariant because it is defined by the whole trajectory. This is the way mathematicians think about the matter nowadays. The concept of probability provides an attractive mathematical procedure to calculate the entropy, as in Monte Carlo, but that is not the only way to do the calculation. The entropy is a geometric property of the trajectory, and can be calculated directly in geometrical terms, without appeal to probability.
In this simple example, only space is explored, and the particle moves with constant speed except at instants of collision. In examples with several particles that can be excited or de-excited in a collision, collisions generate various speeds, so that a more complicated phase space is required. This excitation–de-excitation possibility dispels the mystery of why systems that lack it do not show the usual phenomena. See below. Chjoaygame (talk) 21:17, 20 December 2020 (UTC)
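The 'nearly spacefilling' one-particle enclosure described above is hard to simulate briefly, but the simplest deterministic analogue behaves the same way: an irrational rotation of a circle never retraces itself, yet its trajectory visits every subinterval for a fraction of time equal to the subinterval's length (Weyl equidistribution). A sketch of my own, with arbitrary parameters:

```python
import math

# Deterministic trajectory on a circle of circumference 1:
# x -> (x + alpha) mod 1, with alpha irrational.  The orbit never
# repeats, yet it 'covers' the circle uniformly over time.
alpha = (math.sqrt(5) - 1) / 2      # irrational step size
x = 0.0
steps = 200_000
time_in_region = 0

for _ in range(steps):
    x = (x + alpha) % 1.0
    if 0.20 <= x < 0.30:            # an arbitrary small "region"
        time_in_region += 1

fraction = time_in_region / steps
print(f"fraction of time in [0.2, 0.3): {fraction:.4f} (region size 0.1)")
```

The long-time fraction of time spent in the region matches the region's size, which is the geometric, probability-free sense in which a whole trajectory can define a uniform 'covering' of its region.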
  • Do you agree that for a system with entropy fluctuations, some will constitute a decrease in entropy and will therefore be, strictly speaking, in violation of the second law?
This is like asking me 'have I stopped beating my wife?' Entropy does not fluctuate in an isolated system in a state of thermodynamic equilibrium. My reason is in the just foregoing examples. I think it saves a lot of unnecessary heartache to think of the second law without the artificial worry of 'fluctuating entropy in an isolated system'. Entropy fluctuations appear to occur in a system in thermal equilibrium across a diathermal wall with a temperature reservoir in the surroundings; such a system is not isolated. Such equilibria show no temperature fluctuations. No violation of the second law occurs because it is about total entropy, which is a property of the system and reservoir considered jointly as an isolated system.
  • You state: "every point in the trajectory will have the fixed constant S and V." I assume by "point" you mean microstate. If that is correct, can you outline a method of calculating the entropy of a microstate?
In general, an instantaneous microstate, aka 'point in the trajectory', does not have a physical entropy. There is no reason to try to calculate it. Physical entropy is a property of a trajectory, which can be identified by the law of motion that generates it. That's how present day mathematicians do it. Chjoaygame (talk) 20:51, 20 December 2020 (UTC)
Thinking it over.
Above, I wrote “In examples with several particles that can be excited or de-excited in a collision, collisions generate various speeds, so that a more complicated phase space is required. This excitation–de-excitation possibility dispels the mystery of why systems that lack it do not show the usual phenomena. See below.”
Yes, such excitation and de-excitation brings in a topic from which I am topic banned. I might write more about it were I not banned. It does indeed make the business stochastic, and probabilistic. This blows away some of my above reasoning. Cercignani mentions some curious closely relevant facts without explaining them. A quietly historically recognised example is the case of the inverse fifth power particle force law. It was early recognised, I think by Maxwell, as exactly solvable, and does not show the expected spreading. For this reason, it is not widely celebrated; indeed it is often not mentioned. Now for the first time I understand it; I don't recall reading this explanation. Avoiding WP:OR, I guess someone will fill me in on it. It may deserve specific explicit appearance in the article. But it does not detract from the main concern here, about physical entropy being a property of a trajectory, not of an instantaneous microstate. Chjoaygame (talk) 21:48, 20 December 2020 (UTC)
So we can at least agree that to ask for the entropy of a microstate is an improper question, to put it a bit more politely than your wife-beating analogy. :) PAR (talk) 23:31, 20 December 2020 (UTC)
The wife-beating thing is a standard joke. Chjoaygame (talk) 23:48, 20 December 2020 (UTC)
Thank you for coming to my defence. I suppose that some participants do not see that we are aiming to clear away weeds that hide the flowers.
Next, coming to “So we can at least agree that to ask for the entropy of a microstate is an improper question.” Do you really mean that we agree about that? To go to further explicitness, are you agreeing that thermodynamic entropy is a property of a trajectory? Chjoaygame (talk) 23:56, 20 December 2020 (UTC)
The crucial insight, that, for mixing or spreading of speeds, molecular collisions need to be inelastic, comes as news to me; a valuable product of your questioning; simple and obvious when one thinks about it. Just reflecting on it now, it seems at first glance that it has to be stochastic. That it is physically quantal may perhaps be relevant or irrelevant; not sure. On second thoughts, I am not sure that it has to be stochastic; not sure. Next: perhaps the mixing of speeds can be adequately achieved simply from a diversity of initial speeds, with many particles in the system? Relevant here are Planck's carbon grain, and that for local thermodynamic equilibrium, one has both the Maxwell-Boltzmann and the Planck black body distributions. Chjoaygame (talk) 00:10, 21 December 2020 (UTC)

I am not going so far as saying that entropy is a property of a trajectory, until we clear some things up. When I asked whether you agree that the second law says entropy never decreases, I should have said "always increases". Can we agree, then, on Planck's statement of the second law:

Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.

with the proviso that the limiting case is realistically unachievable.

You say that a trajectory "traces out a possibly or nearly spacefilling trajectory". What do you mean by "nearly spacefilling"? Also, you speak of "equilibrium trajectories", and I don't know what that means. Every trajectory, given enough time, will visit any microstate (or more exactly, any arbitrary neighborhood of any microstate). This will include non-equilibrium microstates. Given enough time, there is no trajectory that is confined to the set of equilibrium microstates. What, then, is an "equilibrium trajectory"?

Regarding inelastic collisions, I think any collision which does not result in an excitation will be elastic. This does not mean that there can be no spread of velocities, since only a head-on collision (of like objects) will preserve the individual speeds. Anything else will change the speeds and produce spreading. I still think it is deterministic, because knowing the exact position and velocity of the two objects will yield a deterministic account of the collision, and a microstate will provide these exact positions and velocities.

PS - I know the joke - it's an obvious example of a "leading question" or more formally an "improper question". Another version is "do you still steal candy from small children?" PAR (talk) 00:34, 21 December 2020 (UTC)


No, I take candy only from babies, not from small children.
Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.
with the proviso that the limiting case is realistically unachievable.
Agreed, with the further proviso that the bodies in question are actually thermodynamic systems with defined entropies. Even Planck was beguiled by the thought of omniscience.
You say that a trajectory "traces out a possibly or nearly spacefilling trajectory". What do you mean by "nearly spacefilling"?
I am not clued up on space-filling or nearly space-filling curves, but there are various snowflake-like examples that are widely quoted. (Off the top of my head, the name 'Peano curve' comes to mind, but I am not clued up.) They vary in just how deeply they fill their respective spaces. Mathematical entropies measure that.
Also, you speak of "equilibrium trajectories", and I don't know what that means. Every trajectory, given enough time, will visit any microstate (or more exactly, any arbitrary neighborhood of any microstate).
Yes. But that is for an isolated system. Then R1 = R2 is guaranteed. Yes, for an isolated system, every trajectory is an equilibrium trajectory. There is no out for the system; it is trapped until it is rescued by a thermodynamic operation, or for ever.
For a non-equilibrium system, we need R1 ≠ R2. It is then no easy thing to say a lot about the trajectory.
This will include non-equilibrium microstates.
I am saying that for a 'non-equilibrium instantaneous microstate', one needs a non-equilibrium trajectory, that eventually gets somewhere that is distinctly other than where it started.
Regarding inelastic collisions, I think any collision which does not result in an excitation will be elastic. This does not mean that there can be no spread of velocities, since only a head-on collision (of like objects) will preserve the individual speeds. Anything else will change the speeds and produce spreading. I still think it is deterministic, because knowing the exact position and velocity of the two objects will yield a deterministic account of the collision, and a microstate will provide these exact positions and velocities.
Not exactly sure of the details here. Agreed that inelastic collisions excite the particles. My current guess is that balls of equal mass show preservation of speed when they collide? Yes, the direction of the motion changes, shown as change of velocity. As for determinism, I am inclined to favour your position, but I think we may be looking at complications that at present I am far from sure about.
On a tangent. Cercignani stimulated me to check exactly what Liddell & Scott have to say about the word ἐντροπή. They say "turning inward, respect (for someone), modesty, humiliation, subtle twists, tricks, dodges." One is reminded of the reflectiveness and reservedness of introverted personalities. Did Clausius or the ancient Greeks know that inelastic collisions help lead to diversity of motion of microscopic components of bodies? Did they know that such collisions are quantal?
Still on the tangent. Cercignani defers to Gallavotti's father on why Boltzmann used the ending '-ode' in 'ergode' etc.. I wouldn't defer to him. I think it means 'way' or 'path', as usual. Chjoaygame (talk) 02:21, 21 December 2020 (UTC)
On the same tangent. What I saw in Liddell & Scott makes me think that it isn't right to say, as the article says, that the Greek word meant 'transformation'. I don't intend to try to do anything about that. Chjoaygame (talk) 02:32, 22 December 2020 (UTC)
Rather, perhaps, 'turning inward' might be seen as referring to Poincaré recurrence, while 'subtle twists, turns, and dodges' might refer to such things as inelastic collisions? Chjoaygame (talk) 02:57, 22 December 2020 (UTC)

Poincaré recurrence theorem


The Poincaré recurrence theorem (PRT) says that any trajectory eventually visits every neighborhood of the phase space. Repeatedly. That includes non-equilibrium neighborhoods. The system is never trapped forever. Given enough time, a system starting out in a non-equilibrium configuration will eventually return to that configuration, as well as to any other non-equilibrium state available in the phase space. PAR (talk) 04:43, 21 December 2020 (UTC)
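The recurrence and the near-equilibrium dominance can both be seen in the classic Ehrenfest urn model, a standard stochastic stand-in for the deterministic dynamics the theorem is actually about (the model choice and parameters are my own illustration):

```python
import random

random.seed(7)

# Ehrenfest urn: N labelled particles in two halves of a box; at each
# step one randomly chosen particle switches sides.
N = 10
left = set(range(N))        # start far from equilibrium: all on the left
steps = 200_000
returns = 0                 # visits back to the initial configuration
near_equilibrium = 0        # steps spent at a roughly even split

for _ in range(steps):
    p = random.randrange(N)
    if p in left:
        left.remove(p)
    else:
        left.add(p)
    if len(left) == N:      # recurrence: all particles back on the left
        returns += 1
    if abs(len(left) - N // 2) <= 2:
        near_equilibrium += 1

print(f"returns to the all-left state: {returns}")
print(f"fraction of time near the even split: {near_equilibrium / steps:.3f}")
```

With only 10 particles the all-left state recurs repeatedly in 200,000 steps, yet the system still spends the great majority of its time near the even split; already for 100 particles the expected recurrence time exceeds 10^29 steps, which is the practical content of 'quickly enters equilibrium, recurs only on Poincaré timescales'.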

I am not clued up on the details of the Poincaré recurrence theorem, but I guess that it refers only to a time-invariant system of ordinary differential equations. Such a system is the dynamical representation of a system that starts and ends with  , that is to say, an isolated system. I think the Poincaré recurrence theorem says something like 'any trajectory eventually visits every accessible neighbourhood of the phase space'. Guggenheim emphasises that the relevant effect of a thermodynamic operation (the system is "tampered with") is to make more of phase space "accessible".
For a non-equilibrium system, the requirement is  . I guess that the Poincaré recurrence theorem does not deal with such a system. For a non-equilibrium system, one needs something such as a time varying external force, or an externally imposed and maintained flow of matter or energy into or out of the system, supplied by the surroundings. Using ordinary differential equations, as does the Poincaré recurrence theorem, such a system is much more difficult to deal with than is an equilibrium system.
So I still believe that the difference between a non-equilibrium and an equilibrium system is decided by the whole trajectory. This means that whether a point is an 'equilibrium point' or a 'non-equilibrium point' is decided by the whole trajectory to which it belongs.Chjoaygame (talk) 05:18, 21 December 2020 (UTC)Reply
Please investigate the PRT. Every neighborhood of the phase space is accessible. Any trajectory will eventually visit any finite neighborhood you choose and if you look at it for many recurrence times, the sum of the amounts of time it spends there will be proportional to the volume of that neighborhood. This is what is meant by the less rigorous phrase "every microstate is equally likely". For a large but finite system, like a glass of water, the fractional volume of the non-equlibrium states in the phase space is miniscule, but not zero. That is why any trajectory will quickly (relative to the recurrence time) enter the equilibrium neighborhood, and then spend an enormously huge majority of its time, but not all of it, in that equilibrium macrostate neighborhood. PAR (talk) 06:01, 21 December 2020 (UTC)Reply
As I understand phase space, it is something like R^6N, where N denotes the number of point particles; it has to be more complicated than that, but that will do me for now. The space R^6N is of infinite extent, including infinite particle velocities and suchlike impossible things. For obedience to the laws of motion, only a very tightly constrained region of phase space is accessible, namely the region that keeps the particles in their box, with total energy equal to the system energy. So I think that nothing remotely like every neighbourhood of phase space is accessible. That is for a time-invariant (isolated system) phase space. For a time-varying (non-equilibrium) version, things are much trickier. The accessible region is dictated by the actual trajectory of interest.
We may consider a time-invariant (isolated system, thermodynamic equilibrium) case. Suppose we start a trajectory at   at  . We run the ordinary differential equation forward for ever. The system obeys the laws of motion. The trajectory goes like an ultra-tangled ball of one-coloured woollen yarn, inside and practically all over the region  , which is well and truly finite. That is the accessible region. Though it goes practically all over the accessible region, it doesn't actually entirely fill it. There are many other trajectories, each of a different coloured yarn, that also go practically all over the accessible region. None of these intersects any other of them. All colours of the rainbow, and uncountably many more. All in a common nearly accessible region. Each trajectory partly 'fills' the common region, and its entropy measures how much it 'fills'. The law of motion has the property that the countless respective trajectory entropies are all equal.Chjoaygame (talk) 09:28, 21 December 2020 (UTC)Reply
When I said "the phase space" I was referring to exactly what you described - that portion of the R^6N space which encloses the volume and specifies the energy. Every point in that space is equally accessible, all others are not. The PRT does not say that some trajectories are restricted to the equilibrium regions of the phase space. It says the exact opposite - EVERY trajectory will eventually fill the phase space. There is only one color of yarn. Please read Poincare recurrence theorem.
The entropy of a macrostate is proportional to the log of its volume in phase space, and therefore tells you how much time ANY trajectory will spend there. The idea that   for a non equilibrium state, is, by the PRT, only temporary. Eventually, the trajectory will return to Rinitial, hence the "recurrence" in the "Poincare recurrence theorem". PAR (talk) 10:49, 21 December 2020 (UTC)Reply
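The statement that a macrostate's entropy is the log of its phase-space volume, and that the equilibrium macrostate's volume overwhelms all others for a large system, can be illustrated with a toy counting model. The sketch below is a hypothetical stand-in (N particles, each independently in the left or right half of a box) rather than either correspondent's actual system:

```python
import math

# Toy version of "entropy is the log of a macrostate's phase-space volume":
# N distinguishable particles, each in the left or right half of a box.
# The macrostate "k particles on the left" has volume C(N, k) microstates.
N = 100
volumes = [math.comb(N, k) for k in range(N + 1)]
total = sum(volumes)                       # = 2**N microstates overall

# Boltzmann-style entropy of each macrostate (in units of k_B)
S = [math.log(v) for v in volumes]

# The near-equal-split macrostates dominate: the middle 21 values of k
# already hold the great majority of the phase-space volume.
middle = sum(volumes[40:61])
print(middle / total > 0.9)                # overwhelming majority
print(S.index(max(S)) == N // 2)           # entropy is maximal at the even split
```

Since time in a macrostate tracks its volume under the equal-likelihood reading, a trajectory spends almost all its time near the even split, yet the bizarre macrostates (all particles on one side) have small but nonzero volume, so recurrence to them is rare but not forbidden.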
As perhaps you may have noticed, I was a bit cautious, dithery, or vague about my statement  . I now see that my dithering was because I was inexperienced and new to this problem. The correct position is not adequately expressed by saying merely that  . My dithering showed that I had not yet got a full and explicit grasp of the situation.
For the non-equilibrium case, the correct position is that   and   are not coherently comparable; it doesn't adequately cover the situation merely to say that they are unequal. In reality, for the non-equilibrium case, they are regions of effectively different phase spaces, because they belong to distinctly different times; the law of motion is time-dependent, an explicit function of time, radically different between   and  . In contrast, the prime assumption of the Poincaré recurrence theorem is that  , the law of motion being time-invariant. This is implicit, not explicit, in the Wikipedia article. I have said things equivalent to this in my current posts, but now I am spelling it out more explicitly.
The Poincaré recurrence theorem refers strictly to an isolated system (thermodynamic equilibrium), for which the phase space accessibility region is time-invariant, because the law of motion is time-invariant, because it expresses time-invariant constraints. Non-equilibrium means precisely that the accessible region is a function of time because the law of motion is an explicit function of time, expressing time-dependent constraints; this is what defines and characterizes the non-equilibrium case. The Poincaré recurrence theorem does not remotely refer to this case.
Generally speaking, for ordinary differential equations, all kinds of lovely things hold in the time-invariant case that are scarcely meaningful in the time-dependent case.Chjoaygame (talk) 11:45, 21 December 2020 (UTC)Reply
In other words, the Poincaré recurrence theorem refers to an ordinary differential equation such as
dx/dt = f(x),
that expresses a time-invariant law of motion, with constant coefficients for an isolated equilibrium system.
The non-equilibrium problem is expressed in an ordinary differential equation such as
dx/dt = f(x, t),
where time appears explicitly as an argument in the law of motion, with time-dependent coefficients.
The difference is radical.Chjoaygame (talk) 13:19, 21 December 2020 (UTC)Reply
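The distinction drawn here, between an autonomous law of motion and one in which time appears explicitly, can be seen numerically. In the sketch below (a one-dimensional toy with forward-Euler integration; both right-hand sides are illustrative assumptions, not anyone's proposed Hamiltonian), the autonomous system evolves identically no matter when the clock starts, while the time-dependent system does not:

```python
import math

def integrate(f, x0, t0, t1, n=10000):
    """Forward-Euler integration of dx/dt = f(x, t) from t0 to t1."""
    dt = (t1 - t0) / n
    x, t = x0, t0
    for _ in range(n):
        x += dt * f(x, t)
        t += dt
    return x

autonomous = lambda x, t: -x                 # time-invariant law of motion
driven     = lambda x, t: -x + math.sin(t)   # explicitly time-dependent law

# Autonomous case: only the elapsed time matters, not the starting clock time.
a1 = integrate(autonomous, 1.0, 0.0, 2.0)
a2 = integrate(autonomous, 1.0, 5.0, 7.0)
print(abs(a1 - a2) < 1e-9)                   # same evolution, shifted in time

# Driven case: the same initial state evolves differently at different clock times.
d1 = integrate(driven, 1.0, 0.0, 2.0)
d2 = integrate(driven, 1.0, 5.0, 7.0)
print(abs(d1 - d2) > 0.1)
```

The time-translation symmetry of the autonomous case is what makes recurrence statements meaningful there; the driven case has no such symmetry.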
The laws of motion are microscopic in nature. The collision dynamics between two colliding molecules are not affected by whether the system as a whole is in a state of equilibrium or not. Non equilibrium microstates are governed by the same Hamiltonian equations as the equilibrium microstates. This is my understanding. Do you have any references that I can use to better understand what you are saying? PAR (talk) 18:12, 21 December 2020 (UTC)Reply
It is a while since I looked at Gallavotti, G. (1999), Short Treatise of Statistical Mechanics, Springer, Berlin ISBN 9783642084386. I don't recall specific items in it. I don't necessarily endorse everything he says. He has a point of view.Chjoaygame (talk) 06:16, 22 December 2020 (UTC)Reply
Off the top of my head, I can't give you references. I can, however, say that the equilibrium and non-equilibrium Hamiltonians are different in just the way that I indicated above. In the language of mechanics, an equilibrium Hamiltonian is not an explicit function of time, while a non-equilibrium Hamiltonian is an explicit function of time. Yes, two given molecules well inside a given system, well away from the walls, and when externally imposed long-range forces are the same, obey the same laws whether they belong to equilibrium or non-equilibrium systems. But this doesn't say what happens near the walls, nor when the externally imposed forces are changing or have changed. A non-equilibrium system has significant and long-lasting changes in at least one of those factors. For a simple example of a non-equilibrium system, we may consider one being heated by conduction through a wall. A particle hitting the wall will, on average, rebound faster than it impinged. On the other hand, in an equilibrium system, with an isolating wall, the rebound speed will be equal to the impingement speed. With another kind of non-isolating wall, that passes matter, a particle impinging from the system may pass into the surroundings, and at another time, a particle impinging from the surroundings may pass into the system; such passages appear in the laws of motion as explicit time dependence, a thing not found in the equilibrium case; such passages don't occur with isolating walls, a fact expressed in the time-invariance of their laws of motion. When a particle passes into or out of a non-equilibrium system, the phase space will gain or lose six or more degrees of freedom; there won't just be a different accessible region; it will be a different phase space; this is an example when merely saying   will not adequately express the situation.
As for 'filling'. Again, I am not deeply clued up on this topic. My current understanding of the time-invariant equilibrium case is that a trajectory explores the region  , but does not visit every point in it. For every point in it, however, there is a trajectory that passes through it. In other words, there are indeed many different coloured yarns. My understanding is that mathematicians define several kinds of 'entropy', and I don't know which of those is or are applicable for thermodynamics. I believe that an 'entropy' measures how 'densely' a given yarn fills  , along with its total area. Different kinds of mathematical 'entropy' assess 'density' differently. For thermodynamic entropy, there is a suitable assessment of 'density'.Chjoaygame (talk) 19:18, 21 December 2020 (UTC)Reply
Copying and pasting from near the beginning of this conversation:
Non-equilibrium is characterized by some sequence of measurements drifting a significant 'distance' through phase space. The drift may involve repeated distinct visits of the whole-system instantaneous microstate to some region of phase space, but it must be evident that they are repeated distinct and separate visits, not just little excursions in a permanent and persistent hovering pattern. In general, for a non-equilibrium trajectory through the phase space of whole-system instantaneous microstates, over some long observation time interval  , the trajectory will drift from some region   to some other region  , with negligible overlap  . Thermodynamic entropy does not apply here. Other so-called 'entropies' may be defined ad lib, but they refer to some kind of 'time rate of entropy production'.
The flaw in that was the idea that in general "the trajectory will drift from some region   to some other region  , with negligible overlap  ." More correctly, in general, in the non-equilibrium case, one can't rely on a common containing region  . If there is no such common  , then   won't just be negligible. Instead, it might turn out to be verging on nonsense. In general for the non-equilibrium case,   and   will be well and truly different. On the other hand, for the equilibrium case, it is essential that the common   exist and contain both   and  , with indeed  . That the initial and final conditions are specified as regions takes care of the fact that, in general, there will be many different coloured yarns.Chjoaygame (talk) 21:10, 21 December 2020 (UTC)Reply
Putting it another way. In the equilibrium case of an isolated system, I am saying that every point in the accessible region of phase space is, in your terminology, an "equilibrium point"; "non-equilibrium points" are arbitrarily chosen and in my thinking are merely products of the physicist's fancy. In the non-equilibrium case of a non-isolated system, I am saying that every point in the accessible region of phase space is, in your terminology, a "non-equilibrium point"; again, in the non-equilibrium case, "equilibrium points" are arbitrarily chosen and in my thinking are merely products of the physicist's fancy. This is summarised in the statement that 'the trajectory decides'. The trajectory is determined by the law of motion. That is why the mathematicians talk of 'the entropy of the law of motion' more than of 'the entropy of the trajectory', but in the equilibrium case they are one and the same entropy.Chjoaygame (talk) 21:44, 21 December 2020 (UTC)Reply
The view that I am putting is not too orthodox. My view is that the Poincaré recurrence theorem is a sort of base or default case. Our friend the professor thinks that Poincaré recurrence is so out there that only an idiosyncrat would want to talk about it. For him, a Poincaré recurrence practically never happens, I guess. For me, for thermodynamic equilibrium, every point is a Poincaré recurrence. I think the real physics is that Poincaré recurrence belongs uniquely to, and uniquely characterizes, thermodynamic equilibrium. It expresses the symmetry of the state of thermodynamic equilibrium, why thermodynamic entropy is defined only for a state of thermodynamic equilibrium, and why a one-size-fits-all one-time entropy doesn't make sense for a general physical non-equilibrium process/state. Phil Attard's several-time entropy hierarchy makes Jeffreys–Jaynes sense, though it is hardly practicable as ordinary physics. I mean that for a proper non-equilibrium entropy, at least two times are needed in its specification.Chjoaygame (talk) 02:14, 22 December 2020 (UTC)Reply
An example of an extreme of where the time-invariant system can reach would be when all the particles except one are together in a corner, and the remaining particle has all the prescribed energy, or some such bizarre instantaneous microstate. The system is isolated, and so has a definite finite prescribed total amount of energy, not exceedable by fluctuations, which are not allowed when the total energy is prescribed. It cannot visit regions of phase space that require more energy than that. Beyond such bizarre instantaneous microstates, the remainder of phase space is inaccessible.Chjoaygame (talk) 05:38, 21 December 2020 (UTC)Reply
An isolated system, by definition, has zero fluctuations in energy. Yes, the microstate you describe is bizarre, but once the full energy particle collides with the motionless corner particles, the approach to equilibrium will begin. If the particle is bouncing back and forth between two walls, then it will never collide with the other motionless particles, and yes, the trajectory will not visit all neighborhoods. This brings up a fine point about Poincare recurrence: There are certain microstates which never repeat, but their total volume is zero, even for a finite system. It's like talking about the equilibrium configurations of a solid cube on a table. Balanced on a corner is equilibrium, but unstable, the slightest perturbation ruins it. You don't have to worry about it when rolling a die (which, by the way, is a deterministic process). Likewise, we cannot control or prepare the microstate, and so we don't have to worry about the trajectory beginning or stumbling into such a bizarre microstate, even though the process is deterministic. PAR (talk) 06:17, 21 December 2020 (UTC)Reply
The 'approach to equilibrium' is a pipedream derived by a mind projection fallacy from the way the statistical or Monte Carlo algorithm works. From the instant of onset of isolation until the instant of de-isolation, the equilibrium system is isolated. No point on the particular trajectory is privileged. Every point on each particular trajectory is a fully qualified candidate for Poincaré recurrence for its respective particular trajectory. All points are just ordinary points on their respective trajectories, all of equal status. This is the thing about Poincaré recurrence. Designating a point as 'non-equilibrium' is an arbitrary exercise. For equilibrium or non-equilibrium, the decisive thing is the whole trajectory.
A non-equilibrium trajectory may seem at an instant to 'intersect' an equilibrium trajectory, at least nominally. But an instant later the two phase spaces that seem to have hosted the 'coincidence' turn out to be different. The 'coincidence' is illusory, or so transient as to have practically no physical significance.Chjoaygame (talk) 09:28, 21 December 2020 (UTC)Reply

paragraph for ease of editing


First of all, we have to settle on the definition of a non-equilibrium system. Your description of a container with a heated wall may well have a time dependent Hamiltonian, I am not sure, but it is not an isolated system. I would like to restrict our discussion to isolated systems, in which case the Hamiltonian is not explicitly time dependent.

I would like to use the term "phase space" to mean that portion of the full space (e.g. R^6N) that is constrained by the volume and energy of the isolated system.

I think part of the point of our disagreement is that you are saying the statistical mechanical entropy of an isolated system is the log of the volume of the phase space (therefore constant) while I am saying it is only the volume covered by the microstates of the equilibrium macrostate. The two are practically equal for a "large" system, like a glass of water, which contains something like 10^23 molecules, but this is just a note, it doesn't resolve anything. My problem is that your statement does not translate to "the thermodynamic entropy of an isolated system is a timeless constant".

As in the mixing example, the instant after the wall is removed you have a single isolated system that is not in equilibrium. Strictly speaking, the instant after the wall is removed, the thermodynamic entropy is undefined. This contradicts your above statement. Also, the instant after the wall is removed, the system is represented by a point in the phase space. By the PRT, after a sufficiently long time, the trajectory of that point will return arbitrarily close to that initial point. This means that even after the system has equilibrated, it will spontaneously "disequilibrate" after a sufficiently long time (multiple recurrence times), rendering the thermodynamic entropy again undefined.

My statement that every trajectory fills the phase space (one color yarn) is not necessarily a result of the PRT. It is a result of the assumption that every microstate is "equally likely". It is Boltzmann's ergodic hypothesis. In non-probabilistic terms, it means that, over many, many recurrence times, every microstate in the phase space will be visited just as often as any other. The idea that there may be two colored yarns would mean that some microstates are never visited, and this would violate the previous statement. Consider this quote from the PRT page:

Nothing prevents the phase tube from returning completely to its starting volume before all the possible phase volume is exhausted. A trivial example of this is the harmonic oscillator. Systems that DO cover all accessible phase volume are called ergodic (this of course depends on the definition of "accessible volume").

It is my contention that the "accessible volume" must be the entire phase space, because otherwise every microstate would not be "equally likely", as defined non-probabilistically above.

PS - You say "But using the "probability" interpretation carries the regrettable risk of suggesting that the motion of the particles is in fact random". I will not make the mistake of assuming that a probability interpretation implies a random rather than deterministic trajectory thru phase space. PAR (talk) 06:30, 22 December 2020 (UTC)Reply
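The non-probabilistic reading of "every microstate is equally likely", namely that the time a trajectory spends in a region equals that region's fraction of the phase volume, can be illustrated with the simplest ergodic system. The sketch below again uses an irrational rotation of the circle as a stand-in (the arc and step count are illustrative choices):

```python
import math

# Toy version of "time spent in a region is proportional to its volume":
# for the ergodic rotation x -> (x + alpha) mod 1, the fraction of time an
# orbit spends in an arc equals the arc's length (its "phase volume").
alpha = math.sqrt(2) - 1
arc = (0.25, 0.40)                 # a region of the one-dimensional phase space
steps = 200000

x, hits = 0.1, 0
for _ in range(steps):
    x = (x + alpha) % 1.0
    if arc[0] <= x < arc[1]:
        hits += 1

time_fraction = hits / steps
volume_fraction = arc[1] - arc[0]  # = 0.15
print(abs(time_fraction - volume_fraction) < 0.01)
```

This is the time-average-equals-volume-fraction property that the ergodic hypothesis asserts for the many-particle case; the rotation map has it provably, which is why it serves as a clean toy.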

By your leave, I have shifted your comments because they seem to me to be about the Poincaré recurrence theorem rather than about my musings in the other section.
Yes, we have to sort out what we mean by an equilibrium and a non-equilibrium system. At present, we differ.
For me, the two kinds of system are radically physically different. As I read you, the difference is a fine point that is subjectively decided.
For me, it is not good to try to think of an isolated system as a non-equilibrium system. I think you will disagree rather strongly. For you, an isolated system can be in a non-equilibrium stage of development and eventually settle into an equilibrium stage of development. For me, such an evolution makes the terms 'equilibrium' and 'non-equilibrium' arbitrary. I accept that it is commonly taken your way.
For me, for a non-equilibrium system, there must be non-zero flow of some kind. It can be of matter or of energy or of both. That means that a non-equilibrium system for me is not isolated. The flows have to be between the system and its surroundings.
For you, it is ok to speak of a non-equilibrium system, with no internal partition, and with zero flows between the system and its surroundings. The non-equilibriarity is entirely within the isolated system. For me, that is a recipe for perpetual confusion, misunderstanding, and arbitrary distinctions, which I go so far as to think of as capricious.
I could carry on here and try to justify my view, but it may be that you will find it so unpalatable that it is not worth continuing the discussion. If you are willing to consider my reasons, please let me know, and I will continue.Chjoaygame (talk) 10:56, 22 December 2020 (UTC)Reply
Ultimately, I don't care about orthodoxy or commonly held views or what I presently find palatable, I only care about a logically sound, consistent argument. If you are of a like mind, then please continue.
PS - I may be online a lot less during the holidays.PAR (talk) 14:10, 22 December 2020 (UTC)Reply
Great. Thank you.
My reason, I guess, is just that I like to keep clear of arbitrariness. I feel that every 'cell' of phase space has one and the same weight. So I feel that a bizarre 'cell' has as much weight as a 'commonplace' cell. I don't see how to choose privileged or excellent cells. I suppose I have the Poincaré recurrence theorem in mind. As I recall it, it says that given a neighbourhood of a point on a trajectory, that trajectory will eventually go back to that neighbourhood. The trajectory will in general not actually re-enter and re-trace itself. (In general it won't be strictly periodic.) In that sense, all points on a given trajectory have equal status. So I see it as arbitrary to distinguish well-mannered from ill-mannered points.
The isolation has to start when the trajectory is somewhere, at the moment when the wall permeability is stopped, and no more flow can occur through it. I don't like to think of that somewhere as any worse or better mannered than any other somewhere. So I see the system as in its equilibrium state right from the moment when the wall permeability is stopped. I accept that this itself might be viewed as arbitrary or otherwise objectionable, but I can only say that to me it seems fair.
So, for me, every point on an equilibrium trajectory is, if you like, an 'equilibrium point'. We don't try to define the thermodynamic entropy at a point. We define it for a trajectory. It is trajectories that 'fill' regions of phase space. To earn the status of a 'non-equilibrium point', a point has to belong to a non-equilibrium trajectory.
I don't know if this counts as a reasonable argument?
So, for me, a non-equilibrium macrostate/process is comprised of non-equilibrium trajectories, and a non-equilibrium trajectory is comprised of 'non-equilibrium points', if you like. Specification of such things can be tricky.
Now we come to how many colours. I am happy with the idea that a trajectory can 'cover' a 'cell' without actually filling it, or even actually occupying a finite fraction of it. Trajectories are made of very thin thread, so thin that there is plenty of room in a cell for countless of them, of countless colours, to revisit a given cell for ever without intersecting each other. In a sense, mathematical entropy measures what infinitesimal fraction of a 'cell' is 'occupied' by such an infinity of revisits of a single trajectory. Very thin threads. But very many revisits. A trajectory is practically infinitely long, so its thinness doesn't stop it from 'occupying' some measurable fractional contribution to the extent of phase space. I accept that the foregoing is hairy. I just trust that it can be made reasonable by some analytic argument. I guess that the counter-argument is that the infinity of revisits of a single one-coloured trajectory is numerous enough to 'occupy' the whole of the accessible phase space. My view is that such questions are the bread and butter of topological analysts, and are above my pay-grade. I leave it up to them because I believe that they define various different kinds of mathematical 'entropy'. The real numbers are countless.
I like to live in a world in which equilibrium thermodynamic systems are isolated and have time-independent Hamiltonians, while non-equilibrium systems suffer maintained interaction, often including flows, with their respective surroundings, having much more tricky dynamics, including phase spaces that suffer changes in particle numbers and perhaps other indignities. Way outside our purview. So I can gladly concur with your wish "to restrict our discussion to isolated systems, in which case the Hamiltonian is not explicitly time dependent." Perhaps, for you, it might be unwise to say we won't consider non-equilibrium systems as I define them? My view entails that I allow myself to forget “that even after the system has equilibrated, it will spontaneously "disequilibrate" after a sufficiently long time (multiple recurrence times), rendering the thermodynamic entropy again undefined.” I would see remembering that kind of 'disequilibration' as an unnecessary intellectual burden, with no payoff. In my view, each equilibrium trajectory will include countless bizarre and ill-mannered points, but they won't be counted pejoratively, as of vicious statuses or demerits.
For the article, it might be wise not to unnecessarily explicitly emphasise this viewpoint, because to do so would be to tweak the tail of the tiger. It might be wise just to quietly omit mention of it. We could just forget to make explicit allowances for some things in some statements. So we can just say that the second law asserts a greater total entropy, not mentioning the reversible limiting case until we specifically want to consider it. I think the Poincaré recurrence theorem doesn't compel us to distinguish points of exceptional status. We are free to mention bizarre, deviationist, and otherwise idiosyncratic or 'out there' points, without labelling them pejoratively as 'non-equilibrium'.
I think that with this simplified approach, the Poincaré recurrence theorem definitely does not apply to non-equilibrium processes. In a non-equilibrium process, something is happening or flowing all the time, so the specification of a non-equilibrium macrostate is a rather tricky and complicated exercise, that we can mostly or entirely avoid in this article.
Non-isolated equilibrium macrostates are specified by intensive state variables. For example, an equilibrium macrostate specified by temperature, kept constant by diathermal interaction with a constant temperature reservoir in the surroundings, can suffer fluctuations of entropy. A tiny spot of heat can slip in or out. I am at this moment not sure how to deal with that. Average entropy, a function of state though not a state variable, becomes definitely greater for thermodynamic processes so specified, and fluctuations of entropy will be practically macroscopically negligible, except perhaps at critical points. Still not sure how to deal with it.
Enough for now.Chjoaygame (talk) 16:38, 22 December 2020 (UTC)Reply
It seems to me we are dealing a lot with a semantic problem, not a physical problem. Semantic problems don't deal with physical content, they deal with labels attached to concepts, and arguing over semantics is a distraction from the physics, but necessary for communication.
We agree that there are what you call "bizarre and ill-mannered points" in the phase space of an isolated system, and what I call "non-equilibrium" points of that system. We agree that all points of phase space are equivalent in the sense that, physically speaking, none are more "attractive" than any other point. However, you attach emotional weight to the term "disequilibrium" by saying "... they won't be counted pejoratively, as of vicious statuses or demerits", preferring the less pejorative "bizarre and ill-mannered". I am totally opposed to attaching this kind of emotional weighting to physical terms. If terms are used that draw a distinction, I am (or at least aspire to be) totally blind to any emotional weight carried by those terms. I prefer to use the most common terms when discussing something scientific, rather than having to "translate" on the fly, but if that causes a lack of communication, then I will adopt new terms.
Ok, for the sake of communication, let's use the terms "non-equilibrium" as you use it, "equilibrium" as I use it, in other words an unchanging macrostate in the system, and use the term "bizarre" for what you call the "bizarre and ill-mannered" states of an isolated system. For an isolated system, you stop referring to "bizarre" states as "equilibrium states", and I stop calling "bizarre" states "non equilibrium states".
So your statement "So I see the system as in its equilibrium state right from the moment when the wall permeability is stopped." will be changed to "So I see the system as being in a bizarre state right at the moment when the wall permeability is stopped."
Also, your statement "... a world in which equilibrium thermodynamic systems are isolated and have time-independent Hamiltonians, while non-equilibrium systems suffer maintained interaction," becomes "... a world in which equilibrium and/or bizarre thermodynamic systems are isolated and have time-independent Hamiltonians, while non-equilibrium systems suffer maintained interaction,"
These two above statements may not be "palatable" to either of us, but we have to make these distinctions in order to have a conversation.
Then there is the problem of trajectories in the phase space. Strictly speaking, in classical phase space, the trajectories are not just thin, they are one dimensional, infinitely thin, if you will. Let's define a trajectory to be just that, a one-dimensional path thru phase space. (this assumes no "hard" collisions, in which the trajectory is jumping discontinuously from point to point). The PRT does not deal with trajectories per se, but rather with neighborhoods and finite volumes of phase space. A single microstate is a single point in phase space and the PRT says that any trajectory beginning at that point will return to a neighborhood of that point, and the smaller the neighborhood (the less volume it has), the longer that time will be. The Poincare recurrence time is a function of the volume of the neighborhood you choose. The path of a neighborhood thru phase space is referred to as a "phase tube". As they move thru phase space, the time-invariant Hamiltonian equations of motion can be shown to preserve the volume of the initial neighborhood. For a one-dimensional trajectory, the recurrence time is infinite. A finite neighborhood sweeps out a "phase tube" and the recurrence time is finite. I repeat the quote from the PRT page, referring to an isolated system:

Nothing prevents the phase tube from returning completely to its starting volume before all the possible phase volume is exhausted. A trivial example of this is the harmonic oscillator. Systems that DO cover all accessible phase volume are called ergodic (this of course depends on the definition of "accessible volume").
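The harmonic oscillator case can be checked in a few lines of code (a sketch of my own; the unit mass, unit frequency, and choice of integrator are conveniences, not anything from the PRT page):

```python
import math

# Unit-mass, unit-frequency harmonic oscillator, H = (p**2 + q**2)/2.
# Hamilton's equations are dq/dt = p, dp/dt = -q; every trajectory is a
# circle in the (q, p) plane, so the state returns to its starting point
# after one period, 2*pi: the phase tube closes on itself.

def step(q, p, dt):
    # symplectic (semi-implicit) Euler: preserves phase-space area exactly
    p = p - q * dt
    q = q + p * dt
    return q, p

q, p = 1.0, 0.0
dt = 1e-4
for _ in range(round(2 * math.pi / dt)):  # integrate over one period
    q, p = step(q, p, dt)

print(q, p)  # back close to the start, (1.0, 0.0)
```

Because the integrator is itself area-preserving, the clean recurrence here is a property of the map, not a numerical accident.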

I think the idea of a one-dimensional thread is a better analogy than a length of yarn. We can use the idea of possibly different colored threads to describe various trajectories in phase space. A phase tube is a bundle of threads. The time-independent Hamiltonian equations of motion mean that the threads do not diverge from each other.
Can you rephrase your ideas in these terms? I don't believe I have introduced any assumptions above, only asking for increased distinctions. We must not say "I prefer not to make that distinction" but must, instead, say "I can prove that that is a distinction without a difference". PAR (talk) 20:53, 22 December 2020 (UTC)Reply

phase tubes


Thank you for your care and patience in this. I do see what you have written as careful and patient, and I do value those virtues.

Yes, you are right that I have used moralistic and aesthetic terms that have no place here. I am happy to delete them from further discussion. I agree that it was a mere rhetorical device without physical content. I used the terms to colour up my view that we are talking about a distinction that I think is poorly defined or arbitrary in general.

So your statement "So I see the system as in its equilibrium state right from the moment when the wall permeability is stopped." will be changed to "So I see the system as being in a bizarre state right at the moment when the wall permeability is stopped." Agreed.

I agree that trajectories are curves in the sense that they have no width. They are like geometrical lines or points, having no area or volume. I agree with your view that "the idea of a one-dimensional thread is a better analogy than a length of yarn. We can use the idea of possibly different colored threads to describe various trajectories in phase space."

A thing that I find unpalatable is the idea of a "phase tube", as "a bundle of threads." Perhaps I can pin my case on objecting to "The time-independent Hamiltonian equations of motion mean that the threads do not diverge from each other." Perhaps I am mistaken, but I think that an essential characteristic of deterministic chaos is that the threads do routinely diverge from each other. I am saying that this happens routinely with time-independent Hamiltonian equations of motion when the dimensionality of 'phase space' is above two. It is impossible when the dimension is two, common when it is three (forget that such is not a mechanical case), and getting close to generic when it is more. Nearly head-on collisions are an example.

It occurs to me, perhaps tangentially, that this may be a factor that influences traditional thinking. People intuit 'deterministic chaos, what nonsense!' I am not suggesting that you might intuit so. But I think it fair to say that I did not learn of deterministic chaos till some years after my undergraduate days; perhaps that dates me, but I am talking about traditional thinking.

Where am I intending to go with this? I am at present going towards trying to turn the burden of proof to 'I can prove that that is a distinction with a physical difference in the general case.' In special cases, yes, it might make sense. But even then it seems to me like the cases that are routinely put in terms of a pack of cards. Such cases do suggest the 'order-disorder' interpretation, and may to some degree justify it. But I think they are so special as to be practically arbitrary. In other words, I am talking about a physically general justification for the classification of points or segments in a trajectory as 'equilibrium' or 'non-equilibrium'.

At this stage of our discussion, I definitely don't want to actually ask for such a justification. In the reading of Jaakko Hintikka, I would call it petitio principii for me to do so: prematurely asking the principal question. At this stage of our discussion, I would like just to deal with my possibly mistaken or irrelevant belief that the threads routinely diverge from one another, my distaste for the 'phase tube' idea.Chjoaygame (talk) 23:38, 22 December 2020 (UTC)Reply

The phase tube concept arises from Liouville's theorem and is a necessary condition for the PRT. To quote from the PRT page:

For instance, all Hamiltonian systems are volume-preserving because of Liouville's theorem.

In other words, if you pick a neighborhood in phase space at time zero, e.g. a small 6D sphere, then every point in that neighborhood will be the t=0 point on a trajectory. At some time "t" later, each of those trajectories will be at a particular point in phase space. Liouville's theorem states that these points at time t will still form a neighborhood (not be scattered all over the place) and the volume of that neighborhood will be the same as the volume of the original sphere. However, the time t neighborhood may be distorted, no longer a sphere. I interpret this qualitatively as saying the tight bundle of threads near t=0 will not diverge from each other, join, or unravel. They may stretch, with different threads stretching differently, but nearby threads stretch more or less together. This tight bundle of threads is called a phase tube. Maybe the quote from the PRT page:

Nothing prevents the phase tube from returning completely to its starting volume before all the possible phase volume is exhausted. A trivial example of this is the harmonic oscillator. Systems that DO cover all accessible phase volume are called ergodic (this of course depends on the definition of "accessible volume").

makes more sense.
I'm not sure, but I don't think this denies the possibility of chaos. The original sphere can get distorted, stretched and twisted in such a way that two points that were nearby to begin with separate exponentially in time. That doesn't require divergence. PAR (talk) 08:14, 23 December 2020 (UTC)Reply
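This point, that nearby trajectories can separate exponentially under a map that preserves area exactly, can be illustrated with the Chirikov standard map (a stand-in example of my own choosing, not anything from the discussion or the PRT page):

```python
import math

# Chirikov standard map: an area-preserving map of the torus that is
# strongly chaotic for large kick strength K. Its Jacobian determinant
# is exactly 1, so phase-space area is preserved, yet nearby points
# separate roughly exponentially.
K = 7.0

def standard_map(theta, p):
    p = (p + K * math.sin(theta)) % (2 * math.pi)
    theta = (theta + p) % (2 * math.pi)
    return theta, p

# Follow pairs of trajectories that start 1e-9 apart and record the
# largest separation seen over a few dozen iterations.
max_sep = 0.0
for (t0, p0) in [(1.0, 1.0), (2.0, 0.5), (4.0, 3.0)]:
    a, b = (t0, p0), (t0, p0 + 1e-9)
    for _ in range(30):
        a = standard_map(*a)
        b = standard_map(*b)
        max_sep = max(max_sep, math.hypot(a[0] - b[0], a[1] - b[1]))

print(max_sep)  # vastly larger than the initial 1e-9
```

The separation grows by many orders of magnitude even though every step is exactly volume-preserving, which is the sense in which exponential divergence of trajectories and Liouville's theorem coexist.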
Thank you for that.
I recall reading textbooks with pictures of typical developments of initial neighbourhoods. Some of them show the initial neat and tidy neighborhood developing into a sprawling thing of the same area, with thin curvy spiderlike legs. Yes, the 'area' is preserved, but not the shape. Often it is nice to use a neatly convex neighbourhood, but it is not part of the definition of a neighbourhood that it not be spidery shaped. I am not closely clued up on the proof of Poincaré's recurrence theorem, but, for the general case, I don't think it can rely on keeping a nice convex neighbourhood nice and convex. Yes, if a law of motion preserves convexity, then for it, the offered proof of the theorem will work. But I think most laws of motion will not comply with that condition. I think that when two initially nearby trajectories diverge exponentially, for chaos, they don't keep to tidy convex neighborhoods. 'Area' ('volume') preserving yes, but not convexity preserving. I think convexity preservation pretty much rules out chaos. In other words, I think that exponential separation does require 'divergence'. I am pretty much equating 'no divergence' with 'convexity preservation'. For example, I think near head-on collision is a typical form of chaos generation. As soon as I saw the phase-tube idea in the Wikipedia article, I thought 'this looks too convenient'. On reflection, I think it is too convenient. I don't understand why you say "two points that were nearby to begin with separate exponentially in time. That doesn't require divergence." Perhaps a glance or two at some textbooks may help. I think the threads can easily 'unbundle'. I will report back.Chjoaygame (talk) 12:06, 23 December 2020 (UTC)Reply
This is the sort of thing I had in mind. I will still report back.Chjoaygame (talk) 12:22, 23 December 2020 (UTC)Reply
Looking further at your above comment, I get the feeling that you would like to establish your position by use of the 1984 Newspeak stratagem. I get the feeling, though you don't actually say so, that, when I mean 'transient egregiously inhomogeneous state', you would like me to call it a 'non-equilibrium state'. When I mean 'nearly persistent nearly homogeneous state', you would like me to call it an 'equilibrium state'. I would prefer it if, when I mean 'transient egregiously inhomogeneous state', you would let me call it a 'transient egregiously inhomogeneous state'. I would prefer it if, when I mean 'nearly persistent nearly homogeneous state', you would let me call it a 'nearly persistent nearly homogeneous state'.Chjoaygame (talk) 19:32, 23 December 2020 (UTC)Reply

further analysis


Looking at some textbooks, I haven't seen the term 'phase tube'. I have seen statements of 'Poincaré's recurrence theorem' in terms not of throughout-duration descriptions of continuous trajectories, but of discrete-time recurrences, without regard to what happens between the recurrences. As for trajectories that start with instantaneous microstates near to one another and, after unspecified, and possibly very diverse, respective long times, return to near where they started, I haven't seen suggestions that they should stay near to one another throughout the respective various long times, as is suggested by the 'phase tube' conception. Could the phase tube conception for the theorem perhaps be Wikipedia original research? Chjoaygame (talk) 19:32, 23 December 2020 (UTC)Reply

We are using "divergence" in two different ways. We can think of a trajectory as a point moving through phase space at possibly different velocities at different times. In the thread analogy, we can place marks on the thread indicating equally spaced time intervals and the distance between the marks may not be constant. At some particular time, a neighborhood in phase space is a single closed surface and all the space (microstates) inside that closed surface. Those microstates are points on a subset of trajectories that pass through that volume element at the same time. This is very like the description of a fluid element in fluid mechanics. Liouville's theorem states that the fluid is incompressible - the volume of a fluid element never changes. In fluid mechanics, this is equivalent to saying that the divergence of the velocity field is zero. That's how I was using the term "divergence", not as a measure of how far apart two points on the initial volume drift away from each other. Yes, the volume element can be distorted in all different ways, including loss of convexity, as it moves through phase space, without its volume ever changing and with the divergence of the "velocity" always being zero. We have to use two words for what each of us calls "divergence". Let's use "spatial divergence" to mean what you meant, and "velocity divergence" to mean what I meant.
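The "velocity divergence" claim can be checked directly for a concrete Hamiltonian. A sketch, using a pendulum Hamiltonian as an assumed, illustrative example and crude finite differences:

```python
import math

# An assumed concrete Hamiltonian: a pendulum, H(q, p) = p**2/2 - cos(q).
# Hamilton's equations give the phase-space "velocity field"
# v = (dq/dt, dp/dt) = (dH/dp, -dH/dq), and the incompressible-fluid
# reading of Liouville's theorem is that this field has zero divergence:
# d/dq(dH/dp) + d/dp(-dH/dq) = 0, by equality of mixed partials.

def H(q, p):
    return p * p / 2 - math.cos(q)

h = 1e-4  # finite-difference step

def v(q, p):
    # central differences for (dH/dp, -dH/dq)
    dHdp = (H(q, p + h) - H(q, p - h)) / (2 * h)
    dHdq = (H(q + h, p) - H(q - h, p)) / (2 * h)
    return dHdp, -dHdq

def divergence(q, p):
    dvq_dq = (v(q + h, p)[0] - v(q - h, p)[0]) / (2 * h)
    dvp_dp = (v(q, p + h)[1] - v(q, p - h)[1]) / (2 * h)
    return dvq_dq + dvp_dp

print(divergence(0.7, -1.3))  # ~0, up to rounding error
```

The same cancellation of mixed partial derivatives goes through for any smooth Hamiltonian, in any number of degrees of freedom, which is why the phase-space fluid is incompressible in general.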
Ok, then, trajectories may spatially diverge, we agree. But they do not "unravel". We define a neighborhood as a volume element, a single closed surface and all the points inside it, and at any time later, that volume element will have evolved into another volume element defined by a single closed surface with all the points inside it, with the same volume as the original. All the trajectories passing through the first volume element pass through the second and any that do not pass through the first will not pass through the second. The volume element will not have split into two or more pieces. The second volume element will have no missing points because a trajectory has no end point and no beginning point. That is what is meant by a "phase tube". I think you can see that a phase tube is a set of trajectories, and therefore has no beginning point and no end. The ergodic assumption is that any phase tube will cover the entire phase space. In the phase space of a harmonic oscillator, the trajectories are circles and do not cover the entire phase space, so it is not an ergodic system. In the statistical mechanical representation of a thermodynamic system, which is assumed ergodic, you might wish to say that a trajectory that has no beginning and no end and is confined to a closed volume of the R^6N space must form a tangled loop which may or may not cover the phase space. This is mathematically intractable, which is why the PRT says instead that any trajectory will return arbitrarily close to itself given enough (finite) time. I suppose you could then say that, with the ergodic assumption, any trajectory will close upon itself in infinite time.
Now we can talk about whether there are two or more separate classes of trajectory loops which remain forever separate from each other. PAR (talk) 18:34, 23 December 2020 (UTC)Reply
We were writing at nearly the same time.
I have not got the impression that textbooks say that phase tubes hold trajectories together so that they do not "unravel". 'Volume' preservation, yes. Recurrence, yes. Prevention of 'unraveling', I don't see. I don't see talk of neighbourhoods. I see talk of sigma algebras in phase space. I haven't yet worked out how those two concepts relate. More investigation needed.Chjoaygame (talk) 19:50, 23 December 2020 (UTC)Reply
The idea of the Poincaré recurrence theorem, that till now I have had, and still have after glancing at some books, is that a trajectory, counted as starting at a point in phase space, will in a suitable finite time return to pass through any neighbourhood of that point. I haven't yet picked up the idea that the volume element won't split up into several pieces. Ergodicity is a different matter. More investigation needed.Chjoaygame (talk) 20:05, 23 December 2020 (UTC)Reply
Agreed. I am basing what I said upon my understanding of phase space as, by Liouville, containing an incompressible (i.e. volume-preserving) frictionless fluid, and then drawing conclusions from my understanding of fluid mechanics. I am not drawing it from an understanding of phase space directly.
I think it is correct to say that no trajectory has a beginning point or an end point. It may have a t=0 point that can be taken as an "initial point", etc. but Hamilton's equations of motion are time reversible and will describe a trajectory for t<0 as well. No trajectory will arrive at any microstate and stop there and by reversibility, there will be no stationary trajectories that suddenly begin moving.
I think it is correct to say that the PRT then says that every trajectory is a loop of possibly infinite length, being careful with that word "infinity". Finally there is ergodicity, which says that every phase tube covers the space. I'm not sure how to state that in terms of trajectories, or if that can even be done. I think it is correct to say that every trajectory in an ergodic system is infinite in length.
In fluid mechanics, the only way for a volume element to split is if there is a singular point (like the tip of an aircraft wing) and I think it is correct to say that there are no singular points in phase space worth worrying about, all microstates being equal. Try to draw the trajectories for a splitting volume element and also draw the trajectories that fill the region between the split pieces. PAR (talk) 00:22, 24 December 2020 (UTC)Reply
We are writing at the same time as each other. This comment was written before I read your just preceding one.
I am not finding it too easy to get a brief and uniformly stated account of 'the Poincaré recurrence theorem'.
A textbook, Tél & Gruiz (2006), Chaotic Dynamics: an Introduction Based on Classical Mechanics, Cambridge University Press, ISBN 9780521547833, on page 261 tells me informally
an ensemble of particles initially bound to a certain region spreads as time passes in such a way that its shape ceases to be compact, i.e. it grows offshoots, and finally forms a 'uniform' net of very thin filaments over the entire phase space.
I didn't find them actually stating a 'Poincaré recurrence theorem'.
That doesn't yet get us to where we want to go. More investigation needed.
One approach to our theorem is by way of measure theory. The 'sets' of interest are 'measurable sets'.
What I recall of measurable sets is that they are a multifarious crowd. Though they mustn't have too many outlying isolated points, and mustn't have too many isolated missing points from an otherwise 'continuous' subset (I am not going to try to say what I mean by that), they don't have to be compact, let alone convex. They are diverse. So I think they are not suitable candidates for status as definers of 'phase tubes' such as are contemplated in the Wikipedia article on our theorem.
An informal statement of the theorem is offered by Aaron Smith (2014), https://digitalcommons.coastal.edu/honors-theses/23/ :
the system will, within a finite amount of time, return to a state arbitrarily close to its initial state.
That is how I think of Poincaré recurrence.
Aaron Smith writes:
Poincaré Recurrence Theorem. If X is a bounded space with measure μ and T is a measure-preserving transformation, then for any set with positive measure A ⊆ X, the subset B of points that never recur to A has measure zero [4].
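Smith's informal statement can be exercised on a toy measure-preserving system, rotation of the circle by an irrational angle (my own choice of example; it preserves Lebesgue measure, so the theorem applies):

```python
import math

# Rotation of the circle, x -> x + alpha (mod 1), preserves Lebesgue
# measure, so Poincaré recurrence applies: the orbit of x0 returns within
# any eps of x0 in finitely many steps, although (alpha being irrational)
# it never returns exactly.
alpha = math.sqrt(2) - 1  # an irrational rotation number
x0, eps = 0.2, 1e-4

x, steps, returned = x0, 0, False
while steps < 10**6:  # safety cap; the theorem promises a finite return time
    x = (x + alpha) % 1.0
    steps += 1
    if min(abs(x - x0), 1.0 - abs(x - x0)) < eps:  # distance on the circle
        returned = True
        break

print(steps, returned)  # a finite recurrence time; it grows as eps shrinks
```

Shrinking eps makes the return time jump up sharply, which is the one-dimensional analogue of the recurrence time growing as the chosen neighborhood's volume shrinks.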
On page 2, Barreira & Valls (2013), Dynamical Systems: an Introduction, Springer, London, ISBN 9781447148340, write
Then we introduce the notion of the topological entropy of a dynamical system (with discrete time), which is a measure of the complexity of a dynamical system from the topological point of view.
On page 200, they write
Poincaré’s recurrence theorem (Theorem 8.1) says that for a finite invariant measure almost all points of a given set return infinitely often to this set.
On page 188, they write
In this section we show that any finite invariant measure gives rise to a nontrivial recurrence. More precisely, for a finite invariant measure almost every point of a given set returns infinitely often to this set.
Theorem 8.1 (Poincaré’s Recurrence Theorem) Let T : X → X be a measurable map and let μ be a finite T-invariant measure on X. For each set A ∈ 𝒜, we have
μ({x ∈ A : T^n(x) ∈ A for infinitely many n}) = μ(A),
where 𝒜 is a sigma-algebra of measurable sets.
Abraham & Marsden (1987), Foundations of Mechanics, 2nd edition, Addison–Wesley, Redwood City CA, ISBN 080530102X, on page 208 write
Exercise 3.4F. (a) (Poincaré Recurrence Theorem). Let M be a compact manifold, X a smooth vector field on M with flow F_t, and Ω an X-invariant volume. For each open set U in M and T ≥ 0, show that there is an s > T such that F_s(U) ∩ U ≠ ∅. [Hint: Since U, F_T(U), F_2T(U), … have the same measure, they cannot be disjoint if n is large enough.]
I guess that M is the accessible phase region.
So far, I am not yet finding support for 'phase tubes' as key to our theorem. I am not saying that this is much help for us. I am just reporting progress.Chjoaygame (talk) 03:14, 24 December 2020 (UTC)Reply
Now replying to your just previous comment.
I think our theorem is about 'dissipative' motion, something different from hydrodynamical motion. As I understand hydrodynamic motion, it is expressed in partial differential equations. For us, chaotic motion is generic, or almost generic, and in phase space it is expressed in ordinary differential equations. For us, phase space has many virtually singular points. They virtually terminate omega-sets, and are the keys to hyperbolic sets of trajectories. I am not on top of the details. Arrowsmith & Place (1990), An Introduction to Dynamical Systems, Cambridge University Press, Cambridge UK, ISBN 0521316502, talk about them on page 154 et seq.. The physics means that we have to consider many nearly head-on collisions.Chjoaygame (talk) 12:14, 24 December 2020 (UTC)Reply
So that we are clear on the concept of "measure", it can be understood in terms of volume or probability. In the interval [0,1] on the real number line, the interval [1/3, 2/3] has measure 1/3, it is the "volume" of the interval. The point 1/2 has measure zero, its "volume" is zero. The rational numbers in [0,1] have measure zero. The set of numbers computable by a finite computer program (such as e or pi) has measure zero. Probabilistically, if you have a random generator of real numbers, after many many trials, very very close to 1/3 of them will land in the interval [1/3,2/3], none of them will be 1/2, none of them will be rational, and none of them will be computable by a finite computer program. For an infinite number of trials, "almost all" of them will be non-computable and therefore irrational. That's the same as saying non-computable numbers in [0,1] have measure 1.
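That probabilistic reading can be sketched in a few lines (with the caveat that floating-point draws are really dyadic rationals, so the measure-zero claims are only illustrated, not demonstrated):

```python
import random

# Monte Carlo reading of "measure" on [0, 1]: about 1/3 of uniform draws
# land in [1/3, 2/3], and the measure-zero point 1/2 is never hit in
# practice. (Since the draws are dyadic rationals, the claims about
# rationals and non-computable reals can only be gestured at here.)
random.seed(0)
N = 100_000
draws = [random.random() for _ in range(N)]
in_middle = sum(1 / 3 <= x <= 2 / 3 for x in draws)
exact_half = sum(x == 0.5 for x in draws)

print(in_middle / N, exact_half)  # roughly 0.333, and 0
```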
You list 5 statements of the PRT. Numbers 1, 3 and 5, I understand. #2 is qualitative. #4 I would have to study up on. In 1, 3, and 5, "measure preserving" means "volume preserving", "measure zero" means it is not worth worrying about, and "almost all" is code for "measure 1".
If you view the phase space as containing a fluid, then I am certain it is not a dissipative fluid. The Hamiltonian equations of motion for a trajectory in phase space are equivalent to those of a non-dissipative fluid. Phase space may have many singular points, but they are not worth worrying about, they have measure zero. The probability of a trajectory encountering one is zero. You may devise a scenario where a volume element splits, but it will have measure zero. The probability of it occurring is zero. This means that the concept of a phase tube is valid, even if that "tube" becomes a crazy-looking shape after a while, it is still contained by a single closed surface.
I have no idea what is meant by "They virtually terminate omega-sets, and are the keys to hyperbolic sets of trajectories". PAR (talk) 17:44, 24 December 2020 (UTC)Reply
Happy Christmas. It is Christmas Day here. I am shortly to drive some hundred kilometres to my sister's for that. I must be brief.
In my undergraduate days I did a course in measure theory and passed the exam. I am no expert in the topic, but I am more or less familiar with it.
I now take your point that 'phase-tubes' make sense, and are valid. Till now, I didn't give enough weight to the craziness. Yes, they become very crazy. I guess for me their craziness dominates. Now allowing for that, I give you the phase tubes. Yes, they give a proof of the Poincaré recurrence theorem.
For ordinary differential equations, I am more or less familiar with 'phase' space in low (2, 3, 4) dimensions. Not with higher dimensions, but the principles are the same, with chaos becoming overwhelming in higher dimensions. Chaos is impossible in 2 dimensions. It is common, though not necessary, in 3 dimensions, and practically universal in 4 or more dimensions. I am not clued up on 'dissipative' motion, and I think you are right about Hamiltonian motion not being dissipative. I made a mistake there.
It is some time since I was busy with ordinary differential equations, and I am not quick and easy with the higher dimensional thinking. But the concepts of omega-sets and such things as hyperbolic sets of trajectories are the staples of the topic. Hyperbolic points are the key to chaos. To present arguments about that, I will need to go carefully and think and learn. I will need to rely on you having a fair idea of those concepts. They are set out in books on bifurcation theory, and on further topics. The classic is Andronov, Leontovich, Gordon, Maier (1967, English 1973), Theory of Bifurcations of Dynamic Systems on a Plane, various distributors, printed in Israel, Israel Program for Scientific Translations, Jerusalem, London, etc., ISBN 0470031948, paper-and-ink printed version hard to get because it is routinely missing, I guess stolen, from libraries (perhaps things have improved?), perhaps unobtainable as a computer file? There are many other books on the topic. I think I will do well to find and read more recent books than I have here now. I guess the topic has developed and simplified and clarified over time.
I guess your leading interest will be in 'entropies', 'dimensions', and suchlike, for instantaneous microstates. I will say that such are important, interesting, valid, and relevant for us, but are not exactly thermodynamic entropy. Ergodic theory links them to thermodynamic entropy, which I contend belongs to trajectories. My newly added "character" sub-heading flags them for future discussion.
Must go now, already late for my sister's.Chjoaygame (talk) 00:46, 25 December 2020 (UTC)Reply
This morning I was perhaps carried away with Christmas bonhomie.
My worry about the 'phase-tube' is that it seems to be defined as the sequelae of a neighbourhood, while the measure theoretical approach seems to define it as the sequelae of an element of the sigma-algebra of measurable sets. I think those two things are significantly different. The term 'phase-tube' is comfortable for the sequelae of a convex set, but not for me for those of an arbitrary element of the sigma-algebra of measurable sets.
It seems I need to get a better idea of how you view a 'neighbourhood'. For me, the default understanding is that a neighbourhood (of a point) is a simply connected bounded set (that properly contains the point — that means that the point is not a boundary point of the neighbourhood). My impression is that the 'phase-tube' idea would prefer the neighbourhood to be convex, but that isn't actually stated, and perhaps isn't intended? My student-days textbook tells me that an open set in a metric space can be a union of open sets. As I read that, it allows an open set to have well separated parts. But I suppose that a neighbourhood is not allowed to have well separated parts. That is to say, a neighbourhood is required to be a connected set. And I have a feeling that it is not allowed to have holes in it.Chjoaygame (talk) 16:45, 25 December 2020 (UTC)Reply
Later posted note.
You wrote above: “At some particular time, a neighborhood in phase space is a single closed surface and all the space (microstates) inside that closed surface.”
It seems that my default idea of a neighborhood (neighbourhood) was assuming too much. I may be confused here. I wrote just above: "But I suppose that a neighbourhood is not allowed to have well separated parts. That is to say, a neighbourhood is required to be a connected set. And I have a feeling that it is not allowed to have holes in it." Now I have checked in a book by Dixmier, General Topology (2010). He writes
1.3.1. Definition. Let X be a topological space and let x ∈ X. A subset V of X is called a neighborhood of x in X if there exists an open subset U of X such that x ∈ U ⊂ V.
I couldn't find Dixmier defining a term 'simply connected', though that term comes from my memory; I think it means 'connected with no holes'.
It seems that I was not following Dixmier when I assumed that a neighbourhood must be connected and have no holes. I have an impression that the 'phase-tube' idea also assumes that?Chjoaygame (talk) 10:48, 26 December 2020 (UTC)Reply
In contrast, it seems to me that an element of the sigma-algebra of measurable sets, such as is used in the measure theoretic proof of the Poincaré recurrence theorem, is allowed to consist of the union of infinitely many pairwise non-intersecting measurable sets. It may have gaps between the constituent measurable sets, and there may be holes in the constituent sets. I don't see why the measure-theoretically defined tube that starts as an arm, developing peripheralwards into a 'hand', shouldn't split into 'fingers' that don't rejoin into another 'hand'. I don't see why it can't start having already split into infinitely many 'fingers'. Not only can the initial measurable set consist of a union of disjoint separate sets, but also even the separate sets can develop as separate fingers. I don't see that the measure theoretic proof has anything to permanently glue the parts of the neighbourhood together?
I think that the existence of the fingers and gaps contradicts the most obvious character of the 'phase-tube' idea, the "tight bundle"? Perhaps that most obvious character isn't logically necessary. But it does seem to be a major selling point for the 'phase-tube' idea.
As far as I can see, the measure-theoretic proof can go through unimpeded by all those splits. But the splits don't fit with the 'phase-tube' as a "tight bundle" of trajectories. Nor do they fit with the idea "that the threads do not diverge from each other." Nor with the “won't "unravel"” idea. For those ideas, something more would be required than the measure theoretic proof.
I don't know how important for you is the 'tight-bundling' idea. Perhaps you don't need it? For a 'phase-tube' to split into pieces, it would need to be chopped with scissors at some time. At this stage, I am not thinking of that. I am thinking of the 'phase-tube' progressively splitting like fingers coming from a hand or like tentacles from the body of an octopus, the body representing early times, the tentacles later times; yes, the tentacles would need to stay thick, and not taper like those of an octopus. But it wouldn't actually be cut into segments.
Quoting: "This means that the concept of a phase tube is valid, even if that "tube" becomes a crazy looking shape after a while, it is still contained by a single closed surface."
Summarising: The 'phase-tube' will not actually be cut at some time with scissors, so it will not actually be cut into "pieces". The continuity of trajectories is not interrupted by the 'phase-tube' dividing itself into ever more numerous, ever finer fingers, with 'volume' preservation.
'Volume' preservation is not too simple to see in a 'phase-tube' in a many-dimensional phase space. The preserved 'volume' in phase space is at an instant, and is, say, 6N-dimensional. To see it in the context of a 'phase-tube', one thinks of the 'phase-tube' as (6N+1)-dimensional, cut in cross-section at a time, with the cross-section being the 6N-dimensional phase space 'volume' at that section-time. The 'surface' of the preserved 'volume' is (6N-1)-dimensional. The section cuts every trajectory, but is only a mathematical sampling procedure, that doesn't constitute an actual breach of continuity of any of the trajectories. There has been nothing to prevent "unbundling" or "unraveling". As a cross-section sample, the preserved 6N-dimensional phase 'volume' can well appear split into many separate 6N-dimensional sub-'volumes', each bounded by its own (6N-1)-dimensional surface, though this doesn't terminate or originate any trajectory in the unsampled actual (6N+1)-dimensional 'phase-tube'. The routine textbook picture of the spidery development of the original neighbourhood assumes something that is not necessarily granted. It rightly illustrates 'volume' preservation, and rightly suggests some 'craziness', but in the end, not enough 'craziness': it arbitrarily imposes an unmandated no-fingers condition. I hadn't previously twigged to this, so it comes as useful news to me.
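The 'fingers' picture can be given a crude numerical illustration: push a small blob of initial conditions through an area-preserving chaotic map (the Chirikov standard map again, a stand-in of my own, not a thermodynamic system) and count how many cells of a coarse grid the fixed-time cross-section occupies:

```python
import math, random

# Push a blob of initial conditions through the (area-preserving, chaotic)
# Chirikov standard map and count how many cells of a coarse grid the blob
# occupies at fixed times. The count grows enormously even though the map
# preserves area exactly: the blob draws out into many thin, well-separated
# filaments, a cross-section that looks like scattered 'fingers'.
K = 7.0

def standard_map(theta, p):
    p = (p + K * math.sin(theta)) % (2 * math.pi)
    theta = (theta + p) % (2 * math.pi)
    return theta, p

random.seed(1)
pts = [(1.0 + 0.05 * random.random(), 1.0 + 0.05 * random.random())
       for _ in range(2000)]  # a blob of side 0.05

def occupied_cells(points, n=30):
    # number of occupied cells in an n-by-n grid on the torus
    return len({(int(t / (2 * math.pi) * n), int(p / (2 * math.pi) * n))
                for t, p in points})

before = occupied_cells(pts)
for _ in range(15):
    pts = [standard_map(t, p) for t, p in pts]
after = occupied_cells(pts)

print(before, after)  # a handful of cells at first; hundreds afterwards
```

The cell count is only a proxy, of course: it shows the cross-section scattering over many separated regions, while exact area preservation holds at every step.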
I think this makes sense, though it took me a fair while to work it out.
So I think the 'phase-tube' idea can be shoehorned home, but I think it doesn't give a good picture of the measure-theoretic proof.
Personally, I would prefer a proof that starts with a single trajectory at some arbitrary instant of time and follows that trajectory for long enough. Such may be mathematically difficult or perhaps intractable. But if possible, it would be more enlightening. Nowadays, as a practical matter, the mathematical intractability is often dealt with by doing numerical integrations with computers, literally following trajectories. Not the most desirable way, but persuasive enough for many.
The Poincaré recurrence theorem is not the last one in the book on this topic. There is at least another one by Birkhoff. I haven't nutted out exactly what it says, or how it is proved.Chjoaygame (talk) 16:45, 25 December 2020 (UTC)Reply

stochastic radiation

This may be the place for me to recall my previous comment, that I suspect that inelastic collisions are important for thermodynamic equilibrium. In principle, inelastic collisions can still be Hamiltonian. But I suspect that for thermodynamic equilibrium, they have to be quantal, with the possibility of excitation of molecules that radiate stochastically, not deterministically. For thermodynamic equilibrium, the co-occurrence of the Maxwell-Boltzmann and Planck distributions is essential, not just a curious coincidence. As I see it now, the stochasticity of radiation will destroy the purely Hamiltonian character of a strictly classical dynamical system. In dynamical systems theory, Hamiltonian systems have special features. I suspect that in departing from strictly classical dynamics, we are departing from important special features of Hamiltonian systems. This has to do with Maxwell's discovery that inverse fifth power particle interactions are solvable and do not show the usual features of entropy change. I can think only of stochastic radiation as an explanation of this. Far from sure about this.Chjoaygame (talk) 04:09, 28 December 2020 (UTC)Reply

character of instantaneous microstates

It has helped me, that I have articulated the notions of transient inhomogeneity and enduring homogeneity, which are properties of instantaneous microstates, as distinct from equilibrium and non-equilibrium, which are properties of trajectories. I intend to try to think of transience and endurance as occasional. I am trying to read a little in Tél & Gruiz about how to make this quantitative. I am guessing that you will like this. I will report progress.Chjoaygame (talk) 12:39, 24 December 2020 (UTC)Reply

Moved this section. It is practically a lead-in to talk of ergodic theory.Chjoaygame (talk) 04:37, 27 December 2020 (UTC)Reply

ergodic hierarchy

Very interesting article on ergodicity: [1]. If we could understand everything in this article, we might be a lot closer to agreement. PAR (talk) 01:42, 27 December 2020 (UTC)Reply

I found this interesting quote in Section 2 (Ergodicity): "As a consequence, X cannot be divided into two or more subspaces (of non-zero measure) that are invariant under T." Here X is the phase space, and "T is a measure-preserving transformation on X" or in other words, the volume-preserving equations of motion.
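To see what the quoted definition buys, here is a hedged toy example (mine, not from the article): the irrational rotation of the circle, T(x) = x + α (mod 1), is a measure-preserving ergodic map, and along a single orbit the time average of an observable approaches its space average.

```python
import math

# Toy illustration of ergodicity: for the irrational rotation
# T(x) = x + alpha (mod 1) on the unit interval (circle), the time average
# of an observable along one orbit converges to its space average.

alpha = math.sqrt(2) - 1   # irrational rotation number

def observable(x):
    # space average of sin^2(2*pi*x) over the uniform measure is 1/2
    return math.sin(2 * math.pi * x) ** 2

x, total, n = 0.1, 0.0, 200_000
for _ in range(n):
    total += observable(x)
    x = (x + alpha) % 1.0

time_avg = total / n
print(f"time average {time_avg:.4f}  vs  space average 0.5")
```

If the interval were divided into two invariant subsets of non-zero measure, a single orbit could not do this; that is the content of the quoted indivisibility condition.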


Ok, I will read it asap.Chjoaygame (talk) 01:45, 27 December 2020 (UTC)Reply
For now, not logging in to get pdf. Will read online. Note topic is ergodic hierarchy.Chjoaygame (talk) 01:54, 27 December 2020 (UTC)Reply
Section 1
Familiar territory. Two dimensions. One plumbob. No mention of time-dependent law of motion. Introduce discrete time, in preparation for Poincaré map for future discussion. Talk of 'space' average, using schoolboys, not the plumbob. No mention of bifurcation. Straight into measurement error, which is not our topic. Talks about sets of points as the objects of interest; we are interested in sets of bobs that define single points in phase space. Using multiple points in phase space is a mathematical stratagem that calls for ergodic theory, I guess choosing for mathematical reasons to bypass consideration of full phase-space trajectories. I guess intending to use Boltzmann's ergode (Gibbs's microcanonical ensemble) rather than Boltzmann's holode (Gibbs's canonical ensemble)? I claim that our topic should prefer the holode trajectories. Chjoaygame (talk) 02:22, 27 December 2020 (UTC)Reply
Section 2
No motivation for space average for us, so far as I can see. Schoolboys do not obey a law of motion, at least as far as I can remember!
Our 'space' averages, at least as I currently guess, are many-particle averages over the many possible instantaneous microstates of many statistically and causally independent particles in their respective private phase spaces?
I am not proposing that our systems become metrically separated or decomposed. I am not opposing the full applicability of the Poincaré recurrence theorem to isolated systems. I will, however, assert that, in general, something more complicated is needed for systems subject to external flows of matter or energy; for the present, we are not tackling such, I think.
Quoting: "In sum, from now on, unless stated otherwise, we consider discrete measure preserving transformations." They intend to consider not trajectories, but rather Poincaré maps. Interesting for us, but not the foundation of our topic. Chjoaygame (talk) 02:38, 27 December 2020 (UTC)Reply
Section 3
Quoting: "dynamical systems theory, which studies a wider class of dynamical systems than ergodic theory." Fundamentally, physically, our topic is dynamical systems. Ergodic theory is a mathematical technique for doing some calculations on dynamical systems.
The ergodic hierarchy is not familiar territory for me. Many years ago I read some books about it, but the topic has probably advanced since then. I will be going more slowly now.Chjoaygame (talk) 02:58, 27 December 2020 (UTC)Reply
Section 4.0
Quoting: "However, all these schools use (slight variants) of either of two theoretical frameworks, one of which can be associated with Boltzmann (1877) and the other with Gibbs (1902), and can thereby be classify either as ‘Boltzmannian’ or ‘Gibbsian’. For this reason we divide our presentation of SM into a two parts, one for each of these families of approaches." If Cercignani is to be believed, it seems likely that the writers of this Stanford article are like the majority of scholars, who have not read Boltzmann's papers. Yes, convention attributes the canonical ensemble to Gibbs, but it was used by Boltzmann, who called it a 'holode', if I understand Cercignani aright.
Quoting: "These molecules bounce around under the influence of the forces exerted onto them when they crash into the walls of the vessel and collide with each other. The motion of each molecule is governed by the laws of classical mechanics in the same way as the motion of the bouncing ball." This seems to say that we are on the same page.
Quoting: "There are two different ways of describing processes like the spreading of a gas." Ok.
Quoting: "As a result, the gas quickly disperses, and it continues to do so until it uniformly fills the entire box." They imply spatial dispersal, and don't talk about more general diversification of microscopic motion at this point.
Quoting: "The gas has approached equilibrium." This is where the ways part. At this point, our beloved schoolboy-days maths teacher would say "It's all over now, boys."
In its word "approached", the article seems perhaps to go the way of many, perhaps the orthodox, a way that I claim is arbitrary and leads to confusion, with no pay-off. I think you already understand that such is my view. Perhaps I may briefly sketch a defence. At a certain instant of time,  , the wall was made more permeable; that ended the initial macroscopic thermodynamic equilibrium, and started the process. At a later instant of time,  , the wall was made impermeable, ending the process, creating our isolation, and starting our final macroscopic thermodynamic equilibrium. Who is to say how far the minglings, dispersals, and diversifications had progressed at  ? I claim that thermodynamics doesn't try to say.
Quoting: "This fact is enshrined in the Second Law of thermodynamics, which, roughly, states that transitions from equilibrium to non-equilibrium states cannot occur in isolated systems, which is the same as saying that entropy cannot decrease in isolated systems (where a system is isolated if it has no interaction with its environment: there is no heat exchange, no one is compressing the gas, etc.)." Not too sure what they have in mind here. Need to attend to this question. Why did they write "approached"? Are they denying Poincaré recurrence? Is it redundant that they add that "entropy cannot decrease" when they have just said that "transitions from equilibrium to non-equilibrium states cannot occur in isolated systems"?
Perhaps it's time for me to take a break.Chjoaygame (talk) 04:28, 27 December 2020 (UTC)Reply
Section 4.1
Quoting: "Hence, to every given microstate   there corresponds exactly one macrostate. Let us refer to this macrostate as  . This determination relation is not one-to-one; in fact many different   can correspond to the same macrostate. We now group together all microstates   that correspond to the same macro-state, which yields a partitioning of the phase space in non-overlapping regions, each corresponding to a macro-state."
I wouldn't be happy with that for our purpose. It isn't clear whether it is talking about instantaneous states or enduring or perpetual states. This article seems not to be interested in trajectories. That may be ok for ergodic theory, but I think it isn't adequate for thermodynamic systems in general. It seems to come close to erasing the concept of a trajectory. It may be ok for studying the statistics of instantaneous microstates, an interesting and important topic, but not all that we need.
note
I have had a little look for more up-to-date, and hopefully more streamlined and so simpler, textbooks. Not much success so far. I learn that our IP mathematician friend's hero Katok has died. A recent book coauthored by one of Katok's colleagues is Fisher, T., Hasselblatt, B. (2019), Hyperbolic Flows, European Mathematical Society, Berlin, ISBN 978-3-03719-200-9. Hyperbolic flows are more or less the same thing as chaotic flows. Their introduction starts
This book presents the theory of flows, that is, continuous-time dynamical systems from the topological, smooth, and measurable points of view, with an emphasis on the theory of (uniformly) hyperbolic dynamics. It includes both an introduction and an exposition of recent developments in uniformly hyperbolic dynamics, and it can be used as both a textbook and a reference for students and researchers.
Books on dynamics tend to focus on discrete time, largely leaving it to the reader (or unaddressed) to transfer those insights to flows, where the origins of the theory actually lie.[1] It is thus often implicit that “things work analogously for flows,” or that “this is different for flows,” and aside from geodesic flows, many theorems about flows have had little visibility beyond the research literature. Although much about flows can indeed be found in the research literature, doing so usually involves a combination of diligence and consultation with experts. We fill this gap in the expository literature by giving a deep “flows-first” presentation of dynamical systems and focusing on continuous-time systems, rather than treating these as afterthoughts or exceptions to methods and theory developed for discrete-time systems.
In my language, they are saying that their book takes trajectories as primary, and Poincaré maps as derivative. In this area, much work is done by actual numerical solution of ordinary differential equations, tracing out particular trajectories, but this book tends to consider things in terms of general theorems with proofs. On pages 141–151 they consider Hamiltonian systems. On pages 155–209 they have a chapter on ergodic theory.
This isn't too simple. More investigation is needed. The reason that I think this stuff is for us is that trajectories trace the raw physics, and so are conceptually simpler for novices. We can state the problems without being entangled in mathematical techniques for their solutions. Chjoaygame (talk) 02:59, 28 December 2020 (UTC)Reply
Quoting: "One can show that, at least in the case of dilute gases, the Boltzmann entropy coincides with the thermodynamic entropy (in the sense that both have the same functional dependence on the basic state variables), and so it is plausible to say that the equilibrium state is the macro-state for which the Boltzmann entropy is maximal (since thermodynamics posits that entropy be maximal for equilibrium states). By assumption the system starts off in a low entropy state, the initial state   (the gas being squeezed into the left half of the box). The problem of explaining the approach to equilibrium then amounts to answering the question: why does a system originally in   eventually move into   and then stay there?" Yes, this muddle is widely accepted. But we are here to help the novice avoid muddle. On the face of it, this seems to deny Poincaré recurrence? And we are interested in a broader range of systems than dilute gases.
The section goes on to consider various questions in ergodic theory, that for the present I will bypass.Chjoaygame (talk) 17:05, 27 December 2020 (UTC)Reply
Section 4.2
Quoting: "So far we have only dealt with equilibrium, and things get worse once we turn to non-equilibrium." Off limits for us at present.
Quoting: "main problem is that it is a consequence of the formalism that the Gibbs entropy is a constant! This precludes a characterisation of the approach to equilibrium in terms of increasing Gibbs entropy, which is what one would expect if we were to treat the Gibbs entropy as the SM counterpart of the thermodynamic entropy." Let's ignore some things in this. I think the rational view is that the strict constancy or otherwise of the thermodynamic entropy depends on the wall permeability.
With an isolating wall, the thermodynamic entropy is constant, though the system can exhibit occasional transient local inhomogeneities as well as enduring homogeneity, as per Poincaré recurrence.
With a rigid diathermal wall, microscopic heat transfers can occur, and entropy can fluctuate. With a flexible adiabatic wall, microscopic work transfers can occur. With a wall permeable to matter, microscopic transfers of matter and energy can occur, with fluctuations of entropy; the dimensionality of the phase space fluctuates. These cases are more complicated than that of the isolated system. They may call for analysis in terms of Poincaré maps, or joint consideration of system and surroundings. We will likely not consider them right now.Chjoaygame (talk) 17:34, 27 December 2020 (UTC)Reply
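The picture of enduring homogeneity punctuated by occasional transient inhomogeneities, with Poincaré-style recurrence, can be caricatured by the Ehrenfest urn model. This is a standard toy stand-in, offered here as my own illustration with assumed parameters, not a claim about any real system: the occupancy hovers near half-and-half almost all the time, while the extreme (inhomogeneous) states recur, but only rarely.

```python
import random

# Ehrenfest urn model: n balls in two urns; each step, a ball chosen
# uniformly at random is moved to the other urn. The stationary distribution
# is binomial, so the state hovers near n/2 ("enduring homogeneity") and
# visits the extremes ("transient inhomogeneities") only rarely, though it
# does recur to them.

random.seed(0)
n = 10            # balls (toy size, chosen so extremes are actually seen)
left = n          # start fully inhomogeneous: every ball in the left urn
counts = {}
steps = 200_000
for _ in range(steps):
    if random.randrange(n) < left:
        left -= 1  # the chosen ball was in the left urn
    else:
        left += 1
    counts[left] = counts.get(left, 0) + 1

balanced = sum(counts.get(k, 0) for k in (4, 5, 6)) / steps
extreme = (counts.get(0, 0) + counts.get(n, 0)) / steps
print(f"fraction of time near balance: {balanced:.3f}, at extremes: {extreme:.5f}")
```

With n of molecular magnitude the extremes would still recur in principle, but the recurrence time grows like 2^n, which is the usual reconciliation of recurrence with observed equilibrium.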

reasoning

In this new subsection I will try to address various questions that I suppose you may be asking. The most obvious one is 'why does chjoaygame fuss about the phase-tubes?' I won't be able to do this in a neat way, but I will try by shreds and patches. Broadly speaking, it is because I am a sceptic.

A start is this, from Grandy's Foundations of Statistical Mechanics, on page 17.

Thus, at the end of the last century Boltzmann had developed the kinetic theory in its essentials to almost its present form. We have seen, however, that the Boltzmann equation has been derived in a rigorous way only for extremely low densities, and modern investigations show that an extension to higher densities is highly unlikely (Cohen, 1973; Uhlenbeck, 1973). Despite these observations the equation continues to be applied to other systems and the H-theorem taken to be a general expression of the second law of thermodynamics. Although the objections of Loschmidt and Zermelo can be maintained for certain initial states in which the Stoßzahlansatz fails, these are surely exceptional states, it is thought, and one is tempted to conclude that the H-theorem is true on average. Unfortunately, more recent work has demonstrated that the H-theorem, and therefore the Boltzmann equation on which it is based, cannot be generally valid even for a dilute gas (Jaynes, 1971).

Looking up the Jaynes reference, I find

Violation of Boltzmann's H Theorem in Real Gases

Jaynes, E. T.

Abstract

The well-known variational (maximum-entropy) property of the Maxwellian velocity distribution is used to shed some light on the range of validity of the Boltzmann transport equation. It permits a characterization of the initial states for which the Boltzmann H theorem is violated. In particular, it is shown that: (a) Any monatomic system for which the equilibrium potential energy exceeds the minimum possible value possesses a continuum of initial states for which the approach to equilibrium takes place through an increase, rather than a decrease, in Boltzmann's H. (b) If the initial distribution of particles is spatially homogeneous and Maxwellian, the approach to equilibrium will take place through an increase (decrease) in the Boltzmann H, according as the initial potential energy is less (greater) than the equilibrium value. (c) A necessary condition for the H-theorem-violating phenomenon is that the approach to equilibrium takes place through a conversion of kinetic energy into potential energy; a sufficient condition requires also that the initial velocity distribution be sufficiently close to Maxwellian. (d) These H-theorem-violating conditions are readily attained experimentally; for example, the free expansion of oxygen gas at 160 °K and 45-atm pressure produces an experimentally realizable violation of the Boltzmann H theorem.

Physical Review A, vol. 4, issue 2, pp. 747–750, August 1971. doi:10.1103/PhysRevA.4.747

I can't remotely pretend to analyze and criticize that paper, but I am inclined to think it must be right, because of the many assumptions that are needed for the H-theorem, and because I suppose that Grandy and Jaynes know what they are doing.

Again from Grandy, on his page 25:

D. Ergodic Theory
    A system is ergodic if the orbit of its representative point in Γ-space, taken over the infinite time interval [-∞,∞], passes through all points on the energy surface. It is not at all clear that Boltzmann thought real systems were actually ergodic, and the idea was not pushed vigorously until the review article by the Ehrenfests (1911). They made the distinction between ergodic and quasi-ergodic systems, where in the latter the image point of that system need only pass arbitrarily closely to every point of the energy surface. Shortly thereafter it was proved that no physical system could be ergodic (Rosenthal, 1913; Plancherel, 1913), and a quest to prove physical systems quasi-ergodic was undertaken, and continues almost unabated today.
    "Now I believe that almost all real physical systems are 'essentially' ergodic. Indeed, this is necessary for understanding why equilibrium statistical mechanics, which includes a description of fluctuations in thermal equilibrium, works so well in the real world" (Lebowitz, 1972). But long-cherished beliefs must face facts eventually: "The compelling fact is that, with one main exception ... , Hamiltonian systems are not ergodic" (Smale, 1980). The exception, of course, is the hard-sphere system studied by Sinai, and mentioned earlier.

I could quote more of this if you like. Smale and Sinai are reliable authorities on such topics.
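Smale's point can be seen in the simplest possible numerical toy (my own illustration, with assumed frequencies, not from Grandy): two uncoupled harmonic oscillators. Each oscillator's energy is separately conserved, so the trajectory is confined to an invariant torus, the energy surface decomposes into invariant subsets of non-zero measure, and time averages never relax toward equal sharing of the energy. That is non-ergodicity in a perfectly ordinary Hamiltonian system.

```python
import math

# Two uncoupled harmonic oscillators (frequencies 1 and sqrt(2)).
# Each step applies the exact flow of each oscillator. E1 is an invariant,
# so its time average stays at the initial value: no relaxation, not ergodic.

def simulate(q1, p1, q2, p2, dt=0.01, steps=100_000):
    w2 = math.sqrt(2.0)
    c1, s1 = math.cos(dt), math.sin(dt)
    c2, s2 = math.cos(w2 * dt), math.sin(w2 * dt)
    avg_e1 = 0.0
    for _ in range(steps):
        q1, p1 = c1 * q1 + s1 * p1, -s1 * q1 + c1 * p1
        q2, p2 = c2 * q2 + (s2 / w2) * p2, -w2 * s2 * q2 + c2 * p2
        avg_e1 += 0.5 * (p1 * p1 + q1 * q1)   # running sum of E1
    return avg_e1 / steps

# start with all the energy in oscillator 1
e1_bar = simulate(q1=1.0, p1=0.0, q2=0.0, p2=0.0)
print(f"time-averaged E1 = {e1_bar:.6f} (stays 0.5; equal sharing would be 0.25)")
```

A weak coupling term would destroy the separate invariant and change the picture, which is exactly why interactions (or collisions) matter so much in these arguments.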

None of that is fully decisive for our present purpose.

But I submit to you that with such a degree of difficulty found by the guys who know this stuff, we should be very leery of trying to present anything like a detailed argument that relies on ergodic or related theory. Falling under this heading, I think, are arguments about order in terms of card sorting and such like. The card order thinking comes from ergodic theory, and I think it distracts from the simple physical ideas, and is likely to mislead or confuse the novice, and may be more or less invalid.

I favour instead trying to appeal to physical ideas. We can make statements that we think are physically true. We don't have to tie them up in mathematico-physical reasoning that might be criticized, or might overtax the novice's mind. I think that the spreading idea is physically based, intuitively understandable, and avoids the problems associated with ergodic theory.

For example, we can say that in a body of matter and radiation, radiation propagates throughout the body. In an isolated body in its own state of internal thermodynamic equilibrium, the radiation is nearly in the Planck black-body distribution, which is the most diverse possible for the physical conditions. The radiation will sometimes interact with the matter, in a manner that is not predictable by classical physics. This interaction appears as diverse modes of excitation of molecules and suchlike. Microscopic motion of the molecules is very irregular, and it helps to spread the diversity of motion. The result is a thorough spread of diversity of motion of the microscopic components throughout the body. Such a spread most often appears as a nearly enduring near homogeneity of the body. Occasional transient inhomogeneities can occur, but they are very rarely detectable.

I will perhaps think differently tomorrow. But that's enough for now.Chjoaygame (talk) 12:21, 28 December 2020 (UTC)Reply

Neighborhood

My idea of a neighborhood is that it is defined by some closed surface. An open neighborhood would include all points enclosed by the surface; a closed neighborhood would be the open neighborhood and the surface itself. A neighborhood does not consist of disjoint parts, and has no missing interior points. Any interior point can be connected to any other interior point without going outside the enclosing boundary. Importantly, in an N-dimensional space, it is N-dimensional. In a 2-D space, a line segment containing x is not a neighborhood of x. A disk containing x is a neighborhood of x, as long as x is not on the bounding circle. In order to make things mathematically tractable we often have to deal with neighborhoods rather than points, and phase tubes rather than trajectories, and then take the limit as the neighborhood collapses to a point, and the phase tube collapses to a trajectory.PAR (talk) 17:03, 28 December 2020 (UTC)Reply
Yes, that is a sort of natural idea of a neighbourhood. That was my default idea too. But the usual measure-theoretic proofs of the Poincaré recurrence theorem do not work with neighbourhoods. They work with measurable sets. It is necessary to analyze the two kinds of object. I have tried to do that.Chjoaygame (talk) 00:21, 29 December 2020 (UTC)Reply
A neighborhood is a type of measurable set. Yes, the PRT does not require neighborhoods, it just makes things easier to visualize. It's easier to say it returns to a neighborhood rather than saying it returns to a collection of disjoint subsets of phase space scattered all over the place.PAR (talk) 02:46, 29 December 2020 (UTC)Reply
I will need to check this. I think you are saying that every neighbourhood is a measurable set. I guess you may hold that not every measurable set is a neighbourhood. Perhaps you will clarify. I have looked up the definitions of neighbourhood and of measurable sets. I think they don't coincide. Perhaps I am confused about that? I think the definition of a neighbourhood depends on or even defines a topology. I seem to remember talk of Hausdorff neighbourhoods and other kinds of neighbourhoods, but I will need to check. Still by rusty memory, I seem to recall that besides Hausdorff, there are some four other kinds of neighbourhood?Chjoaygame (talk) 07:33, 29 December 2020 (UTC)Chjoaygame (talk) 16:33, 29 December 2020 (UTC)Reply
When I say neighborhood of point p, I am talking about a single closed 6N-1 dimensional surface and every point enclosed by that surface, and p is an interior point. This certainly constitutes a measurable set. Yes, all neighborhoods are measurable sets, they have a finite volume, but not all measurable sets are neighborhoods. Two disjoint neighborhoods do not constitute a neighborhood, yet their union is a measurable set. PAR (talk) 10:12, 29 December 2020 (UTC)Reply

Splitting phase tube

The neighborhood that you have described in the just-above paragraph is much the same as the one that I initially supposed. It has no disconnected parts. It is simply connected. Simple connection is an idea that we learnt in complex function analysis. I looked up the definition of neighbourhoods in two topology textbooks. They didn't say that a neighbourhood must be simply connected. They didn't mention simple connection. My impression is that they didn't intend to imply that a neighbourhood must be connected, or simply connected. But however one might define a neighbourhood, for the measure-theoretic proof, what counts is the definition of a measurable set. A measurable set is not necessarily connected, nor simply connected. It can be a countable union of pairwise disconnected sets. I have said this above somewhere in my walls of text.Chjoaygame (talk) 00:21, 29 December 2020 (UTC)Reply
I think the phase tube idea requires that any neighborhood at time zero will remain a neighborhood as it propagates forward in time. That means it will not split into fragments ("unravel"), nor will it ever develop any missing interior points or regions. It may, however, become wildly distorted. I think the equations of motion will have to support these ideas. I think it is important to note that if you start at microstate x(t), then the equations of motion give a unique x(t+dt) and a unique x(t-dt). That means that a trajectory cannot split nor can two trajectories join, nor can two trajectories intersect. Also, the idea that trajectories are like flow lines in a frictionless incompressible fluid says that a phase tube cannot split unless there is something to split it, something analogous to a solid object, and phase space contains no such objects. I don't have a rigorous proof of this, however. If you believe a phase tube can split into two or more separate neighborhoods, please tell me the mechanism of the splitting. PAR (talk) 17:03, 28 December 2020 (UTC)Reply
For the measure-theoretic proofs, the 'phase-tube' may initially already be split into countably many pairwise disconnected sets. I think that even an initially simply connected 'phase-tube' can easily split, as an arm into a hand and fingers. A 'phase-tube' is initially a bundle of trajectories. There is nothing that says that they cannot drift apart. The idea of deterministic chaos is that they very often drift apart. That is why deterministic chaos is described by 'hyperbolic flows'. I have said this above in my walls of text. For easy search of my walls of text above, on a closely related matter, I have now font colored a paragraph about the adventures of a phase-tube.Chjoaygame (talk) 00:21, 29 December 2020 (UTC)Reply
For the measure-theoretic proofs, a phase tube is not required. When you say "the 'phase-tube' may initially already be split into countably many pairwise disconnected sets. ", that is a contradiction of terms. A phase tube is by definition always composed of a simple neighborhood at every instant in time.PAR (talk) 02:46, 29 December 2020 (UTC)Reply
I haven't so far found a usage of the term 'phase-tube' for this purpose in the literature. Perhaps I haven't looked hard enough. You propose a definition in terms of a certain kind of neighbourhood. I haven't seen just that definition in the literature. I guess it may be your own. I will need to check, as I have noted above.
You write “When you say "the 'phase-tube' may initially already be split into countably many pairwise disconnected sets. ", that is a contradiction of terms.” That is why I put the word 'phase-tube' in single quotes. Your comment here depends on the definition of terms. The measure-theoretic proofs use a measurable set. I am not yet convinced that they go through as you propose, with a neighbourhood instead of a measurable set. I think this needs close checking.Chjoaygame (talk) 07:55, 29 December 2020 (UTC)Reply
The phase tube is used as a finite-measure stand-in for a trajectory, to make the mathematics tractable. In the limit of the initiating neighborhood collapsing to a point, the phase tube collapses to a trajectory. A phase tube which splits must ultimately be traced back to a trajectory that splits. Since that is impossible, a phase tube cannot split. Chaos means that the distance between two trajectories increases exponentially, where distance is measured between their points at equal times. This only means that the original neighborhood distorts, not that it unravels. If a phase tube starts out as a golf ball, and at some time later looks like a dinner plate or even a small octopus (same volume), the points on the dinner plate or the octopus may be diverging away from each other at an exponential rate, but neither the dinner plate nor the octopus is splitting up into separate pieces. The entire time development is a phase tube. Again, a phase tube which splits must ultimately be traced back to a trajectory that splits, and that is impossible.PAR (talk) 02:46, 29 December 2020 (UTC)Reply
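The exponential divergence being discussed can be sketched numerically. The Chirikov standard map is used here as an assumed stand-in for a Hamiltonian Poincaré map (my own toy choice, area-preserving and strongly chaotic at K = 7): two nearby points separate roughly exponentially until the separation saturates at the size of the phase space. The two trajectories diverge; neither of them splits.

```python
import math

# Chirikov standard map on the torus [0, 2*pi)^2, area-preserving.
# Track the separation of two initial conditions 1e-9 apart: it grows
# roughly exponentially (positive Lyapunov exponent), then saturates.

K = 7.0  # kick strength; strongly chaotic regime

def step(theta, p):
    p = (p + K * math.sin(theta)) % (2 * math.pi)
    theta = (theta + p) % (2 * math.pi)
    return theta, p

a = (1.0, 1.0)
b = (1.0 + 1e-9, 1.0)
seps = []
for _ in range(30):
    a = step(*a)
    b = step(*b)
    seps.append(math.hypot(a[0] - b[0], a[1] - b[1]))

# many orders of magnitude of growth, then saturation near the torus size
print(f"after 1 step {seps[0]:.1e}, after 10 steps {seps[9]:.1e}, "
      f"after 30 steps {seps[29]:.1e}")
```

Note that this exhibits divergence of distinct trajectories only; it says nothing either way about whether a connected neighbourhood transported by the flow stays connected, which is the point still in dispute above.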
You write “The phase tube is used as a finite-measure stand-in for a trajectory, to make the mathematics tractable.” I am not persuaded that it is legitimate to simply take a 'phase-tube' as a finite-measure stand-in for a trajectory. I think that “to make the mathematics tractable” doesn't constitute a convincing argument. I think some careful topological reasoning is needed to establish the legitimacy of that.
You have a strong belief that “A phase tube which splits must ultimately be traced back to a trajectory that splits.” You argue that “In the limit of the initiating neighborhood collapsing to a point, the phase tube collapses to a trajectory.” I am not yet convinced that such limits perform as you believe. I think you are playing fast and loose here. I agree that in a classical physics phase space, a trajectory does not split. But I don't think that binds one trajectory to another, and I think you suppose that the trajectories of a neighbourhood, as you choose to define it, are somehow bound together. I think that would need proof.
Besides this, I have remarked above that we are not sure that we are really dealing with a classical phase space for the physical situation, if we take radiation into account.Chjoaygame (talk) 08:17, 29 December 2020 (UTC)Reply
It is not evident to me that the splitting of a phase tube demands splitting of a trajectory. Yes, a trajectory can't split. But unravelling would do, I think? You seem to say that unravelling doesn't happen.Chjoaygame (talk) 16:50, 29 December 2020 (UTC)Reply
Let's not complicate things by bringing in excitations, quantum mechanics, radiation, etc. If we cannot solve the simple problem we will never solve the complicated one.PAR (talk) 10:25, 29 December 2020 (UTC)Reply
I think it quite likely that the physics requires excitations, though I can't prove it. I think omission of the physical fact of excitations is likely the cause of most of the conceptual problems of the field. On the other hand, I think the occurrence of radiation is a truly physical part of thermodynamic equilibrium, and it would be better to bring that to the attention of the novice. The Maxwell-Boltzmann and Planck distributions go together like the two horses of Plato's model. It must help explain things in reality, because it is a fact.Chjoaygame (talk) 11:29, 29 December 2020 (UTC)Reply
Though it would much complicate things if we were to try to give a mathematical account, I think it doesn't over-complicate things just to give a physical account that includes radiation.Chjoaygame (talk) 16:41, 29 December 2020 (UTC)Reply
With regard to the phase tube splitting, please imagine a point in phase space at the center of a ball. (A ball is a sphere with all of its interior points.) Let's not get all involved with how the ball distorts to begin with. Now imagine that ball moving in phase space, and then splitting into two smaller balls. That is a splitting phase tube. Now imagine the initial neighborhood shrinking to one tenth its size. The phase tube will split again, but earlier than the first one. As we keep shrinking the initial neighborhood, the splitting phase tube comes more and more to resemble a trajectory. Suppose the radius of the initial neighborhood is shrunk to some incredibly small number; the phase tube would effectively be a trajectory that splits at some point. If not, please explain to me what is going on in that splitting region.PAR (talk) 10:25, 29 December 2020 (UTC)Reply
You are proposing an argument in terms of the topology of sets that may or may not be neighbourhoods (not yet decided), and of trajectories, which are one-dimensional threads, that I am saying are cut when there is an instantaneous collision. What is going on in the splitting region is a discontinuity in the trajectory in phase space, not in the space coordinates, but in the momentum coordinates. I am saying now that the trajectory is discontinuous, with a finite jump. For a case with no singularity at a collision, probably something more may be needed. We could think about that after we have agreed on the case of the instantaneous collision. Chjoaygame (talk) 11:29, 29 December 2020 (UTC)

Phase space as an incompressible frictionless fluid

The idea that trajectories are like flow lines in a frictionless incompressible fluid is not suitable for the present case. That idea belongs to a theory of partial differential equations, not to a theory of ordinary differential equations. I have said this in my walls of text above.
Hamilton's equations of motion are partial differential equations, not ordinary differential equations. The independent variables are the 3N position and 3N momentum coordinates, and time. From the Wikipedia article on Liouville's theorem: "That is, viewing the motion through phase space as a 'fluid flow' of system points, the theorem that the convective derivative of the density, dρ/dt, is zero follows from the equation of continuity by noting that the 'velocity field' (dp/dt, dq/dt) in phase space has zero divergence (which follows from Hamilton's relations)." The zero divergence of the velocity is what makes the flow incompressible. PAR (talk) 02:46, 29 December 2020 (UTC)
You write “Hamilton's equations of motion are partial differential equations, not ordinary differential equations.” I think that depends on what you mean.
A trajectory is the product of an ordinary differential equation. If you choose to say that Hamilton's equations of motion do not define a system of ordinary differential equations, then I don't see how you can talk about trajectories at all.
I will try to say how I think it is.
A function is called 'the Hamiltonian', defined by
H = H(q, p, t).
For an isolated system, our special case gives a Hamiltonian that is not an explicit function of time:
H = H(q, p).
That is an explicit function of (q, p), but not a function of time. It can be read as an implicit function of time because we take q and p to be functions of time, now explicit functions of time:
q = q(t) and p = p(t).
The partial differentiations appear in
dq/dt = ∂H/∂p      and      dp/dt = −∂H/∂q.
For the calculation of a trajectory, these are not integrated as partial differential equations. They are integrated as ordinary differential equations. The symbols ∂H/∂p and −∂H/∂q denote ordinary functions of q and p. There is no suggestion that the partial derivative symbols intend that the functions should be integrated as partial derivatives for the present purpose, namely the calculation of trajectories.
In other words, the equations that are integrated with respect to the variable t are intended to be read as
dq/dt = Q(p, q) and dp/dt = P(p, q),
where Q and P are simply functions of p and q, without regard to their past history of derivation by partial differentiation.
In this reading, the trajectories are calculated simply from a system of ordinary differential equations.
I think it hardly makes sense to regard trajectories as calculated directly from partial differential equations.
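To make the point concrete, here is a minimal sketch (my own toy illustration, a single 1-D harmonic oscillator with unit mass and spring constant, not anything from the discussion above). The right-hand sides Q(p, q) and P(p, q), though obtained by partial differentiation of H, are treated as plain functions of (p, q) and fed to an ordinary ODE integrator:

```python
# Toy system (my choice, for illustration): a 1-D harmonic oscillator with
# H(q, p) = (q**2 + p**2)/2, so that dq/dt = dH/dp = p and dp/dt = -dH/dq = -q.
# The right-hand sides are read as plain functions of (p, q), and the
# trajectory is computed as a system of ordinary differential equations.

def Q(p, q):
    """dq/dt, read as an ordinary function of (p, q)."""
    return p

def P(p, q):
    """dp/dt, read as an ordinary function of (p, q)."""
    return -q

def rk4_step(q, p, dt):
    """One classical fourth-order Runge-Kutta step for the ODE pair."""
    k1q, k1p = Q(p, q), P(p, q)
    k2q, k2p = Q(p + 0.5*dt*k1p, q + 0.5*dt*k1q), P(p + 0.5*dt*k1p, q + 0.5*dt*k1q)
    k3q, k3p = Q(p + 0.5*dt*k2p, q + 0.5*dt*k2q), P(p + 0.5*dt*k2p, q + 0.5*dt*k2q)
    k4q, k4p = Q(p + dt*k3p, q + dt*k3q), P(p + dt*k3p, q + dt*k3q)
    q_next = q + dt*(k1q + 2*k2q + 2*k3q + k4q)/6
    p_next = p + dt*(k1p + 2*k2p + 2*k3p + k4p)/6
    return q_next, p_next

q, p = 1.0, 0.0                 # a point in phase space
H0 = 0.5*(q*q + p*p)            # energy at the start
for _ in range(10000):          # follow the trajectory for total time 10
    q, p = rk4_step(q, p, 0.001)
print(abs(0.5*(q*q + p*p) - H0) < 1e-6)   # energy conserved along the trajectory
```

The conserved energy is just a numerical check that integrating Hamilton's equations as ordinary differential equations behaves as expected.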
The ordinary differential equations do not express actual physical fluid flow; the likeness to fluid flow is only an analogy. The ordinary differential equations express trajectories. Yes, the trajectories are like flow lines in incompressible fluid flow, but they are not bound together by anything beyond volume preservation. Flow lines in an incompressible fluid can be split by a solid obstacle. Particle collisions here play the part of solid obstacles. For a simple kind of head-on collision, there is a singular instant of time when some time derivatives vanish abruptly, or more accurately cease to exist, and then reappear somewhere else in phase space. They are not differentiable at that instant. In a head-on collision, the velocities of the colliding particles just abruptly reverse their signs. In a non–head-on instantaneous collision, the velocities of the particles will change abruptly in general by a finite amount, and again at the instant of collision their trajectories will not be differentiable. I think this amounts to a singular departure from Hamiltonian motion?
It is the collisions that count for everything in the case of chaotic motion of particles. The collisions cannot be dismissed simply because the head-on ones are of zero measure. For a head-on collision in the case of the inverse fifth power law, the system can be treated as continuous in time, and the time derivatives will in general not vanish abruptly. I guess this is why this solvable case is said to be exceptional and to fail to show the usual properties with respect to entropy.
Collecting my just foregoing two paragraphs, I think that we are not looking at completely Hamiltonian motion. As I understand it, Hamiltonian motion requires continuity of trajectories. I think we are looking at piecewise Hamiltonian motion. I guess this is part of the reason why there are problems relating our motions to ergodic theory. Chjoaygame (talk) 10:09, 29 December 2020 (UTC)
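The momentum jump at an instantaneous collision can be sketched in the same spirit (again a toy of my own: two equal-mass hard points on a line, not anything from the discussion above). The positions are continuous through the collision instant, while the momenta jump by a finite amount, so the phase-space trajectory is discontinuous in the momentum coordinates only:

```python
# Toy model: two equal-mass point particles on a line, free Hamiltonian drift
# interrupted by one instantaneous elastic collision.  At the collision
# instant the momenta jump by a finite amount (equal masses exchange
# momenta); the positions remain continuous.

def advance(x1, x2, p1, p2, t):
    """Free drift for time t (unit masses, no forces between collisions)."""
    return x1 + p1*t, x2 + p2*t, p1, p2

# Head-on setup: the particles approach each other symmetrically.
x1, x2, p1, p2 = -1.0, 1.0, +1.0, -1.0
t_hit = (x2 - x1) / (p1 - p2)              # instant of contact
x1, x2, p1, p2 = advance(x1, x2, p1, p2, t_hit)
p1, p2 = p2, p1                            # instantaneous exchange of momenta
print(x1, x2)   # positions continuous: both particles at 0.0
print(p1, p2)   # momenta discontinuous: -1.0 and 1.0 (signs reversed)
```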
There are several ways that a trajectory can separate from its former close neighbours. Yes, a trajectory itself does not split, intersect another trajectory, nor join one. But two neighbouring trajectories can be describing two paths separated by a head-on collision path. I have indicated that in my walls of text above. That is one way. Another way is that a molecule can be excited or de-excited in a collision. Another way is that a molecule can spontaneously radiate or be excited by a photon. I am not clear in my mind whether even a simple trajectory is adequate to describe this. So I am saying that collisions and excitations/de-excitations are, if you like, comparable with the solid objects that can split a fluid flow line. They are countable in a finite body. In an infinite body, I am not sure, but I guess they would still be countable? Chjoaygame (talk) 00:21, 29 December 2020 (UTC)
First of all, let's not complicate things by dealing with hard collisions which have an instantaneous duration in time. Let's assume that all collisions are soft, so that a trajectory in phase space is continuous, rather than consisting of a point which disappears at one place and appears at another. Also, Hamilton's equations are generalizations of the physics of the situation. You may get the Hamiltonian wrong, but, on a microscopic level, there is no such thing as "non-Hamiltonian" physics.
Secondly, P(p,q) and Q(p,q) are not just some random functions of p and q. They are constrained to obey the partial differential equation:
∂Q/∂q + ∂P/∂p = 0.
In addition, the Hamiltonian for an isolated system is time-reversible. This makes all the difference. If we define a position in phase space generically as x = (q, p), then the velocity of a point is v = (dq/dt, dp/dt). Hamilton's equations say that ∇·v = ∂(dq/dt)/∂q + ∂(dp/dt)/∂p = 0, which says that the divergence of the velocity is zero, which says that if we look at the motion of points as a flow, it is incompressible. Since Hamilton's equations are reversible, the flow is frictionless. So the flow is constrained to be that of a frictionless, incompressible fluid, which admits no splitting of phase tubes or trajectories. Please look at Liouville's theorem (Hamiltonian). PAR (talk) 17:48, 29 December 2020 (UTC)
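The incompressibility claim can be checked numerically on a toy Hamiltonian (my own sketch, using the exact flow of a 1-D harmonic oscillator, which rotates phase space): the area of a small phase-space parallelogram carried along by the flow stays constant, as Liouville's theorem requires.

```python
import math

# Numerical check of incompressibility for the toy Hamiltonian
# H = (q**2 + p**2)/2, whose exact flow is a rotation of phase space.

def flow(q, p, t):
    """Exact Hamiltonian flow of dq/dt = p, dp/dt = -q."""
    c, s = math.cos(t), math.sin(t)
    return q*c + p*s, p*c - q*s

def area(a, b, c):
    """Signed area of the parallelogram spanned by b - a and c - a."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

# A tiny cross-section of a 'phase tube': three nearby points in phase space.
pts = [(1.0, 0.0), (1.01, 0.0), (1.0, 0.01)]
a0 = area(*pts)
for t in (0.5, 5.0, 50.0):
    moved = [flow(q, p, t) for q, p in pts]
    print(abs(area(*moved) - a0) < 1e-12)   # phase-space area is preserved
```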
Quoting: “there is no such thing as "non-Hamiltonian" physics.” I can't reconcile that and “Boltzmann's ergodic assumption” with Smale's view in blue font above in the subsection headed 'reasoning'. Chjoaygame (talk) 00:31, 30 December 2020 (UTC)

How many space filling phase tubes? (or trajectories?)

Let X represent the phase space of the system and define a measure µ of a subset of X as the fractional volume of that subset, so that µ(X)=1. The idea that "each microstate is equally likely" can be expressed as "the probability of finding the system in a subset A of X is µ(A)". For an ergodic system, this can be expressed without reference to probability as "in the limit of infinite time, the fraction of time the system trajectory spends in subset A is also µ(A)". In other words, the spatial average of A (the fractional volume µ(A)) is equal to the time average of A (the fraction of time spent by any trajectory in A).
For an ergodic system, there cannot be two or more disjoint classes of trajectories, such that each is confined to its "own" subset of X (e.g. subsets A and B) and such that each has a measure greater than zero (0<µ(A)<1 and 0<µ(B)<1). If that were true, then the fraction of time a class-A trajectory spends in subset A equals 1, but the volume of A, which is µ(A), is less than 1, which is impossible for an ergodic system.
You could say that the system has a single "filling" trajectory that, after an infinite amount of time, fills the entire phase space X, except for a subset of X of measure zero. The system "filling" trajectory is therefore infinite in length. This statement requires a careful explanation of infinity to be correctly understood. PAR (talk) 17:03, 28 December 2020 (UTC)
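The "time average equals space average" statement can be illustrated numerically (the system is my own choice for illustration: rotation of a circle by a fixed irrational angle, which is an ergodic map). The fraction of steps the orbit spends in an arc A approaches the fractional length µ(A):

```python
import math

# Toy check of "time average = space average": rotate a circle by a fixed
# irrational angle each step.  Such a rotation is ergodic, so the fraction of
# steps the orbit spends in an arc A approaches the fractional length mu(A).

step = 360 * (math.sqrt(5) - 1) / 2     # a fixed irrational increment, in degrees
arc_lo, arc_hi = 50.0, 130.0            # the subset A, with mu(A) = 80/360
theta, hits, n = 0.0, 0, 1_000_000
for _ in range(n):
    theta = (theta + step) % 360
    if arc_lo <= theta < arc_hi:
        hits += 1
print(abs(hits/n - 80/360) < 1e-3)      # time average matches mu(A)
```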
I am not sure about "filling" trajectories. In a simple picture, trajectories confine themselves to some hyper-surface in phase space. It can be, for example, a hyper-surface of constant thermodynamic entropy, volume, and internal energy, with  , and fluctuating inhomogeneities in local temperature and pressure. I am not sure whether a single trajectory with one colour of 'thread', or even many different coloured trajectories, will "fill" the hyper-surface. I think that there are fractional 'dimensions' to measure that. I think that is a topic that describes the fluctuations that may occur, for example, in a system of constant internal energy and thermodynamic entropy. I think this is where ergodic or some closely related theory may come in. I think this is where your interest focuses. Chjoaygame (talk) 00:21, 29 December 2020 (UTC)
That means that there is no subset of X with non-zero measure that is "off limits" to the system filling trajectory. The Poincaré recurrence theorem (PRT) says that the system filling trajectory revisits infinitely often any neighborhood it was in at any particular time. Ergodicity says the system filling trajectory visits every neighborhood A in X and the fraction of time spent in that neighborhood is µ(A), the fractional volume of A.
There is no unique "equilibrium" trajectory. PAR (talk) 17:03, 28 December 2020 (UTC)
I agree that "there is no subset of X with non-zero measure that is "off limits" to the system filling trajectory."
In my view, whether or not there is a unique "equilibrium" trajectory is decided as just above by whether or not there is just a one-coloured thread that "fills" the hyper-surface. Nevertheless, my view is that, in an isolated system (time-invariant internal energy and thermodynamic entropy), every trajectory (whether there be just one, or many, that "fill" the hyper-surface) is an "equilibrium" trajectory. My view is that, in an isolated system, a one-dimensional thread does not, in the time course of its travels, at some occasional instant of time, at the corresponding point along its length, switch between black (equilibrium) and white (non-equilibrium), and at a later occasional time switch back. My view is that such switching would need to be arbitrarily defined, and I would protest against such arbitrariness. My understanding of your view is that occasionally in the time course of a single trajectory, such switches occur, and that their time rate of occurrence comes up for Poincaré recurrence. I accept that, supposing a switch level has (under protest) been stipulated, Poincaré would come up. Chjoaygame (talk) 00:21, 29 December 2020 (UTC)
As I described above, there is only one colored thread that fills the space. Given the assumptions about the phase space, including the ergodic assumption, this conclusion is certain and inescapable. There is no "equilibrium" trajectory.
There are other colored threads, but they have measure zero and are not worth worrying about. I was thinking a particular color of thread specified a particular trajectory, not as an indicator of equilibrium. Yes, you are right, the question of whether a system is in equilibrium is a question of whether the microstate is in the equilibrium subset of phase space. The system trajectory takes the system through equilibrium regions and bizarre regions. When the trajectory is in the equilibrium region, the system is in equilibrium. When it is in a bizarre region, it is bizarre. PAR (talk) 10:73, 29 December 2020 (UTC)
I think that the ergodic assumption is a mathematical artifice, and I have an idea that it is wrong in physics. This fits my view that we should not try to tell the novice about ergodicity. I understand that you believe that the ergodic assumption is valid, though I am not sure that you have distinguished its mathematical convenience from its physical implications. My impression is that the top experts can't do that.
I understand that you believe that there is only one coloured thread, and that all the other coloured ones have measure zero, but I don't think you have proved that, and I am still inclined to believe that there are likely to be countless other coloured threads, all of them occupying the same 'volume', likely to be of fractional dimension. Each thread is of dimension one, but taken over its whole length, it occupies some fractional-dimensioned 'volume'. Each thread goes through all accessible regions, but does not fill any region completely.
The accessible region is precisely the 'equilibrium' region. No trajectory that has points in the 'equilibrium' region can ever venture outside it; such a trajectory can't actually 'enter' or 'leave' the 'equilibrium' region, because it is always in it. The 'non-equilibrium' region is forever out of bounds for such a trajectory. When the trajectory enters a bizarre region, the system is showing transient fluctuations into inhomogeneity. The definition of a bizarre region is arbitrary and leads to endless unproductive confusion.
Yes, you were thinking of each single thread having one colour, but I think your reasoning amounts to assigning it colour changes, from black in equilibrium to white in non-equilibrium. Perhaps we could imagine a thread having two one-dimensional tightly braided strands, one the unique rainbow colour that identifies it, the other changing from time to time between black and white as chosen by the arbiter of bizarreness.
I have tried to put your signatures where I need them to mark the way. I hope I haven't made mistakes in that. Chjoaygame (talk) 11:57, 29 December 2020 (UTC)
Using the ergodic assumption it is clear that there are no µ>0 regions out of bounds for a trajectory and that therefore there is only one trajectory that covers the entire space, with µ=0 exceptions mentioned. Your point of view appears to be that entropy is the property of a trajectory, and you appear willing to go to any length to protect it. If you want to rescue your point of view by denying Boltzmann's ergodic assumption and all that follows, then we will have to agree to disagree. I am willing to accept the ergodic assumption without investigating all the fine details and possible violations, since statistical mechanics, in its basic form, rests upon it. If you want to rescue your point of view by saying that the entropy of a finite isolated system is a fixed constant, regardless of Poincare recurrence and the flows that follow a bizarre state which recurs infinitely often, then we will have to agree to disagree. I'm not saying you are wrong, you are just basically discarding a large portion of statistical mechanics which makes a lot of sense in my mind and I cannot clarify what equally consistent theory you wish to replace it with. PAR (talk) 22:49, 29 December 2020 (UTC)
Good call. Thank you for your care and patience. I have gained a lot from this. Chjoaygame (talk) 23:56, 29 December 2020 (UTC)

The Second Law

You say: Quoting: "This fact is enshrined in the Second Law of thermodynamics, which, roughly, states that transitions from equilibrium to non-equilibrium states cannot occur in isolated systems, which is the same as saying that entropy cannot decrease in isolated systems (where a system is isolated if it has no interaction with its environment: there is no heat exchange, no one is compressing the gas, etc.)." Not too sure what they have in mind here. Need to attend to this question. Why did they write "approached"? Are they denying Poincaré recurrence? Is it redundant that they add that "entropy cannot decrease" when they have just said that "transitions from equilibrium to non-equilibrium states cannot occur in isolated systems"?
The laws of thermodynamics hold in the thermodynamic limit of, what stat mech would describe as, an infinite number of particles. In thermodynamics, entropy never decreases, and there is no Poincare recurrence. Stat mech says that for finite systems, entropy may momentarily decrease, and there is a finite Poincare recurrence time. PAR (talk) 17:34, 28 December 2020 (UTC)
It is your view that "The laws of thermodynamics hold in the thermodynamic limit of, what stat mech would describe as, an infinite number of particles. In thermodynamics, entropy never decreases, and there is no Poincare recurrence."
That is not my view. I prefer to stick to finite systems. Chjoaygame (talk) 00:21, 29 December 2020 (UTC)
Then you will be stuck with severe violations of the second law via a finite Poincare recurrence time, unless you wish to disagree with Boltzmann about the definition of statistical mechanical entropy. PAR (talk) 02:46, 29 December 2020 (UTC)
I recognise that is your view, but mine is at odds with it.
I don't know exactly how Boltzmann distinguishes thermodynamic from statistical mechanical entropy, nor exactly what you mean here. I recognise that most people think that Poincaré recurrence violates the second law, but I don't think so.
The second law talks about thermodynamic entropy, not about statistical mechanical entropy. Poincaré recurrence makes no use of the concept of entropy, and I think says nothing about it. Poincaré recurrence is just a feature of all trajectories. It would be detected experimentally by some kind of local measurements of density, pressure, or temperature. These are time-dependent local variables. They are not state variables of a thermodynamic system in its own state of internal thermodynamic equilibrium. Thermodynamic state variables are global properties of the system as a whole, considered as lasting for practically infinite time. It makes people feel good to think that they can measure thermodynamic entropy at an instant, but my feeling is that they are actually measuring properties that depend entirely on features of a trajectory at an instant, or over a short time, depending on the time-resolution of their instruments.
If you mean that Boltzmann thought that his H-function is a thermodynamic entropy, perhaps you know what he thought, but I don't. I guess I could read up on that. The paper Jaynes (1971) that I cited in my walls of text above, that was quoted by Grandy, already rendered into green font, says that the H-function does not pass as an ordinary thermodynamic entropy.
So, no, I don't feel at all stuck with thoughts of violation of the second law. Indeed, I think my view is the proper one to ensure that there are no violations of it. Chjoaygame (talk) 10:57, 29 December 2020 (UTC)
I think that your view is an artifice generated by a cast of mind that gives priority to statistical mechanics over macroscopic thermodynamics. I accept that your view and cast of mind are not peculiar to you. I guess that many share them. I just think that they are artificial, and I think that they would unhelpfully complicate a picture for a novice. Chjoaygame (talk) 00:21, 29 December 2020 (UTC)
You may be right, and I am ready to get rid of artifices if they make no sense, even if they are "palatable". Are you of the same mind? PAR (talk) 02:46, 29 December 2020 (UTC)
Yes, I am of the mind that we want “to get rid of artifices if they make no sense, even if they are "palatable"”, but for this article I would go quite a lot further, and say that we should try to bypass them merely if they are unlikely to help our novice readers. For example, I think it fair to say that the article on ergodic theory that you indicated is scarcely likely to help our novice readers. Chjoaygame (talk) 10:57, 29 December 2020 (UTC)
I agree that in thermodynamics, for an isolated system, thermodynamic entropy doesn't decrease, but I hold that it doesn't increase either. An isolated system does not suffer transfer of energy as heat, nor in association with transfer of matter. A thermodynamic system in diathermal connection with a constant temperature heat bath, and thereby not isolated, can gain or lose small fluctuative quantities of energy as heat, and can thereby gain or lose small fluctuative quantities of thermodynamic entropy, generating small fluctuations of Helmholtz free energy and of entropy. I have tried to indicate this in my walls of text above.
In my view, an isolated thermodynamic system is susceptible of Poincaré recurrence. In my view, Poincaré recurrence in an isolated system relates to local fluctuation of temperature and pressure, as indicated by statistics of instantaneous microstates, considered in statistical mechanics, but does not relate to fluctuations of thermodynamic entropy, internal energy or volume, which are globally, not locally, defined macroscopic quantities. In my view, for statistical mechanics, for an isolated system with  , thermodynamic entropy, volume, and internal energy, should be considered as globally defined macroscopic parameters, prescribed in advance of any statistical mechanical calculation based on microscopic considerations. Chjoaygame (talk) 00:21, 29 December 2020 (UTC)
So, in the mixing example, is it your view that the instant after the wall is removed, the system's entropy is equal to its equilibrium entropy? PAR (talk) 02:46, 29 December 2020 (UTC)
No, that is rather different from my view in general. That example doesn't give a good general idea of my view. I have tried to set out my view on that matter above, shortly to be put into magenta font. In the specific (but slightly exceptional) example that you name, indeed the partition is not replaced, and that makes it slightly exceptional. In the general case that proceeds to isolation, the permeable wall is eventually replaced with a fully isolative wall. And, yes, in that special case, I do think that at “the instant after the wall is removed, the system's entropy is equal to its equilibrium entropy,” modulo my view that the thermodynamic entropy is the only one that I recognise, so that it is necessarily 'equilibrium'. That instantaneous state will recur according to Poincaré. That is why I want it to be counted as a point on the trajectory. If we excluded it, we wouldn't know that it had recurred; we wouldn't know what to look out for. As you have observed, we will need a good supply of popcorn to watch for the recurrence, but we have research grants for that. There will be lots of other bizarre states that also recur, but again we have plenty of popcorn. Chjoaygame (talk) 10:57, 29 December 2020 (UTC)
But an instant after the wall is removed the system is in a state of flux, in other words not in equilibrium. Yet you say the instant the wall is removed the system has its equilibrium entropy? PAR (talk) 20:02, 29 December 2020 (UTC)
Yes.
When you say "equilibrium" and “in a state of flux”, you are not explicitly distinguishing between macroscopic and microscopic 'flux', and between global external and local internal 'flux', and between instantaneous and eternal time scales. I think your default meaning refers to local instantaneous microscopic flux, from a viewpoint like that of our professor friend who prioritises the microscopic over the macroscopic and the mathematical over the physical. He defines 'equilibrium' in terms of local instantaneous microscopic states, objects that are unknown to thermodynamics. On the one hand, we are talking about thermodynamic equilibrium that refers to an infinitely enduring state characterised by, say, three real number variables, as in  , which we are going to observe over time scales that cover many Poincaré recurrence times (which our professor friend, and, if I remember aright, Boltzmann, dismiss as even longer than infinite), and over space scales that measure only over the whole-system walls, with no 'fluxes' between system and surroundings. And on the other hand we are talking about instantaneous microscopic internal 'fluxes' that we have not defined with respect to the size and time-response of our measuring instruments, that can perhaps measure finely enough to see every molecule and quickly enough to practically by-pass Heisenberg uncertainty, à la Boltzmann phase space, with, say, 10^24 real number variables (or are we considering infinitely many real number variables?).
When I say 'yes', I am referring to thermodynamic equilibrium.
If one considers the possibility of Poincaré recurrence, one will consider the bizarre initial state as just one of many different practically ignorable transient fluctuations. During the recurrence time for that particular bizarre state, there are likely to occur many other quite different bizarre states. Who decides which are bizarre enough to deserve recognition? Chjoaygame (talk) 23:49, 29 December 2020 (UTC)
If I intend to work with an infinitely numerous system, nothing needs to be "snipped out". The bizarre states simply have measure zero. I don't need to ignore them. By virtue of their being measure zero, they are, by their nature, ignorable, and yes, the Poincare recurrence time is infinite, so yes, there will be nothing to see of Poincare recurrence, and there are no exceptions to the second law, entropy always increases. This does not mean that a system cannot be prepared to be in a bizarre state. E.g. remove the wall in the mixing example. For an infinite system, the trajectory will take it into the µ=1 equilibrium state and it stays there. PAR (talk) 01:35, 1 January 2021 (UTC)
If you are happy to ignore the bizarre states in general, it seems to me that you would be happy to ignore the initial bizarre state along with the others. I am confused now. I seem to get the impression that you intend to entirely ignore Poincaré recurrence; if so, we don't need to talk about it? Chjoaygame (talk) 01:06, 3 January 2021 (UTC)
No, I do not ignore the initial bizarre state. For an infinitely large system, it has measure zero, which means it will not be RE-visited. There is nothing to prevent it from being prepared as an initial state. I have tried to explain this in the example below. In the example, the angle zero has measure zero, but you can begin at angle zero. The fact that it is measure zero means it will never be revisited. In the mixing example, you prepare a bizarre state by removing the wall. For a finite system, that bizarre state has a measure greater than zero. For a large system, that measure is extremely small. That state will be revisited in finite time. For a large system, that time will be extremely long. There IS Poincare recurrence. For an infinite system, that bizarre state has measure zero. It will never be revisited, Poincare recurrence time is infinite, or, equivalently, it never happens. PAR (talk) 04:55, 3 January 2021 (UTC)
In my view, sometimes it is your whim to give prominence to a bizarre state, and sometimes it is your whim to ignore it on the grounds that it has measure zero. I would say that it would be practically impossible to prepare one and the same instantaneous microstate more than once. And, as I have mentioned somewhere here below, I have failed to get research grants to do experiments on infinite systems. Till now, I have always assumed that the whole point of Poincaré recurrence in a finite system was that the initial bizarre state would eventually, after a finite time, be arbitrarily nearly approached. At least we agree that for a finite system, Poincaré recurrence happens. But I am confused to find you actually admitting that a finite system has any place or status in results, experimental or theoretical. Chjoaygame (talk) 10:40, 3 January 2021 (UTC)
I should have clarified that it is a bizarre macrostate that is prepared, which constitutes an infinite number of instances of a bizarre microstate, none of which can, in general, be reproduced. The difference between a prepared bizarre macrostate and the revisiting of that macrostate is certainly not whimsical. The distinction is obvious and clear. I don't know why you are surprised that I say that a finite system has any place or status in results, experimental or theoretical. A finite system is the only object available for experimentation. Statistical mechanics theoretically can explain an infinite system (more correctly, it can take the limit as a finite system approaches infinity). It can also quantify the fuzziness of a finite system as expressed by fluctuations, finite Poincare recurrence, etc. It can also show that the fuzziness for large systems is quite small, usually ignorable, in which case the thermodynamic limit may be applied to give excellent results. PAR (talk) 17:26, 4 January 2021 (UTC)
As a simple example, consider a circle, with a point on the circle specified by angle θ where 0 ≤ θ < 360 (degrees). Let's consider a deterministic process where, for each time step, θ increments by a fixed irrational number of degrees. This trajectory will cover the circle, yet will never yield a rational value of θ. Since it covers the circle, it has a measure of 360, or, if we define a measure to be the length covered in degrees divided by 360, it will have measure 1. Over a huge amount of time, the fraction of time a trajectory spends in some finite region is equal to the length (or, more generally, the measure) of that region, so the system is ergodic. There cannot be two possible sets of trajectories which never intersect, while both cover the space of points on the circle. Let's call rational values of θ bizarre. They will have measure zero. We may prepare a state that begins at the rational angle 0, but the trajectory will never return to zero, no matter how long you wait, nor will it land on any other rational value of θ. Poincare recurrence will not be exact. The trajectory will never land on a rational value of the angle, but that doesn't mean we cannot prepare a state which begins at zero. If we declare all irrational values to be "equilibrium", then the trajectory will enter the equilibrium subset of the circle in one step, and never leave it. PAR (talk) 01:35, 1 January 2021 (UTC)
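The circle example can be run numerically (the particular irrational increment here, the golden angle, is my own choice for illustration, since the value is not specified above): starting from θ = 0, the orbit never returns exactly to 0, but it comes arbitrarily close, so Poincaré recurrence is approximate rather than exact.

```python
import math

# Sketch of the circle example.  Starting from theta = 0, the orbit of an
# irrational rotation never returns exactly to 0, but it comes arbitrarily
# close: recurrence is approximate, not exact.

step = 360 * (math.sqrt(5) - 1) / 2      # a fixed irrational increment, degrees
theta, closest = 0.0, 360.0
for _ in range(100_000):
    theta = (theta + step) % 360
    d = min(theta, 360.0 - theta)        # distance back to the starting angle 0
    closest = min(closest, d)
print(closest > 0)        # no exact return to the starting angle
print(closest < 0.01)     # but the orbit comes very close (near-recurrence)
```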
You have defeated me. Sometimes you seem to think that 10^24 particles are not numerous enough, and now you are talking about a one-particle system. Chjoaygame (talk) 10:40, 3 January 2021 (UTC)
The above was not intended to portray any physical system. It was a mathematical example, an analogy, intended to simplify things enormously while pointing out that an ergodic system cannot have two or more independent sets of trajectories, each with a measure greater than zero. Also it gives a simple example of the clear and obvious difference between starting at a bizarre state (0 degrees) and revisiting a bizarre state. PAR (talk) 17:26, 4 January 2021 (UTC)
If, on the other hand, we specify a finite number of "microstates" on the circle, say, for example, only the integer values of θ, and a deterministic process which increments θ by one, then that too will cover the 360 points on the circle. The thing is, each point now has a measure of 1/360, and there are no measure-zero subsets. Defining equilibrium states becomes problematic, and there is exact Poincare recurrence. Still, the system is ergodic. Over a huge amount of time, the amount of time spent in any non-zero subset of the points will be proportional to the number of points (i.e. the measure of those points). There cannot be two possible sets of trajectories which never intersect, while both cover the space of points on the circle.
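The finite counterpart runs just as easily (a sketch of the integer-rotation variant just described): recurrence is exact, with recurrence time 360, and every microstate receives the same measure 1/360 of the time.

```python
from collections import Counter

# Finite counterpart: 360 integer microstates, with the deterministic
# process theta -> theta + 1 (mod 360).  Recurrence is exact, and every
# microstate receives the same measure 1/360 of the time.

theta, t = 0, 0
while True:                      # count steps until the initial state recurs
    theta = (theta + 1) % 360
    t += 1
    if theta == 0:
        break
print(t)                         # 360: exact Poincare recurrence time

theta, visits = 0, Counter()
for _ in range(3600):            # ten full cycles
    theta = (theta + 1) % 360
    visits[theta] += 1
print(len(visits))                            # all 360 microstates visited
print(all(v == 10 for v in visits.values()))  # each for exactly 1/360 of the time
```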
The analogy is not perfect. For a statistical mechanical description of a finite system, there are an infinite number of microstates, but the point is that the bizarre states will not have measure zero, and there is a finite possibility that, by Poincaré recurrence, they will be visited. As you say, the question of what constitutes an equilibrium state becomes fuzzy. The solution to the fuzziness is not to deny ergodicity, etc., but simply to realize that thermodynamics only holds true in the limit of an infinite system. The reason that thermodynamics is so useful is that, for many purposes, with a very large system like a glass of water, the assumption that the system is in fact infinite introduces negligible errors. The error between an infinite recurrence time and a recurrence time equal to many times the age of the universe is negligible. The fluctuations are, in practice, always so small as to be negligible, and the fluctuations that are not small never happen in practice. Thermodynamics far and away gives the correct answers for a large system. For something like Brownian motion, you have to admit that the system is not infinite, and that thermodynamics fails. To repeat, the answer is not to develop some theory in which the second law is true for finite systems. It's like the theory of epicycles to describe the motion of the planets, sun, and stars around the earth. Yes, such a theory can be developed, but Occam's razor practically demands Newton's law of gravitation, and then, to be very precise, Einstein's theory of general relativity. PAR (talk) 01:35, 1 January 2021 (UTC)Reply
I don't deny ergodicity as a brilliant mathematical artifice. I just think that it is regrettable to make the concept of equilibrium fuzzy. I think your approach is a perpetual engine for creating fuzziness, with no pay-off except to satisfy an addiction to fuzziness.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
I am not addicted to fuzziness, it is a simple fact of reality. If I flip a coin, I cannot predict the outcome without extremely detailed knowledge of the forces of my finger and the position of the coin before the flip. If I relinquish that knowledge, as we relinquish knowledge of the microstate of a system, then the outcome is fuzzy and uncertain. However, if I flip that coin 10^24 times, I can say that the ratio will be within 0.00000001 percent of 50-50, 99.999999 percent of the time. That is a serious lack of fuzziness. In the limit of an infinite number of flips, it will be exactly 50-50, no fuzziness. For practical purposes, we can say that the result for 10^24 flips is 50-50. Worrying about the difference between 10^24 flips and an infinite number of flips is pointless. PAR (talk) 17:26, 4 January 2021 (UTC)Reply
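The 1/√n shrinkage of the fluctuations can be checked on a smaller scale (a hypothetical Python sketch; 10^24 flips are out of reach, but the scaling is already visible at a few thousand flips):

```python
import random

random.seed(0)

def typical_deviation(n_flips, trials=200):
    """Average |heads fraction - 1/2| over many repeated experiments."""
    total = 0.0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        total += abs(heads / n_flips - 0.5)
    return total / trials

d_small = typical_deviation(1_000)
d_large = typical_deviation(16_000)
# 16x as many flips shrinks the typical deviation by about sqrt(16) = 4.
print(d_small, d_large)
```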
Also, when I say that a bizarre state is in a state of flux, it is not microscopic flux. Removing the wall in the mixing example causes macroscopic flux as the two gases diffuse into each other. It is a measurable macroscopic flux. PAR (talk) 17:44, 1 January 2021 (UTC)Reply
As I read you here, the boundary between 'macroscopic' and 'microscopic' has blurred out of visibility, and the words have only arbitrary meaning. You talk of 'measurable macroscopic flux'. I say that you will confuse the novice because you have not said 'assuming local thermodynamic equilibrium and small and rapidly responding thermometers and manometers'. This is practically the same problem as the one I see in your policy of distinguishing 'equilibrium' versus 'non-equilibrium' instantaneous states in a single trajectory.Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
I have not blurred the microscopic and the macroscopic. In the mixing example, beginning with molecules A and B on each side of a wall, suppose molecule A absorbs light at some frequency, while B does not. By shining a laser beam at the absorption frequency, you can measure the relative densities of A and B along any path of the light beam. In the beginning, after the wall is removed, the side containing A molecules will show high absorption, the B side none. As time goes on, the absorption will drop on the A side and rise on the B side as the two gases diffuse into each other, until after some time the measured absorptions become practically equal. This is a macroscopic process and happens in minutes, hours, days, whatever. The light beams measure an average along the light path. For a finite system, there will be fluctuations in the absorption, but the larger the system, the longer the path of the light beam through the medium. This will result in smaller and smaller fluctuations, and in the limit of an infinite system, there will be none. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
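The diffusive mixing described here can be caricatured in one dimension (a hypothetical Python sketch; random kicks stand in for diffusion, and the left-half fraction stands in for the line-averaged absorption measurement):

```python
import random

random.seed(1)

def left_fraction_after_mixing(n_molecules, steps=400, kick=0.1):
    """All A molecules start in the left half of a unit box (the wall has
    just been removed); each takes small random kicks, reflecting at the
    walls. Returns the final fraction of A found in the left half."""
    xs = [random.random() * 0.5 for _ in range(n_molecules)]
    for _ in range(steps):
        for i, x in enumerate(xs):
            x += random.uniform(-kick, kick)
            if x < 0.0:
                x = -x        # reflect at the left wall
            if x > 1.0:
                x = 2.0 - x   # reflect at the right wall
            xs[i] = x
    return sum(x < 0.5 for x in xs) / n_molecules

# After mixing, A is spread roughly evenly; the residual fluctuation
# around 1/2 shrinks as the number of molecules grows.
print(left_fraction_after_mixing(100))
print(left_fraction_after_mixing(5_000))
```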
The laser beam is narrow and samples its locality. It does not measure thermodynamically recognised macroscopic variables. My feeling is that you are committed or habituated to confusion, and that is part of the reason why I am happy to agree to disagree.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
I agree, the laser beam is narrow and samples its locality. It does not measure thermodynamically recognised macroscopic variables. It does quantify an inhomogeneous or bizarre macrostate and its change in time as it approaches equilibrium. This is not confusing. I am not committed to confusion, I recognize it as an unavoidable consequence of trying to deal with a small system using incomplete information. I might accuse you of being committed and habituated to certainty, even though it does not exist in a system which is only partially described by macroscopic variables. PAR (talk) 17:26, 4 January 2021 (UTC)Reply
You propose that "thermodynamics fails" for Brownian motion because you confound the macroscopic with the microscopic, and you use the word 'entropy' for the time-dependent concept that I prefer to call 'inhomogeneity'; you use that word when the second law is about thermodynamic entropy. Again, the perpetual engine for the generation of confusion. Einstein wrote about Brownian motion, but didn't think that it made thermodynamics fail. Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
I never mentioned entropy in regard to the system when it was in a state of flux. Brownian motion does not make thermodynamics fail, I should not have said it like that. Thermodynamics simply cannot address the problem of Brownian motion. It is outside the scope of thermodynamics. The system considered is quite small - a dust particle and its immediate small neighborhood. The thermodynamic limit of an infinitely large system is severely violated, and thermodynamics therefore cannot address the problem. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
I will not pursue this here.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
Regarding the statement you quote from Smale and Sinai, they may well be correct, but it doesn't matter. While, strictly speaking, Hamiltonian systems may not be strictly ergodic, Smale and Sinai say that they are "essentially ergodic". I take this to mean that the ergodic assumption is valid enough in practice to give results in which the error is extremely small. It does not justify the massive violation of ergodicity that would result from declaring that there are two or more distinct classes of trajectory (e.g. "equilibrium" and "bizarre" trajectories), each of which has a measure significantly larger than zero.
Smale and Sinai may have a number, a measure of how much a given Hamiltonian system violates ergodicity. I would be curious to know how such a measure of non-ergodicity behaves in the thermodynamic limit. I wonder if it decreases as the system grows larger, tending to zero in the thermodynamic limit? PAR (talk) 02:12, 2 January 2021 (UTC)Reply
The same issue as above. For me, in an isolated system, each trajectory is entirely an equilibrium trajectory, by definition; likewise, there are no 'bizarre' trajectories. Each equilibrium trajectory will at Poincaré time occasions transiently pass through bizarre instantaneous microstates. As for the number of equilibrium trajectories, I reserve my thoughts; I am not sure. I would allow for the possibility, without prejudice, that there might be infinitely many of them, all of equal status and kind, probably wriggling around each other, and each perhaps of zero measure, or having some kind of fractal dimension. Jointly they would fill the energy hyper-surface.Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
Assuming ergodicity, this is provably false. Assuming ergodicity or "essential ergodicity", there cannot be two or more independent trajectories of non-zero measure. As for measure-zero trajectories, jointly or individually, they contribute exactly nothing, a perfect zero, when it comes to filling phase space. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
I am not convinced that ergodicity or quasi-ergodicity are the only hypotheses that can generate experimentally verified predictions. I am open to the idea that some other hypotheses might also do the trick. I continue to reserve my position on this.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
You may be right, but ergodicity is a fundamental assumption of statistical mechanics and the question of whether it is precisely true is an abstruse subject which I am not interested in at present. The extreme violations of ergodicity that you propose to rescue your theory are not true. PAR (talk) 17:26, 4 January 2021 (UTC)Reply
You write of the 'thermodynamic limit'. I would be happy if instead you wrote of the 'statistical-mechanical infinite particle-number limit'. Statistical mechanics works with the ergodic, or with the quasi-ergodic, hypothesis. Statistical mechanics works with the N → ∞ limit at T = 0. It is a sophisticated mathematical theory, and is very successful; but its physical meaning is not easy to see. Because it is mathematical and sophisticated, I think it is not the best basis for a non-mathematical article intended for novices.Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
Statistical mechanics is not restricted to a temperature of zero (which is what I assume you mean by T=0.) nor to an infinite system. I cannot believe that is what you meant, so I must have misinterpreted that. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
Sorry, I was lazy. I meant time, not temperature. I was pointing to an instantaneous state. I don't know why you talk about the limit of infinitely many particles if you aren't referring to a mathematical adventure, namely statistical mechanics.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
I am referring to statistical mechanics. The statistical mechanics description in the limit of infinite number of particles matches the predictions of thermodynamics perfectly. For large systems, it matches extremely well, much better than experimental error.PAR (talk) 17:26, 4 January 2021 (UTC)Reply
For me, the physics tries to think in terms of the t → ∞ limit of trajectories with finite N, without thoughts of ergodicity, because it considers trajectories as primary. For an isolated system, this point of view makes the second law come true by defining entropy as a timeless constant. It allows the logical possibility of locally defined intensive variables showing fluctuations, which for many cases are too small to observe, but which can show Poincaré bizarre states as extremes. For non-isolated systems, the statements need trimming.
Quoting: "valid enough in practice". You mean 'mathematical practice', not 'experimental practice'.Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
No, I mean 'experimental practice', not 'mathematical practice'. If the predictions of statistical mechanics, including ergodicity, did not give excellent results (i.e. agree with thermodynamics) for large systems, then statistical mechanics would be wrong. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
Experiment can directly check some of the consequences of the ergodic and quasi-ergodic hypotheses. But it cannot directly check the hypotheses themselves. They are mathematical artifices, not physical phenomena.
I have said that I think statistical mechanics is very successful, meaning that it gives accurate predictions of experimental results. But the ergodic hypothesis and the quasi-ergodic hypothesis themselves cannot be directly checked by experiment.   does not imply  Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
Agreed.PAR (talk) 17:26, 4 January 2021 (UTC)Reply
As for Smale and Sinai on "essentially ergodic", I would need to check the details, but not now. Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
With regard to your magenta statement "and starting our final macroscopic thermodynamic equilibrium. Who is to say how far the minglings, dispersals, and diversifications had progressed at  ? I claim that thermodynamics doesn't try to say.". I have a problem with "starting our final macroscopic thermodynamic equilibrium". When the wall is made impermeable, the system is in a bizarre state, assuming it was not made impermeable when the system reached equilibrium. When it is made impermeable, macroscopic changes will commence, and the two systems each start to approach equilibrium, starting from that bizarre state.PAR (talk) 02:12, 2 January 2021 (UTC)Reply
Your assumption, that the wall was not made impermeable till the system had satisfied the criteria of non-bizarreness chosen by your arbiter, is arbitrary.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
This is the same issue again. You like to distinguish 'equilibrium' versus 'non-equilibrium' instantaneous states in a single trajectory. I think it is arbitrary to do so, and I think it is consequently a perpetual engine for the generation of confusion. Again, I see you as blurring or obliterating the macro/microscopic distinction.Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
Yes, the distinction becomes more arbitrary the smaller the system. For an infinite system there is zero arbitrariness. For an infinite system the equilibrium region of phase space has measure 1 and there is no arbitrariness to the concept of equilibrium. All other states are bizarre, or what I prefer to call non-equilibrium, and they have measure zero. You can begin in a bizarre state, but it will NEVER occur except as an initial condition in an infinite isolated system. This is why the second law unambiguously states that entropy always increases. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
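The claim that the equilibrium region approaches measure 1 can be illustrated with a crude counting model (a hypothetical Python sketch; left/right occupancy of each particle stands in for the full phase space, and "equilibrium" is taken, as an arbitrary illustrative cutoff, to mean a left-side count within 1% of N/2):

```python
from math import comb

def equilibrium_fraction(n, tol=0.01):
    """Fraction of the 2^n equally weighted left/right configurations
    whose left-side count is within tol*n of n/2."""
    lo = int(n / 2 - tol * n)
    hi = int(n / 2 + tol * n)
    good = sum(comb(n, k) for k in range(lo, hi + 1))
    return good / 2**n

# For small n the "equilibrium" region is far from dominant; as n grows
# it swallows almost all of the configurations.
print(equilibrium_fraction(100))
print(equilibrium_fraction(50_000))
```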
That is how you like to see things. I have not had an opportunity to do an experiment on an infinite system. I regard infinite systems as mathematical abstractions, not physical objects.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
I think you view your 'bizarre/non-bizarre' // 'equilibrium/non-equilibrium' distinction as non-arbitrary because you think you can quantify it by use of a discriminative quantity that you calculate on a probabilistic basis, that you like to call 'entropy', that you attribute to Boltzmann. For me, that is too close to thinking that thermodynamic entropy is defined for non-equilibrium macroscopic processes, or for instantaneous states. Again, I think this is a perpetual engine for the generation of confusion. If you are willing to call your discriminative quantity 'inhomogeneity', or some other name that you may choose, then my objections will vanish. Till now, for the sake of temporary argument, we have been using the word 'bizarre' for this purpose. For me, subject to revision, I see 'bizarre' as meaning 'inhomogeneous'.Chjoaygame (talk) 01:06, 3 January 2021 (UTC)Reply
I do not calculate entropy on a probabilistic basis. It is proportional to the log of the volume of the equilibrium microstates in an infinite system, which is equal to the volume of the entire phase space. This loudly denies the idea that a microstate has entropy, and loudly denies the idea that an entropy can be attributed to anything but an equilibrium macrostate. For finite systems, things get progressively fuzzier as the system size decreases. For large, but not infinite, systems, the assumption that we are dealing with an infinitely large system is an extremely good approximation. If 'bizarre' means 'inhomogeneous', then for an isolated system that unavoidably means "in a state of flux" which is not the definition of equilibrium. PAR (talk) 04:55, 3 January 2021 (UTC)Reply
I thought that the equilibrium microstates were equiprobable. When you write "entire phase space", I guess that you mean entire energy hyper-surface.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
I think we agreed that, for a finite system, the subset of the 6N dimensional space corresponding to a fixed energy and fixed volume would be called the "phase space". All points (microstates) in the phase space will have the same fixed energy and be contained in the fixed volume of the system. To be mathematically correct, when we speak of an infinite system, we are referring to the limit of any quantity of a finite system as N grows without an upper bound. It is mathematically incorrect to speak of a system with an infinite number of particles because infinity is not a real number. When we do speak of an infinite system, it is "code" for the mathematical limiting process. Likewise, the statement that "all microstates are equiprobable" is mathematically improper. It is "code" for the correct statement that in the limit of an infinite time interval, the fractional amount of time spent in any subset of phase space is equal to the fractional volume of that subset (fractional volume being the volume of the subset divided by the volume of the entire phase space). PAR (talk) 17:26, 4 January 2021 (UTC)Reply
For me, what you have just written is an example of the confusion created by obliterating the macroscopic/microscopic distinction. At least we can agree that in some matters, in your view, "things get progressively fuzzier". In my view, your view is committed to such fuzziness. I think such fuzziness is an arbitrary and ineluctable artefact of your view, with no pay-off. That is part of the reason why I was and still am happy to agree to disagree.Chjoaygame (talk) 10:40, 3 January 2021 (UTC)Reply
I repeat, I have not obliterated the macroscopic/microscopic distinction. Can you give a clear example of how I have done this? I am not committed to fuzziness, I accept that it happens in finite systems and deal with it. The fuzziness disappears in the limit of an infinitely large system, and I appreciate that clarity. That clarity is what gives thermodynamics its clarity, since thermodynamics is very clear about entropy in the second law - it ALWAYS increases, unless the system is already in thermodynamic equilibrium. I could just as well say that you are trying to impose an unrealistic lack of fuzziness for finite systems and seem to reject the clarity of an infinitely large system. You are led to reject ergodicity, a fundamental assumption of statistical mechanics, in your quest for clarity. You are led to reject the statistical mechanical definition of entropy as the log of the volume of a macrostate in phase space. You refuse to consider the thermodynamic limit in statistical mechanics, which is the only situation in which statistical mechanics mirrors the results of thermodynamics. I'm not saying you are wrong, but every time you object to my point of view it is usually born of a misunderstanding of my point of view. PAR (talk) 17:26, 4 January 2021 (UTC)Reply

some thoughts


Perhaps I am mistaken in the following, but it is my best belief. Quite likely I may learn something here.

I imagine a finite system of material particles, atoms, molecules, and suchlike. The isolated system is defined by its thermodynamic entropy   and its volume  . Also its internal energy is a definite function   of those two defining quantities. The system is modeled microscopically by a phase space referring to   particles. For an example with convenient simplicity, we may consider the particles to be points that cannot spin or rotate. Each particle, numbered  , is fully described at time   by a point in its private 6-dimensional phase space  . The particles move under the law specified by the Hamiltonian  . Starting at time   and continuing till time  , an instance numbered   of the system traces out a trajectory  . Chjoaygame (talk) 14:17, 3 January 2021 (UTC)Reply

I have to clarify here: An isolated system is defined by the number of particles (N), the fixed energy (U) and the fixed volume (V). In statistical mechanics, this defines the phase space of the system. A single point in the phase space specifies the position and momentum (or velocity) of every particle. By declaring that the entropy (S) is fixed, you have further declared that the system is in thermodynamic equilibrium. There are no macroscopic variations in any thermodynamic variable, and it is not in a bizarre macrostate, which may be prepared, but does not occur spontaneously for an infinite system, and therefore does not occur spontaneously in thermodynamics.PAR (talk) 06:24, 4 January 2021 (UTC)Reply

I imagine that the trajectory   uniformly 'explores' the hyper-surface   defined by  . I imagine that its exploration takes it uniformly near every point of   over the time interval  , where   is several Poincaré recurrence times. I imagine that this finite exploration does not come close to actually visiting every point of  .

Now I imagine that  . The trajectory   will then do its darnedest to reach every point of   of dimension  . Whether it does so, I imagine, will depend on  . I imagine that in general it will actually reach only a set of some kind of fractal dimension  . Perhaps   will have a nice nature, so that   will reach a set of full dimension  , and the job will be done by just  . That would be the case that you propose. For all I know, that may be all. But if not,   will need to take some kind of infinity of values, enough to do a better job of reaching every point of the set   of dimension  ; I have an open mind about such possibilities. Thermodynamic entropy will need to be defined in terms that will deal with the dimensionality of the 'covering' of   by the trajectories such as  .

For me, whether   does the job, or some infinity of   values is needed, the important thing will be that each   will 'explore' or 'cover'   in some sense uniformly. That will express what I regard as of essential importance in the job description that is usually called 'ergodicity', or some such. Moreover, I imagine that, apart perhaps from some negligible exceptions, the many   will each do a uniform job, with all the   the same.Chjoaygame (talk) 14:17, 3 January 2021 (UTC)Reply

No. I have repeated this a number of times. To be very clear, if you assume ergodicity, then there cannot be two or more independent, forever separate trajectories each of which cover the phase space. This is a mathematically provable fact. If you reject it, you reject ergodicity in a massive way. PAR (talk) 06:24, 4 January 2021 (UTC)Reply

For me, there won't be a privileged 'brahmin' region of 'equilibrium' points and an unprivileged 'untouchable' region of 'non-equilibrium' points in  . True, there will be some relatively bizarre regions in  . And perhaps it may be possible to quantify their degree of bizarreness by something like Boltzmann's H-function, but I think it unlikely in general. Bizarreness will need to have some likeness to inhomogeneity.Chjoaygame (talk) 14:17, 3 January 2021 (UTC)Reply

Yes, bizarreness will need to have some likeness to inhomogeneity. This inhomogeneity cannot persist. There will unavoidably be changes in the system as time goes on, which result from the inhomogeneity. The existence of these temporal changes explicitly deny the idea that the system is in thermodynamic equilibrium.PAR (talk) 06:24, 4 January 2021 (UTC)Reply

As I now understand you, you hold that just the unique   will, as  , actually reach every point of the full   dimensional set  . That would be very nice, and, for all I know, perhaps it may be the true case, duly called ergodicity.

That's my thinking. Perhaps it's nonsense.Chjoaygame (talk) 14:17, 3 January 2021 (UTC)Reply

To be mathematically rigorous, we cannot say that a trajectory will visit every point. It is "code" for the idea that every phase tube will visit a neighborhood of every point, no matter how small the cross-sectional volume of the phase tube, and no matter how small the neighborhood of the point. So, unrigorously, yes, as time → infinity, a trajectory will reach every point of the full 6N−1 dimensional phase space. PAR (talk) 06:24, 4 January 2021 (UTC)Reply

For ease of editing, copy-and-pasted from above:

I think it is special pleading that you make the initial bizarre state of a kind with measure zero, and thereby give yourself a way of starting in a state with no measurable neighbourhood to which the system will recur. I think that there are many states with complete separation of the gases, any one of which could happen to be the starting state; I think that class of states is not of measure zero. Yes, any one of them will not be exactly revisited, but every one of its non-zero–measure neighbourhoods, however small, will eventually be visited again. I think that the system will eventually recur to every neighbourhood of non-zero measure.Chjoaygame (talk) 01:17, 4 January 2021 (UTC)Reply
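This recurrence-to-neighbourhoods picture can be checked on the circle example from earlier in the thread (a hypothetical Python sketch; the irrational step 180·√2 degrees is an arbitrary stand-in). Starting at the bizarre angle 0, the trajectory never hits 0 again exactly, but it re-enters every neighbourhood of 0, and a smaller neighbourhood (smaller measure) takes correspondingly longer to revisit.

```python
import math

# Hypothetical irrational increment, in degrees.
STEP = 180.0 * math.sqrt(2)

def first_return_time(half_width_deg):
    """Steps until the trajectory, started at 0, re-enters the arc
    (-half_width, +half_width) around its starting point."""
    theta, t = 0.0, 0
    while True:
        theta = (theta + STEP) % 360.0
        t += 1
        if theta < half_width_deg or theta > 360.0 - half_width_deg:
            return t

# A wide neighbourhood is revisited quickly; a much narrower one
# (smaller measure) takes far longer to be revisited.
print(first_return_time(1.0))
print(first_return_time(0.001))
```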

That class of macrostates is not of measure zero for a finite system, and will therefore be revisited. For an infinite system, that class of macrostates is of measure zero and will not be revisited.PAR (talk) 06:24, 4 January 2021 (UTC)Reply

In other words, I think that even in a finite system, the 'equilibrium' states form a set of full measure in the energy hyper-surface. The set of bizarre states that have no returned-to measurable neighbourhood is not merely bizarre: it is ultra-bizarre, has measure zero, and will practically never occur as an initial bizarre state. You are pleading to start in an ultra-bizarre state. This doesn't depend on making the system infinitely large.Chjoaygame (talk) 01:17, 4 January 2021 (UTC)Reply

For a finite system, bizarre macrostates do not have measure zero. Also, for a finite system, the dividing line between equilibrium and bizarre is somewhat fuzzy. For a system of 3 particles, what is an equilibrium macrostate and what is a bizarre macrostate? Very hard to say. For a system of 10^23 particles it's much much clearer, but not perfectly clear. For an infinite system it is perfectly clear.
For a system of 3 particles, it is difficult to define a meaningful or useful temperature or pressure. The Poincaré recurrence time is short. For a system of 10^23 particles, a macrostate with all particles in the left half of the volume is certainly bizarre, and a system whose pressure and temperature, measured as time-averages over a minute, are found constant to within experimental error, is certainly in an equilibrium macrostate. A system can be prepared in a bizarre macrostate (remove the wall in the mixing example), and it will be revisited, but the recurrence time for that macrostate is too enormous to worry about. For an infinite system, bizarre states exist and the system can be prepared in a bizarre state, but that bizarre state is never revisited, because it has measure zero. The Poincaré recurrence time is infinite.PAR (talk) 06:24, 4 January 2021 (UTC)Reply
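The growth of the recurrence time with particle number can be sketched with a toy resampling model (a hypothetical Python sketch; at each step every particle is independently equally likely to be found in either half, a crude stand-in for thoroughly mixing dynamics). The all-on-the-left macrostate then recurs after about 2^N steps on average, which is why it recurs quickly for 3 particles and essentially never for 10^23.

```python
import random

random.seed(2)

def recurrence_steps(n_particles):
    """Steps until every particle is simultaneously in the left half."""
    t = 0
    while True:
        t += 1
        if all(random.random() < 0.5 for _ in range(n_particles)):
            return t

# Expected recurrence time is about 2^n steps: tiny for 3 particles,
# around a million for 20, astronomical for 10^23.
for n in (3, 10, 20):
    print(n, recurrence_steps(n))
```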

The above paragraphs in grey font express something that had not occurred to me throughout our conversation. It reveals that I have been far misreading you all the while. Now I feel that I need to check with you about this, to see how you think.

Now I see that I have probably been guessing, imagining, or assuming wrongly, about how you look at the case of the abrupt initial-condition-ending process-commencing removal of the initial partition, with no process-terminating final-condition-starting replacement of a rigid impermeable isolative one. What woke me up just now was this: "I never mentioned entropy in regard to the system when it was in a state of flux." I have been probably quite wrongly imagining that you think that the final-condition-starting ultra-bizarre instantaneous state has a high entropy, and that, by internal 'fluxes' and drifts, it rapidly and then gradually evolves into the final 'equilibrium' state that has the equilibrium entropy. I will stop at this point, and await a check from you about this.Chjoaygame (talk) 01:17, 4 January 2021 (UTC)Reply

I think you meant to say "... final-condition-starting ultra-bizarre instantaneous MACROstate has a LOW entropy", which increases to the equilibrium entropy. (Microstates do not possess entropy). No, I am being careful about not attributing a low thermodynamic entropy to the bizarre initial macrostate. Statistical mechanics says that if you can characterize that macrostate by perhaps dividing the volume up into small pieces, each one of which can be treated as a thermodynamic system (AKA LTE), then you can assign an entropy to the entire system, but I am not going there right now. PAR (talk) 06:24, 4 January 2021 (UTC)Reply
My careless mistake. Quoting “I think you meant to say "... final-condition-starting ultra-bizarre instantaneous MACROstate has a LOW entropy".” Yes, I was careless in writing “you think that the final-condition-starting ultra-bizarre instantaneous state has a high entropy.” I meant 'you think that the final-condition-starting ultra-bizarre instantaneous MICROstate has a LOW entropy.' I hold that a macrostate cannot be bizarre or non-bizarre, nor can it be instantaneous. For me, to specify bizarreness, one needs instantaneous microscopic data. And for me, a macrostate endures for some longish time.
For me, a macrostate of an isolated system is specified by the timeless thermodynamic state variables   In principle, it admits time-dependent global fluctuations in  , though generally these will be too small, and perhaps practically impossible, to observe, unless the system itself is very small. For me, whatever the size of the system, local time-dependent fluctuations in those quantities will occur continuously, and are not macroscopic.Chjoaygame (talk) 08:18, 4 January 2021 (UTC)Reply
No. You have declared S to be a constant, and so you have snipped out the bizarre macrostates. Bizarre macrostates are characterized by flux, so their entropy is undefined. For a large or infinite system you have removed the possibility of a bizarre state being an initial state. You have removed the possibility of a large system revisiting a bizarre state. The macrostate of an isolated system is defined in terms of phase space as a subset of the phase space. The distinction is fairly clear for a large system, crystal clear for an infinite system. For a finite system, fluctuations are the result of Poincaré recurrence. For a large finite system, they will be very small, although in principle a large fluctuation may put the system into a bizarre state, but that is a very, very rare event. For an infinite system, fluctuations are absent, and the equilibrium macrostate is an unvarying certainty. PAR (talk) 17:39, 4 January 2021 (UTC)Reply
It is good that we have this in common: “(Microstates do not posess entropy). No, I am being careful about not attributing a low thermodynamic entropy to the bizarre initial macrostate.” I have to admit that till very lately, I didn't know that such is your view. We agree. I have mistakenly been supposing otherwise. This has likely been a significant cause of grave confusion for me in this conversation.
I am happy to read “... but I am not going there right now.” At present I want to avoid considering LTE (aka local thermodynamic equilibrium). For me, to consider local thermodynamic equilibrium in the present context would be confusing. For me, it is a default part of the essence of local thermodynamic equilibrium that each small spatial region carries a thoroughly unpartitioned open system, unless otherwise very particularly specified. In the present conversation, we are talking about isolated systems unless otherwise specified.Chjoaygame (talk) 08:18, 4 January 2021 (UTC)Reply
We seem unable to agree on basic definitions. I am now happy to agree to disagree.Chjoaygame (talk) 22:21, 4 January 2021 (UTC)Reply
Lack of agreement on basic definitions is totally fixable. I think it is more than that. For example, what is your opinion about the fact that, assuming ergodicity, there cannot be two or more separate trajectories which cover the phase space? If you agree to that and are willing to assume at least "approximate ergodicity", then you simply must give up the idea that there are "equilibrium trajectories" and "bizarre trajectories". PAR (talk) 04:11, 5 January 2021 (UTC)Reply
Perhaps in all this, I have made some slips. Without going back to check for those, here is my view. I am considering only the case of an isolated system for the rest of this post. I think that there is no such thing as a "bizarre trajectory". Or at least, if such exist, they are not merely bizarre but are ultra-ultra-bizarre practically purely mathematical objects that I am totally happy to completely ignore from a physical point of view. In my view, there are countless bizarre instantaneous microstates, without prejudice as to whether or not they constitute a set or family of sets of measure zero or non-zero, or an unmeasurable set. I think that every ordinary or physically interesting bizarre instantaneous microstate is a point on an ordinary "non-bizarre" trajectory, such as you like to call 'an equilibrium trajectory'. I think that every trajectory in the accessible region of phase space is an equilibrium trajectory; I cannot imagine considering even an ultra-ultra-bizarre trajectory that could be usefully called 'non-equilibrium', though I guess it will be so extremely bizarre that I have no definite thoughts about it. I think that the 'volume' or 'quasi-volume' or fractional dimensional 'hyper-area' that is occupied by a trajectory is a matter for further analysis.
I say that the measure of the set of bizarre instantaneous microstates is arbitrarily prescribed at the whim of the arbiter of bizarreness, being possibly of measure zero at his whim, or at finite non-zero measure at his whim; or unmeasurable at his whim. I see no necessity that the arbiter will make the set of bizarre instantaneous microstates measurable or not measurable. This problem is not nearly dealt with by taking the limit of infinitely many particles. Loosely thinking, I suppose that there will be countless bizarre instantaneous microstates, likely of finite non-zero measure.
So far as I understand, the concepts of 'assumed ergodicity' and of 'at least "approximate ergodicity"' are, amongst experts, subject to reams of debate that is far from conclusive, or at least is concluded with important technicalities that are at present beyond me. I see in textbooks talk of 'quasi-ergodicity' since the days of the Ehrenfests. I think this topic is nowadays dealt with in terms of various concepts of 'fractional dimension', that cover things such as trajectories that are not adequately described by an integer-dimensioned 'occupation', having instead, one might say, 'quasi-volume'.
I think this is not the main problem that we have. I think our main problem lies in ideas of physics, not in mathematical questions. My view still differs significantly from yours, even when, without prejudice, for the sake of temporary argument, I accept your view that ergodicity is ergodicity and that there is only one trajectory that effectively 'fills', with full measure, the accessible region of phase space with full integer dimension, and that all other conceivable trajectories add up to zero measure in that accessible region.
I have looked over this post and not found slips, but perhaps some remain.Chjoaygame (talk) 05:45, 5 January 2021 (UTC)Reply
Thinking it over, I see that I conceded more to you than I needed to.
You like to consider the case when no final partition is actually replaced: the initial state is ended by the removal of the partition. The question remains, when does that process end so as to start the final equilibrium state? I carelessly allowed it, without comment, to end at the moment of the removal of the partition. That was careless and arbitrary of me. It gave you plenty of scope, a free lunch.
But for the usual case, there is a finite time between the removal or permeability change of the partition and its final replacement with an impermeable partition. Likewise, a logical thing would be to say that after the removal of the partition, a finite time should be allowed before the final equilibrium state is determined, by a virtual or nominal replacement of the nominal or virtual but actually vacuous partition. I now say that indeed such a finite time should be allowed. During that finite time, some mingling of the gases will proceed. I now propose that the duration of that finite time is to be declared by the arbiter of bizarreness at his whim. I will allow him to make that duration as short as he pleases, even so short that there is practically no time for even a single molecule to cross the former boundary; for practical purposes I am happy to allow that this is practically the same as the case of no replacement of the partition, neither virtual nor actual. If it is so short, I will allow it still to define the start of the final equilibrium with an ordinary bizarre instantaneous microscopic state; but I will still say that such is arbitrary. Another arbiter, or the same arbiter on another day, might say that the process must last a significant finite time until the final equilibrium is declared. I wouldn't dispute that because I say the virtual 'replacement', though purely nominal and not actual, is arbitrary from the word go.Chjoaygame (talk) 10:09, 5 January 2021 (UTC)Reply
You say:

...the measure of the set of bizarre instantaneous microstates is arbitrarily prescribed at the whim of the arbiter of bizarreness, being possibly of measure zero at his whim, or at finite non-zero measure at his whim; or unmeasurable at his whim. I see no necessity that the arbiter will make the set of bizarre instantaneous microstates measurable or not measurable. This problem is not nearly dealt with by taking the limit of infinitely many particles. Loosely thinking, I suppose that there will be countless bizarre instantaneous microstates, likely of finite non-zero measure.

The "arbiter" does not decide the measure of a macrostate. Once defined, it is certain. I think you mean to say that the definition of a bizarre state is whimsical. For a small system, I agree. For a system of 10^23 particles, it is much less so. I totally disagree with the statement that "This problem is not nearly dealt with by taking the limit of infinitely many particles."
If I flip a coin twice, what outcome is bizarre, what outcome is expected? Very difficult to say. All heads has a probability of 1/4, a 50-50 ratio of heads to tails has a probability of 1/2. If you flip it 100 times, the chance of getting all heads is incredibly small, and rather "bizarre", and you will almost certainly be within fifteen percent of a 50-50 ratio of heads to tails, the "equilibrium" ratio. If you flip it 10^23 times, you can effectively forget about all heads, even though it is theoretically possible. It is extremely bizarre. You will have a 50-50 ratio of heads to tails to an incredibly accurate degree. The larger the number of flips, the less likely all heads will be and the closer the ratio of heads to tails will be to 50-50. This can be stated less rigorously by saying that in the limit of an infinite number of flips, the probability of all heads is precisely zero, and is certifiably bizarre, and the ratio of heads to tails is precisely 50-50.
This is simply the law of large numbers. For small systems, the definition of bizarre and equilibrium is whimsical. In the thermodynamic limit it is a certainty. Fuzziness is not a binary condition. There are degrees of fuzziness, which you do not appear to acknowledge. Small systems are fuzzy, large systems have very little fuzziness, and infinite systems have none.
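The coin-flip numbers above can be checked exactly with a short script (a sketch in Python, not part of the original discussion; the "within fifteen percent" band is read as heads counts in the inclusive range from n(0.5 − 0.15) to n(0.5 + 0.15)):

```python
from math import comb

def prob_within(n, frac):
    """Probability that the heads count of n fair flips lies within
    frac of the 50-50 ratio (inclusive band of counts)."""
    lo = round(n * (0.5 - frac))
    hi = round(n * (0.5 + frac))
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

# All heads becomes vanishingly rare as the number of flips grows...
print(2 ** -100)                 # chance of 100 heads in a row
# ...while a near-50-50 ratio becomes a near-certainty.
print(prob_within(10, 0.15))     # small n: still quite fuzzy
print(prob_within(100, 0.15))    # n = 100: almost certain
```

The exact binomial sums make the point: all heads is already astronomically unlikely at n = 100, while the near-50-50 band approaches certainty as n grows.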
Regarding the time interval after removing the wall - Experimentally, the time interval you must wait until equilibrium is the time interval it takes to be unable to measure any macroscopic change in time. Macroscopic change in time denies equilibrium. In statistical mechanics, the system moves from a bizarre state towards an equilibrium state. For a small system, this is fuzzy, there are large fluctuations, and the dividing line between bizarre and equilibrium is fuzzy. For 10^23 particles it is enormously clearer, although not perfectly clear. For an infinite system it is perfectly clear, and the time required is infinite.
I ask once again: Do you agree that for a finite ergodic system, since there is only one space-filling trajectory, and since equilibrium is not defined for every macrostate, that equilibrium is not a property of a trajectory? PAR (talk) 03:47, 7 January 2021 (UTC)Reply
No. I am inclined to agree to disagree.Chjoaygame (talk) 11:10, 7 January 2021 (UTC)Reply
Then you are denying a mathematically provable fact. Is this what you mean to do? This is a hypothetical question, I am not asking you to agree that a finite ergodic system has anything to do with reality. PAR (talk) 18:47, 7 January 2021 (UTC)Reply
I still think we differ on definitions of basic concepts. That seems to be unfixable.Chjoaygame (talk) 21:42, 7 January 2021 (UTC)Reply
My question does not depend on any difference we may have of basic concepts. It is a simple question. Unless you will at least attempt to answer it, our conversation is stalled. PAR (talk) 00:09, 8 January 2021 (UTC)Reply
Your question exercises the terms 'equilibrium' and 'macrostate'. I think we don't agree on their definitions. My feeling is that now is a good time to say that our conversation is stalled. I have gained much from it, and I thank you for your generosity with the care and work that you have put into it.Chjoaygame (talk) 04:54, 8 January 2021 (UTC)Reply
Ok. It was not a waste of time for me, I learned a lot of things that I used to take for granted and some I never knew. PAR (talk) 08:55, 8 January 2021 (UTC)Reply
For this, we can thank Jimmy Wales and those who support Wikipedia.Chjoaygame (talk) 12:06, 8 January 2021 (UTC)Reply

Missing cite in Central limit theorem for directional statistics

The article cites "Rice (1995)" but no such source is listed in the bibliography. Can you please add it? This issue dates back to 2011. Also, I suggest installing a script to highlight such errors in the future. All you need to do is copy and paste importScript('User:Svick/HarvErrors.js'); // Backlink: [[User:Svick/HarvErrors.js]] to your common.js page. Thanks, Renata (talk) 02:20, 17 March 2021 (UTC)Reply

Done PAR (talk) 14:21, 19 March 2021 (UTC)Reply

Notice

This is a standard message to notify contributors about an administrative ruling in effect. It does not imply that there are any issues with your contributions to date.

You have shown interest in COVID-19, broadly construed. Due to past disruption in this topic area, a more stringent set of rules called discretionary sanctions is in effect. Any administrator may impose sanctions on editors who do not strictly follow Wikipedia's policies, or the page-specific restrictions, when making edits related to the topic.

For additional information, please see the guidance on discretionary sanctions and the Arbitration Committee's decision here. If you have any questions, or any doubts regarding what edits are appropriate, you are welcome to discuss them with me or any other editor.

Alexbrn (talk) 17:24, 11 November 2021 (UTC)Reply

ArbCom 2021 Elections voter message

 Hello! Voting in the 2021 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 6 December 2021. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2021 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:06, 23 November 2021 (UTC)Reply

An automated process has detected that when you recently edited Logistic regression, you added a link pointing to the disambiguation page Deviance.

(Opt-out instructions.) --DPL bot (talk) 06:03, 22 February 2022 (UTC)Reply

Nomination for deletion of Template:GBNewYorkState

 Template:GBNewYorkState has been nominated for deletion. You are invited to comment on the discussion at the entry on the Templates for discussion page. Nigej (talk) 18:00, 5 March 2022 (UTC)Reply

Major edit of Chemical equation#Matrix method

Hello PAR, I would like to notify you about my major edit of the section Chemical equation#Matrix method, because you were its only significant contributor. Petr Matas 12:13, 22 June 2022 (UTC)Reply

An automated process has detected that when you recently edited Darcy's law, you added a link pointing to the disambiguation page Scalar.

(Opt-out instructions.) --DPL bot (talk) 19:48, 2 July 2022 (UTC)Reply

Fixed PAR (talk) 15:12, 5 July 2022 (UTC)Reply

ArbCom 2022 Elections voter message

Hello! Voting in the 2022 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 12 December 2022. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2022 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:26, 29 November 2022 (UTC)Reply

Jacobi elliptic functions

I made it! In the article Jacobi elliptic functions I gave very good examples for the computation of the elliptic Jacobi amplitude sine sn of the third part of the complete elliptic integral of the first kind K. I want you and also the user User:A1E6 to see its current form. I explained every little thing in detail so that the regular reader can understand everything in a brilliant way. I am telling the truth. It is no original research at all, because the computation formula of fourth degree is well known among the mathematicians who have researched the values of Jacobi elliptic functions. And this quartic equation appears in the essays of the mathematicians very often, for example in the cited and linked Prasolov and Solovyev essay. I solved it just the way it is done. We know how to solve a quartic equation. And there are even further essays, like those of the Greek mathematician Bagis, that give the direct solution. Therefore I definitely did not make original research. I just explained everything in detail. Please accept my entries and please also accept the current form of the article Jacobi elliptic functions as it is. Hopefully you like the article in the shape I created and hopefully you can understand everything explained in this article. Have a nice time! Lion Emil Jann Fiedler, also known as Reformbenediktiner Reformbenediktiner (talk) 09:48, 11 July 2023 (UTC)Reply

Reference for MEP constraint giving Cauchy distribution

Hi, I was wondering whether you can provide me with a source or a derivation of the MEP constraint you added to the article about the Cauchy distribution, specifically E[ln(1 + ((x − x0)/γ)^2)] = ln(4). In the literature, I can only find references for E[ln(1 + x^2)] = ln(4) and I am curious as to how it was derived. Regards, Jaquento (talk) 09:50, 15 July 2023 (UTC)Reply

I don't remember doing that, and I searched on "MEP" and "constraint" in the article and found no hits. 00:36, 16 July 2023 (UTC)
In my previous message I have provided the exact diff link where you added the part in question – maybe you can have a look again? It concerns the section "Entropy" on the article (maybe look for "maximum entropy probability distribution"). --Jaquento (talk) 07:45, 19 July 2023 (UTC)Reply
Ok, I think I understand now. The E[ln(1 + x^2)] = ln(4) statement is the MEP constraint for a *standard* Cauchy distribution in which x0=0 and γ=1. The general case is E[ln(1 + ((x − x0)/γ)^2)] = ln(4), and if you substitute x0=0 and γ=1 into that, you get the standard expression.
A maximum entropy distribution is a distribution which has the maximum possible entropy for a given set of constraints. A simple example is the normal distribution. Out of all the possible probability distributions that are characterized by a specified mean and variance and that's ALL you know about them, then the normal distribution for that mean and variance has the greatest entropy out of all those possible distributions.
The Maximum entropy probability distribution article describes the derivation, but it is rather dense. Basically, it says that for a set of functions fi(x), the maximum entropy distribution p(x) which guarantees that E[fi(x)] = Fi for all i, where each Fi is a known number, is of the form:
p(x) = C exp(−Σi λi fi(x))
where C and the λi are to be determined. I don't remember the proof, but for the single constraint function f(x) = ln(1 + x^2) that means:
p(x) = C exp(−λ ln(1 + x^2)) = C (1 + x^2)^(−λ)
But we know from the standard Cauchy distribution that:
p(x) = 1/(π(1 + x^2))
so the form matches, and when you impose ∫ p(x) dx = 1 and E[ln(1 + x^2)] = ln(4), you can solve for C and λ and get the MEP expression for the standard Cauchy distribution. You can follow the same steps for the general Cauchy distribution to get the MEP expression for the general Cauchy distribution. PAR (talk) 16:56, 6 October 2023 (UTC)Reply
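The claimed constraint value can be sanity-checked numerically (a sketch, not from the original exchange): under the standard Cauchy density 1/(π(1 + x^2)), the expectation of ln(1 + x^2) should come out to ln 4. The substitution x = tan θ turns the expectation into a well-behaved integral on (−π/2, π/2), evaluated here with a midpoint rule:

```python
import math

def cauchy_constraint(n=200000):
    """E[ln(1+x^2)] under the standard Cauchy density 1/(pi(1+x^2)),
    computed via the substitution x = tan(theta) and a midpoint rule."""
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h
        # ln(1 + tan^2 t) = -2 ln(cos t); the Jacobian cancels the density
        total += -2.0 * math.log(math.cos(t)) / math.pi
    return total * h

print(cauchy_constraint())   # close to ln 4
print(math.log(4))
```

The midpoint rule avoids the integrable log singularities at the endpoints, and the result agrees with ln 4 to several decimal places.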
Thank you very much! --Jaquento (talk) 22:58, 6 October 2023 (UTC)Reply
Sure. Looking at the above, the logic is kind of messed up, but I don't think it's wrong. If you have problems sorting it out, let me know. PAR (talk) 17:16, 7 October 2023 (UTC)Reply

Entropy discussion

Thank you, PAR, for this educational material, from which I benefit. I would like to float with you my idea that the name 'entropy' for Shannon's function is unfortunate. I guess we are stuck with it, but it might still be useful to reconsider it. I think it unhelpful to speak of it as an 'amount of information'. I think it's an amount of something, but I think 'information' is an unhelpful term for that something. I have in mind a suggestion such as 'spread of symbols', or 'extent of items', or the like. Your thoughts?Chjoaygame (talk) 06:37, 8 October 2023 (UTC)Reply

An amount of information is a rigorous, mathematically defined concept, no vagueness involved. Boltzmann's famous equation S=k log(W) is the simple relationship between thermodynamic entropy (S) and what is essentially Shannon's information entropy (log(W)) assuming equal a priori probability of microstates. Shannon's entropy is indeed the amount of mathematically defined information lacking, or lost, when a system is described by its thermodynamic parameters, rather than a detailed particle-by-particle description. As to whether "entropy" is a good name for it, I prefer to think of "information entropy" and "thermodynamic entropy" to distinguish between the two terms in Boltzmann's equation. They are certainly not identical but they are very closely linked via the statistical theory of thermodynamics. PAR (talk) 14:18, 9 October 2023 (UTC)Reply
Thank you for your response. All that you say is accurate, practically tautologous: the mathematical formulas are right and not vague. I just think that the words are unhelpful. To call Shannon's function 'entropy' is to evade the task of finding a more informative word. And to call Shannon's notion 'quantity of information' is likewise to evade thinking of a better term for it. I think von Neumann did us a disservice. Perhaps I made a mistake in raising the question. I would be happy if you deleted my comment and this reply, as off topic.Chjoaygame (talk) 11:56, 10 October 2023 (UTC)Reply
I don't think you made a mistake, just maybe this belongs in the entropy article, that's all. When it comes to entropy, I still don't fully intuitively get it, and I agree that generally speaking, nomenclature is important, but in this case, there is a relationship between information entropy and thermodynamic entropy, but certainly not an identity. Do we downplay the similarity by choosing a different word, or do we downplay the difference by choosing a similar word? I think "information entropy" and "thermodynamic entropy" is a good compromise. It recognizes both the similarity and the difference.
I also think that "quantity of information" is a good term. If you flip two coins, you have 2 bits of information (or, if you don't know the result, you have an entropy of 2 bits of information). If you flip 3 more coins, you have 3 bits of information. If you flip 2+3=5 coins, you have 2+3=5 bits of information. That sounds like a quantity to me. PAR (talk) 01:33, 11 October 2023 (UTC)Reply
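The bit counts above are just Shannon entropy, H = −Σ p log2 p, and the additivity over independent variables can be verified directly (a Python sketch; `shannon_bits` and `product` are illustrative helper names, not from the discussion):

```python
import math

def shannon_bits(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def product(p, q):
    """Joint distribution of two statistically independent variables."""
    return [pi * qj for pi in p for qj in q]

two_coins = [0.25] * 4      # 2 fair flips -> 4 equiprobable outcomes
three_coins = [0.125] * 8   # 3 fair flips -> 8 equiprobable outcomes
print(shannon_bits(two_coins))                        # 2 bits
print(shannon_bits(three_coins))                      # 3 bits
print(shannon_bits(product(two_coins, three_coins)))  # 2 + 3 = 5 bits
```

For independent variables the joint entropy is exactly the sum of the parts, which is what makes "quantity" the right word.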
Thank you for your reply. I was a little worried that I might have wasted your time. I didn't try to take it to the entropy article because I am just tentative about such thoughts.
I am glad that you think that nomenclature is important. I am not into really trying to change the nomenclature, but am just reflective about it.
As for the 'anschaulich' interpretation of entropy, I am glad to read you saying that you don't fully intuitively get it. That means that you might find this conversation perhaps a trace useful. I am sold on the original interpretation of Clausius, as 'disgregation'. Like a flock of sheep wandering over the pastures; it's not that they are disordered: it's that they search everywhere they can go. I trace the modern equivalent, the 'dispersal' or 'spread' story, to https://en.wikipedia.org/wiki/Edward_A._Guggenheim.[1] I wish I knew of some other early modern originating source comparable to that.
  1. ^ Guggenheim, E.A. (1949), Statistical basis of thermodynamics, Research: A Journal of Science and its Applications, 2: 450–454, Butterworths, London.
My worry about 'information' is that, for a more general reading, it doesn't tell about the validity of the information. Nonsense can count as 'information' in some cases. If 'information' is invalid, does it really count as information? For coding, I think it tells the length of the shortest valid encoding on some criterion of validity of coding (for example, which coding language is being used). That's why I think sometimes of talking about 'extent of symbols' or somesuch. Somehow, the various criteria that generate the different maximum entropy distributions are really criteria of spread or extent. One might argue that maximum spread would lead to uniform spread. Planck defines the scope of thermodynamics in terms of 'homogeneous bodies of matter'. Perhaps 'homogeneous' goes too far, but in some sense it's in the right direction. The homogeneity of thermodynamic bodies has to admit that they are in constant microscopic motion, endlessly exploring all possibilities, so that perfect 'homogeneity' or 'uniformity of spread' doesn't quite cut it: it's too static.
I don't want to try to push this line of thinking. I was just struck by an impulse to chat about it.Chjoaygame (talk) 14:35, 13 October 2023 (UTC)Reply
I think I should clarify what I mean when I say "nomenclature is important". If there is a question of what name tag to put on a particular theory, then I really don't care. But take for example the use of Einstein notation in tensor calculus. It is a notation with rules that concisely reflect the underlying theory. In this case, notation or nomenclature is very important. The ancient Babylonians were using a^2 + b^2 = c^2 centuries before Pythagoras "published" the theorem which bears his name. This doesn't bother me. a^2 + b^2 = c^2 is what concerns me and I really don't care what it's called. I don't even care if I fail to remember the name of the theorem, as long as I understand it and am capable of using it. When communicating with others, it is important for me to know the name of the theorem as a shorthand for a^2 + b^2 = c^2, where a, b, and c form a right triangle, etc. etc.
Shannon's definition of information is not the same as the colloquial meaning, any more than "energy" or "momentum" in physics is equivalent to the colloquial meaning. They are suggestive, though, and that is good, just as "entropy" is a suggestive word. "2+2=5" contains some Shannon information, and the question of whether it is valid or not valid mathematically is outside the definition of Shannon information. As long as we understand that, then worrying over what name tags to put on things is, to a certain extent, a waste of time. However, again, it is important, for the sake of communication, that we agree on the nomenclature. PAR (talk) 06:21, 17 October 2023 (UTC)Reply
It is unlikely that we can change Shannon's (von Neumann's) name 'entropy' for his 'quantity of information'.
Thinking it over, I would like to clarify how we understand Shannon's function. I see it as a measure of 'spread'. I guess there are many measures of 'spread', for example the standard deviation and the variance. In a sense, standard deviations don't add, while under suitable conditions (statistically independent variables, certain kinds of distribution), variances do add. What is it about Shannon's function that makes it of interest? In a sense, you have above answered that question. I am not as much a mathematician as you are, but here is my present stab at an answer, for your comment. I think that Shannon's function is a distinguishedly (uniquely?) general additive measure of 'spread' (Clausius said 'disgregation'). For statistically independent variables, Shannon's function adds for practically all distributions? That's why we like it. I seem to remember E.T. Jaynes proving something like that in his posthumous book Probability Theory: the logic of Science. Do I have that right?Chjoaygame (talk) 12:58, 17 October 2023 (UTC)Reply
The problem I have with the "spread" idea is that it is a spread in the probability space which in the case of the statistical theory of thermodynamics may or may not amount to some sort of spread in physical space.
The mathematical tractability of information entropies is certainly a reason to like it, but it's the insights that are really helpful. Entropy is a measure of the difference between what is actually true, and what we know or have measured to be true.
Classically speaking, the complete description of a container of gas is represented by its microstate, the enumeration of all the positions and momenta of each particle in the gas. What we know about the gas is its macrostate: The thermodynamic parameters, pressure, temperature, volume, etc. Entropy is a measure of the difference in information between the two. To be very explicit, the thermodynamic entropy of the gas is Boltzmann's constant times the minimum number of yes/no questions we must answer in order to determine the microstate, given that we know the macrostate.
Jaynes had a deep understanding of entropy. He developed this thought-experiment: We know that if you have two experimentally identical gases separated by a partition, and remove the partition, there will be no change in entropy. Suppose the gas molecules on one side all contained one isotope of one of the atoms in the molecule, while the other side contained a different isotope. Suppose that our experimental apparatus is not able to distinguish between the two. Then we will measure no entropy change upon mixing, and, most importantly, the entire body of thermodynamic and statistical mechanics theories will give experimentally valid predictions. If at some point our experimental technique advances such that it can detect isotopic differences, then we will detect a non-zero entropy of mixing when the partition is removed, and again, the entire body of thermodynamic and statistical mechanics theories will give experimentally valid predictions. If our equipment could not detect the isotopic differences, but there was some discrepancy in our thermodynamic calculations, then we may be sure that we have found an alternate means of detecting the isotopic difference. Nothing in the real world has changed, only our knowledge of the macrostates. The amount of information gained by our advanced techniques is reflected in the change in entropy in the two cases. Thermodynamic entropy is, in this sense, not a completely objective quantity, but a bookkeeping method to keep track of our lack of knowledge, specifically, our lack of Shannon information about the system.
If you have a statistical process that produces real numbers, and you know only the mean and standard deviation of those numbers, and you wish to posit an underlying probability distribution, what distribution do you assume? You assume the distribution which adds the least amount of information possible, since any information amounts to an assumption about the probability distribution. The probability distribution, given a particular mean and variance, which contains the least amount of information is the probability distribution which has the greatest entropy out of all probability distributions which have a given mean and variance. You choose the normal distribution. It may not be right, but the minute you know that it's not right is the minute you know more about the distribution than simply its mean and variance. PAR (talk) 05:06, 18 October 2023 (UTC)Reply
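The closing claim can be illustrated numerically: among distributions sharing a given variance, the normal has the largest differential entropy. The sketch below compares standard closed-form entropies (in nats) of normal, uniform, and Laplace distributions scaled to the same variance; the uniform and Laplace comparisons are illustrative choices, not mentioned above.

```python
import math

def entropies(sigma=1.0):
    """Differential entropies (nats) of three distributions
    all scaled to share variance sigma^2."""
    normal = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
    # uniform on a width-w interval has variance w^2/12, entropy ln(w)
    uniform = math.log(sigma * math.sqrt(12.0))
    # Laplace with scale b has variance 2b^2, entropy 1 + ln(2b)
    laplace = 1.0 + math.log(2.0 * sigma / math.sqrt(2.0))
    return {"normal": normal, "uniform": uniform, "laplace": laplace}

h = entropies()
print(h)   # the normal has the largest entropy for the given variance
```

Whatever the value of sigma, the normal entry comes out on top, matching the maximum-entropy argument for assuming a normal distribution when only mean and variance are known.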
Let me think it over.Chjoaygame (talk) 10:50, 18 October 2023 (UTC)Reply

An automated process has detected that when you recently edited Transmittance, you added a link pointing to the disambiguation page Photometry.

(Opt-out instructions.) --DPL bot (talk) 18:08, 22 March 2024 (UTC)Reply

Fixed. PAR (talk) 19:05, 22 March 2024 (UTC)Reply

Invitation to participate in a research

Hello,

The Wikimedia Foundation is conducting a survey of Wikipedians to better understand what draws administrators to contribute to Wikipedia, and what affects administrator retention. We will use this research to improve experiences for Wikipedians, and address common problems and needs. We have identified you as a good candidate for this research, and would greatly appreciate your participation in this anonymous survey.

You do not have to be an Administrator to participate.

The survey should take around 10-15 minutes to complete. You may read more about the study on its Meta page and view its privacy statement .

Please find our contact on the project Meta page if you have any questions or concerns.

Kind Regards,

WMF Research Team

BGerdemann (WMF) (talk) 19:28, 23 October 2024 (UTC) Reply

A miracle

Dear PAR, do you already believe in miracles? If you don't, here is one to convert you !

I wanted your help to restore the proper definition of heat in thermodynamics because I felt that the page was controlled by drive-by shooters, instant experts, and suchlike, and that I couldn't fix it by myself. So I just waited and waited. Eventually, a day or so ago, I made the restoration. Here's the miracle: my edit has survived !!! (at least till now.)Chjoaygame (talk) 20:58, 23 October 2024 (UTC)Reply

Reminder to participate in Wikipedia research

Hello,

I recently invited you to take a survey about administration on Wikipedia. If you haven’t yet had a chance, there is still time to participate– we’d truly appreciate your feedback. The survey is anonymous and should take about 10-15 minutes to complete. You may read more about the study on its Meta page and view its privacy statement.

Take the survey here.

Kind Regards,

WMF Research Team

BGerdemann (WMF) (talk) 00:40, 13 November 2024 (UTC) Reply

ArbCom 2024 Elections voter message

Hello! Voting in the 2024 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 2 December 2024. All eligible users are allowed to vote. Users with alternate accounts may only vote once.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2024 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:07, 19 November 2024 (UTC)Reply