This was really great. It's how I was taught entropy at college (biophysics and molecular biology) though without the sheep. "Statistical Mechanics" was the name our professor used.
The only tiny change I'd like to make is to add a line or two near the end, something along the following lines:
There are far fewer ways to arrange water molecules so that they form an ice cube than there are to arrange them as a liquid. Most arrangements of water molecules look like a liquid, and so that's the likely endpoint even if they start arranged as an ice cube.
The same is true of more or less any macroscopic object: the thing that we recognize and name ("chair", "table", "pen", "apple") requires the atoms to remain in one of a fairly small set of particular arrangements. Compared to the vast number of other arrangements of the same atoms ("dust"), the ones where the atoms form the "thing" are quite unlikely. Hence over time it's more likely that we'll find the atoms in one of the other ("random", or "dust-like") arrangements than the one we have a name for. The reason things "fall apart" isn't that there's some sort of preference for it - it's that there are vastly more ways for atoms to be in a "fallen apart" state than arranged as a "thing".
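To put a hedged number on that intuition (this is just the standard Boltzmann counting, not something from the article): if every arrangement of the atoms is equally likely, then

    S = k_B \ln \Omega,
    \qquad
    \frac{P(\text{thing})}{P(\text{dust})}
      = \frac{\Omega_{\text{thing}}}{\Omega_{\text{dust}}}
      = e^{(S_{\text{thing}} - S_{\text{dust}})/k_B},

and for a macroscopic number of atoms the multiplicity of "dust-like" arrangements exceeds that of "thing-like" arrangements by such an enormous factor that this ratio is effectively zero.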
"Students who believe that spontaneous processes always yield greater disorder could be somewhat surprised when shown a demonstration of supercooled liquid water at many degrees below 00 C. The students have been taught that liquid water is disorderly compared to solid ice. When a seed of ice or a speck of dust is added, crystallization of some of the liquid is immediate. Orderly solid ice has spontaneously formed from the disorderly liquid.
"Of course, thermal energy is evolved in the process of this thermodynamically metastable state changing to one that is stable. Energy is dispersed from the crystals, as they form, to the solution and thus the final temperature of the crystals of ice and liquid water are higher than originally. This, the instructor ordinarily would point out as a system-surroundings energy transfer. However, the dramatic visible result of this spontaneous process is in conflict with what the student has learned about the trend toward disorder as a test of spontaneity.
"Such a picture might not take a thousand words of interpretation from an instructor to be correctly understood by a student, but they would not be needed at all if the misleading relation of disorder with entropy had not been mentioned."
> There are far fewer ways to arrange water molecules so that they form an ice cube than there are to arrange them as a liquid. Most arrangements of water molecules look like a liquid, and so that's the likely endpoint even if they start arranged as an ice cube.
Except this is, as an insight, obviously wrong. The arrangement you get is determined by temperature: cold water will spontaneously freeze, and hot ice will spontaneously melt. The model you state predicts that
- Hot ice will spontaneously melt (correct)
- Cold ice will spontaneously melt (nope)
- Cold water will not spontaneously freeze (nope)
In keeping with the level of explanation in the article, I was omitting the role of energy in describing the exploration of possible microstates by the system.
A system with zero energy is highly constrained in its exploration of possible microstates, and thus is unlikely to undergo a change in its macrostate.
A system with a lot of energy is much less constrained, and is more likely to undergo macrostate changes.
This wasn't really covered in the article, so I didn't want to put it into my (perhaps foolish) add on sentences.
So one thing I've never understood is how you can "count" microstates, or bits required to describe them, when the relevant physical parameters all seem to be real numbers. For instance, a gas of N atoms is described by 6N real numbers (3d position and velocity) regardless of how hot it is. The article talks about quanta of energy, but that seems like a simplification at best: a given interaction might be quantized, but quanta in general (e.g. photons) exist on a real-number spectrum, so it's possible to have an uncountable infinity of energy packet sizes right? (That's the point of a black body, right?)
What am I missing? Is this a weird measure theory thing, where hot objects have even bigger uncountable infinities of states and we get rid of them all with something like a change of variables? If you told me spacetime was secretly a cellular automaton I could deal with it, but real numbers ruin everything.
I asked the same question of my professor when I took thermodynamics as an undergrad. In that class we were told that a particle in a box of size 2 can be in twice as many places as a particle in a box of size 1. But in real analysis we learn there are just as many numbers between 0 and 1 as there are between 0 and 2. The answer I was given, in true physicist fashion, was to hand-wave it. There's an intuitive notion that twice as big means twice as many places to be, therefore just accept it and let the mathematicians cry over our abuse of the reals.
The true answer is that "quanta of energy" is not a simplification. The idea that physical variables like energy and position come in discrete units is the "quantum" in quantum physics. If you imagine the position of a particle in a box of size 1 to be discretized into n states, then a box of size 2 really would have 2n states. So all of your concerns are moot, because quantum mechanics replaces all the uncountable sets with countable ones.
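As a sketch of why that counting works (the standard particle-in-a-box energy levels, offered as a textbook illustration rather than anything from the article): for a particle of mass m in a 1-D box of length L,

    E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \dots
    \qquad\Longrightarrow\qquad
    \#\{\, n : E_n \le E \,\} = \left\lfloor \frac{L}{\pi \hbar} \sqrt{2 m E} \right\rfloor \;\propto\; L,

so at any fixed energy budget, doubling the box doubles the number of accessible states, which is the discrete counterpart of "twice as many places to be."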
But this still leaves us with the issue that Boltzmann did all this work before quantum mechanics existed so there must be some useful notion of "bigger uncountable infinities". The answer, as far as I know, is that you can always approximate classical physics with finite precision variables so long as the precision is high enough (replace the reals with floats). The idea of counting states works for any arbitrarily precise (but still finite) discretized variables, and as far as physicists care an arbitrarily precise approximation is the same as the real thing.
This is not true; position and time are not quantized in the Standard Model, and string theory is not canonical. I think a better way to think about it is not in terms of size, but in terms of time. A particle in a bigger box can, on average, go on a random walk for longer without hitting a wall. It will take longer for a particle to sufficiently (i.e., arbitrarily closely) exhaust the phase space of a bigger box.
This is an excellent, and puzzling, question! Let me try to provide some insight into how physicists think about such paradoxes, by addressing the specific example you mention, of the presumed uncountable infinity of different possible photon energies in a finite range of frequencies. In the case of blackbody radiation, when physicists analyze the set of possible photon energies more carefully, they find that there is really an infinite number of different possible photon energies (in a finite range of frequencies) only if the volume of space containing the photons is infinite. In any finite volume of space, if we allow ourselves to place boundary conditions on the electromagnetic fields at the edges of that volume (for example, suppose we think of our volume as a cube with mirrored walls), we find that there are only a finite number of oscillating modes of the electromagnetic field in any finite range of frequency. In the case of a cube, there is a lowest frequency of radiation whose wavelength allows it to form a standing wave in the box, and the other allowed modes are built from whole numbers of half-wavelengths along each axis. So the density of possible photon states per unit of frequency is actually proportional to the volume of space we allow to hold the photons. (By the way, this is also closely related to the quantum uncertainty relation between momentum and position. To confine a photon to a volume of space, the uncertainty in its momentum must be of the same order as that carried by the lowest-frequency standing waves compatible with the container.)
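For reference, a sketch of the mode counting being described (the textbook result for a cube of side L with reflecting walls, not something from the article): the allowed standing-wave frequencies and the number of modes below a frequency ν are

    \nu_{n_x n_y n_z} = \frac{c}{2L} \sqrt{n_x^2 + n_y^2 + n_z^2},
    \qquad
    N(\nu) \approx 2 \cdot \frac{1}{8} \cdot \frac{4\pi}{3} \left( \frac{2 L \nu}{c} \right)^{3}
          = \frac{8 \pi V \nu^3}{3 c^3},

where the factor of 2 counts the two polarizations. The number of available photon states in any frequency range therefore grows in proportion to the volume V = L^3, exactly as described above.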
I think part of my mistake was not thinking of the thermal energy packets (phonons?) as waves in boxes, with the box being the boundary of whatever thing has the energy. Which is still weird for a gas expanding into a vacuum I guess, but works for a hot solid object or a confined gas.
This is a very good question, which goes to the heart of statistical physics. We use phase spaces for this (typically a 6N-dimensional space in which each microstate is represented by a point). The system has a probability of being in (or rather very close to) each microstate, which depends on several factors, like the conditions (isolated system, constant pressure, temperature, number of particles, etc.). Counting microstates is "just" calculating integrals of that probability weight in the phase space. Of course, most of the time it is impossible, so we have tools to approximate these integrals. There are a lot of subtleties, but that's the general idea.
The phase space does not change depending on temperature, so there’s nothing weird like the space getting bigger. But the probability of each microstate might, as high-energy states become more accessible.
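To make "integrals of that probability weight" concrete, here is the canonical-ensemble version as a sketch (generic statistical mechanics, not specific to the article): for N classical particles with Hamiltonian H(q, p),

    \rho(q, p) = \frac{e^{-H(q,p)/k_B T}}{\displaystyle\int e^{-H(q',p')/k_B T} \, d^{3N}q' \, d^{3N}p'},
    \qquad
    \int \rho(q, p) \, d^{3N}q \, d^{3N}p = 1,

so the total probability over the whole phase space is indeed 1. "Counting the microstates" of a macrostate then means integrating ρ over the region of phase space compatible with that macrostate, and raising the temperature flattens ρ so that more of the weight sits in high-energy regions.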
So would it make sense to think of a microstate as a region of phase space, a point and those points "very close to" it? And "increasing number of microstates" just means a larger number of these regions have non-negligible probabilities? In continuous terms you would see this as the distribution flattening out. I might be having trouble visualising what we're integrating, since if it's a probability the integral over the whole phase space can only be 1, right?
Good point. I think what's missing is how electron energy states work in quantum mechanics. The Wikipedia page (linked below) has a pretty good explanation:
"A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized". (https://en.wikipedia.org/wiki/Energy_level)
To put things another way: while there could theoretically be infinite sizes of energy quanta, the permutations of energy states for matter are in fact discrete.
What you say is true, but also incomplete. We are perfectly able to quantify the accessible states in purely classical systems, such as ideal gases, without requiring discrete energy levels. The trick is to think of a continuous probability density instead of discrete probabilities. This framework is very general and does not depend on the quantum-ness of what you look at.
Even in some systems that actually follow quantum mechanics (such as phonons or electrons in a material, or photons in a black body), we often use continuous probabilities (densities of states) because it’s much more convenient when you have lots of particles.
IANAP, but there are two different things that come to mind.
Even if the number of states were infinite, as long as there's a reasonable probability distribution, you know you can integrate over the probabilities of the states that have some property vs. another (say: solid vs. not).
Secondly, energy is quantized (on a very, very, very small level) in reality. YMMV though; I tried googling this and read some stuff about waves being quantized and particles not, and about how it depends on what system you're looking at, and I had to drop out of quantum physics when I took it :-)
For quantum objects, the distinction between a wave and a particle is not very meaningful. The energy levels of an electron around a nucleus are discrete, regardless of whether the electron behaves more like a wave or more like a particle in the specific experiment you’re doing.
Otherwise, you’re right: we count states by integrating (at times discrete, continuous, and often very complex) probability distributions.
As a physicist, I find this a great explanation that actually gets it right. Entropy must be one of the most misunderstood physical concepts out there, if not the most misunderstood (along with Planck-scale quantities like the Planck energy or Planck length). It is used and written about by so many people who clearly lack an understanding of it that this blog post is a refreshing change.
I recommend the story "The Last Question" by Isaac Asimov [0], one of my favourite stories of all time. It's about how the question of reversing the direction of entropy keeps coming up, throughout the life of the universe.
[0] https://en.wikipedia.org/wiki/The_Last_Question
That still doesn't answer the question of how, if the laws of physics are time-symmetric, the universe as a whole can have a time-asymmetric evolution of entropy. I.e., if something forces entropy to increase in the long run, then that should hold in both directions of time. So what is it that causes entropy to only increase in the direction of the future, but not in the direction of the past, given that the laws of physics do not distinguish between the two directions?
The answer I've provided elsewhere in this thread is along the lines of: other than melting ice cubes and scrambling eggs, the only other difference you notice between the past and the future is that you can remember the past, but you cannot remember the future. If you could remember the future just as well as you remember the past, you probably wouldn't have strong opinions about which way time goes (or which direction is "clockwise"). If you, Merlin-like, could only remember the future, then you'd probably be here asking why you always observe entropy decreasing in closed systems.
But memory operates on systems of increasing entropy, so you'll always only remember the past having less entropy. [1]
[1] https://phys.org/news/2009-08-physicist-solution-arrow-of-ti...
We don’t have an answer to this question. I don’t want to discuss metaphysics here, but there is a very interesting discussion on that subject here: https://youtube.com/watch?v=-6rWqJhDv7M
Let me rephrase maybe: Given the state of affairs I described above, I don’t understand what is the convincing argument that entropy does indeed increase in the long run. Any argument given should also work in the reverse direction, given the symmetry of time, shouldn’t it? (And thereby create a kind of reductio ad absurdum.) If not, why not?
Thermodynamic entropy is not really a physical property of a system; it is a property of our description of the system subject to some macroscopic constraints.
https://en.wikipedia.org/wiki/Loschmidt%27s_paradox
The only convincing-seeming resolution I've heard is that our universe started in an exceptionally low-entropy state at the Big Bang.
Loved this article. There are so many applications of entropy and statistical physics in computer science, and I find it fascinating that the same general properties are useful in such different contexts.
For example, there's a well-known phenomenon in probability called concentration of measure. One of the most important examples in computer science is if you flip n coins independently, then the number of heads concentrates very tightly. In particular, the probability that you are more than an epsilon-fraction away from 0.5n heads is at most around e^{-epsilon^2 n}. This is exactly the setting described in the article with the sheep in two pens, and this inequality is used in the design of many important randomized algorithms! A classic example is in load balancing, where random assignment sometimes produces 'nice' configurations that look very much like the high-entropy states described in this article (but unfortunately, many times random assignments don't behave very well, see e.g. the birthday paradox).
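As a small hedged illustration (my own toy simulation, not from the article), you can check the e^{-epsilon^2 n}-style bound numerically:

    import numpy as np

    def tail_probability(n, eps, trials=200_000, seed=0):
        """Estimate P(|#heads - n/2| > eps*n) over `trials` runs of n fair coin flips."""
        rng = np.random.default_rng(seed)
        heads = rng.binomial(n, 0.5, size=trials)   # number of heads in each run
        return float(np.mean(np.abs(heads - n / 2) > eps * n))

    n, eps = 1000, 0.05
    print("empirical tail probability:", tail_probability(n, eps))   # much smaller than the bound
    print("exp(-eps^2 * n):           ", float(np.exp(-eps**2 * n))) # ~0.082

The bound is loose here, but it captures the exponential decay in n that makes the lopsided sheep-in-one-pen macrostates so spectacularly unlikely.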
The sum of independent random variables is well known to have concentration properties. An interesting question to me is what sorts of other statistics will exhibit these concentration phenomena. An important finding in this area is the bounded differences inequality (https://web.eecs.umich.edu/~cscott/past_courses/eecs598w14/n...), which generally states that any function that doesn't "depend too much" on each individual random argument (and the sum of bounded variables satisfies this assumption) exhibits the same concentration phenomenon. There are some applications in statistical learning, where we can bound the estimation error using certain learning complexity measures that rely on the bounded differences inequality. In the context of this article, that means there's a whole class of statistics that will concentrate similarly, and perhaps exhibit irreversibility at a macroscopic level.
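For reference, the bounded differences (McDiarmid) inequality says, roughly: if X_1, ..., X_n are independent and changing the i-th argument alone changes f by at most c_i, then

    \Pr\bigl( \left| f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \right| \ge t \bigr)
      \;\le\; 2 \exp\!\left( \frac{-2 t^2}{\sum_{i=1}^{n} c_i^2} \right),

and taking f to be the number of heads, with c_i = 1 and t = εn, recovers the coin-flip bound above.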
There's a related anecdote about John von Neumann: he used to joke that he had superpowers and could easily tell truly random and pseudo-random sequences apart. He would ask people to sit down in another room and generate a 0/1 sequence via coin flips, and record it. Then, they would make up another sequence themselves, trying to mimic randomness as much as possible. When people finally showed the two sequences to him, von Neumann could instantly declare which one was which.
People were amazed.
The trick he used was based on the "burstiness" rule you describe: a long enough random sequence will likely contain a long homogeneous block, while humans tend to avoid long streaks of the same digit because they don't feel random enough.
So all he did was quickly check, at a glance, which of the two sequences contained the longer homogeneous block, and declare that one to be the sequence generated via the coin flips.
That's a cool anecdote :-) I wouldn't say it uses concentration of measure exactly, but I see how it is related. The anecdote is about asymptotic properties of random sequences, and concentration of measure is about the same too. In this case, I think you can show that homogeneous blocks of length log(n) - log log(n) occur at least with constant probability as n gets large. In other words, the length of homogeneous blocks is basically guaranteed to grow with n. I suppose a human trying to generate a random sequence will prevent homogeneous blocks above a certain constant length from appearing regardless of the length of the sequence, which would make distinguishing the sequences for large n quite easy!
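A quick toy simulation of that growth (my own sketch, assuming fair independent bits):

    import math
    import random
    import statistics

    def longest_run(bits):
        """Length of the longest block of identical consecutive symbols."""
        best = cur = 1
        for prev, nxt in zip(bits, bits[1:]):
            cur = cur + 1 if nxt == prev else 1
            best = max(best, cur)
        return best

    for n in (100, 1_000, 10_000):
        runs = [longest_run([random.getrandbits(1) for _ in range(n)])
                for _ in range(500)]
        print(n, round(statistics.mean(runs), 1), "vs log2(n) =", round(math.log2(n), 1))

The average longest run grows roughly like log2(n), whereas a human faking randomness tends to cap their runs at three or four no matter how long the sequence gets.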
I think there is also a quite strong connection in this anecdote to the information-theoretic notion of entropy, which takes us all the way back to the idea of entropy as in the article :-) Information-theoretically, the entropy of a long random sequence concentrates as well (it concentrates around the entropy of the underlying random variable). The implication is that with high probability, a sampled long random sequence will have an entropy close to a specific value.
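In information-theory terms, that concentration is the asymptotic equipartition property: for an i.i.d. sequence X_1, ..., X_n drawn from X,

    -\frac{1}{n} \log_2 p(X_1, \dots, X_n) \;\longrightarrow\; H(X) \quad \text{in probability},

so almost all of the probability mass sits on "typical" sequences whose probability is close to 2^{-n H(X)} — the information-theoretic cousin of "almost all microstates look like the equilibrium macrostate."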
Human intuition actually is somewhat correct in the anecdote, though! The longer the homogeneous substring, the less entropy the sequence has, and the less likely it is to appear (as a limiting example, the sequence of all 0s or all 1s is extremely ordered, but extremely unlikely to appear). I think where it breaks down is that there are sequences with relatively long homogeneous substrings whose entropy is still close to the typical value (in the sense that the length is e.g. log(n) - log log(n) as in the calculation before), whereas human intuition about the entropy of the sequence is based on local factors (have I generated 'too many' 0s in a row?) and leads us astray.
In the video above you will witness a metal wire in a disorderly shape spontaneously form into an organized spring shape when entropy is increased (the wire is heated). There are no special effects or video rewinding trickery here, the phenomenon is very real.
It is as if you were looking at water spontaneously form into ice cubes but entropy is NOT reversing! It is moving forward.
The video is a really good example of entropy. It's good in the sense that if you understand why entropy is increasing when the metal is heated, then you truly understand what entropy is... as the video gets rid of the notion of order and disorder altogether.
That's right. There are many cases of increasing entropy where the system spontaneously progresses from a greater disorder to more organization.
Many people say that when some system becomes more organized, it means entropy is leaving the local system and increasing overall in the global system. This is not what's happening here. The wire is being heated. Atoms are becoming more organized and less disorderly by virtue of MORE entropy entering the system. If you understand this concept then you truly understand entropy. If not, you still don't get it.
If the entropy of the universe were to suddenly go in reverse would we be able to detect it or would our memory formation and perceptions being reversed make it indistinguishable from what we experience now?
Yes, time reversal would also reverse the process of our perception and memory, so we would experience time "moving forward" even if time were "moving" backward. We would experience entropy increasing even if entropy were "decreasing with time". (And it's perfectly valid to call the past "+t" and the future "-t"; the laws of physics don't care; if you do that, you'll see that entropy decreases as "t" increases, but despite changing definitions you'll still only have memories of a universe with lower entropy than the present).
This is one reason why it might be more correct to say that time doesn't move at all. We just perceive a direction of time wherever the universe has an entropy gradient along "the time axis".
Entropy decreasing is not equivalent to time reversal.
[Edit: to expand a bit: time reversal requires that some previous macrostate is achieved again. Entropy decreasing merely requires that the system enter any macrostate represented by fewer microstates than the current one.
Put in more simple terms, the ice cube could reform but in a different shape. Entropy would have decreased, but it would not "look like" time reversal - it would just look like something very strange had happened. ]