glastra · 10 years ago
This so-called fascination and search for meaning or reason behind constants reminds me of the anthropic principle [0], which basically goes like this: "things are the way they are because we wouldn't be here to observe them otherwise".

The last part of the article, however, is still quite intriguing. Nobody would expect any of the three components of alpha to change, unless the measurements could somehow be affected by the medium (as the speed of light depends on that).

[0]: https://en.wikipedia.org/wiki/Anthropic_principle

ncallaway · 10 years ago
(I could be wrong about this, I haven't thought about it in great detail)

I think invoking the Anthropic principle only makes sense if we can demonstrate that there are a number of other "places" with varying "conditions". For such a selection bias to occur there needs to be a population to actually select from. For example, the Anthropic principle makes sense regarding the composition of the earth because we know that there are a large number of planets with a distribution of orbits and masses around a distribution of stars. Of course we will be on a planet that supports life as we know it.

To invoke the anthropic principle regarding a constant would imply — to me, at least — that there would need to be a number of universes with different constants. Or different regions of this universe where the constant would vary.

If there truly is only a single universe, and the constant is not changing in time or space, then it would seem we did actually just happen to get lucky.

Edit: of course, at this point we can't say definitively either way about our universe. We don't know if ours is one among many, or if it's the solitary universe in existence. We also, clearly, have uncertainty about how constant this constant actually is.

raattgift · 10 years ago
Another alternative was proposed by Hawking & Hertog (Phys Rev D, 2006, see below): walk the present values surface back to a superposition of possible universes and study the mechanisms that generate probabilities near one. That then is their way of asking about the degree of fine-tuning, with the anthropic principle being recast as a constraint on the set of possible consistent histories.

H&H spend a lot of time comparing themselves to the approach of selecting an initial values surface in the early universe and marching it forward; they more or less do the same but flip the direction of the arrow of time and arrive at a superposed state. Re-flipping the arrow of time leads to relic fields like the CMB carrying evidence for the superposition.

Essentially this is just taking time-reversibility of local microscopic physics seriously (great), and absorbing the difference in degrees of freedom in macroscopic physics into a superposition (uh, ok). The problem of setting up a values surface remains pretty much the same. "Fine tuning" is just a way of saying that the system is sensitive to the values it evolves from (or to).

I'm not really convinced that a present-day spacelike hypersurface is easier to write down than an early-time initial values surface.

ArXiv version: http://arxiv.org/abs/hep-th/0602091

baddox · 10 years ago
I don't see why there needs to be other universes that actually exist in order for the anthropic principle to apply. It seems to make perfect sense to apply it to a set of conceivable universes, even if such universes don't actually exist and even if it's not physically possible for them to exist.
jessaustin · 10 years ago
ISTM the different measured values of α described in TFA constitute evidence of "other places with varying conditions"?
DonaldFisk · 10 years ago
The component constants of α = e²/(4πε₀ħc) have dimensions, so depend on what units you use to measure them in. Their combination comprising the electromagnetic fine structure constant is, however, dimensionless, and is satisfactorily explained.
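
A quick numerical sanity check of that combination (a minimal Python sketch using CODATA SI values, nothing beyond plugging into the formula above): the units cancel and the value comes out near 1/137.

    # Sketch: compute alpha = e^2 / (4*pi*eps0*hbar*c) from SI values (CODATA 2018)
    import math

    e    = 1.602176634e-19    # elementary charge, C (exact by SI definition)
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    hbar = 1.054571817e-34    # reduced Planck constant, J*s
    c    = 299792458.0        # speed of light, m/s (exact by SI definition)

    alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
    print(alpha, 1 / alpha)   # ~0.0072973..., ~137.036 -- a pure number, no units left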

The Anthropic Principle is usually invoked to explain why two very large dimensionless numbers, i.e. the inverse gravitational "fine structure constant" (which is assumed to be constant), and the age of the universe measured in atomic units (which increases with time), are both very close to 10^40. It is suggested that, for intelligent life to evolve and measure them, the two numbers must be roughly equal, and other dimensionless numbers, such as the electromagnetic fine structure constant, must similarly have values within a narrow range. And this in turn has led some people to believe it's the deliberate action of a creator, or that there are numerous other universes whose constants have different values, and almost all are devoid of life.
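
For a feel of where a number like 10^40 comes from, here is one common form of the large number, Dirac's ratio of the electric to the gravitational attraction between a proton and an electron (a rough sketch; other conventions for the "gravitational fine structure constant" shift the result by a few orders of magnitude):

    # Sketch: Dirac-style large number -- ratio of electric to gravitational
    # attraction between a proton and an electron (the separation cancels out).
    import math

    e    = 1.602176634e-19    # elementary charge, C
    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    G    = 6.67430e-11        # Newtonian constant, m^3 kg^-1 s^-2
    m_p  = 1.67262192e-27     # proton mass, kg
    m_e  = 9.1093837e-31      # electron mass, kg

    ratio = (e**2 / (4 * math.pi * eps0)) / (G * m_p * m_e)
    print(f"{ratio:.2e}")     # ~2.3e39, within an order of magnitude of 10^40

The separation r cancels out of the ratio, so, like alpha, it is a pure number.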

Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe

However, if we apply this Anthropic Principle to planetary orbits, we could conclude that planetary orbits are roughly circular because, if they were (for example) very long rectangles (or some other random shape), we wouldn't be around to observe their shape.

Of course, no one believes that. It is accepted that Celestial Mechanics, based on Newton's Law of Gravity (which can be derived from the more accurate General Theory of Relativity), adequately explains why planetary orbits are roughly circular.

An alternative explanation is that we simply haven't worked out all the physical laws. For example, the force of gravity could become weaker as the universe gets older or less dense (or, expressed in different units, gravity remains constant but elementary particles get lighter). Dirac, and later a few others including myself, modified Einstein's equations in an attempt to explain this coincidence.

Dirac large numbers hypothesis: https://en.wikipedia.org/wiki/Dirac_large_numbers_hypothesis

Another argument against fine tuning is the Inverse gambler's fallacy: https://en.wikipedia.org/wiki/Inverse_gambler's_fallacy

Deleted Comment

notthegov · 10 years ago
You are probably right that it means nothing. And I am happy people think this because nothing is meaningless. I believe there is no grand secret to this number, or to life.

In other words, things are the way they are because we wouldn't be able to fall in love, dance to the music and enjoy life, if things were otherwise.

Thinking too much about why distracts from the now. Mystery is what makes life worth living. And hiding ourselves from our self is what makes discovery possible.

If reality is a computer simulation, then proving that is a bit redundant in my opinion.

However, I think it can be proven but once it is, what will there be to know?

ncallaway · 10 years ago
> However, I think it can be proven but once it is, what will there be to know?

If reality is a computer simulation I can think of a multitude of other questions about reality:

- Can we communicate "outside" of the simulation?

- Is the simulation a product of intelligence that is intentionally running the simulation? Or is it a mere by-product of a higher-order universe?

- By studying the simulation in great enough detail, is there anything we can learn about the "outside" universe?

- Is the "outside" simulation a turing machine? Can it compute a higher class of computational problems?

- Are the simulations infinite? Is the "outside" also a simulation? Is it simulations all the way down — or all the way up? Is there a "real" reality?

- If it's simulations all the way down, how did the first simulation get started?

- If there is a "real" reality we get to start all of science over at square one studying this new reality.

This is a rough initial list, but I'm sure there's an endless number of questions to be asked and answered about reality even if we prove it's a simulation.

kazinator · 10 years ago
"When scientists measure any quantity they must specify the units being used."

No they don't. For instance, a simple aspect ratio measures how wide something is relative to how long it is. It has no units.

This here desk I have here is about 1.7 times as long as it is wide. No inches or centimeters required.

There isn't anything amazing or mystical about a unitless quantity.

jsweojtj · 10 years ago
You've got the right intuition here. The unitless part is key for several reasons. It means that the number itself is meaningful. In your example, the ratio of 1.7 long/wide would be the same no matter what units are used to make the measurement. (Obviously, the units DO have to actually cancel; cm/inches doesn't work.) The number for the speed of light (299 792 458 m/s) is a defined value, so that number doesn't mean anything deep. However, 1/137... itself is directly meaningful. This is why people who work in fundamental constant research use dimensionless ratios.

Now for a piece that's more interesting. The fine-structure constant (alpha) is the coupling constant that sets the strength of electromagnetism. This means that its value is the thing that matters in the equation: e^2/\hbar c. Each of the other values is a derived quantity. Further, to speak a bit loosely, only changes in alpha matter -- in the sense that if the speed of light (c) changes, but the other constants (e and \hbar) change in a way that keeps alpha the same, then you wouldn't be able to tell with an experiment that anything has changed.

Contrast this situation w/ a change in alpha -- a table-top experiment would be able to detect the change (given that it's large enough, and we have methods of measuring fractional changes at the level of a few parts in 10^18 on year timescales (Rosenband, 2008)), as it would mean that physics has changed in a fundamental way.

http://phys.columbia.edu/~millis/1900/readings/Science-2008-...
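
To make the "only changes in alpha matter" point concrete, here is a toy numerical sketch (the rescalings are purely hypothetical, and epsilon_0 is held fixed for simplicity): scaling e and c together so that e^2/c is preserved leaves alpha untouched, while scaling c alone shifts it.

    # Toy sketch: alpha is insensitive to coordinated changes in c, e, hbar,
    # but shifts if only one of them changes. (Hypothetical rescalings, not physics.)
    import math

    def alpha(e, hbar, c, eps0=8.8541878128e-12):
        return e**2 / (4 * math.pi * eps0 * hbar * c)

    e, hbar, c = 1.602176634e-19, 1.054571817e-34, 299792458.0

    a0 = alpha(e, hbar, c)
    a1 = alpha(e * math.sqrt(2), hbar, c * 2)   # e^2 doubles, c doubles: alpha unchanged
    a2 = alpha(e, hbar, c * 2)                  # only c doubles: alpha halves

    print(a0, a1, a2)   # a0 == a1 (up to rounding), a2 == a0 / 2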

raattgift · 10 years ago
Uh, pardon the stupid question, but isn't it safer to consider the fine structure constant to be the IR fixed point of \alpha (i.e. \alpha(\mu = m_e)^-1 = 137...)?

\alpha(\mu = M_Z) is about 128. But m_0(photon), h, and e are the same at both energy scales (we can determine the numerical values experimentally). To the best of our ability to measure, none of them (including, if we ignore the main posting, \alpha_em) varies anywhere in a background fixed by the isometry group of SR (where we find c as the sole free parameter, corresponding to the speed of a particle where m_0(particle) = 0).

So I'm having trouble understanding your assertion that "Alpha is the fundamental physical constant: c, e and \hbar are the derived quantities" (in your other comment, but also reflected in your second paragraph above).

I think we both totally agree that varying \alpha_em at different points in space or time leads to a mess.

bj0 · 10 years ago
> This here desk I have here is about 1.7 times as long as it is wide. No inches or centimeters required.

But you are using units. Your unit is the width of your desk. If it were a different width but the same length, your measurement would be different.

kazinator · 10 years ago
So tell me, how large is my desk?

Note that the aspect ratio lets us say things like: "these two desks have the same aspect ratio".

The "desk widths" interpretation of aspect ratio precludes this useful sentence from having semantics.

The aspect ratio 1.7 has a life of its own, independently of that desk.

contravariant · 10 years ago
So, if he measured the width and height again, using different units, would he get a different answer?
rsfern · 10 years ago
I think a lot of the confusion in this discussion is coming from the specific example. The fine structure constant, an aspect ratio, or something like the Reynolds number for quantifying fluid flow turbulence -- these are all dimensionless numbers that scientists use, sure, and there's probably nothing special about them.

I think the confusion is in actually measuring these things. Scientists use dimensionless numbers all the time, but you have to directly measure quantities with units to indirectly get the dimensionless quantity.

granfalloon · 10 years ago
Isn't that still a "unit"? (Your desk is 1.7 "desk widths" long?)
TheOtherHobbes · 10 years ago
Scientific units are:

1. Standardised.
2. Defined by a numerical relationship to observable physical constants, like c, by definition.

Desk widths aren't a scientific unit. You can only measure the ratio of width to height by reference to a standard unit like the metre, which in turn is based on a constant observable quantity - the distance travelled by light in one second.

You can pick your derived units using any relationship to c - like the distance travelled in 3.2 seconds. But that's still a derived unit, not a fundamental observable unit.

And ratios are dimensionless because they stay the same whatever derived unit system you choose, as long as it's consistent.

amelius · 10 years ago
To make things more complicated: the size of your desk isn't fixed because the universe is expanding. So the units you are using are just a convention to relate the size of an object to the size of other objects.
maaku · 10 years ago
So long as the fine structure constant remains the same, your desk (and ruler) won't change size. Rather they will heat slightly as the expanding universe stretches the chemical bonds and those same bonds spring back to average rest length.

/nitpick

kazinator · 10 years ago
Expressing the size of the desk in a coordinate system specifically chosen such that the size doesn't change is much more than a convention. Conventions are arbitrary, like whether to drive on the left or right side of a two-way road.
btilly · 10 years ago
The effects of air eroding the desk are much more significant than the expansion of the universe. And both are dwarfed by the fact that the texture of the surface keeps the exact length and width from being well-defined.
damianknz · 10 years ago
Also, I tend to move around while my desk stays still, so from my desk's point of view I am not aging as fast.
glastra · 10 years ago
Indeed, the author conveys the wrong message when implying that only pure numbers have no units. Measurable quantities, as you point out, can also have no units.

Another point that throws me off is the Vulcan scientist part. They would need to be using the same numerical base in order to get that number. There might be some base-less representations of numbers, but they are usually restricted to representing integer numbers. The constant could still be represented by a fraction, but there would be no math-neutral way of saying "hey, this here is a fraction/division".

joaotorres · 10 years ago
The same number in a different base is still the same number...

Deleted Comment

NikolaeVarius · 10 years ago
Ratios and Quantities are completely different things
mcherm · 10 years ago
Not according to any set of definitions that I have ever heard of. Can you provide an explanation of the difference as you understand it?

To give one example, my piece of US Letter paper is 93.5 in^2 in area. One side is 8.5 in long. If I divide these, I find that 93.5 in^2 / 8.5 in = 11 in. Is that eleven inches a ratio or a quantity?

danharaj · 10 years ago
> This here desk I have here is about 1.7 times as long as it is wide. No inches or centimeters required.

How did you measure your desk in order to compute its length/width ratio?

glastra · 10 years ago
I fail to see what you're getting at. c, e and h all have units and are measurable.
mtviewdave · 10 years ago
The most fascinating part about alpha to me is the implication that it isn't actually constant, and varies over time and/or distance. I once read a suggestion that perhaps the observed universe is simply the portion of the greater universe where alpha has a value that lets things like stars, planets, and life exist.
qubex · 10 years ago
Being a derived quantity, alpha's non-constancy implies that one or more of the underlying ‘constants’ vary non-homogeneously — which is what you said, but slightly different in import... observing alpha is just a convenient way of observing the others indexed together.
jsweojtj · 10 years ago
You've got this backward. Alpha is the fundamental physical constant: c, e and \hbar are the derived quantities. I'll quote part of another comment that I left on this thread:

> Now for a piece that's more interesting. The fine-structure constant (alpha) is the coupling constant that sets the strength of electromagnetism. This means that its value is the thing that matters in the equation: e^2/\hbar c. Each of the other values is a derived quantity. Further, to speak a bit loosely, only changes in alpha matter -- in the sense that if the speed of light (c) changes, but the other constants (e and \hbar) change in a way that keeps alpha the same, then you wouldn't be able to tell with an experiment that anything has changed.

> Contrast this situation w/ a change in alpha -- a table-top experiment would be able to detect the change (given that it's large enough, and we have methods of measuring fractional changes at the level of a few parts in 10^18 on year timescales (Rosenband, 2008)), as it would mean that physics has changed in a fundamental way.

> http://phys.columbia.edu/~millis/1900/readings/Science-2008-...

gradi3nt · 10 years ago
Physics experiments have ruled theories like this out. They can put a very very small upper bound on how much the constants can change over a volume the size of the universe, and over the lifetime of the universe.
leanthonyrn · 10 years ago
I thought that -1/12 was amazing, even to Vulcan scientists.

- https://youtu.be/w-I6XTVZXww
- https://youtu.be/0Oazb7IWzbA
mrob · 10 years ago
-1/12 is the result of applying zeta function regularization or Ramanujan summation to the sum of the positive integers. It's arguably interesting, but hardly amazing to the vast majority of people who have never heard of those techniques. But the thing that really annoys me is all the people presenting it as the finite limit of a divergent series (this is the default meaning of "=" after an infinite series; if you're using a non-standard meaning, you have to specify that!). The first of those videos does this! It's nonsense, and this kind of sloppy approach only encourages contempt for mathematics.
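
For anyone who wants to see the regularized value drop out without hand-waving, a minimal sketch (assuming sympy is available): the analytically continued zeta function gives -1/12 at s = -1, while the partial sums of 1 + 2 + 3 + ... simply diverge.

    # Sketch: -1/12 comes from analytic continuation of the zeta function,
    # not from the partial sums of 1 + 2 + 3 + ..., which diverge.
    from sympy import zeta

    print(zeta(-1))            # -1/12: zeta(s) = sum n^-s, continued to s = -1

    print(sum(range(1, 101)))  # 5050 -- the actual partial sums grow without bound
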
chias · 10 years ago
> The best known example of a pure number [...] hc/2πe² [...] leave[s] a pure number, 137.03599913.

This surprises me. I would have thought pi, the ratio of any circle's circumference to its diameter, is a much better known example of a pure number.

duaneb · 10 years ago
Pi is not a fundamental physical constant and has myriad uses. α is only special when observing that it is a physical constant; you need to observe the physical universe to arrive at the conclusion that it's a meaningful number.
chias · 10 years ago
Thank you for this explanation. The distinction was not clear to me after reading the article, but your comment makes a lot of sense.
spodek · 10 years ago
> This surprises me. I would have thought pi, the ratio of any circle's circumference to its diameter, is a much better known example of a pure number.

Everyone is so fancy. One, two, three, and even zero are pretty well known, even outside physics.

noobermin · 10 years ago
On non-internet-connected[0] computers in my old uni's physics lab, the passwords were often some combination of the phrase "physics" and repetitions of the number "137". Quite the fascination.

[0] why I don't feel uncomfortable disclosing this here

rubidium · 10 years ago
The article is so-so. But the physics here is really cool. Essentially, there's growing evidence that the fundamental constants may not be perfectly constant. See http://arxiv.org/abs/1510.02536 for the gory details.
raattgift · 10 years ago
Would you really characterize this Wilczynska, Webb, King et al. paper as an argument that there is "growing evidence", rather than (say) that it's a number-crunching argument that the small number of observations that suggested a dipole variation to Webb et al. (and King et al.) show a \Delta\alpha / \alpha that's close enough to zero that one can cherry-pick and say "oh yes, there's a (small) dipole variation" or "oh no, the data is consistent with no variation"?

I prefer their earlier slide deck for gory details (it's mostly based on King et al 2012).

https://www.eso.org/sci/meetings/2012/ESOat50/Presentations/...

See especially the "Really?" slide (p 27).

(The previous "What if it's correct?" slide undersells the impact to the standard cosmology of a violation of isotropy and the consequent erosion of the "must be homogenous at scales > 250 Mly" part of the cosmological principle. Also, "what if atomic physics is really obviously different only very slightly outside the horizon?" feels like a declaration of war against the Copernican principle with precious little evidence, and against pretty good theory that has other lines of evidence backing it (cf. Carroll @ http://www.preposterousuniverse.com/blog/2010/10/18/the-fine... who points to Banks, Dine & Douglas @ http://arxiv.org/abs/hep-ph/0112059 who in turn point to other work that shows that you probably can't vary \alpha without varying other constants like the m_e and QCD coupling).

rubidium · 10 years ago
Missed this comment until today. Helpful slide-deck and references, thanks!

I mean "growing evidence" in the sense that it wasn't even questioned before, and some very early-stage experiments have asked the question.

I agree with you that it's still _way_ too early to be making any conclusions, and it all may wash away as the experiments improve.

pervycreeper · 10 years ago
The article fails to explain where the 2*pi comes from, and why 1/alpha is more natural. Does this quantity arise in some context other than unit-analysis speculation?
jeffwass · 10 years ago
The 2pi comes from the usual convention of using hbar instead of h, where hbar = h/2pi is the Planck constant adjusted to use radians instead of cycles.

This is much more natural when working with angular frequencies instead of cycles per second.

Yes, this constant arises prolifically when looking at the atomic "fine structure", which modifies the usual hydrogen energies to include interactions between an electron's spin and its orbit (the un-modified ones only include the kinetic energy of the reduced electron-proton system and the electric potential energy). There are further interactions that can be added, e.g. the hyperfine interaction, which includes the spin-spin interaction between the proton in the nucleus and the orbiting electron.

And if you use Planck units where hbar=c=G=1, you can do many things easily, for example denote the potential of an electron at distance r as just alpha/r (without all those other pesky constants embedded in Gauss's law).
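
A quick numerical check of that last point (a sketch with SI CODATA values; the choice of r is arbitrary, the Bohr radius here): the Coulomb energy e^2/(4*pi*eps0*r) is identically alpha*hbar*c/r, so once you set hbar = c = 1 it really does read alpha/r.

    # Sketch: Coulomb energy between two elementary charges equals alpha*hbar*c / r,
    # so with hbar = c = 1 it is just alpha / r. SI CODATA values; r = Bohr radius.
    import math

    e, eps0 = 1.602176634e-19, 8.8541878128e-12
    hbar, c = 1.054571817e-34, 299792458.0
    alpha   = e**2 / (4 * math.pi * eps0 * hbar * c)

    r = 5.29177210903e-11                       # Bohr radius, m (arbitrary choice of r)
    coulomb = e**2 / (4 * math.pi * eps0 * r)   # joules
    natural = alpha * hbar * c / r              # same quantity, rewritten via alpha

    print(coulomb, natural)                     # identical up to floating-point rounding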

jessriedel · 10 years ago
jeffwass has explained the factor of 2pi. It's just the 2pi radians in a circle to convert from angular frequencies to regular frequencies.

I don't know the detailed history, but in practice the fact that it's an inverse is essentially because its most important role is as part of an expansion like a Taylor series. The standard method for calculating lots of observable effects gives an answer with the form

X = X_0 + \alpha X_1 + \alpha^2 X_2 / 2! + ...
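
To get a feel for why expanding in alpha works so well, a rough numerical sketch (the X_n coefficients are problem-dependent; here they're just taken to be of order one): each successive term is suppressed by roughly another factor of 1/137.

    # Sketch: successive powers of alpha, as they would appear in a perturbative
    # expansion X = X_0 + alpha*X_1 + alpha^2*X_2/2! + ... with order-one coefficients.
    import math

    alpha = 1 / 137.035999
    for n in range(5):
        print(n, alpha**n / math.factorial(n))
    # 0: 1.0,  1: ~7.3e-3,  2: ~2.7e-5,  3: ~6.5e-8,  4: ~1.2e-10

The classic example of such a series is the electron's anomalous magnetic moment, whose leading correction is alpha/2pi, about 0.00116.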