Thursday, October 04, 2007

Quantum Mechanics and Objective Reality

by Frank Luger

The main features of quantum theory, such as the wave function, the uncertainty principle, wave-particle duality, indeterminacy, probabilistic behavior, exchange forces, spin, quarks and their various flavors and charms, etc., are so strange as to defy human intuition and common sense. It is often argued that since they are abstractions, one way or another, maybe they are figments of overactive imaginations. Not quite, counters the theoretical physicist: although there's a tough road from mathematical modeling to scientific fact, there's overwhelming experimental and other evidence in favor of quantum mechanics as objective reality.

In order to take a look at some of the considerations which allow one to state that the world at the tiny magnitudes of microphysics is as proposed by quantum theory, it may be instructive to deal with the wave function as one of the main representatives in question. Although a mathematical abstraction, the wave function corresponding to a physical system contains all the information that is obtainable about the system. For example, if a moving particle acted on by a force is represented by a wave function (psi), then measurement of a physical quantity, such as momentum, always yields an eigenvalue of the associated momentum operator. In general, the outcome of the measurement is not precisely predictable and is not the same for identically prepared systems; but each possible outcome, or eigenvalue, has a certain probability of occurring.

This probability is given by the squared modulus of the scalar product of the normalized wave function (psi), or state vector, and the eigenvector of the operator corresponding to that particular eigenvalue. Furthermore, not all operators representing physical quantities commute; that is, sometimes AB ≠ BA, where multiplication of the operators A and B corresponds to making two measurements in the order indicated. These unusual but unambiguous postulates, which associate probabilities with geometric properties of vectors in an abstract space, have great predictive and explanatory value and, at the same time, many implications that confound our intuition.
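These postulates can be made concrete in a few lines of code. The sketch below uses a pair of Pauli matrices as purely illustrative observables (they are not tied to any system discussed here) and shows both features at once: the operators do not commute, and the probability of each eigenvalue is the squared modulus of the scalar product of the state vector with the corresponding eigenvector.

```python
import numpy as np

# Two illustrative observables, here a pair of Pauli matrices chosen only
# to make the postulates concrete (not tied to any particular system).
A = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
B = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma_z

# Not all operators commute: AB != BA.
print(np.allclose(A @ B, B @ A))  # False

# A normalized state vector psi.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# Born rule: each eigenvalue's probability is the squared modulus of the
# scalar product of psi with the corresponding normalized eigenvector.
eigvals, eigvecs = np.linalg.eigh(B)
probs = np.abs(eigvecs.conj().T @ psi) ** 2
print(probs)        # [0.5 0.5] for this psi
print(probs.sum())  # the probabilities sum to 1
```

Note that the outcome of a single measurement is not predictable at all; only the distribution over the two eigenvalues is fixed by psi.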

Because of the usefulness of the wave function in generating experimentally testable predictions, it appears that a mathematical abstraction here takes on a reality equivalent to that of concrete events, as envisioned by Pythagorean and Platonic philosophies. However, there is a direct connection between the abstraction and observable events, and there has not been much tendency in physics to place the wave function in some realm of ideal forms, platonic or otherwise.

A similar state of affairs already existed in classical electrodynamics, and some physicists remarked that Maxwell’s laws were nothing more than Maxwell’s equations. Perhaps because radiation always had been regarded as immaterial with wave properties, this point of view was not quite as disturbing as it became when matter waves had to be considered. In both cases, however, there does appear to be a problem in explaining how mathematical symbolism can do so much.

Platonic implications can be avoided if we look more closely at the actual, concrete role of the wave function in the theory. If viewed as a conceptual tool, rather than something given, the idea of a wave function containing information about observable events is not so strange. The meaning of the wave function is defined by its role in the theory, which after all is a matter of theorists interacting with events. A clue to this purely conceptual, computational role is the fact that a wave function can be multiplied by an arbitrary phase factor without changing its physical significance in any way. Also, the fact that it is a complex-valued function discourages one from interpreting it as something with spatial and temporal wave properties.

As the search for causes has diminished in modern physics, the success of microphysics in explaining the properties of complex structures such as atoms, molecules, crystals, and metals has increased markedly at the same time. If causality is conceived, as it once was, in terms of collisions among particles with well-defined trajectories, then it has no meaning at the quantum level. However, a remarkable consistency in the evolution of identical structures with characteristic properties is apparent in nature. Quantum mechanics goes far toward explaining how these composite systems are built up from more elementary components. Although the once predominant mechanistic view of colliding particles is no longer tenable, its decline has been accompanied by success in the actual achievement of its original aims.

Terms such as causality and determinism still are used occasionally by physicists, but their connotations are quite different from what they were in earlier times. The formalism of quantum theory implies that determinism characterizes states, but not observables. The state of the system described by a wave function (psi) evolves in time in a strictly deterministic manner, according to the Schrödinger equation, provided that a measurement is not made during that period of time. This usage of determinism actually is equivalent to the statement that the Schrödinger equation is a first-order differential equation with respect to time.
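The deterministic evolution of the state between measurements can be illustrated numerically. The toy two-level Hamiltonian below is hypothetical (with ħ set to 1); the point is only that psi(t) is uniquely fixed by psi(0), and that the evolution, being unitary, conserves the norm of the state.

```python
import numpy as np

# A toy two-level Hamiltonian (arbitrary illustrative numbers, hbar = 1).
H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)

def evolve(psi, t):
    """psi(t) = exp(-iHt) psi(0), built from the eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return U @ psi

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = evolve(psi0, t=2.0)

# The state at time t is uniquely determined by the state at time 0...
print(np.allclose(psi_t, evolve(psi0, 2.0)))  # True: same input, same output
# ...and the evolution is unitary, so the norm of the state is conserved.
print(np.linalg.norm(psi_t))                  # ~1.0 to machine precision
```

Indeterminacy enters only when a measurement is made; the propagation of psi itself is as deterministic as any classical differential equation.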

In contrast, if at some instant a measurement of a physical quantity is made, the possible values that might be obtained are represented by a probability distribution. Furthermore, a measuring instrument introduces an uncontrollable disturbance into the system, and, afterwards, that system is in a different state that is not precisely predictable. This situation led Max Born (1882-1970) to make a famous statement that the motion of particles conforms to the laws of probability, but the probability itself is propagated in accordance with the law of causality. The initial astonishment produced by this unforeseen turn of events was shortly followed by an even greater astonishment when these unconventional ideas proved to be extremely workable in practice.

Consider more closely the role of causality and of probability in the theory. The relationship (psi)1 → (psi)2, where (psi)1 and (psi)2 are states at successive instants in time, is completely determined in the theory, provided no measurement takes place during the interval. Moreover, if a measurement is made at some instant, the relationships (psi)1 → f(x) and (psi)2 → g(x), where f(x) and g(x) are probability distributions of an observable, also are completely determined. The new and strange features of the theory are embodied in the facts that (a) these probability distributions, in general, have nonzero variance, and (b) if the relation (psi)1 → f(x) is in fact exhibited by making a measurement, then the relation (psi)1 → (psi)2 no longer holds.

It is difficult to grasp intuitively that the probabilities referred to are those of measurements that might be obtained on an individual system using a perfectly reliable instrument, and that they seemingly come from nowhere. Expressed mathematically, the only appropriate probability space corresponding to the probability distribution of a quantum mechanical observable is provided by the real line, its measurable subsets, and the probability measure determined by the wave function; and that structure is not, as is usually the case, induced by an underlying probability space having physical significance. Despite intensive search over many decades, no such underlying probability space has ever been found, and it is now generally agreed that one does not exist. This search in fact somewhat resembled the frustrating attempts in the nineteenth century to find an ether, a hypothetical universal space-filling medium propagating radiation.
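The probability space just described can be sketched directly: the real line (discretized on a grid), intervals as its measurable subsets, and the measure of an interval obtained by integrating |psi|² over it. The Gaussian packet and grid below are arbitrary choices for illustration.

```python
import numpy as np

# A numerical sketch of the probability space the text describes: the real
# line, its (here, discretized) subsets, and the measure determined by a
# normalized wave function. The packet width and grid are arbitrary.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
density = np.abs(psi) ** 2

def measure(a, b):
    """Probability assigned to the interval [a, b] by |psi|^2."""
    mask = (x >= a) & (x <= b)
    return np.sum(density[mask]) * dx  # simple Riemann sum

print(measure(-10, 10))  # ~1: the whole line gets measure one
print(measure(-1, 1))    # roughly 0.68 for this packet (one standard deviation)
```

No deeper, physically meaningful sample space underlies this measure; the wave function itself is the whole story.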

Nevertheless, when matters are expressed as above, it appears that quite a lot about the theory is deterministic. Furthermore, this viewpoint discourages the tendency to confuse indeterminacy with lack of ability of scientists effectively to make contact with events. Probability distributions of measurements are objective, concrete things. Determinism fails when applied to the concept of an elementary corpuscle simultaneously having a definite position and a definite momentum, conditions never observed experimentally.

Quantum theory, as emphasized previously, was applied with excellent results to a broad range of phenomena; for example, the periodic table of the elements at last became understandable, and the foundations of all inorganic chemistry, and much organic chemistry and solid state physics, were firmly established. Contrary to the expectations of some critics, the theory definitely has not encouraged a view of the world ruled by a capricious indeterminacy, but, on the contrary, has greatly enhanced the coherence and explanatory power of science.

Still, the above turn of events in the age-old problem of causality had not been anticipated. The fact that the implications of the theory conflicted in such a radical way with previous philosophical views was a departure from tradition that probably to this date has not been fully assimilated.

Eventually, one may hope, concepts such as causality, system, interaction, and interdependence will be extended and enriched by the findings of quantum physics. Perhaps we are already beginning to see this happen and to appreciate that the new viewpoint does not entail as much of a loss as we once believed. In both classical physics and quantum physics a list of well-defined dynamical variables is associated with each system, and in some respects the quantum mechanical description by state vectors is analogous to a phase-space representation in classical statistical mechanics. Formally, the dynamical variables play a different role in the two theories, but in both cases their specification exhausts the observable properties of the system. The probabilistic aspects of quantum theory, as stressed before, certainly do not imply an inability to find lawfulness and orderliness in nature.

Although quantum mechanical predictions of, for example, position are inherently probabilistic, in many instances a particle is sufficiently localized that probabilities of it appearing outside a restricted range are essentially zero, that is, the dispersion of the distribution is small. It becomes meaningful, for example, to speak of shells and subshells in atomic structure. Overall, it appears that abandonment of the rather limited classical cause-and-effect scheme is a minimal loss compared to the far greater gains achieved by the theory as a whole.

Like many ideas in quantum theory, the celebrated Heisenberg uncertainty principle becomes less mysterious if examined in its concrete role in the theory. The uncertainty principle is not an insight which preceded the theory, but is built into its structure, that is, it can be derived from the abstract formalism. Heisenberg’s matrix mechanics and its success in accounting for experimental results came first; the uncertainty principle and its implications then were recognized.

Essentially, this principle means that the dispersions, or variances, of probability distributions of noncommuting observables are constrained by one another, or, alternatively, that a function and its Fourier transform cannot both be arbitrarily sharp. The physical significance of this result is that measurements of certain pairs of observed quantities, such as position and momentum, or time and energy, cannot simultaneously be made arbitrarily accurate. The principle has been confirmed, many times, by an overwhelming mass of evidence. Accordingly, the principle is an objective property of events that must be confronted in future advances of our understanding of the physical world. Much the same is true of all the other main features of quantum theory.
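The Fourier-transform statement of the principle can be checked numerically. The sketch below (with ħ = 1 and an arbitrary Gaussian packet) computes the dispersions of a wave function and of its transform and verifies that their product sits at the Heisenberg lower bound of 1/2, which the Gaussian saturates.

```python
import numpy as np

# Numerical check of the position-momentum uncertainty relation (hbar = 1)
# for a Gaussian wave packet; the grid size and width are arbitrary.
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3

psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

def spread(values, density, weight):
    """Standard deviation of a (possibly unnormalized) distribution."""
    p = density / np.sum(density * weight)
    mean = np.sum(values * p * weight)
    return np.sqrt(np.sum((values - mean) ** 2 * p * weight))

dx_spread = spread(x, np.abs(psi) ** 2, dx)

# Momentum-space amplitude via FFT; k plays the role of p when hbar = 1.
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = k[1] - k[0]
phi = np.fft.fft(psi)
dp_spread = spread(k, np.abs(phi) ** 2, dk)

product = dx_spread * dp_spread
print(product)  # approximately 0.5, the Heisenberg lower bound
```

Making the packet narrower in x (smaller sigma) widens the momentum distribution by exactly the compensating amount, so the product never drops below 1/2.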

Although quantum mechanics and the blurred mode of existence that it reveals represent current frontiers in the direction of the infinitesimally small, it is generally acknowledged that this is not the final answer. Quantum reality is reality, to be sure, but it is still very much a virtual reality inasmuch as it refers to states of affairs relative to Man. As such, it is reasonable to expect that it has a source and a destination, being perhaps an integral albeit temporal phenomenon of an underlying ultimate reality. That is, quantum mechanics is objective reality; but it remains to be seen where it comes from and where it goes. However, that’s another story.


Sunday, September 16, 2007

Fundamental Requirements in Building Physical Theories

An Original Research Essay

by Frank Luger

As mentioned in some of my previous essays, the philosophy of science requires that any physical theory worth its salt must be built around at least potential observability and must obey the reduction principle, i.e. be capable of being shown to rest on established theories. These are logical requirements based on the consistency of Nature. However, if one approaches theory building in physics from the physical rather than the philosophical side, there are some other principles to obey; and these principles are sine qua non requirements of proper physical theories, in the sense of transcending any particular theory. Collectively, they may be called symmetry and conservation laws; and they directly rest upon invariances, which are independent of time and space and which are also based on the consistency of Nature. The difference is that while the philosophical requirements are a priori, that is, "dictated" by induction and synthesis, the physical requirements are a posteriori, that is, dictated by deduction and analysis of actual data. For the present heuristic purposes, let us concentrate on the latter kinds1.

Symmetry in Nature has been dealt with by some very famous authors2 and likewise, conservation laws have also been extensively discussed3. Instrumentalism and its extreme form, solipsism, would proclaim that as "beauty is in the eye of the beholder," symmetry is a figment of human imagination, based on the basic human need for aesthetic experiences. Scientific realism in general, and quantum realism in particular, on the other hand, would maintain that symmetry is inherent in Nature; and this whole disagreement in philosophical perspectives between instrumentalism and realism represents, in fact, the difference between epistemic and ontic viewpoints and orientation emphases. While there are certain difficulties with both vantage points, especially in their extreme forms, most of the data from recent research in physics seems to tilt the balance in favor of quantum realism and against instrumentalism, especially in its earlier ("Copenhagen School") form4. Let us now briefly review the theory of the basic symmetry and conservation laws, as they represent broad generalizations whereby physical theories may transcend time and space; and then list the most important principles and laws.

Based on concepts from classical geometry, the word symmetry implies divisibility into two or more even parts of any regular shape in 1, 2, or 3 dimensional ordinary (Euclidean) space. However, in physics, 'symmetry' has a more precise, albeit more general, meaning than in geometry. Reversible balance is implied; that is, something has a particular type of symmetry if a specific operation is performed on it, yet it remains essentially unchanged. For example, if two sides of a symmetrical figure can be interchanged, the figure itself remains basically invariant. A triangle may be moved any distance; if there is neither rotation nor expansion-contraction involved, then the triangle remains symmetrical under the operation of translation in space. This means little in (projective) geometry, but in actual physical situations, it can be far from trivial. If we imagine an initially symmetrical shape with some weight attached to it as being moved to a different gravitational field, symmetry will not be conserved. Yet, the basic laws of physics are supposed to be independent from locations in space. And they are. What may be different are those aspects which are variable, but their interrelationships do not change. Symmetry will be conserved not relative to a fixed observer, but relative to the form in which the basic laws are expressed -- i.e. their mathematical descriptions.

The inevitable conclusion is that it is the mathematical expressions of physical laws which are responsible for ensuring that the form of the basic laws of physics is symmetrical under the operation of translation in space. For example, the law of conservation of momentum is a mathematical consequence of the fact that the basic laws have this property of assuming the same form at all points in space. The conservation law is a consequence of the symmetry principle, and there is reason to believe that the symmetry principle is more fundamental than the detailed form of the conservation law. A general theory, thanks to its mathematical armoury in which tensor analysis and differentiable manifolds assume great importance, is able to formulate basic equations which have the property of assuming the same form at all points in space.

Therefore, when "indulging" in theory building, the theoretical physicist is well advised to try to formulate his basic laws so that they become and remain symmetrical under any and all fundamental transformations. Fortunately, there are several well-known and well-established guidelines; and these are what we may subsume under the general heading of symmetry principles and conservation laws. It is important to keep in mind that conservation laws are mathematical consequences of various symmetries; thus, as long as the theorist ensures that his formulations do not violate basic principles of symmetry, he stands a good chance of being subsequently able to deduce the appropriate conservation laws, and prove, at least to the satisfaction of the requirements of mathematical logic, the soundness of his conceptualizations. By contrast, failure to observe this guideline may result in heaps of impressive-looking pseudoscientific rubbish, as for example in various airy grandiose schemes and trendy New Age fads and hasty oversimplifications ad nauseam5. While it is true that a few symmetry principles and conservation laws are still controversial, and it is not always clear which conservation law is necessarily a (mathematical) consequence of which symmetry principle, the fact is that most of the relationships are well established, and repeated mathematical testing of various new theoretical models is not only always helpful but perhaps even mandatory as well. That is, before making predictions and deducing testable hypotheses and subjecting them to observations and experiments, it is best to have played the devil's advocate and to have tried as hard as one can to make a "liar" of oneself. This grueling task will pay grateful dividends later, by saving the theorist from self-discreditation and its inevitable consequence, death by ridicule.

Following Einstein and his postulates of Special Relativity, we accept that the form of the basic laws of physics is the same at all points in space. This is called symmetry under translation in space, and (mathematically) it leads to the law of conservation of linear momentum. This is one of the most fundamental principles of modern physics. Next, in a similar vein, we also accept that the basic laws of physics describing a system apply in the same form under fixed angle rotations, i.e. the laws have the same form in all directions. We may call this the principle of symmetry under rotation in space, and again, (mathematically) it gives rise to the law of conservation of angular momentum. Next comes time: the form of the basic laws of physics does not change with the passage of time. Once a fundamental invariance is successfully identified, it can be assumed with great confidence that what was the case many millions of years ago will still be the case indefinitely into the future. This principle is called symmetry under translation in time, and (mathematically) it yields the law of conservation of energy (also known as the First Law of Thermodynamics). However, the next principle, that of symmetry under reversal of time, is somewhat controversial, because although it is theoretically possible, it is practically never observed. The principle leads to the great Second Law of Thermodynamics, through a series of steps which would be a bit too technical for the present purposes. Symmetry under time reversal maintains that a time reversal process can occur, but it does not say that it does occur or that it ever will occur. This is a rather subtle, and thus a much misunderstood and disputed, point, as discussed in my paper "Conceptual Skepticism in Irreversible Energetics", cited in footnote No. 1 above. It is precisely because symmetry under time reversal is never observed in practice, but the opposite, i.e. asymmetry and irreversibility, is always observed, that the Second Law of Thermodynamics is still one of the most controversial of the basic laws of physics. Disregarding mathematics for the moment, how theoretical reversibility gives rise to practical irreversibility in Nature remains somewhat nebulous. It is possible that irreversibility is a special case of reversibility due to a hitherto unexplained intervening construct or variable, rather than the other way around. Future research will tell, we hope.
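The link between translation symmetry and momentum conservation can be seen in a minimal simulation. In the sketch below, two particles interact through a hypothetical force that depends only on their separation, so the equations of motion take the same form at all points in space; total linear momentum then stays constant throughout the integration. The masses, spring constant, and step size are arbitrary.

```python
import numpy as np

# Two particles coupled by a force that depends only on their separation
# (so the dynamics are invariant under translation in space). All numbers
# here are arbitrary illustrative choices.
m = np.array([1.0, 2.5])        # masses
x = np.array([0.0, 1.5])        # positions
v = np.array([0.3, -0.1])       # velocities
kspring = 4.0
dt = 1e-3

def forces(x):
    # Hooke's-law attraction along the separation: depends only on x2 - x1,
    # and the two forces are equal and opposite (internal to the system).
    f = kspring * (x[1] - x[0])
    return np.array([f, -f])

p0 = np.sum(m * v)              # initial total momentum
for _ in range(10000):          # simple semi-implicit Euler steps
    v += forces(x) / m * dt
    x += v * dt
p1 = np.sum(m * v)

print(abs(p1 - p0))  # ~0: momentum is conserved because the force is internal
```

Shifting both initial positions by the same constant changes nothing in the dynamics, which is precisely the translation symmetry from which the conservation law follows.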

Still another consequence of Einstein's Special Relativity theory is that the basic laws of physics have the same form for all observers, regardless of the observers' motions. In other words, the basic laws have the same form in all inertial frames of reference, and thus do not depend on the velocity or momentum of the observer. In Einstein's General Theory of Relativity, which is not as well substantiated as the Special Theory, the basic laws are assumed to have the same form for all observers, no matter how complicated their motions might be. Altogether, this is the principle of relativistic symmetry.

Turning to microphysics, it must be considered that fundamental particles have no individual differences in the sense of "identities", i.e. if we interchange two particles of the same class or category (vide infra), such action does not influence the physical process as a whole. This indistinguishability of similar particles gives rise to the principle of symmetry under interchange of similar particles. An electron is no different from any other electron. Furthermore, if negative charge cancels an equal amount of positive charge, then there is no known physical process which can change the net amount of electric charge. This is known as the law of conservation of electric charge, and it is thought to be a (mathematical) consequence of certain symmetry properties of the quantum mechanical wave function psi (Ψ). Similarly, if a particle cancels its antiparticle, there is no known physical process which changes the net number of leptons (light particles); and this is known as the law of conservation of leptons, although an underlying symmetry principle has not been unequivocally established. In a like vein, also in particle-antiparticle cancellations, the net number of baryons (heavy particles) remains the same; this is the law of conservation of baryons, and similarly to leptons, no underlying symmetry principle has been properly established. It is noteworthy that while there are such conservation laws for fermions, there are no analogous laws for bosons: photons, pions, kaons, etas, and gravitons.

There are also imperfect symmetries, which may or may not be intrinsic to Nature. That is, it is possible that Nature is constructed according to a scheme of partial or imperfect symmetry, whereby irreversibility would be the rule and reversibility the exception. It is more probable, however, that things are the other way around (reversibility is the rule and irreversibility is the exception), and the fault lies within our own machinery, as mentioned in some of my other writings (see footnotes). One such imperfect symmetry is charge independence. There is a principle of symmetry of isotopic spin, whose (mathematical) correspondent is a law of conservation of isotopic spin. This law applies to strong nuclear interactions, but is broken by electromagnetic and weak interactions. There are also processes which involve what have come to be called the strange particles; and to each of them an integral number has been assigned, known as its strangeness. The law of conservation of strangeness is also an imperfect symmetry, inasmuch as strangeness is conserved in strong interactions, but not in weak interactions. However, the very particle-antiparticle symmetry turns out to be a broken or imperfect symmetry, because all weak interactions violate it; and there is no fully satisfactory explanation for this imperfect charge conjugation.

The principle of mirror symmetry maintains that for every known physical process there is another possible process which is identical with the mirror image of the first. Yet, this can also be a broken or imperfect symmetry, depending on "handedness" — inasmuch as one cannot put a left-hand glove on the right hand, no matter how much one glove may seem like the mirror image of the other. Mirror symmetry can be expressed mathematically in terms of a quantity called parity and there is a corresponding law of conservation of parity. However, weak interactions do not conserve parity, although all other types of interactions do. One example is that although the neutrino and the antineutrino are mirror images of one another, the neutrino is like a left-hand glove and the antineutrino is like a right-hand glove. Generally speaking, all weak interactions violate the symmetry principle of mirror reflection. All weak interactions violate the symmetry principle of particle-antiparticle interchange. All interactions, including weak interactions, are symmetrical under the combined operation of mirror reflection plus particle-antiparticle interchange6.

Despite such "violations" and "broken symmetries", when the universal "big" picture is contemplated, symmetries outweigh asymmetries sufficiently to restore one's faith in the esthetic beauty and efficient elegance of Nature. As shown by recent advances in cosmology7, although asymmetries are cosmological in origin, they somehow seem to fit integrally into the overall scheme of things, and thus represent no violations of any great law, but rather, they help to give rise to them and to maintain them in a sort of dynamic equilibrium, however unbalanced certain parts of the whole seem to be from time to time or even all the time. Therefore, it seems reasonable to conclude, that the more we come to understand the fundamental nature and ways of the Universe, the more we may become enchanted by its intrinsic beauty and harmony on the grandest as well as the minutest scales, whereby we may even catch an occasional glimpse of Eternity.


1 The philosophical requirements of potential observability and the reduction principle will be dealt with in another essay, which will examine the connection between the philosophy of quantum mechanics and that of modern interactional psychology, more or less within the framework of General Systems Theory.

2 e.g. Weyl, H. Symmetry, Princeton University Press, 1952; Wigner, E.P.: The unreasonable effectiveness of mathematics in the natural sciences, in Symmetries and Reflections, Scientific Essays of E.P. Wigner, Bloomington: Indiana University Press, 1978; Ziman, J.: Reliable Knowledge, An Explanation of the Grounds for Belief in Science, Cambridge: Cambridge University Press, 1978; etc.

3 e.g. Feynman, R.: The Character of Physical Laws, Cambridge, Mass.: The M.I.T. Press, 1965; Jammer, M.: The Philosophy of Quantum Mechanics, New York: Wiley, 1974; Weisskopf, V.F.: Knowledge and Wonder, Cambridge, Mass.: The M.I.T. Press, 1979; Ziman, J.: op. cit.; etc.

4 e.g.: Cook, Sir A.: The Observational Foundations of Physics, Cambridge: Cambridge University Press, 1994; d'Espagnat, B.: Reality and the Physicist, Cambridge: Cambridge University Press, 1989; Hawking, S.W.: A Brief History of Time, New York: Bantam, 1988; Peierls, R.: More Surprises in Theoretical Physics, Princeton, N.J.: Princeton University Press, 1991; Rohrlich, F.: From Paradox to Reality: Our Basic Concepts of the Physical World, Cambridge: Cambridge University Press, 1989; Weinberg, S.: The Quantum Theory of Fields, Vols. I-III, Cambridge: Cambridge University Press, 1995, 1996, 2000; etc.

5 e.g. Capra, F.: The Tao of Physics, New York: Bantam, 1975; LaViolette, P.A.: Beyond the Big Bang: Ancient Myth and the Science of Continuous Creation, Rochester, Vt.: Park Street Press, 1995; Zukav, G.: The Dancing Wu-Li Masters: An Overview of the New Physics, New York: Bantam, 1980; etc.

6 e.g. Blohintsev, D.I.: Questions of Principle in Quantum Mechanics and Measure Theory in Quantum Mechanics, Moscow: Science, 1981; Eisberg, R. & Resnick, R.: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, 2nd ed., New York: Wiley, 1985; Holland, P.R.: The Quantum Theory of Motion, Cambridge: Cambridge University Press, 1993; Gómez, C., Ruiz-Altaba, M., & Sierra, G.: Quantum Groups in Two-Dimensional Physics, Cambridge: Cambridge University Press, 1996; McQuarrie, D.A.: Quantum Chemistry, Mill Valley, Calif.: University Science Books, 1983; etc.

7 e.g. Barrow, J.D.: The Origin of the Universe, New York: Basic Books, 1994; Binney, J. & Tremaine, S.: Galactic Dynamics, Princeton, N.J.: Princeton University Press, 1987; Hawking, S.W.: op. cit., 1988; Hawking, S.W.: Black Holes and Baby Universes, New York: Bantam, 1993; Hawking, S.W. & Penrose, R.: The Nature of Space and Time, Princeton, N.J.: Princeton University Press, 1996; Kaufmann III, W. J.: Relativity and Cosmology, 2nd ed., New York: Harper & Row, 1985; Penrose, R. & Rindler, W.: Spinors and Space-Time, Vol. II: Spinor and Twistor Methods in Space-Time Geometry, Cambridge: Cambridge University Press, 1993; Rindler, W.: Essential Relativity: Special, General, and Cosmological, New York: McGraw-Hill, 1977; Wald, R.: Space, Time, and Gravity, Chicago: University of Chicago Press, 1977.


Tuesday, July 24, 2007

Why Take the Fifth?

" No person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation."

... "'defence' of Lie's behaviour by referring to the close relationship between genius and madness really created a generally accepted explanation which has survived up to the present. By this act of 'defence' Klein did his old friend an incredible injustice."1

by Fred Vaughan

We all know what it means to "take the fifth". It ain't good!

There have been many attempts to reduce the number, modify the structure, and alter the phraseology of Euclid's postulates, but it has been found that for plane geometry they are by and large very sound as initially presented. However, there seems to have been little effort to determine whether there might be a different postulate more appropriate than the fifth for modification to provide compatibility with the formalism of relativity and our current view of the universe.

That one of the postulates upon which Euclid based The Elements of his geometry might be flawed, or worse yet unnecessary, is of course an integral part of present-day establishmentarian mathematics and physics. The Fifth Postulate, that through any point only one line can be drawn parallel to a given line, has been unanimously selected as the culpable postulate, invalidated by the current understanding of relativity and cosmology at the larger scales of our universe.

Long before that, mathematicians had begun exploring alternative geometrical possibilities deriving from the elimination of this assumption, after repeatedly failing to reduce it to a provable theorem. This was before there was any inkling that we might actually live in such an alternative universe.2 Gauss, however, actually attempted measurements employing light signals to determine on empirical evidence whether that might be the case. With the advent of Einstein's relativity, bold conjectures of a combined spacetime exhibiting strange geometrical properties have been totally accepted by the scientific community, so that alternative-fifth-postulate geometries thrive. Notwithstanding this feeding frenzy on the Fifth Postulate, convincing evidence that another of Euclid's postulates is invalid continues to be denied.

Relativity gives the analytic work of those pioneering mathematicians a context of immediate relevance, so it should not be surprising that their work has been re-evaluated with renewed interest. These earlier discoveries concerning viable geometries not requiring Euclid's Fifth Postulate revitalized mathematical physics.

One must note that even in the general theory of relativity, physical experiments are always considered as being conducted within locally Lorentz reference frames. What this means is that even though an observer may experience wild gyrations of acceleration due to gravitation or his own rocket engines, at each moment in time it is only his instantaneous velocity relative to what is being observed that is pertinent to the geometry of his current observations. This is where one must begin if the objective is to map observations between oneself and other observers in relative motion. So the Lorentz geometry of special relativity would seem to be the local geometry of choice. This has been thought to involve a flat spacetime, but it is hardly without distortion, as the author has discussed elsewhere. In particular, relativistic aberration distorts the directions of objects in one frame of reference relative to where those objects are seen in the other. The coordinate axes of the other observer are not immune to this distortion.
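The directional distortion in question is the standard aberration formula of special relativity. A minimal numeric sketch (the velocity chosen here is merely illustrative) shows how a line of sight that is perpendicular to the relative motion in one frame is tilted in the other:

```python
import math

def aberrated_angle(theta, beta):
    """Relativistic aberration: an angle theta (radians, measured from the
    direction of relative motion) in one frame maps to theta' in a frame
    moving with speed beta (in units of c) along that direction."""
    cos_tp = (math.cos(theta) - beta) / (1.0 - beta * math.cos(theta))
    return math.acos(cos_tp)

# A line of sight perpendicular to the motion (90 degrees) in one frame...
beta = 0.6
theta_prime = aberrated_angle(math.pi / 2, beta)
print(math.degrees(theta_prime))  # ~126.87 degrees: the "perpendicular" tilts
```

At β = 0.6 a 90° line of sight is seen at roughly 127°, which is the sense in which the perpendicular spatial axes of relatively moving observers fail to align.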

Let us look at Euclid's five postulates and attempt to determine for ourselves which one seems most likely to be at odds with such observational inferences made from Lorentz reference frames. Here are all five postulates3:

  1. Only one straight line can be drawn between any two points.
  2. A finite straight line can be extended indefinitely.
  3. Only one circle of a given radius can be centered at a given point.
  4. Through a point at a distance from a given line there is only one line that can be drawn through the point that is perpendicular to the given line.
  5. Through a point at a distance from a given line there is only one line that can be drawn that is parallel to the given line.4

In view of the apparent directional distortions of the three perpendiculars that constitute the spatial axes of Lorentz reference frames of various observers in relative motion, one can but wonder why there has been this preoccupation with the Fifth Postulate at all. What we have found is that each of the possible coincident observers with unique relative velocities would witness all other observers' perpendicular directions to be misaligned with regard to his own. Parallel lines of sight in one frame of reference would remain parallel for the others, although they would in concert be pointing off in other directions.

So it seems self-evident that to make sense of the coordination of geometrical observations and constructions between relatively moving observers, we must reject the Fourth Postulate! It seems to the author that we may even need a new theory of perpendiculars. But then his elder sister did nickname him "Perpendicular" (Perpy for short), so maybe such stigmata warp one's sense of geometrical rectitude.

On that charge I think I will claim my Fifth Amendment right.


1 Written by Marius Lie's friend and collaborator Friedrich Engel at his death. The quote is provided gratuitously as being of possible interest to this audience.

2 Robert Bonola, Non-Euclidean Geometry, Dover, New York (1955), originally published 1914. Supplements within this book contain "The Theory of Parallels" by Nicholas Lobachevski, and "The Science of Absolute Space" by John Bolyai. The book also provides a context for the pioneering efforts of such names as Gerolamo Saccheri (1667-1733), Johann Lambert (1728-1777), Adrien Legendre (1752-1833), Wolfgang Bolyai (1775-1856), Friedrich Wachter (1792-1817), Bernhard Thibaut (1776-1832), Karl Gauss (1777-1855), Ferdinand Schweikart (1780-1859), Franz Taurinus (1794-1874), Nicholas Lobachevski (1793-1856), John Bolyai (1802-1860), B. Riemann (1826-1866), Ludwig Helmholtz (1821-1894), and Marius Lie (1842-1899).

3 This version involves only a slight rephrasing of those given by Sir Thomas Heath in The Elements of Euclid. Changes parallel Playfair's rephrasing of the Fifth Postulate.

4 Euclid's Fifth Postulate as originally translated involved interior angles: that if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the straight lines, if produced indefinitely, will meet on that side on which the angles are less than two right angles. In 1795, John Playfair (1748-1819) offered the alternative version given above, which of course gives rise to the identical geometry of Euclid. It is Playfair's version of the Fifth Postulate that most often appears in discussions.


Wednesday, May 30, 2007

The Theoretical Significance of a Logarithmic Distance-Redshift Relationship

Fred Vaughan headshot by Fred Vaughan

There is something very compelling about a logarithmic functional form for the distance-redshift relation in observational cosmology. In fact, it is so compelling as to seem logically necessary as the form of that relationship - whether or not that fact is generally acknowledged, which of course it is not.

To adequately understand this, let us look at what is involved in light being redshifted along a propagation path between emission and observation. Suppose there is an observer at point A, for which a telescope on earth would suffice as an instance. And suppose there is an ensemble of atoms in a star in a distant galaxy, which we will refer to as point C, that emits light at a specific wavelength associated with the spectrum of a particular element. These atoms emit photons that can ultimately be observed by the telescope at A. If there is a distance-related redshift in the spacetime where all this takes place, then the wavelength λ_A observed at A will be related to the wavelength λ_C emitted at location C according to the redshift definition:

Z_AC = (λ_A − λ_C) / λ_C
This is true no matter what the separation between A and C, or anything else; it's just a definition. For physical reasons Z_AC must be a continuously increasing function of the separation AC. So, let us define the redshift-related parameter ζ(d) as a continuous function of the separation d = AC as follows:
ζ(d) = Z_AC + 1 = λ_A / λ_C

Since ζ(d) applies for any separation, we should be able to place an observer at any point B along the light path from C to A, where d_1 = AB and d_2 = BC, with the observed radiation exhibiting redshifts as follows:

ζ(d_1) = λ_A / λ_B and ζ(d_2) = λ_B / λ_C
Therefore, over the total distance d = d_1 + d_2 the following relation must apply:
ζ(d_1 + d_2) = ζ(d_1) · ζ(d_2),
and as a necessary consequence of this relation (the exponential being the only continuous solution of such a functional equation), we must have:
ζ(d) = e^(αd) = e^(α(d_1 + d_2)).
And, of course, the inverse functionality must be:
d(ζ) = (1/α) ln(ζ)
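As a quick sanity check, a minimal numeric sketch (the constant α here is merely illustrative) confirms that the exponential form composes multiplicatively over a split light path and that the logarithm recovers the distance:

```python
import math

alpha = 2.3e-4  # illustrative redshift rate per unit distance (not a measured value)

def zeta(d):
    """Redshift factor Z + 1 = lambda_observed / lambda_emitted at separation d."""
    return math.exp(alpha * d)

d1, d2 = 1200.0, 3400.0
# Composition over a split light path: the factors must multiply.
assert math.isclose(zeta(d1 + d2), zeta(d1) * zeta(d2))

# Inverse relation: distance recovered from the observed factor.
def distance(z_plus_1):
    return math.log(z_plus_1) / alpha

assert math.isclose(distance(zeta(d1 + d2)), d1 + d2)
```

Any functional form other than the exponential would fail the first assertion, which is the whole force of the argument above.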

The "standard model" embraces a broad class of disparate alternatives loosely associated by adherence to Hubble's hypothesis and one form or another of Einstein's theory of general relativity. The Einstein–de Sitter model is but one of the simpler of these alternatives, exhibiting a "flat" spacetime, because of which it is frequently discussed for didactic purposes, although it is generally disparaged as a somewhat naïve candidate for serious consideration. This short shrift seems ill-advised to the author in light of the interesting fact that a key feature of the Einstein–de Sitter model (unlike the others considered more viable) is that its distance-redshift relation is given by the logarithmic form.

Although the Einstein–de Sitter model is virtually never considered by current cosmologists a viable contender for ultimate acceptance, its logarithmic form of the distance-redshift relation is generally used for convenience in analyzing associated phenomena, because it so closely fits the actual data as distances to observed objects increase. Strange, isn't it?

The preceding discussion explains the situation depicted in the figure below.

[Figure 1: logarithmic distance-redshift diagram]

It is worth considering what would be implied by a relationship other than one involving the logarithm: What is involved is whether or not homogeneity applies to this relationship.

The presumed failure, however improbable, of the logic we have described above is what has contributed so substantially to presumptions about the supposed evolution of developments in our universe. But if one decouples redshifting as an observed phenomenon (whatever its cause) from constraints imposed by whichever cosmological theory purports to explain it, then the logarithmic relationship to distance continues to make logical sense, as we have shown above. We will be told, of course, that to presume that distances could be linearly additive if space itself is nonlinearly distorted would itself be an improbability. But would it? Even along a curved path, the distance along that path is linearly additive; this is the basis for the integration of distance along infinitesimal line segments.

In the next figure we have drawn a situation similar to that shown in figure 1, except that space is such that the line-of-sight distance is curved along the light path. In this case, in addition to observers A and C, we have an observer B capable of emitting light from a separate source at the moment of his observation of the light from C, a source set to resonate at precisely the same frequency (wavelength) as the radiation he observes. Let us analyze the possibilities here.

[Figure 2: logarithmic distance-redshift diagram]

As before, we must now have ζ(d_1) = λ_A / λ_B and ζ(d_2) = λ_B1 / λ_C. This would seem to apply by reason of the definition of redshift, if the source of the radiation of wavelength λ_B1 is indeed set up to equal λ_B. This can be verified by digital communication from B to A, independent of the redshift impact on that link, if A's antenna is properly tunable. Then, as long as there is a general formula applicable throughout space and time relating redshift and distance,

d = f(Z+1) and Z+1 = ζ(d)
If the functional forms of the inverses f(x) and ζ(y) are independent of position in spacetime (i.e., if spacetime is indeed homogeneous), then the logarithmic/exponential relationships must apply. It is the ad hoc denial of this cosmological principle, which had reigned supreme since Copernicus, that empowers the standard model with the freedom to deny an otherwise logical premise.


Wednesday, January 31, 2007

Problems With Yahoo! Groups

Richard May headshot by Richard May

What's most disconcerting to me is receiving my own messages from Yahoo! Groups before I've sent them, or even written them! I guess Yahoo's services are getting somewhat random temporally; maybe Yahoo is harnessing entropy to save money.

Some of the Yahoo! Groups messages actually disappear, vanishing like information lost by Hawking radiation from black holes. The information/energy actually re-emerges in other brane worlds, as Yahoo! Groups advertisements. You may have noticed that no matter how bad Yahoo's services become, the ads always work just fine.

Some of Yahoo's ads in our brane world apparently run on reconfigured bits of information lost in message disappearances in other brane worlds or parallel universes.

May-Tzu


Thursday, January 25, 2007

Cosmic Coincidences?

by Fred Vaughan

Fred Vaughan

There seem always to be these nearly insurmountable epistemological traps and barriers to overcome. We seem always to be peering down the wrong end of telescopes until, very occasionally, by some accident of fate we run off yelling "Eureka! Eureka!" like demented hippies in the backwoods of California. Our various highly evolved linguistic and mathematical skills get applied primarily to justifying the particular inanity that happens to be in vogue - never to actually changing paradigms. There seem always to be mathematical mappings of what is known of the unknowable depths of our universe onto the shallow waters of our intellectual wading preference, but the veracity of such mappings is warranted no more than formal propriety justifies aphorisms depicted in poesy.

Consider what we know of our universe with regard to its composition as a very diffuse but impure hydrogenous plasma. Yes, as surely as to a first approximation we ourselves are mere bags of salt water, the universe is a hydrogenous plasma, both being pretty damn good approximations! With only this much firmly in our grasp, we must resist urges to charge off like rabid string theorists to find the big end of some telescope, waving at cameras and grabbing microphones as they go!

How diffuse? About 10⁻²⁵ grams per cubic meter. So in sifting through a cubic meter or so of universal debris at random you might find an odd proton, an electron to neutralize the concoction, and by-product neutrinos, all whizzing about at significant fractions of the speed of light. The most obvious decomposition of this plasma is that, apparently on large scales everywhere in the universe, it is 76 percent hydrogen nuclei and 24 percent helium nuclei by mass (such that there are about twelve hydrogen nuclei for each helium nucleus), with mere traces of other isotopes.
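A back-of-envelope count based on the round figures above (the 10⁻²⁵ g/m³ density and the 76/24 mass split; all values merely illustrative) runs as follows:

```python
# Back-of-envelope nucleus counts from the round figures quoted above.
M_PROTON = 1.67e-24          # grams per nucleon (proton mass)
rho = 1e-25                  # grams per cubic meter (the round figure above)

h_mass = 0.76 * rho          # hydrogen mass density
he_mass = 0.24 * rho         # helium mass density

h_per_m3 = h_mass / M_PROTON           # one nucleon per hydrogen nucleus
he_per_m3 = he_mass / (4 * M_PROTON)   # four nucleons per helium nucleus

print(f"hydrogen nuclei per m^3: {h_per_m3:.3f}")      # roughly one per twenty m^3
print(f"helium nuclei per m^3:   {he_per_m3:.4f}")     # roughly one per few hundred m^3
print(f"number ratio H:He:       {h_per_m3 / he_per_m3:.1f}")  # ~12.7
```

The number ratio of about twelve-plus hydrogen nuclei per helium nucleus falls straight out of the 76/24 mass split, since each helium nucleus carries four nucleons.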

At high temperatures helium nuclei are formed from hydrogen nuclei by nuclear fusion. (Of course, at even higher temperatures the protons which comprise the nuclei of hydrogen can be created from neutrons and positrons, with neutrinos and associated "opposites" dashing about, but let us ignore third-tier observations.) All nuclear reactions are reversible, with equilibrium percentages of each product determined by temperature. Those of us who still accept the conservation of energy - notice that most cosmologists do not - insist that if the 24 percent helium did indeed derive from a primordially pure hydrogen plasma, then the energy released would not be totally lost. This caveat holds to the extent that the universe is a closed system, which it would seem to this author to be by definition. This radiant energy, however thermalized, must therefore still be present somewhere in the universe.

Now if you go through the calculations, and they are very straightforward, you will find that the amount of radiation energy released per cubic centimeter is precisely the amount of energy invested in the microwave background radiation. All fashionable cosmological theories take this to be a mere coincidence. They tell us that annihilation associated with an unknowable primordial imbalance of matter and antimatter, right after a miracle happened, resulted in that glut of energy which today is viewed as some perversely understood "fact" of a universe supposedly at only 3 degrees Kelvin, rather than the many orders of magnitude higher temperatures observed everywhere we look! According to these theories the energy-balance coincidence is just a strange happenstance of our being here now rather than somewhere somewhat similar a billion years ago or hence! With such a perspective my confusion might have been avoided. But I don't have it!

So how "bright" should it be if this coincidental amount of radiation that we all agree is actually out there is actually out there? Well, let's think about that: on average, every hundred cubic meters or so of the universe contains evidence of these reactions having taken place. From our observation point the intensity from each reaction is diminished as 1/r², where r is the distance to each occurrence. We arrive at Olbers' paradox with the number of cubic meters increasing as the square of the distance, r². Thus, we get to the crux of the paradox when we combine these two effects for the entire universe. But of course modern cosmology resolves such difficulties by demanding a finite universe of radius R_o = c/H_o, where H_o is Hubble's constant. So we end up with a modest(?) intensity given by:

I ∝ ∫₀^(R_o) (n/r²) r² dr = n R_o
So a finite universe and a justifying Bang are made for each other. But if the redshift-distance relation is accepted as mere fact rather than some grandiose deduction from conjecture, then to the accuracy of precise observations the relation is characterized by r = R_o ln(z+1), which theorists will tell you corresponds to an "Einstein–de Sitter universe." Here we have distance given by the Hubble radius R_o times the natural log of the redshift, z, plus one. The effect of redshift is to reduce the frequency of radiation, thereby reducing its intensity by the factor 1/(z+1) = e^(−r/R_o). So in an infinite universe we would have:
I ∝ ∫₀^∞ (n/r²) r² e^(−r/R_o) dr = n R_o
Thus, identical facts can be used to justify opposite theories if you're into that.
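The point can be checked numerically: integrating source intensity over a finite universe of radius R_o with no redshift dimming, and over an indefinitely extended universe dimmed by the factor 1/(z+1) = e^(−r/R_o), gives the same total. A minimal sketch (units and the source density merely illustrative):

```python
import math

R0 = 1.0   # Hubble radius, arbitrary units
n = 1.0    # source density per unit volume, arbitrary units

def integrate(f, a, b, steps=200_000):
    """Plain midpoint rule; accurate enough for this comparison."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# Finite universe, no dimming: each shell contributes n * 4*pi*r^2 / r^2 dr.
finite = integrate(lambda r: 4 * math.pi * n, 0.0, R0)

# Infinite universe, each shell dimmed by 1/(z+1) = exp(-r/R0).
infinite = integrate(lambda r: 4 * math.pi * n * math.exp(-r / R0), 0.0, 50 * R0)

print(finite, infinite)  # both ~ 4*pi*n*R0
```

The 1/r² dimming and the r² growth of each shell cancel exactly, so the finite cutoff and the exponential redshift dimming produce the same finite total, which is the sense in which identical facts support opposite theories.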

Of course cosmology involves a mass of observations concerning a broad scope of concepts, all of which must be understood in such a way that they agree before any comprehensive theory will ever even approach some sort of validity. But, as with the preceding, there seem to be more ways of looking at each fact than initially meets the eye. Einstein's gravitation equations don't address the obvious possibility of gravitational energy suffering depredation by redshifting while being propagated. Why not? Nor, of course, should "Newton's iron sphere theorem" be taken as having any relevance once one realizes that the metaphor does not hold for a closed universe, for which there is no inside-outside surface. Here too, therefore, the observed gravitational effects of the finite universes cosmologists favor can be matched or bettered by virtually identical ones involving indefinite extension.

Are these mere cosmic coincidences? I don't think so.


Tuesday, January 23, 2007

The Nature of Understanding Nature

Richard May headshot by Richard May

As individual members of our species have genetically based cognitive limits on what they are capable of understanding, there is no reason to suppose that whole species don't also have asymptotic cognitive limits. Chimps and gorillas have no facility with the equations of Newtonian mechanics, and there are no Homo sapiens theoretical physicists with IQs between 90 and 110.

"The search [for physical laws and particles may] be over for now, placed on hold for the next civilization with the temerity to believe that people, pawns in the ultimate chess game, are smart enough to figure out the rules." George Johnson, "Why Is Fundamental Physics So Messy?", WIRED magazine, February 2007.

It may not be a matter of a civilization, i.e., a culture, having the "temerity to believe" (a feature of some human religions, not of science) that people are smart enough to figure out the rules of the universe. Actually being neurobiologically more intelligent may be required to do the next level of physics. Another species, with a more highly evolved higher brain structure and higher genetically based cognitive limits, may be required to achieve the next major theoretical synthesis.

Even ordinary human occupations have cognitive thresholds, and so there may be species-specific cognitive thresholds for the theoretical tasks required to ascend to the next level of a perhaps infinite regress of 'ultimate truths' about the universe. It is the ultimate anthropocentric hubris to presume that the Protagorean dictum "Man (as his brain is currently evolved) is the measure of all things" necessarily applies to fundamental understanding of Nature herself!

May-Tzu


Friday, December 15, 2006

Spinors

Martin Hunt headshot by Martin Hunt

Spinors
A photograph of a lithograph produced by Martin Hunt.


Tuesday, December 12, 2006

Is A Photon (just) An(other) Object?

Fred Vaughan headshot by Fred Vaughan

Fred Vaughan back in the day
the author back in the day

Even very intelligent people tend to abandon logic in dealing with concepts of the special theory of relativity, especially those involving frame independence and mutual observability - the latter of which most have never even considered. These notions derive from a common-sense presumption that a "ray of light" (read photon) emitted or detected at a given point in spacetime could have been emitted or detected by any other source or observer, respectively, that happened to be coincident at that particular instant in time. The presumption results from Einstein's insistence that the Lorentz transformation "relations must be so chosen that the law of the transmission of light in vacuo is satisfied for one and the same ray of light (and of course for every ray) with respect to…"1 coincident observers in uniform relative motion. Thus a photon was presumed to be a mutually observable real object. Well, it isn't.

Subsequent to Einstein's coining of this phrase concerning "the law of the transmission of light" in the first decade of the last century, much that was common sense about light had to be reevaluated and corrected because of light's notoriously non-commonsensical behavior. Einstein himself was a major contributor to that revised understanding, which did not near completion for another twenty years. In fact, when he received the Nobel Prize for physics in 1921, it was for his powerful insights into the nature of light, and the "photoelectric" effect in particular, which involves the interaction of light and matter. In bestowing that honor, no mention was made of his more exhaustive efforts in relativity, and most certainly not of this "law" that gave rise to frame independence and mutual observability.

There have been notable challenges to the doctrine. For example, as early as 1926, in discussing the "nature of light," Gilbert Lewis - who originally coined the term "photon" - stated, "…we can no longer consider one atom the active agent and the other as an accidental and passive recipient, but both atoms must play coordinate and symmetrical parts in the process of exchange."2 So the presumption that a photon of light is just an object passing a point in spacetime, available for inspection by any observer (rather than by a specific emitter/absorber pair), had become extremely questionable within a very few years of Einstein's having coined his own catch phrase. Re-evaluation of whatever concepts depend upon it became an outstanding obligation, but in this case it was an obligation never addressed by those accepting the established interpretation of the Lorentz equations. Lewis's position was notably cited by Wheeler and Feynman in their analyses of light as an inter-particle interaction, in contrast to its being just another object or "wave/particle duality."3 But such interaction concepts with regard to the transmission of light do not seem ever to have been addressed specifically in the context of re-examining this cornerstone of the established interpretation of the Lorentz equations. Cramer did, however, address this misconception in his Transactional Interpretation of quantum mechanics.4

Einstein's and Minkowski's interpretation of the Lorentz equations postulates that events involving the emission, refraction or absorption of light in one frame of reference must be observable in these same senses by observers in any momentarily coincident frame of reference using their own equipment. This interchangeability insists not only on the possibility of coincident observation by relatively moving observers, but posits coincident observation of the very same events, which denies the unique role of the observer (absorber) in effecting Lewis's ultimate observation transaction. To instruct us with regard to the significance of this mutuality demand with respect to the interpretation of the Lorentz equations, Aharoni lays out the scheme very succinctly as follows: "Had an event not possessed absolute significance there could be no question of transforming its coordinates from one frame to another."5 So quite apart from the experimentally verified Lorentz relationship between observed events, a velocity addition formula was conjectured with no tests for refutation that ennobled the equations as a coordinate "transformation."6 So the very meaning of the Lorentz transformation equations as a transformation of one event rather than a correspondence between two events is what is at issue and resolution of this matter is of major epistemological significance.
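For concreteness, the standard textbook form of the Lorentz transformation, the very mapping whose interpretation (one absolute event versus two correspondent events) is at issue here, can be sketched in a few lines; a light-cone event is used to show that "the law of the transmission of light" is preserved under it (the velocity chosen is merely illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(t, x, beta):
    """Standard Lorentz transformation of an event (t, x) into a frame
    moving at velocity beta*c along +x. Whether this maps one absolute
    event or relates two distinct correspondent events is the question
    raised in the text; the algebra itself is uncontroversial."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    t_prime = gamma * (t - beta * x / C)
    x_prime = gamma * (x - beta * C * t)
    return t_prime, x_prime

# An event on a light ray: x = c*t stays on the light cone in every frame.
t, x = 2.0, 2.0 * C
tp, xp = lorentz(t, x, 0.6)
print(math.isclose(xp, C * tp))  # True
```

The algebra guarantees that anything on the light cone stays on the light cone, which is precisely why the equations were read as transforming "one and the same ray" rather than relating distinct emitter/absorber transactions.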

Certainly, without experimental verification these equations ought not have been presumed, because of vague similarities to other mathematical forms, to fall into a category of coordinate conversion of identical events rather than a simpler correspondence between unique events related by the nature of observation. The latter is in more or less the same sense that observation is handled in quantum theories where the observer and what is observed are inextricably entwined. This interpretation would not violate other verified aspects of relativity; it would merely indicate that an event observable now by one observer corresponds to a different event on the world line of the source observable now by another. It would be in complete agreement with Einstein's insistence that the results of Lorentz calculations be considered as mensurable coordinate values. Both events would be observable by the other observer at some time, just not while in coincidence. This interpretation is similar to that of the parallax relationship of everyday experience. The Lorentz equations are at least as directly related to such a parallax translation of coordinates interpretation as they are to the usual didactic association with skew rotation employed in typical relativity texts.

The differences between these alternative interpretations of the mapping of events provided by Lorentz's equations must be subject to the usual refutation/verification procedures of experimental physics. So let us consider requirements on experiments that could determine whether such Lorentz-transformed events (more correctly "Lorentz-correspondent events") can possibly be the very same or must be distinct one from the other so as to comply with, or violate, the conjectured frame independence and mutual observability hypotheses.

An adequate test requires each of two relatively moving observers to obtain two types of data as shown in the figure below. The data must include that which an observer himself (or a relatively stationary synchronized assistant) observes directly and that observed and communicated at coincidence by the other observer or his synchronized assistant who will also be in uniform relative motion with the same velocity. The experiment will, furthermore, involve both measurements of electromagnetic emission and absorption events occurring exclusively within each observer's own apparatus and measurements involving interactive phenomena with the atoms and molecules of the apparatus of the other observer. Altogether this requires comparison of four categories of observation as shown.

The six relationships among these four types of experimental data pertinent to refutation of frame independence and mutual observability are also shown. Diagonally related observation types (I and IV, as well as II and III) pertain to observations of "common" events (or more explicitly to one specific event occurring on one particular object) by relatively moving observers and are presumed by theory to be related by the Lorentz equations. Note that these are the proper subject matter of the special theory, but it has not been feasible to conduct such experiments. Horizontally related observation types (I and III, as well as II and IV) pertain to observations of analogous (i.e., similar but definitely not the same) events in the other frame of reference. Legitimacy of the assumed analogs depends upon the apparatus of each observer being constructed in accordance with identical drawings, and upon initiation of the identical experimental procedures by the observers being synchronized so as to maintain symmetry. These are sometimes erroneously assumed to exhibit a Lorentz relationship ostensibly pertaining to that between II and III (and presumably I and IV) and thereby to have confirmed length contraction and time dilation. Data obtained in the horizontal categories (I and III, as well as II and IV) require communication between observers, with coincident assistants involved as appropriate for a definitive comparison. The relationship between I and III (and between II and IV) would seem by covariance to be an identity, but this is counter to the established interpretation, in which the other's clocks are presumed dilated, etc. Performing all these tests would substantiate or falsify the conjecture concerning light being just another object upon which so much of Einstein's and Minkowski's interpretation rests.

relativity observations possibilities
Four categories of observations possible in tests of relativity and their various relationships

Although experiments are still not feasible for comprehensively comparing all these measurements, one can at the very least use logical consistency as a criterion of validity for the various interpretations of Lorentz's equations. The author believes there to be a serious lack in the required consistency.


1 A. Einstein, Relativity - The Special and the General Theory, Crown, New York, p. 32. (1961)

2 G. N. Lewis, "The Nature of Light," Proc. N. A. S., Vol. 12, pp. 23-24 (1926)

3 J. A. Wheeler and R. P. Feynman, "Interaction with the Absorber as the Mechanism of Radiation," Rev. Mod. Phys., 17, 157 (1945); and J. A. Wheeler and R. P. Feynman, "Classical Electrodynamics in Terms of Direct Interparticle Action," Rev. Mod. Phys., 21, 425 (1949).

4 J. Cramer, "The Transactional Interpretation of Quantum Mechanics," Rev. Mod. Phys., 58, 647-687 (1986).

5 J. Aharoni, The Special Theory of Relativity, 2nd Ed., Dover, New York (1985), p. 38.

6 See R. F. Vaughan, Aberrations of Relativity (on sale through lulu publications on ReasonAndRhyme.com). The specific articles referenced are: "Are There Inevitable Uncertainties in Our Maps of the Universe," pp. 56-60; "The Certainty Principle," pp. 61-62; "Learning Addition All Over Again," pp. 63-68.


Friday, November 17, 2006

Deirdre and Alana Poe and Their Tell-Tale Hearts

Fred Vaughan headshot by Fred Vaughan

Two nicely-endowed identical twins - lovely girls - decided to strip relativity of its mystery by resolving once and for all the riddle of the "twin paradox." They began by spending considerably on a spacious, user-friendly ion blaster equipped with an exercise room, bathroom, makeup room, and other amenities, so that the lifestyle of the traveling twin could remain equivalent to that of her sister left behind. In addition, they spent even more to instrument themselves to the hilt - medical equipment costs being what they were in the US at the time. This involved specially developed brassieres with sensitive nonintrusive transducers in the left cups that could detect each heartbeat and powerful transmitters to broadcast each coded beep to the ends of the universe. In addition each maintained a receiver antenna for her own and the other's coded beeps, with a readout of the cumulative heartbeats of both twins. When the instrumentation was so well implemented that it no longer itched and could not be seen under a silk gown, they were satisfied.

Perhaps they were operating under false assumptions. For they had come to believe that without mishap or sickness identical twins should have identical numbers of heartbeats in their lifetimes, and that on average they would have the same number of heartbeats each year. They tested this hypothesis for a couple of years early in their lives and found that whereas Deirdre had 31,600,029 beats between their 16th and 17th birthdays, Alana had 31,558,371. But then Deirdre had had her first fling somewhat later than Alana, and they were gratified that between their 17th and 18th birthdays, Deirdre had 31,579,181 and Alana had 31,579,219. So the idea seemed to work fairly well as biological clocks go. By then the preparations of the ion rocket were complete and so, being inhibited by God's not playing dice, they decided to draw straws. Deirdre drew the short straw, so she would have to stay home and watch. They reset their counters to zero, fastened their bras, and with no more ado Alana was off!

Deirdre watched with some alarm as her own counter ticked along at its usual rate while Alana's crept along, slowing ever so methodically so that at the end of one year it read only 27,932,420 and during the second year it registered only 23,684,400 more ticks. Deirdre was happy that during the third (and final) year of the outward bound leg of Alana's mission she had 23,693,767 beats. At this point Deirdre's readout said 94,737,600 whereas Alana's read only 75,310,587. For the next year Deirdre worried because the number of heartbeats from her beloved sister did not increase as dramatically as she had hoped. But eventually it began picking up and by the end of the fourth year Deirdre was worrying about whether Alana's heart could hold up under the stress of the increasing toll of heartbeats.
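The counts Deirdre receives lag behind her own not because of time dilation alone but also because each beep takes progressively longer to arrive as Alana recedes; together these effects give the relativistic Doppler factor sqrt((1 - β)/(1 + β)). A minimal Python sketch, assuming a hypothetical coasting speed of 0.28c (a value chosen simply because, applied to Deirdre's rest rate of 31,579,200 beats per year from her three-year readout, it reproduces the story's second-year figure):

```python
import math

def doppler_factor(beta):
    """Relativistic Doppler factor for a source receding at speed beta*c:
    received rate = emitted rate * sqrt((1 - beta) / (1 + beta))."""
    return math.sqrt((1 - beta) / (1 + beta))

# Deirdre's rest rate, taken from her three-year readout: 94,737,600 / 3.
REST_RATE = 31_579_200  # beats per year

# Hypothetical coasting speed; 0.28c gives a Doppler factor of exactly 3/4.
beta = 0.28
received = REST_RATE * doppler_factor(beta)
print(f"{received:,.0f} beats received per year")  # ≈ 23,684,400
```

On the return leg the sign of β flips, the factor becomes sqrt((1 + β)/(1 - β)) > 1, and the received count races ahead - which is why Deirdre sees the rate pick up during the fourth year.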

The spaceship was sighted at an extreme distance some five and a half years after blast off, and the sisters became ecstatic at the prospect of giggling together once again as they had when they were both young. Once they were in voice contact, they no longer watched their readouts as they had so assiduously before. Upon touchdown, Alana stepped through the hatch opening, beamed, and said, "One tiny step for me and a giant one for womankind!" Whereupon the sisters embraced with giggles enough to make up for years of loneliness. Their beepers raced.

Luckily a couple of thoughtful - though somewhat insensitive - male geek scientists who had become fascinated with the story (and instrumentation, to say nothing of the attractive girls) ripped the bras off the women to stop the beeping and read the meters at this historic point. The bewildered men looked from the now bare-breasted women, back to their readouts, and back again, over and over again in excitement. They shook themselves and looked again. Finally, in total disarray and confusion one of the men asked the other, "Does this mean there's more to life than just so many heart beats?"

The other thought for a while and said finally, "I think it means that if life, or time, or whatever you want to call it is measured as a number of significant events such as heartbeats, then covariance must apply and that quantity must be preserved across reference frames - but damn those twins are beautiful, aren't they? I think the younger one wants me!" he added with a wink.

The twins held their breasts modestly and looked at the men and back at each other in utter disbelief and amazement. Rapidly their biological clocks pheromonally re-synchronized and began pulsing in unison.

"I've been away a long time," Alana said wistfully.

"But not as long as you've been gone," Deirdre stated as a final scientific wrap-up to the just-completed experiment, and then with much more enthusiasm she asked, "Which one do you want?"

Deirdre and Alana sketch
sketch by the author
