Thursday, November 15, 2007

Outsmarting Your Ass

by Frank Luger

The salt merchant’s ass was so laden and so thirsty that jumping into the river became inevitable. Behold! Thirst quenched and burden much eased. Alas, such repeated smartness spoiled enough merchandise to bring lash and curse - uselessly. Then, wisdom saw the ass laden with enough sponge to match the usual weight. Animal intelligence or not, this is how Man outsmarted... his ass.

Although anecdotal, the story is true. The wise merchant was none other than Thales (ca. 624-548 B.C.E.), the first great thinker in ancient Greece. Regrettably, the storyteller Plutarch fails to mention who the ass was.


Thursday, October 04, 2007

Quantum Mechanics and Objective Reality

by Frank Luger

The main features of quantum theory, such as the wave function, the uncertainty principle, wave-particle duality, indeterminacy, probabilistic behavior, exchange forces, spin, quarks and their various flavors and charms, etc., defy intuition and common sense. It is often argued that, since they are abstractions of one sort or another, maybe they are figments of overactive imaginations. Not quite, counters the theoretical physicist: although there is a tough road from mathematical modeling to scientific fact, there is overwhelming experimental and other evidence in favor of quantum mechanics as objective reality.

To see why one can state that the world at the tiny magnitudes of microphysics is as quantum theory proposes, it is instructive to consider the wave function, one of the main representatives in question. Although a mathematical abstraction, the wave function corresponding to a physical system contains all the information that is obtainable about the system. For example, if a moving particle acted on by a force is represented by a wave function ψ, then measurement of a physical quantity, such as momentum, always yields an eigenvalue of the associated momentum operator. In general, the outcome of the measurement is not precisely predictable and is not the same for identically prepared systems; but each possible outcome, or eigenvalue, has a certain probability of occurring.

This probability is given by the squared modulus of the scalar product of the normalized wave function ψ, or state vector, and the eigenvector of the operator corresponding to that particular eigenvalue. Furthermore, not all operators representing physical quantities commute; that is, sometimes AB ≠ BA, where multiplication of the operators A and B corresponds to making two measurements in the order indicated. These unusual but unambiguous postulates, which associate probabilities with geometric properties of vectors in an abstract space, have great predictive and explanatory value and, at the same time, many implications that confound our intuition.
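
To make these postulates concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument) using a spin-1/2 system: the Born-rule probability of each eigenvalue is the squared modulus of the scalar product of ψ with the corresponding eigenvector, and the two spin operators do not commute. The matrices and the state are illustrative assumptions.

```python
# A minimal numerical sketch of the Born rule and of non-commuting operators,
# using a spin-1/2 system (Pauli matrices); all numbers are illustrative only.
import numpy as np

# Pauli matrices serve as operators for two spin components.
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A normalized state vector |psi> (an arbitrary superposition).
psi = np.array([3, 4j], dtype=complex)
psi = psi / np.linalg.norm(psi)

# Eigenvalues and eigenvectors of Sz; a measurement of Sz yields an eigenvalue.
eigvals, eigvecs = np.linalg.eigh(Sz)

# Born rule: the probability of each eigenvalue is the squared modulus of the
# scalar product of the corresponding eigenvector with |psi>.
for val, vec in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(vec, psi)) ** 2
    print(f"eigenvalue {val:+.0f}: probability {prob:.3f}")

# The probabilities sum to 1, but the individual outcome is not predictable.

# Non-commuting operators: Sx Sz differs from Sz Sx.
print("Sx Sz == Sz Sx ?", np.allclose(Sx @ Sz, Sz @ Sx))   # False
```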

Because of the usefulness of the wave function in generating experimentally testable predictions, it appears that a mathematical abstraction here takes on a reality equivalent to that of concrete events, as envisioned by Pythagorean and Platonic philosophies. However, there is a direct connection between the abstraction and observable events, and there has not been much tendency in physics to place the wave function in some realm of ideal forms, platonic or otherwise.

A similar state of affairs already existed in classical electrodynamics, and some physicists remarked that Maxwell’s laws were nothing more than Maxwell’s equations. Perhaps because radiation always had been regarded as immaterial with wave properties, this point of view was not quite as disturbing as it became when matter waves had to be considered. In both cases, however, there does appear to be a problem in explaining how mathematical symbolism can do so much.

Platonic implications can be avoided if we look more closely at the actual, concrete role of the wave function in the theory. If viewed as a conceptual tool, rather than something given, the idea of a wave function containing information about observable events is not so strange. The meaning of the wave function is defined by its role in the theory, which after all is a matter of theorists interacting with events. A clue to this purely conceptual, computational role is the fact that a wave function can be multiplied by an arbitrary phase factor without changing its physical significance in any way. Also, the fact that it is a complex-valued function discourages one from interpreting it as something with spatial and temporal wave properties.
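
As a small follow-up to the phase-factor remark, the snippet below (again only an illustrative sketch, with made-up numbers) checks that multiplying ψ by an arbitrary phase factor exp(iθ) leaves every Born-rule probability unchanged.

```python
# Multiplying a wave function by exp(i*theta) changes nothing observable:
# every Born-rule probability stays the same. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
psi = np.array([0.6, 0.8j], dtype=complex)           # a normalized state
e = np.array([1, 0], dtype=complex)                  # some eigenvector

for theta in rng.uniform(0, 2 * np.pi, size=3):
    psi_phase = np.exp(1j * theta) * psi             # physically the same state
    p1 = abs(np.vdot(e, psi)) ** 2
    p2 = abs(np.vdot(e, psi_phase)) ** 2
    print(f"theta={theta:.2f}  |<e|psi>|^2={p1:.3f}  with phase factor: {p2:.3f}")
```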

While the search for causes has diminished in modern physics, the success of microphysics in explaining the properties of complex structures such as atoms, molecules, crystals, and metals has increased markedly. If causality is conceived, as it once was, in terms of collisions among particles with well-defined trajectories, then it has no meaning at the quantum level. However, a remarkable consistency in the evolution of identical structures with characteristic properties is apparent in nature. Quantum mechanics goes far toward explaining how these composite systems are built up from more elementary components. Although the once predominant mechanistic view of colliding particles is no longer tenable, its decline has been accompanied by success in the actual achievement of its original aims.

Terms such as causality and determinism still are used occasionally by physicists, but their connotations are quite different from what they were in earlier times. The formalism of quantum theory implies that determinism characterizes states, but not observables. The state of the system described by a wave function ψ evolves in time in a strictly deterministic manner, according to the Schrödinger equation, provided that a measurement is not made during that period of time. This usage of determinism actually is equivalent to the statement that the Schrödinger equation is a first-order differential equation with respect to time.
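
A brief sketch of that deterministic evolution, under the assumption of a made-up two-level Hamiltonian and units with ħ = 1: because the Schrödinger equation is first order in time, the state at time t is fixed entirely by the initial state, and identically prepared systems evolve identically.

```python
# Deterministic evolution between measurements, for an illustrative Hermitian
# two-level Hamiltonian (hbar = 1): psi(t) = exp(-i H t) psi(0) is completely
# fixed by the initial state psi(0).
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)   # made-up Hermitian H
evals, evecs = np.linalg.eigh(H)

def evolve(psi, t):
    """Unitary propagation by time t via the spectral decomposition of H."""
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_a = evolve(psi0, 2.0)
psi_b = evolve(psi0, 2.0)            # an identically prepared second system
print(np.allclose(psi_a, psi_b))     # True: the evolution is deterministic
print(np.linalg.norm(psi_a))         # 1.0: the norm (total probability) is kept
```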

In contrast, if at some instant a measurement of a physical quantity is made, the possible values that might be obtained are represented by a probability distribution. Furthermore, a measuring instrument introduces an uncontrollable disturbance into the system, and, afterwards, that system is in a different state that is not precisely predictable. This situation led Max Born (1882-1970) to make a famous statement that the motion of particles conforms to the laws of probability, but the probability itself is propagated in accordance with the law of causality. The initial astonishment produced by this unforeseen turn of events was shortly followed by an even greater astonishment when these unconventional ideas proved to be extremely workable in practice.

Consider more closely the role of causality and of probability in the theory. The relationship ψ1 → ψ2, where ψ1 and ψ2 are states at successive instants in time, is completely determined in the theory, provided no measurement takes place during the interval. Moreover, if a measurement is made at some instant, the relationships ψ1 → f(x) and ψ2 → g(x), where f(x) and g(x) are probability distributions of an observable, also are completely determined. The new and strange features of the theory are embodied in the facts that (a) these probability distributions, in general, have nonzero variance, and (b) if the relation ψ1 → f(x) is in fact exhibited by making a measurement, then the relation ψ1 → ψ2 no longer holds.
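
The following toy calculation (illustrative numbers only, with the measurement idealized as projection onto the eigenvector actually observed) shows point (b) numerically: inserting a measurement at an intermediate time breaks the otherwise deterministic relation ψ1 → ψ2.

```python
# A toy illustration of point (b): a measurement at an intermediate time,
# modeled as projection onto the observed eigenvector, changes the later state.
# All numbers are made up; hbar = 1.
import numpy as np

H = np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)
evals, evecs = np.linalg.eigh(H)
U = lambda t: evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi1 = np.array([0.6, 0.8], dtype=complex)      # normalized initial state

# Undisturbed evolution over a total time 2.0: psi1 -> psi2.
psi2 = U(2.0) @ psi1

# With a measurement of S_z at t = 1.0 that happened to give the +1 outcome:
# project onto that eigenvector, renormalize, then keep evolving.
mid = U(1.0) @ psi1
plus = np.array([1.0, 0.0], dtype=complex)      # +1 eigenvector of S_z
collapsed = plus * np.vdot(plus, mid)
collapsed = collapsed / np.linalg.norm(collapsed)
psi2_measured = U(1.0) @ collapsed

print(np.allclose(psi2, psi2_measured))         # False: psi1 -> psi2 is broken
```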

It is difficult to grasp intuitively that the probabilities referred to are those of measurements that might be obtained on an individual system using a perfectly reliable instrument, and that they seemingly come from nowhere. Expressed mathematically, the only appropriate probability space corresponding to the probability distribution of a quantum mechanical observable is provided by the real line, its measurable subsets, and the probability measure determined by the wave function; and that structure is not, as is usually the case, induced by an underlying probability space having physical significance. Despite intensive search over many decades, no such underlying probability space has ever been found, and it is now generally agreed that one does not exist. This search somewhat resembled the frustrating nineteenth-century attempts to find an ether, a hypothetical universal space-filling medium propagating radiation.

Nevertheless, when matters are expressed as above, it appears that quite a lot about the theory is deterministic. Furthermore, this viewpoint discourages the tendency to confuse indeterminacy with an inability of scientists to make effective contact with events. Probability distributions of measurements are objective, concrete things. Determinism fails when applied to the concept of an elementary corpuscle simultaneously having a definite position and a definite momentum, a condition never observed experimentally.

Quantum theory, as emphasized previously, has been applied with excellent results to a broad range of phenomena; for example, the periodic table of the elements at last became understandable, and the foundations of all inorganic chemistry, much organic chemistry, and solid state physics were firmly established. Contrary to the expectations of some critics, the theory definitely has not encouraged a view of the world ruled by a capricious indeterminacy but, on the contrary, has greatly enhanced the coherence and explanatory power of science.

Still, the above turn of events in the age-old problem of causality had not been anticipated. The fact that the implications of the theory conflicted in such a radical way with previous philosophical views was a departure from tradition that probably to this date has not been fully assimilated.

Eventually, one may hope, concepts such as causality, system, interaction, and interdependence will be extended and enriched by the findings of quantum physics. Perhaps we are already beginning to see this happen and to appreciate that the new viewpoint does not entail as much of a loss as we once believed. In both classical physics and quantum physics a list of well-defined dynamical variables is associated with each system, and in some respects the quantum mechanical description by state vectors is analogous to a phase-space representation in classical statistical mechanics. Formally, the dynamical variables play a different role in the two theories, but in both cases their specification exhausts the observable properties of the system. The probabilistic aspects of quantum theory, as stressed before, certainly do not imply an inability to find lawfulness and orderliness in nature.

Although quantum mechanical predictions of, for example, position are inherently probabilistic, in many instances a particle is sufficiently localized that probabilities of it appearing outside a restricted range are essentially zero, that is, the dispersion of the distribution is small. It becomes meaningful, for example, to speak of shells and subshells in atomic structure. Overall, it appears that abandonment of the rather limited classical cause-and-effect scheme is a minimal loss compared to the far greater gains achieved by the theory as a whole.

Like many ideas in quantum theory, the celebrated Heisenberg uncertainty principle becomes less mysterious if examined in its concrete role in the theory. The uncertainty principle is not an insight which preceded the theory, but is built into its structure, that is, it can be derived from the abstract formalism. Heisenberg’s matrix mechanics and its success in accounting for experimental results came first; the uncertainty principle and its implications then were recognized.

Essentially, this principle means that the dispersions, or variances, of probability distributions of noncommuting observables are constrained by one another, or, alternatively, that a function and its Fourier transform cannot both be arbitrarily sharp. The physical significance of this result is that measurements of certain pairs of observable quantities, such as position and momentum, or time and energy, cannot simultaneously be made arbitrarily accurate. The principle has been confirmed many times by an overwhelming mass of evidence. Accordingly, it is an objective property of events that must be confronted in future advances of our understanding of the physical world. Much the same is true of all the other main features of quantum theory.
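
As a hedged numerical illustration of the Fourier-transform statement, the sketch below builds Gaussian wave packets of different widths and estimates the spreads of |ψ(x)|² and of its momentum-space counterpart; in units with ħ = 1 the product stays near the lower bound of 1/2. Grid sizes and widths are arbitrary choices.

```python
# Position-momentum trade-off for Gaussian wave packets: a function and its
# Fourier transform cannot both be arbitrarily sharp. Units with hbar = 1,
# so the bound is sigma_x * sigma_p >= 1/2. Grid parameters are arbitrary.
import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

def spreads(sigma_x):
    """Return (sigma_x, sigma_p) estimated from |psi|^2 and its Fourier transform."""
    psi = np.exp(-x**2 / (4 * sigma_x**2))
    psi = psi / np.sqrt(np.sum(abs(psi)**2) * dx)            # normalize
    p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)             # momentum grid (hbar = 1)
    psi_p = np.fft.fft(psi)
    prob_p = abs(psi_p)**2 / np.sum(abs(psi_p)**2)
    sx = np.sqrt(np.sum(x**2 * abs(psi)**2) * dx)            # <x> = 0 by symmetry
    sp = np.sqrt(np.sum(p**2 * prob_p))                      # <p> = 0 by symmetry
    return sx, sp

for s in (0.5, 1.0, 2.0):
    sx, sp = spreads(s)
    print(f"sigma_x={sx:.3f}  sigma_p={sp:.3f}  product={sx*sp:.3f}")  # ~0.5
```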

Although quantum mechanics and the blurred mode of existence that it reveals represent current frontiers in the direction of the infinitesimally small, it is generally acknowledged that this is not the final answer. Quantum reality is reality, to be sure, but it is still very much a virtual reality inasmuch as it refers to states of affairs relative to Man. As such, it is reasonable to expect that it has a source and a destination, being perhaps an integral albeit temporal phenomenon of an underlying ultimate reality. That is, quantum mechanics is objective reality; but it remains to be seen where it comes from and where it goes. However, that’s another story.


Sunday, September 16, 2007

Fundamental Requirements in Building Physical Theories

An Original Research Essay

by Frank Luger

As mentioned in some of my previous essays, the philosophy of science requires that any physical theory worth its salt must be built around at least potential observability and must obey the reduction principle, i.e. be capable of being shown to rest on established theories. These are logical requirements based on the consistency of Nature. However, if one approaches theory building in physics from the physical rather than the philosophical side, there are some other principles to obey; and these principles are sine qua non requirements of proper physical theories, in the sense of transcending any particular theory. Collectively, they may be called symmetry and conservation laws; and they directly rest upon invariances, which are independent of time and space and which are also based on the consistency of Nature. The difference is that while the philosophical requirements are a priori, that is, "dictated" by induction and synthesis, the physical requirements are a posteriori, that is, dictated by deduction and analysis of actual data. For the present heuristic purposes, let us concentrate on the latter kinds.¹

Symmetry in Nature has been dealt with by some very famous authors² and likewise, conservation laws have also been extensively discussed.³ Instrumentalism and its extreme form, solipsism, would proclaim that as "beauty is in the eye of the beholder," symmetry is a figment of human imagination, based on the basic human need for aesthetic experiences. Scientific realism in general, and quantum realism in particular, on the other hand, would maintain that symmetry is inherent in Nature; and this whole disagreement in philosophical perspectives between instrumentalism and realism represents, in fact, the difference between epistemic and ontic viewpoints and orientation emphases. While there are certain difficulties with both vantage points, especially in their extreme forms, most of the data from recent research in physics seems to tilt the balance in favor of quantum realism and against instrumentalism, especially in its earlier ("Copenhagen School") form.⁴ Let's now briefly review first the theory of the basic symmetry and conservation laws, as they represent broad generalizations whereby physical theories may transcend time and space; and then, list the most important principles and laws.

Based on concepts from classical geometry, the word symmetry implies divisibility into two or more even parts of any regular shape in 1-, 2-, or 3-dimensional ordinary (Euclidean) space. However, in physics 'symmetry' has a more precise, albeit more general, meaning than in geometry. Reversible balance is implied; that is, something has a particular type of symmetry if a specific operation is performed on it and yet it remains essentially unchanged. For example, if two sides of a symmetrical figure can be interchanged, the figure itself remains basically invariant. A triangle may be moved any distance; if neither rotation nor expansion-contraction is involved, the triangle remains invariant under the operation of translation in space. This means little in (projective) geometry, but in actual physical situations it can be far from trivial. If we imagine an initially symmetrical shape with some weight attached to it being moved to a different gravitational field, symmetry will not be conserved. Yet the basic laws of physics are supposed to be independent of locations in space. And they are. What may be different are those aspects which are variable, but their interrelationships do not change. Symmetry will be conserved not relative to a fixed observer, but relative to the form in which the basic laws are expressed -- i.e. their mathematical descriptions.

The inevitable conclusion is that it is the mathematical expressions of physical laws which are responsible for ensuring that the form of the basic laws of physics is symmetrical under the operation of translation in space. For example, the law of conservation of momentum is a mathematical consequence of the fact that the basic laws have this property of assuming the same form at all points in space. The conservation law is a consequence of the symmetry principle, and there is reason to believe that the symmetry principle is more fundamental than the detailed form of the conservation law. A general theory, thanks to its mathematical armoury in which tensor analysis and differentiable manifolds assume great importance, is able to formulate basic equations which have the property of assuming the same form at all points in space.
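
A modest numerical illustration of that connection, not a derivation of it: for two particles whose interaction depends only on their separation, so that the equations keep the same form under translation in space, the total linear momentum computed during the motion stays constant. Masses, the spring constant, and the time step below are arbitrary assumptions.

```python
# Translation-invariant interaction => conserved total momentum (illustration).
# Two particles coupled by a spring potential V = k/2 * (x2 - x1)^2, which is
# unchanged if both positions are shifted by the same amount.
import numpy as np

m1, m2, k = 1.0, 2.0, 3.0          # arbitrary masses and spring constant
x = np.array([0.0, 1.5])           # positions
v = np.array([0.4, -0.1])          # velocities
dt = 1e-3

def forces(x):
    # Equal and opposite forces derived from the separation-only potential.
    f = k * (x[1] - x[0])
    return np.array([f, -f])

p_initial = m1 * v[0] + m2 * v[1]
for _ in range(10_000):            # simple velocity-Verlet steps
    a = forces(x) / np.array([m1, m2])
    x = x + v * dt + 0.5 * a * dt**2
    a_new = forces(x) / np.array([m1, m2])
    v = v + 0.5 * (a + a_new) * dt

p_final = m1 * v[0] + m2 * v[1]
print(p_initial, p_final)          # equal, up to floating-point roundoff
```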

Therefore, when "indulging" in theory building, the theoretical physicist is well advised to try to formulate his basic laws so that they become and remain symmetrical under any and all fundamental transformations. Fortunately, there are several well-known and well-established guidelines; and these are what we may subsume under the general heading of symmetry principles and conservation laws. It is important to keep in mind that conservation laws are mathematical consequences of various symmetries; thus, as long as the theorist ensures that his formulations do not violate basic principles of symmetry, he stands a good chance of being subsequently able to deduce the appropriate conservation laws and to prove, at least to the satisfaction of the requirements of mathematical logic, the soundness of his conceptualizations. By contrast, failure to observe this guideline may result in heaps of impressive-looking pseudoscientific rubbish, as for example in various airy grandiose schemes and trendy New Age fads and hasty oversimplifications ad nauseam.⁵ While it is true that a few symmetry principles and conservation laws are still controversial, and it is not always clear which conservation law is necessarily a (mathematical) consequence of which symmetry principle, the fact is that most of the relationships are well established, and repeated mathematical testing of various new theoretical models is not only always helpful but perhaps even mandatory. That is, before making predictions, deducing testable hypotheses, and subjecting them to observations and experiments, it is best to have played the devil's advocate and tried as hard as one can to make a "liar" of oneself. This grueling task will pay handsome dividends later, by saving the theorist from self-discreditation and its inevitable consequence, death by ridicule.

Following Einstein and his postulates of Special Relativity, we accept that the form of the basic laws of physics is the same at all points in space. This is called symmetry under translation in space, and (mathematically) it leads to the law of conservation of linear momentum, one of the most fundamental principles of modern physics. Next, in a similar vein, we also accept that the basic laws of physics describing a system apply in the same form under rotations through any fixed angle — i.e. the laws have the same form in all directions. We may call this the principle of symmetry under rotation in space, and again, (mathematically) it gives rise to the law of conservation of angular momentum. Now comes time: the form of the basic laws of physics does not change with the passage of time. Once a fundamental invariance is successfully identified, it can be assumed with great confidence that what was the case many millions of years ago will still be the case indefinitely into the future. This principle is called symmetry under translation in time, and (mathematically) it yields the law of conservation of energy (also known as the First Law of Thermodynamics). However, the next principle, that of symmetry under reversal of time, is somewhat controversial, because although it is theoretically possible, it is practically never observed. The principle leads to the great Second Law of Thermodynamics, through a series of steps which would be a bit too technical for the present purposes. Symmetry under time reversal maintains that a time-reversed process can occur, but it does not say that it does occur or that it ever will occur. This is a rather subtle, and thus a much misunderstood and disputed, point, as discussed in my paper "Conceptual Skepticism in Irreversible Energetics", cited in footnote No. 1 (14) above. It is precisely because symmetry under time reversal is never observed in practice, but the opposite, i.e. asymmetry and irreversibility, is always observed, that the Second Law of Thermodynamics is still one of the most controversial of the basic laws of physics. Disregarding mathematics for the moment, how theoretical reversibility gives rise to practical irreversibility in Nature remains somewhat nebulous. It is possible that irreversibility is a special case of reversibility due to a hitherto unexplained intervening construct or variable, rather than the other way around. Future research will tell, we hope.
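
Using the same toy spring system as in the previous sketch, the snippet below illustrates two of the points just made: the energy computed from the time-independent laws is (numerically) the same at the beginning and at the end of the run, and reversing the velocities and integrating again retraces the motion, as symmetry under time reversal permits in principle. This says nothing, of course, about why macroscopic irreversibility is what we actually observe.

```python
# (i) Energy conservation and (ii) time-reversal symmetry of the equations of
# motion for the illustrative two-body spring system; all values are arbitrary.
import numpy as np

m, k, dt = np.array([1.0, 2.0]), 3.0, 1e-3

def accel(x):
    f = k * (x[1] - x[0])
    return np.array([f, -f]) / m

def energy(x, v):
    return 0.5 * np.sum(m * v**2) + 0.5 * k * (x[1] - x[0])**2

def run(x, v, steps):
    # Velocity-Verlet integration, which respects the time-reversal symmetry.
    for _ in range(steps):
        a = accel(x)
        x = x + v * dt + 0.5 * a * dt**2
        v = v + 0.5 * (a + accel(x)) * dt
    return x, v

x0, v0 = np.array([0.0, 1.5]), np.array([0.4, -0.1])
x1, v1 = run(x0, v0, 20_000)
print(energy(x0, v0), energy(x1, v1))       # nearly identical: energy is conserved

xb, vb = run(x1, -v1, 20_000)               # reverse the velocities and run again
print(np.allclose(xb, x0, atol=1e-6))       # True: the motion retraces itself
```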

Still another consequence of Einstein's Special Relativity theory is that the basic laws of physics have the same form for all observers, regardless of the observers' motions. In other words, the basic laws have the same form in all inertial frames of reference, and thus do not depend on the velocity or momentum of the observer. In Einstein's General Theory of Relativity, which is not as well substantiated as the Special Theory, the basic laws are assumed to have the same form for all observers, no matter how complicated their motions might be. Altogether, this is the principle of relativistic symmetry.
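
One concrete, easily checked consequence of relativistic symmetry is that the spacetime interval between two events is the same for all inertial observers. The sketch below (with c set to 1 and made-up event coordinates) verifies this for a few boost velocities.

```python
# Invariance of the spacetime interval s^2 = (c t)^2 - x^2 under Lorentz boosts
# along x. Units with c = 1; the event coordinates are illustrative.
import numpy as np

def boost(t, x, v):
    """Lorentz boost along x with speed v (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return gamma * (t - v * x), gamma * (x - v * t)

t, x = 3.0, 1.2                       # an event, relative to the origin event
for v in (0.0, 0.5, 0.9):
    tb, xb = boost(t, x, v)
    print(f"v={v}: interval = {tb**2 - xb**2:.6f}")   # the same value every time
```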

Turning to microphysics, it must be considered that fundamental particles have no individual differences in the sense of "identities"; i.e. if we interchange two particles of the same class or category (vide infra), such action does not influence the physical process as a whole. This indistinguishability of similar particles gives rise to the principle of symmetry under interchange of similar particles. An electron is no different from any other electron. Furthermore, if negative charge cancels an equal amount of positive charge, then there is no known physical process which can change the net amount of electric charge. This is known as the law of conservation of electric charge, and it is thought to be a (mathematical) consequence of certain symmetry properties of the quantum mechanical wave function psi (Ψ). Similarly, if a particle cancels its antiparticle, there is no known physical process which changes the net number of leptons (light particles); this is known as the law of conservation of leptons, although an underlying symmetry principle has not been unequivocally established. In a like vein, also in particle-antiparticle cancellations, the net number of baryons (heavy particles) remains the same; this is the law of conservation of baryons, and similarly to leptons, no underlying symmetry principle has been properly established. It is noteworthy that while there are such conservation laws for fermions, there are no corresponding laws for bosons such as photons, pions, kaons, etas, and gravitons.
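
The conservation laws named in this paragraph amount to simple bookkeeping over reactions, as the illustrative sketch below shows for beta decay; the particle table is a minimal, hand-picked assumption and covers only the particles used.

```python
# Illustrative bookkeeping for conservation of electric charge Q, lepton
# number L, and baryon number B: an allowed process keeps the totals unchanged.
Q, L, B = {}, {}, {}
for name, q, l, b in [
    ("n", 0, 0, 1), ("p", +1, 0, 1),
    ("e-", -1, +1, 0), ("anti-nu_e", 0, -1, 0), ("nu_e", 0, +1, 0),
]:
    Q[name], L[name], B[name] = q, l, b

def totals(particles):
    return (sum(Q[p] for p in particles),
            sum(L[p] for p in particles),
            sum(B[p] for p in particles))

# Beta decay: n -> p + e- + anti-nu_e  (Q, L, and B totals all match)
print(totals(["n"]), "->", totals(["p", "e-", "anti-nu_e"]))

# A forbidden variant: n -> p + e-  (the lepton number total would change)
print(totals(["n"]), "->", totals(["p", "e-"]))
```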

There are also imperfect symmetries, which may or may not be intrinsic to Nature. That is, it is possible that Nature is constructed according to a scheme of partial or imperfect symmetry, whereby irreversibility would be the rule and reversibility the exception. It is more probable, however, that things are the other way around (reversibility is the rule and irreversibility the exception), and the fault lies within our own machinery, as mentioned in some of my other writings (see footnotes). One such imperfect symmetry is charge independence. There is a principle of symmetry of isotopic spin, whose (mathematical) correspondent is a law of conservation of isotopic spin. This law applies to strong nuclear interactions, but is broken by electromagnetic and weak interactions. There are also processes which involve what have come to be called the strange particles; to each of them an integral number has been assigned, known as its strangeness. The law of conservation of strangeness is also an imperfect symmetry, inasmuch as strangeness is conserved in strong interactions but not in weak interactions. However, the very particle-antiparticle symmetry turns out to be a broken or imperfect symmetry, because all weak interactions violate it; and there is no fully satisfactory explanation for this imperfect charge conjugation.

The principle of mirror symmetry maintains that for every known physical process there is another possible process which is identical with the mirror image of the first. Yet this can also be a broken or imperfect symmetry, depending on "handedness" — inasmuch as one cannot put a left-hand glove on the right hand, no matter how much one glove may seem like the mirror image of the other. Mirror symmetry can be expressed mathematically in terms of a quantity called parity, and there is a corresponding law of conservation of parity. However, weak interactions do not conserve parity, although all other types of interactions do. One example is that although the neutrino and the antineutrino are mirror images of one another, the neutrino is like a left-hand glove and the antineutrino is like a right-hand glove. Generally speaking, all weak interactions violate the symmetry principle of mirror reflection. All weak interactions violate the symmetry principle of particle-antiparticle interchange. All interactions, including weak interactions, are symmetrical under the combined operation of mirror reflection plus particle-antiparticle interchange.⁶

Despite such "violations" and "broken symmetries", when the universal "big" picture is contemplated, symmetries outweigh asymmetries sufficiently to restore one's faith in the esthetic beauty and efficient elegance of Nature. As shown by recent advances in cosmology,⁷ although asymmetries are cosmological in origin, they somehow seem to fit integrally into the overall scheme of things, and thus represent no violations of any great law; rather, they help to give rise to the great laws and to maintain them in a sort of dynamic equilibrium, however unbalanced certain parts of the whole may seem from time to time or even all the time. Therefore, it seems reasonable to conclude that the more we come to understand the fundamental nature and ways of the Universe, the more we may become enchanted by its intrinsic beauty and harmony on the grandest as well as the minutest scales, whereby we may even catch an occasional glimpse of Eternity.


1 The philosophical requirements of potential observability and the reduction principle will be dealt with in another essay, which will examine the connection between the philosophy of quantum mechanics and that of modern interactional psychology, more or less within the framework of General Systems Theory.

2 e.g. Weyl, H. Symmetry, Princeton University Press, 1952; Wigner, E.P.: The unreasonable effectiveness of mathematics in the natural sciences, in Symmetries and Reflections, Scientific Essays of E.P. Wigner, Bloomington: Indiana University Press, 1978; Ziman, J.: Reliable Knowledge, An Explanation of the Grounds for Belief in Science, Cambridge: Cambridge University Press, 1978; etc.

3 e.g. Feynman, R.: The Character of Physical Laws, Cambridge, Mass.: The M.I.T. Press, 1965; Jammer, M.: The Philosophy of Quantum Mechanics, New York: Wiley, 1974; Weisskopf, V.F.: Knowledge and Wonder, Cambridge, Mass.: The M.I.T. Press, 1979; Ziman, J.: op. cit.; etc.

4 e.g.: Cook, Sir A.: The Observational Foundations of Physics, Cambridge: Cambridge University Press, 1994; d'Espagnat, B.: Reality and the Physicist, Cambridge: Cambridge University Press, 1989; Hawking, S.W.: A Brief History of Time, New York: Bantam, 1988; Peierls, R.: More Surprises in Theoretical Physics, Princeton, N.J.: Princeton University Press, 1991; Rohrlich, F.: From Paradox to Reality: Our Basic Concepts of the Physical World, Cambridge: Cambridge University Press, 1989; Weinberg, S.: The Quantum Theory of Fields, Vols. I-III, Cambridge: Cambridge University Press, 1995, 1996, 2000; etc.

5 e.g. Capra, F.: The Tao of Physics, New York: Bantam, 1975; LaViolette, P.A.: Beyond the Big Bang: Ancient Myth and the Science of Continuous Creation, Rochester, Vt.: Park Street Press, 1995; Zukav, G.: The Dancing Wu-Li Masters: An Overview of the New Physics, New York: Bantam, 1980; etc.

6 e.g. Blohintsev, D.I.: Questions of Principle in Quantum Mechanics and Measure Theory in Quantum Mechanics, Moscow: Science, 1981; Eisberg, R. & Resnick, R.: Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles, 2nd ed., New York: Wiley, 1985; Holland, P.R.: The Quantum Theory of Motion, Cambridge: Cambridge University Press, 1993; Gómez, C., Ruiz-Altaba, M., & Sierra, G.: Quantum Groups in Two-Dimensional Physics, Cambridge: Cambridge University Press, 1996; McQuarrie, D.A.: Quantum Chemistry, Mill Valley, Calif.: University Science Books, 1983; etc.

7 e.g. Barrow, J.D.: The Origin of the Universe, New York: Basic Books, 1994; Binney, J. & Tremaine, S.: Galactic Dynamics, Princeton, N.J.: Princeton University Press, 1987; Hawking, S.W.: op. cit., 1988; Hawking, S.W.: Black Holes and Baby Universes, New York: Bantam, 1993; Hawking, S.W. & Penrose, R.: The Nature of Space and Time, Princeton, N.J.: Princeton University Press, 1996; Kaufmann III, W. J.: Relativity and Cosmology, 2nd ed., New York: Harper & Row, 1985; Penrose, R. & Rindler, W.: Spinors and Space-Time, Vol. II: Spinor and Twistor Methods in Space-Time Geometry, Cambridge: Cambridge University Press, 1993; Rindler, W.: Essential Relativity: Special, General, and Cosmological, New York: McGraw-Hill, 1977; Wald, R.: Space, Time, and Gravity, Chicago: University of Chicago Press, 1977.


Monday, August 13, 2007

Martyrs of Science

by Frank Luger

It may sound strange, perhaps even somewhat bizarre, but despite its 'normal' neutrality, science also has had its share of bloodshed throughout the turbulent course of its history. To be sure, the number of martyrs of science is very small in comparison with other endeavors of the human race; yet the tragedies involved are all the more shocking because of the very few albeit very great names. Had these lives not ended prematurely, and in some cases rather brutally, humanity would have benefited far more than it has, and civilization would be more advanced than it is today.

Hippasus of Metapontum was drowned at sea by his fellow Pythagoreans for discovering irrational numbers. Archimedes of Syracuse was slain by a Roman legionary for disobeying authority. Hypatia of Alexandria was crucified and mutilated by a Christian mob for her 'pagan' religion. Berthold Schwarz was blown to pieces for discovering gunpowder. Giordano Bruno was burnt at the stake by the Inquisition for championing Copernican heliocentricity. Antoine Lavoisier was guillotined, officially for his state activities, in reality for his scientific genius, by the French Revolution. Likewise, Évariste Galois was shot to death, ostensibly in a duel of honor, but in reality for his mathematical genius mixed with his political radicalism. Finally, Alan Turing was poisoned for his genius as well as his blatant homosexuality, as an embarrassment to the Establishment. These are just the most outstanding names that spring to mind in connection with scientific martyrdom, but no doubt, there must have been more throughout the history of science over the past two-and-a-half millennia, roughly speaking.

Science, in the currently understood sense of being that intellectual pursuit which is characterized by the scientific method, is only four centuries old. Previously, science was an integral part of natural philosophy and some practical concerns, such as geometry and astronomy. It is thus somewhat curious, maybe even paradoxical, that the 'true' martyrs belong to antiquity and their case comes to an end with the death of Giordano Bruno in 1600 A.D. Strictly speaking, the martyrs of modern science after Bruno are perhaps more appropriately designated as 'pseudo' martyrs since their deaths seem to have less to do with their science than with their nonscientific activities. However, the evidence is meager and leaves plenty of room for doubt and speculation.

Be they true martyrs or pseudomartyrs, the fact remains that they were great scientists and their untimely demise is a most regrettable and shameful scar on the history of human civilization. Their tragedies are exacerbated by the causes behind their deaths, because regardless of how they actually died, they were really the victims of ignorance and arrogance, one way or another, in each and every case. After all, frustration, anger, jealousy, envy, and all such emotions fuelling hostile thoughts and actions are but situation-specific manifestations of ignorance and arrogance, in whatever proportions.

Hippasus of Metapontum (ca. 500-450 B.C.) was thrown overboard by the frustrated Pythagoreans after he proved the horribly undeniable irrationality of √2, with which he actually discovered a whole class of numbers that cannot be expressed as the quotient of two integers and whose decimal expansions never repeat and never terminate. This was too much for the Pythagoreans, who attributed mystic significance and much else to integers and whose ignorant and arrogant dogmatism could not tolerate 'heresy'. Who knows, perhaps the Pythagoreans deluded themselves by thinking that they were the 'custodians' of the secrets of cosmic beauty and harmony, and as irrational numbers pricked their inflated egos, they thought they could suppress such offensive ugliness by drowning poor Hippasus.

Archimedes of Syracuse (ca. 287-212 B.C.) was the first and greatest mathematical physicist of antiquity, whose accomplishments are legendary. But he was a menace to the Roman Empire. During the siege of Syracuse he set Roman ships on fire with parabolic mirrors and smashed them on the rocks with various ingenious devices. Marcellus, the Roman commander, is alleged to have given orders that Archimedes be captured unharmed. The old man was doodling in the sand of his garden with a stick, working on various geometry problems. When his captor told him to come along, Archimedes replied, a bit absent-mindedly: "Noli turbare circulos meos!" (Do not disturb my circles!), whereupon the frustrated Roman soldier flew into a rage and slew him. Resisting arrest was thus the official story. Was there more? Was he, in reality, deliberately murdered? Revenge by arrogant Romans ignorant of mathematics and science?

Hypatia of Alexandria (370-415 A.D.) was the first outstanding woman mathematician in recorded history. She was teaching at the famous Library of Alexandria as head of the Platonist school, and students flocked to her from all over. She was very beautiful, charming, and witty; but, unfortunately, she practiced the ancient Greek religion of polytheism. This was anathema to some of the early Christian sects who felt threatened by her 'pagan' learning and depth of scientific knowledge. Incited by Bishop Cyril, a mob of Christian monks pulled her out of her carriage, beat her, dragged her to a church, stripped her naked and crucified her by nailing her to the church door. Her flesh was mutilated by sharp tiles, part of her body was thrown to dogs and the rest burned. Perhaps they crucified her upon her refusal to be forcibly converted to Christianity, but there can be no doubt that she was jealously perceived as a menace… with the affront of being a woman.

Berthold Schwarz (ca. 1318-1384 A.D.) of Freiburg, Germany, was a Franciscan monk. His original name was Konstantin Anklitzen. He took the name of Bruder (Brother) Berthold upon entering the monastery. Schwarz, meaning 'black' in German (Berthold der Schwarze), was added later as an indication of black magic, since he was a practicing alchemist, who is generally credited with the discovery of gunpowder and the invention of artillery. Apparently he was blown to pieces by some spark or flame accidentally detonating a batch of his nefarious powder. More likely, the explosion was not accidental; he was murdered because his black arts threatened to revolutionize warfare with incalculable consequences as far as (pre-)Renaissance times were concerned. Also, perhaps the hitherto undreamed-of destructive potential of gunpowder was thought to represent satanic powers, wholly impermissible for a Franciscan monk. Either way, sorcery and witchcraft had to be involved, which the Church was obliged to extirpate, especially from one of its own members.

Admittedly, these are speculative points, since the existing evidence is meager and far from being unequivocal. It is possible that the Church wanted to avoid exposure of the potentially embarrassing matter, especially if the Inquisition had to handle things; so, maybe, the murder of Berthold Schwarz was simply and deliberately made to look like an accident. Or, alternatively, there could have been some secular power causing the explosion, perhaps another country hoping to monopolize the new weapon. Maybe a combination of such factors?

Giordano Bruno (1548-1600) was a dangerous and subversive radical, a spiritual alchemist and a rather versatile philosopher to boot. As such a maverick, he surely got himself into plenty of trouble wherever he went, and it was only a matter of time before he was formally denounced and the Papal Inquisition got him on charges of heresy. After several years of 'protective custody' and his stubborn refusal to recant, he was finally burned at the stake on February 17, 1600. What was his unpardonable crime? Quite simply, the effrontery of promoting the heliocentric model of Copernicus. After all, if the Sun did not revolve around the Earth, much of Church dogma could be demolished. Man's closest kinship to God as well as Man's dominion over Nature were severely threatened by such abominable ideas. Man's cosmic significance could turn into absurd insignificance…

Antoine Lavoisier (1743-1794) is generally venerated as the father of modern chemistry. He was also prominent in the histories of biology, economics, and finance. He is well remembered for overthrowing the phlogiston theory and, with the correct assignment of the roles of oxygen and hydrogen in various processes, for establishing the proper theory of combustion. His laws of molecular combinations based on the law of conservation of mass are valid even today. His various accomplishments in different fields mark him as a truly outstanding scientist. Unfortunately, as a nobleman and as a statesman, he was denounced as a traitor by the French Revolution and promptly guillotined. "The Republic has no need of geniuses" (i.e. scientists) was the cynical condemnation pronounced by his judge. Perhaps this is the real clue to his martyrdom. True, many noblemen and statesmen were executed; but the scientific genius, still regarded as akin to dreaded black magic by ignorance and arrogance, was most likely the underlying reason why Lavoisier was seen as a menace.

Évariste Galois (1811-1832) was also perceived as a menace by the French Establishment. True, he was a young political firebrand and radical, but that was an embarrassment in academic circles, nothing more. The menace was his genius, which aroused much jealousy and resentment, especially in mathematical circles. Even a mathematician of such caliber as Siméon Poisson failed to understand the work of Galois.

Yet, despite his youth and lack of formal relevant credentials, the significance of the contribution of Galois to modern mathematics cannot be overemphasized. He was shot to death in a duel, ostensibly over a matter of honor involving a young woman; but in reality for the menace of his genius, peppered with his radical views and activities. Considering the highly nervous temperament of Galois, it must have been an easy matter to provoke him to a duel. Sadly, anachronisms do not last long, no matter how brilliant they are.

Finally, Alan Turing (1912-1954) was a brilliant British mathematician who might have represented enough menace to the Establishment to be murdered by potassium cyanide.

His intellectual accomplishments are legendary, and without the 'Turing machine', theoretical computer science could not have become a modern miracle. Unfortunately, he flaunted his homosexuality, which must have been intolerable for the conservative academic Establishment. His eccentric genius of course evoked much jealousy, which could be the real reason for his untimely demise. The official verdict of suicide is suspect. He had no reason to kill himself, for one thing. For another, he could hardly have eaten an apple laced with cyanide without noticing the characteristic bitter almond taste. Also, it would have been much simpler to take an overdose of sleeping pills. Homosexuality was then a crime, and he was charged with it. He was given the choice of prison or libido-reducing hormones. He chose the latter and underwent such treatment for a year before he died. Anyway, whatever the exact factors were, Turing may be regarded as a (pseudo)martyr of science.

It is not only tragic but ironic as well that science, the only neutral pursuit of the human intellect, has its own 'pantheon' of martyrs. Some of the above-mentioned tragedies, such as those of Hippasus and Archimedes, could perhaps be suffered, one way or another. Less tolerable were those of Hypatia, Schwarz, and Bruno. This ends the list of 'pure' martyrs. The 'pseudo' martyrs of modern science died under nebulous circumstances, but in each case they must have been perceived as threats by a hostile and jealous Establishment. What runs through each martyrdom like a red thread, from antiquity to the present, is the ignorance and arrogance of lesser intellects. That such intellects still run society is not only the real tragedy but the deplorable irony of all times as well.


Tuesday, July 24, 2007

One of the Greatest Modern Composers: Stravinsky

by Frank Luger

When I was in my pre-teens, my parents had arranged for me to take piano lessons. I was not particularly good at it, except for manual dexterity; but I had little ear for music, and even less patience for learning the delicate technicalities. It took about six months of ‘torture’ before my training was abandoned as hopeless. However, during that time, my private tutor, who was no lesser personage than Gabriella Bartók, the niece of the world-famous composer Béla Bartók, had often admonished me and tried to motivate me by insisting that I should aim at nothing less than excellence. She used to cite the examples of famous Hungarian musical geniuses, such as Liszt, Kodály, and Bartók; and, since this was already during the Stalinist times, for political ‘correctness’ she also cited such great names as Tchaikovsky, Rimsky-Korsakov, Rachmaninoff, and very often, Stravinsky.

She had never met the other three Russians, but she had trained for a while with Igor Fedorovich Stravinsky (1882-1971) in her youth; and so she was in a position to tell me many stories, even amusing anecdotes. Now, it has been a generation (30 years) since Stravinsky died (Gabriella Bartók died even earlier, of breast cancer, if I remember correctly); therefore, as a bit of commemoration, let me relate what I recall of her Stravinsky stories, interlaced, spiced, and completed with actual historical details.

While vacationing in Heidelberg, Germany, on a hot summer afternoon in 1902, Rimsky-Korsakov was approached by a 20-year-old law student from the University of St-Petersburg (the city was later renamed Leningrad, and has since returned to its old name). Introducing himself as the son of one of Russia’s foremost opera stars, the youth begged the composer to listen to a piece he had recently written and to tell him whether or not it showed any of the talent necessary for a career in music. Taken by surprise by the young man’s insistence but delighted with his evident enthusiasm for music, Rimsky-Korsakov agreed to a hearing. The student played the piano for about half an hour, then respectfully awaited the ‘verdict’ of the already famous composer.

“Young man, your music is quite nice,” Rimsky-Korsakov reportedly told him; “but in all fairness to you, I would suggest that you continue with your law studies. However, should your interest in music remain, you might perhaps enroll in some formal courses in counterpoint and harmony. Then, maybe, you will come back and play for me again and I will be able to give you a more favorable assessment.”

His hopes dashed for the moment, the crestfallen Stravinsky took the advice and returned to his law books. Music, however, soon gained the upper hand again. A year later, having written a piano sonata, he called on Rimsky-Korsakov a second time. The composer greeted him warmly and listened to his music intently, apparently impressed with what he was hearing. Occasionally he asked Stravinsky to repeat a specific passage, nodding and keeping time to the music as it was played. Then came the second verdict:

“You asked me once before if you had any ability whatsoever and I told you to continue with your law studies. I’ve just changed my mind. You are wasting your talents with law. Come to me tomorrow morning- early, mind you- and we will begin your training in serious instrumentation.”

In later years Stravinsky was to recall that the weeks and months he spent with Rimsky-Korsakov were among the happiest in his life. As a teacher, the composer was merciless. He drove his young apprentice and drove him very hard. Anything short of perfection brought down his wrath. Perfection itself he dismissed with scarcely a word of praise. “A man’s music,” Rimsky-Korsakov used to explain, “should always be perfect, so why should we applaud something that is so basic to successful composing?”

Prompted by such admonishings, late in 1907 Stravinsky completed his first large work, the “Symphony in E-flat major.” Performed in St-Petersburg on January 22, 1908, it instantly met with critical acclaim. A second work, finished soon afterward and named “Le Faune et la bergère” (The Faun and the Shepherdess), did not fare as well, but it proved more than sufficient to bolster Stravinsky’s rising stature in the world of music.

Galvanized into action, confident as he had never been before, tireless in his work, Stravinsky threw himself into his music. Secretly, he began to compose a new orchestral work, which he hoped to present as a gift to Rimsky-Korsakov upon the forthcoming marriage of the master’s daughter. Called “Fireworks”, it was finished just a week before the wedding. Delighted with his surprise, Stravinsky packed up his score and shipped it off to his revered master. However, by some irony of fate, Rimsky-Korsakov was never to see it. On the day that it arrived, he died. One of the world’s greatest composers had passed on and for Igor Stravinsky, the loss was a terrible blow. Friend, teacher, and colleague, Rimsky-Korsakov had been the young composer’s guide and inspiration.

Presented in St-Petersburg, “Fireworks” exerted a profound influence on the future of the rising composer. In the audience, the night of its debut, was Serge Diaghilev, soon to become famous as the mastermind behind the magnificent ‘Ballet Russe’ [Russian Ballet, later almost synonymous with ‘Bolshoi’ even though ‘Bolshoi’ was the name of the largest (as it means “big” or “great” in Russian) and most elegant theater in Moscow during and after Stalin]. Hearing Stravinsky’s music, Diaghilev invited the composer to orchestrate two Chopin pieces for a forthcoming ballet performance. Stravinsky did, and the results were so outstanding that he was commissioned to undertake a major work revolving around an old Russian myth- the tale of the Fire-Bird.

It took Stravinsky nearly a year to complete his task, but at last, on June 25, 1910, “L’Oiseau de Feu”, or “The Fire-Bird”, was presented at the Paris Opera. The audience went wild with delight. Stravinsky was given an incredible ovation. Debussy, hearing the score, rose at the conclusion of the ballet and hurled himself into Stravinsky’s arms. Gabriel Pierné, who conducted that evening, later declared: “‘The Fire-Bird’ is music such as I have never heard before. The world will not soon forget it. Mark my words. Igor Stravinsky will someday help free the musical thought of today and lead it in new directions.” And so it proved. “The Fire-Bird” established Stravinsky’s reputation and carried his name to music lovers around the globe. Elated with his triumph, the composer immediately plunged into a new work. Titled “Petrouchka”, it was first seen in Paris in 1911. To ensure its success, Diaghilev had seen to it that Nijinsky and Karsavina were the ballet’s principal dancers, that the finest supporting cast to be found anywhere was on hand, and that the settings were of unmatched beauty.

Paris received “Petrouchka” with even more enthusiasm than that attending the debut of “The Fire-Bird”. The city’s newspapers, next morning, hailed Stravinsky as a personage of music equal in stature to France’s beloved Claude Debussy. And Debussy himself declared, “That man injects a vital force into music that will carry him- and music- very far”.

Following “Petrouchka” came “The Rite of Spring”, a ballet which evoked perhaps one of the most fantastic exhibitions in the history of music. Presented on May 29, 1913, it carried its audience away as few compositions ever have. Even for Igor Stravinsky, “The Rite of Spring” marked a monumental turning point in his career. With his success established, the piece shook the musical world to its very roots and made him one of the most loved or most despised, most defended or most maligned figures in the history of his art. In rapid succession, he proceeded to compose such works as the opera-oratorio “Oedipus Rex”; the ballet “Apollon Musagète”; the suite “Pulcinella”; and the ballet “L’Histoire du Soldat” (The Soldier’s Tale).

Visiting the United States for the first time in 1925, Stravinsky was much impressed with what he saw. Musical America, on the other hand, was just as impressed with what it saw in him and welcomed the composer with open arms. The various tours on which he embarked in the years that followed were all highly successful, so much so, as a matter of fact, that when Stravinsky completed his ballet “Jeu de Cartes” (Card Game), he decided it would be given its premiere in New York. When it was presented in 1937, the audience proved to be every bit as enthusiastic as the Parisian audiences that had greeted the ballets “The Fire-Bird” and “Petrouchka”. With the onset of World War II, Stravinsky abandoned his home on the outskirts of Paris. Traveling to the United States and eventually settling in California, he became a naturalized American citizen and plunged back into his work. His major American works have included the magnificent opera “The Rake’s Progress”; the ballet “Orpheus”; and the controversial “Symphony in Three Movements”.

This is where I should stop the storytelling, because my own musical training stopped in the mid-1950s. Stravinsky lived and composed until his death in 1971, but I know nothing of his late period. At any rate, already by the mid-1950s he was generally acknowledged as one of the world’s greatest modern composers. Igor Fedorovich Stravinsky achieved his aims while he was still alive, regardless of difficulties, and thus succeeded in avoiding merely posthumous recognition, the lamentable fate of many great artists.


Monday, July 23, 2007

Basic Notions of Mathematical Proofs

by Frank Luger

Elementary mathematical proofs rest upon the basic principles of mathematical logic, which in turn are direct applications of classical Aristotelian logic to mathematics. Classical logic was used in Euclid’s Elements, on which all traditional geometry and mathematics were built, using propositional logic, the logic of propositions. The essence of propositional logic was laid down in the three famous “Laws of Thought” by Aristotle (384-322 B.C.E.), namely the Law of Identity (A = A), the Law of Non-Contradiction (A never equals non-A), and the Law of Excluded Middle (either A or non-A). They can also be expressed in symbolic logic as: if p, then p (p implies p, by the Law of Identity); not both p and not-p [~(p and ~p), by the Law of Non-Contradiction, where the tilde ~ means negation]; and p V ~p (by the Law of Excluded Middle, where V means “or”, i.e. disjunction). These ‘Laws of Thought’ have remained essentially unchanged ever since.

In propositional logic, these basic principles take the following form (the Law of Identity is so basic that it is taken for granted, so it isn’t even mentioned). First Principle: Law of the Excluded Middle (for any proposition p, the proposition “either p or not-p” is true). Second Principle: Law of Contradiction (for any proposition p, the proposition “p and not-p” is false). Third Principle: Law of Transitivity of Implication (for any propositions p, q, r, the proposition “if p implies q and q implies r, then p implies r” is true).

By definition, a general proposition is a proposition expressible in one of the following forms for a specific designation of x and y: (a) All x’s are y’s. (b) No x’s are y’s. (c) Some x’s are y’s. (d) Some x’s are not y’s. Propositions are often stated in the form of hypotheses and conclusions. But one must be careful, because the conclusion being true provides no information in itself about the truth or falsity of the hypothesis.
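
As a quick check of the three principles, here is a short truth-table sketch (my own illustration): each principle holds for every possible assignment of truth values.

```python
# A brute-force truth-table check of the three stated principles.
from itertools import product

implies = lambda a, b: (not a) or b

# Law of Excluded Middle and Law of Contradiction: one variable.
for p in (True, False):
    assert (p or not p) is True            # "either p or not-p" is true
    assert (p and not p) is False          # "p and not-p" is false

# Law of Transitivity of Implication: three variables.
for p, q, r in product((True, False), repeat=3):
    premise = implies(p, q) and implies(q, r)
    assert implies(premise, implies(p, r)) is True

print("All three principles hold for every truth assignment.")
```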

There are certain relationships between implications involving the same two statements or their negatives that occur sufficiently often to make special terminology helpful, as follows. A given implication may be written “p implies q”, or “if p then q”, or “p only if q”, as is evident from what has been said above. The converse is the implication “q implies p”, or “if q then p”, or “q only if p”; the inverse is the implication “not-p implies not-q”, or “if not-p then not-q”, or “not-p only if not-q”. Finally, the contrapositive is the implication “not-q implies not-p”, or “if not-q then not-p”, or “not-q only if not-p”. It is noteworthy that a given implication and its contrapositive are logically equivalent. The concept of logical equivalence applies in general to pairs of propositional forms. We say that two propositional forms are logically equivalent provided they have the same set of meaningful values and the same set of truth values; that is, each has the same true-false classification as the other for all possible choices of the variables. For a true implication “if p then q”, where p and q are propositional forms, p is said to be a sufficient condition for q, and q is said to be a necessary condition for p; i.e. q necessarily follows from p.
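
The equivalence claims can likewise be verified by brute force over the four possible truth assignments; the sketch below confirms that an implication always agrees with its contrapositive, while the converse and the inverse agree with each other but not, in general, with the original implication.

```python
# Truth-table comparison of an implication with its converse, inverse, and
# contrapositive, over all four assignments of p and q.
from itertools import product

implies = lambda a, b: (not a) or b

rows = list(product((True, False), repeat=2))
original       = [implies(p, q)         for p, q in rows]
contrapositive = [implies(not q, not p) for p, q in rows]
converse       = [implies(q, p)         for p, q in rows]
inverse        = [implies(not p, not q) for p, q in rows]

print(original == contrapositive)   # True : logically equivalent
print(converse == inverse)          # True : logically equivalent
print(original == converse)         # False: not equivalent in general
```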

The purpose of the foregoing was an introductory “warm-up” to enable us to apply logical principles to finding and proving new mathematical results. Mathematics is an abstract science in the sense that it consists of a system of undefined terms about which certain statements are assigned a true classification (these are the axioms and postulates), which, together with basic defined terms, are used to develop additional propositions. These, in turn, are then shown to be true or false according to the rules of logic that have so far been considered (such true propositions being called theorems).

Many of the new results in such a system are proved by direct methods, which rest primarily on applications of the Law of Transitivity of Implication mentioned above.

However, indirect methods of proof are also used frequently, both in mathematical developments and in everyday reasoning, with compelling, even necessarily true results. When a child asks, “Has Daddy gone to work?” and Mother answers, “See if the car is in the garage,” it is likely that the thought pattern involves, “If Daddy has gone to work, then the car is gone.” When the child finds the car in the garage, he concludes, “If the car has not gone, then Daddy has not gone to work,” thus utilizing the contrapositive to arrive at a “No” answer to his original question.

Direct proofs, both in their forward (reasoning from premises to conclusion) and backward (reasoning from conclusion to premises) varieties, are quite straightforward and, as such, need not be treated here. However, while a direct proof may often be given where an indirect method is employed, the latter is often clearer, more forceful, and shorter. This is such an important phase of reasoning that it is worthwhile to consider a general analysis and some further examples. There are essentially two forms in which indirect reasoning may appear, frequently interchangeably.

Form I of Indirect Reasoning consists of proving the contrapositive and thereby the desired implication. To show that “p implies q” is true, we show that “not-q implies not-p” is true. For the example below, we assume simple properties of integers, as well as the definition that a prime number is an integer greater than 1 which is divisible by no positive integers other than itself and 1. (A brief numerical sanity check of the result follows the proof.)

Proposition: If an integer greater than 2 is prime then it is an odd number.

Proof: (1) If an integer greater than 2 is not odd, it is even, by definition.
(2) If an integer greater than 2 is even, it is divisible by 2, by definition.
(3) If an integer greater than 2 is divisible by 2, it is not prime.
(4) Hence, if an integer greater than 2 is not odd, it is not prime, by the Transitive Property of Implications (vide supra).
(5) Therefore, if an integer greater than 2 is prime, then it is an odd number, since step 4 states the truth of the contrapositive.

Q.E.D.*
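
As a further aside (not part of the original proof), the proposition admits a quick numerical sanity check; the sketch below, with an ad hoc primality test, confirms it for every integer up to 10,000:

# Illustrative only: every prime greater than 2 in the tested range is indeed odd,
# just as the contrapositive argument above guarantees.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

assert all(n % 2 == 1 for n in range(3, 10_000) if is_prime(n))
print("Every prime between 3 and 9,999 is odd.")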

Form II of Indirect Reasoning essentially follows the pattern:
(a) To prove true: p implies q, where p has a true classification.
(b) Show: p and not-q imply r, where r is known to be false.
(c) A false conclusion indicates a false hypothesis; hence “p and not-q” is false, and since p is true, not-q must be false.
(d) Not-q being false shows that q is true. This is the desired result.

For example, assume the usual terminology of plane geometry and the proposition, “From a point not on a straight line, one perpendicular, and only one, can be drawn to the line.” Prove the
Proposition: Two straight lines in the same plane perpendicular to the same line are parallel.

Notation: Let L be the given line through distinct points A and C, with AB perpendicular to L at A and CD perpendicular to L at C.

Restatement: If AB and CD are each perpendicular to L, then AB and CD are parallel.

Proof: Assume p: AB is perpendicular to L and CD is perpendicular to L, and not-q: AB and CD are not parallel.
(1) AB and CD not parallel imply that AB and CD intersect in a unique point P, by definition of parallel lines.
(2) AB and CD are distinct lines through point P not on L, both perpendicular to L, by hypothesis p.
(3) This is false by the proposition quoted for reference.
(4) Hence, the hypothesis “p and not-q” is false, since a false conclusion requires a false hypothesis in a true implication; and since p is true by assumption, not-q is false.
(5) Therefore, AB and CD are parallel (q is true).

Q.E.D.
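
As another illustrative aside (not found in the original text, and no substitute for the synthetic proof above), the same fact can be glimpsed in coordinate form: in the plane, any two directions perpendicular to a given direction are proportional to each other, hence parallel. The numbers below are hypothetical:

# Illustrative numerical check only: two directions perpendicular to the same
# direction d have a zero cross product, i.e. the lines they span are parallel.
d = (3.0, 4.0)                   # direction of the given line L (hypothetical values)
ab = (-d[1], d[0])               # a direction perpendicular to L (for "line AB")
cd = (-2.0 * d[1], 2.0 * d[0])   # another direction perpendicular to L (for "line CD")
cross = ab[0] * cd[1] - ab[1] * cd[0]
print(cross == 0.0)              # zero cross product: the two directions are parallel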

Indirect methods of reasoning are sometimes called “proof by contradiction” (or reductio ad absurdum), because they arrive at the negation, or contradiction, of a known true proposition. By virtue of the Laws of Thought cited above, (self-)contradictions are absurd, and may therefore be safely discarded.
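
It may be worth adding, as an aside not found in the original text, that the Form II pattern itself is a valid inference schema: whenever “p and not-q implies r” holds, p is true, and r is false, q must be true. A brute-force truth-table check, again in Python for convenience, confirms this:

# Illustrative only: the Form II (reductio) schema is valid; its conclusion q
# follows under every one of the eight possible truth assignments to p, q, r.
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q, r in product((True, False), repeat=3):
    premises = implies(p and (not q), r) and p and (not r)
    assert implies(premises, q)

print("The reductio schema is valid in every case.")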

When the deductive aspect of inquiry, which has been emphasized above, is applied to mathematics or to other scientific fields, it frequently is preceded by an inductive aspect. The latter is concerned with the search for facts or information by observation and experimental procedure. Once the available facts have been assimilated, the scientist proceeds by induction to the formulation of a hypothesis or premise of a general nature to explain the particular facts observed and the relationships among them. The deductive aspect involves logical reasoning leading from this hypothesis to new statements or principles, which then may be checked against the facts already available. This use of inductive and deductive procedures to complement, reinforce, and check each other in the formulation of scientific knowledge comprises the main part of what is called the scientific method.

Note: Q.E.D. is a standard abbreviation from Latin, Quod Erat Demonstrandum (That which was to be Proved); but in the case of as yet unproven theorems, it reads Quod Est Demonstrandum (That which is to be Proved).

This is the Latin rendering of the original Greek phrases with which Euclid used to finish or start his proofs, and both of these have become habitual expressions in the classical mathematical literature of most countries.

Read More...

Monday, July 16, 2007

The Selfish Shellfish

by Frank Luger

Frank Luger in Montreal

Once upon a time in a deep blue sea
In a bay of beauty that’s rare to see
Among a myriad colorful shellfish
There lived an oyster- resentful, selfish…

He thought he was just an ugly scallop
Of course he got wallop after wallop
Imitating the majestic lobsters
Irritating the domestic oysters…

Fed up with mockery he could not stand
He tried to burrow deep into the sand
Until near-suffocation forced him back
Onto his usual self-torture track…

Then he tried to run away but failed
Thus with painful self-disgust he ailed
So much that his shell remained tightly shut
All his life lacking pride with which to strut.

Tho' a grain of sand bothered him inside
And his neighbors kept on chiding outside
He stubbornly refused to be of use
And preferred to clam up without excuse.

He knew that wasting life is a sad crime
And that selfishness gave him a bad time
Yet he kept wallowing in self-pity
While dreaming about his divine city.

Slowly washing ashore he dreamed away
But one day he awoke to a sunray
The radiant heat of which meant his death
Unless he repented with his last breath.

His grain of sand, grown into a tumor
Suddenly cracked his dry shell and humor;
There burst forth a pearl such as never seen
To delight God forever with brilliant sheen.

Read More...

Monday, June 18, 2007

Erudition, Eloquence, and Elegance in Mathematics

Frank Luger headshot by Frank Luger

In any field of human intellectual endeavor, the 'sine qua non' of 'eternal' excellence consists of the three classical hallmarks known as erudition, eloquence, and elegance. All three, in turn, entail various qualities, as will be mentioned below. They are the indispensable legs of the tripod on which true quality of the timeless, transcendental kind rests, which may be best expressed by the single word: excellence.

Mathematics is a rather unique intellectual endeavor. Unquestionably one of the greatest triumphs of the human intellect -- and by the same token, a truly great tribute to it -- mathematics enables one to go 'where no one has gone before' or, to borrow a literary phrase, 'where even angels fear to tread'. Now, in this essay, I don't wish to get drawn into such disputes as Platonic vs. Aristotelian mathematics, or whether mathematics is discovered or created or both; and I have no intention either to discourse on merits and shortcomings, or to engage in any kind of sermon for or against mathematics. Quite simply, the purpose of this paper is to draw attention to just what constitutes excellence in mathematics, regardless of the idiosyncrasies of any particular mathematician, whether still living or already standing in the Pantheon, frozen in lofty marble among eternal geniuses. In other words, don't expect a cookbook recipe here for winning the Fields Medal1, quite regardless of how smart and (mathematically) knowledgeable you might be. You need much originality in this field, despite a huge amount of indispensable basic knowledge and rapid developments in every branch; and it is fair to say that no matter how impersonal mathematics appears, your own cognitive style and pattern-recognition gifts leave plenty of room for individuality in this special, infinite playground of the intellect.

However, none of the above justifies the first sentence of the previous paragraph. Mathematics is unique because the man-made and the nature-made get intimately intertwined in it; and both components are present in every branch of mathematics, no matter where you look. But the proportions may be very different. In probability and statistics, number theory, and the like, the man-made aspect predominates so much that the nature-made component can only be discerned with specific effort; whereas in most areas of mathematical physics, the nature-made aspect is not only conspicuous, but must even be given priority2, at least according to the vast majority of (theoretical) physicists.

Also, there have been shifting emphases on these components throughout history; but it is only in relatively recent times that mathematics has gradually become independent, first of natural philosophy and at last of physical significance. This emancipation has taken about a century, roughly from the non-Euclidean geometries of Gauss, Bolyai, and Lobachevsky in the early XIXth century until the Quantum Mechanics of Planck, Bohr, Schrödinger, Heisenberg, Born, Dirac, et al. around the 1930s. Dirac, in particular, famously went so far as to assert that if there is a discrepancy between experiment and mathematics, one ought to jettison the experiment and retain the mathematics. Today, maybe 10% or so of advanced mathematics has physical meaningfulness; and mathematical research merrily proliferates following its own recipes, in disdainful disregard of physics and even of philosophy, except perhaps for the part of philosophy which belongs to logic in general and mathematical logic in particular. In other words, the man-made component has become overwhelmingly predominant, maybe to 90%, over the nature-made component; and this is precisely why mathematics is so unique, considering that no matter where we look, it works.

A word of qualification is in order. It works, and ubiquitously at that; but this is still within our world of human sense-perception. How it may or may not work independently of the 'bubble' of virtual reality within which its human creators perforce live remains an open question. If there is such a thing as ultimate reality, it may or may not be adequately handled by our mathematical sophistication. One might conjecture that human limitations can but result in projections of those limitations into other worlds, assuming that such worlds exist quite independently of us. Also, with regard to extraterrestrial life, no matter how probable it is that such life is intelligent, there is no reason whatsoever to suppose that those life-forms evolved intelligence along human lines. To be sure, there are certain mathematical things that we have reason to believe are universal; for example the prime numbers, as was eloquently emphasized in the late Cornell astrophysicist Carl Sagan's masterpiece "Contact". But unless and until some way is found for interstellar communication and travel, we have no means to confirm or disconfirm such conjectures. Perhaps some future discovery will rob the 'Queen of Sciences' of her crown, but until then, mathematics reigns supreme, and we ought to bow to her.

Erudition is the first and perhaps most obviously important requirement for mathematical excellence. Little comment is needed. One must be intimately familiar with advanced mathematics in order to attain mastery. In fact, no attainment of mastery is possible before every detail has become so intuitively evident that one can run circles around it and devise alternatives. It is just that the rapid growth of every branch of mathematics makes it increasingly difficult to master more than ever narrower segments of specialities. A curious situation arises whereby the more one knows about specifics, the less one is able to keep sight of the whole, all the way to the absurd predicament of knowing everything about almost nothing. Already a hundred years ago this was well stated in one of Poincaré's theorems, according to which the more one approximates a mathematical truth, the more elusive it becomes. This recent proliferative trend has produced more and more 'specialist barbarians', which, of course, is at the expense of erudition, because erudition requires much general knowledge in addition to specialized competence in whatever narrow aspect of mathematics one cultivates. This applies to both pure and applied fields of mathematics, although it perhaps attains greater importance in the pure fields, because it is there that the specialization tendencies are the most pronounced. It is fair to say that today's mathematicians are less erudite in a general sense than ever, no matter how competent they may be in some highly specialized area. The only solution to this predicament is synthesis, whereby many seemingly disparate aspects are brought to common denominators; and the resultant simplification gives rise to generalization, which then makes room for new growth cycles. Generalization is one of the most important aspects of the growth of mathematics, being the key to usefulness. Interestingly, the greater the generality, the greater the simplicity. This is one of the main reasons why advanced mathematics is easier than the less advanced parts. Simplicity also greatly facilitates the other two requirements of mathematical excellence, eloquence and elegance, by getting rid of unwanted or unnecessary information and drawing attention to the important facts. Simplification by generalization was most eloquently illustrated by David Hilbert, the second 'Prince of Mathematicians' (the first was Gauss), in 1890, when he proved Gordan's 1868 theorem by throwing away 90% of Gordan's premisses and putting the rest into the form now known as Hilbert's Finite Basis Theorem. This far-reaching and profound theorem shows eloquently and elegantly that greater generality and greater simplicity are practically inseparable.

Eloquence, as the second requirement for mathematical excellence, might strike a strange note. That is because eloquence is traditionally associated with rhetoric and the fluent, polished, and effective use of language, especially in public speaking. Yet the same argument or proof may be presented clumsily or eloquently. An eloquent proof immediately appears as a smooth, almost natural flow of ideas, without even a trace of unnecessary or cluttering information. Obviously, a prerequisite of eloquence is thorough mastery of the field in general and of the problem in question in particular. Yet technical mastery is not enough. One needs a certain creativity and imaginative playfulness, as well as originality and style. This brings us to the third leg of the tripod: elegance.

Elegance, as the third requirement for mathematical excellence, is the truly artistic aspect. Its hallmarks are grace and refinement, ingenuity and simplicity, extraordinary effectiveness and efficiency. To explore, to discover patterns, to explain the significance of each pattern, and to invent new patterns similar to those already known are among the normal activities of mathematicians. How they do what they do depends on the quality of the mathematician. An excellent mathematician has an almost inimitable style, but despite much idiosyncrasy, the style will invariably be elegant.

The history of mathematics is marked by alternating contractions and expansions, analyses and syntheses, unifications and generalizations. If all of mathematical knowledge could be expressed in two principles, the excellent mathematician would not rest until s/he could demonstrate that the two are rooted in a single one. But that would give rise to new problems, and to new cycles of expansions-contractions-expansions. Such pulsation has been characteristic of the growth of mathematics throughout history, in erudite, eloquent, and elegant ways. As mathematics is both science and art3, it may perhaps be fairly said that erudition stands for science, elegance for art, and eloquence bridges the two.


1 Equivalent of the Nobel Prize in mathematics, except that the Fields Medal is conferred upon its recipient only once every four years, in contrast to the Nobel laurels which are and have been awarded every year.

2 There have been many disputes around this point. Some famous people, such as Bohr, Dirac, etc. argued in favor of throwing out those parts of microphysics which 'deviated' significantly from mathematics; whereas Einstein et al. insisted on the priority of physics and the experimental validation of mathematical theories. Pure mathematicians in the vein of Gauss, Hilbert, Poincaré, Hardy, and many others, simply could not care less either way; for them the intrinsic esthetics and consistency of pure mathematics was far more important and normative than whether physicists happened to find any pragmatic use for beautiful mathematical theorems.

3 cf. Luger, F. Necessitas Mathematicae, in Commensal, No. 100, March 2000, pp. 20-24; also in Telicom, Vol. XV, No.1, Oct./Nov. 2000, pp. 66-71; Gift of Fire, Issue 122, Jan. /Feb. 2001, pp. 36-41; PhiSIGma, No. 23, Sept./Oct. 2001, pp. 20-25.

Read More...

Monday, June 04, 2007

Good Salesmanship at a Glance

Frank Luger headshot by Frank Luger

Transforming initially vague or perhaps nonexistent client interests into definite (signed) commitments, a.k.a. contracts, is the task of each and every salesman. In other words, the salesman mediates between client and business, fitting client needs to business profiles and, conversely, making sure that those needs are adequately filled by the business. How well the salesman does this is a matter of good salesmanship.

Good salesmanship? What’s that? The moment we hear the word “salesmanship”, most of us will bristle, shudder, and start chasing away uncomfortable mental images of fast-talking used-car salesmen, dishonest life-insurance agents, fly-by-night operations, pyramidal sales, door-to-door solicitors, and various ‘merchants’ cheating us left, right, and center, one way or another. Anyone who has ever been sold a ‘lemon’ and/or has been subjected to deceptive sales practices will be rather wary in any situation involving ‘salesmanship’, fearing bad salesmanship such as the above.

The hallmarks of good salesmanship are marketing flair and personal integrity, selling true quality goods and services with reliability and validity. Both are two-way streets, i.e. they help the business as well as the customer. They help the business by maintaining the cash-flow and increasing the reputation while they help the customer by adequate need fulfillment and appropriate anxiety reduction.

Salesmanship, whether bad or good, takes place within the framework of a sales mentality. Generally speaking, there are but two kinds of sales mentality: that of the shopkeeper and that of the entrepreneur. The shopkeeper puts something attractive in the window and then waits for the walk-in. If there is no walk-in, the shopkeeper does not eat tonight. Small wonder, then, that what has evolved with the shopkeeper mentality is client-grabbing and price-haggling. While these may yield immediate business and bring in some cash, they do not contribute to good marketing and long-term development. By contrast, the entrepreneur mentality involves steady client-handling and fixed pricing. The entrepreneur creates markets and builds trust where none existed. He does so by activity, as opposed to the passivity of the vendor in the shop. Primarily, he handles transactions as investment opportunities for the business as well as for himself. Also, he takes every occasion to promote his company, but does so with reliability and validity. All the while he thus displays marketing flair and personal integrity - in short, good salesmanship.

Client-grabbing and price-haggling are typical of the oriental world; in fact, they are still a way of life all over Asia and the Middle East, though by no means limited to these regions. However, in the West, that is, primarily in Europe and North America, steady client-handling and fixed pricing have gradually become the prevailing practices. The first kind is the older, rural or merchant one, reflecting various historical eras, whereas the second is much more associated with the modern industrial world and urban lifestyles. Of course, one cannot go into an oriental bazaar and expect fixed prices any more than one can start haggling over the price tags of big-city department stores. However, the second trend is slowly gaining ascendancy as modernization proliferates worldwide.

Let’s take just one example: room sales in a small hotel of a big city. The setting is highly competitive, so the little-known small hotel cannot well afford to lose sales, especially in low season. Under such circumstances, surely, there is a strong temptation for client-grabbing and price-haggling. However, the market share of a little-known small hotel ought to be based on excellent reputation, maintained with true quality products and services as well as modest and steady pricing. If the quality of the products and/or the services is poor or questionable, the customer will not return. If the price is too high, he will go elsewhere. If the price is too low, he will lose respect and refer to the hotel as a ‘cheap dump’ both in public and in private. If the price is inconsistent, he will lose confidence. If he bargains successfully, he will sneer and snicker in the back and spread a bad reputation. If he arrives in the early morning hours and gets a substantial discount, pretty soon the customer influx will shift to dawn. If he needs the room for only a short time and gets away with half the price, the market will surely be damaged. If he aggressively complains and intimidates the clerk enough to obtain a big rebate, others will follow suit. In short, nothing but modest and steady prices, backed by quality products and services and sold with firm and consistent yet polite and cheerful salesmanship, will build enough client trust and yield enough client satisfaction to result in repeat business, good referrals, and steady hotel promotion toward a stable market share. Thus, even in the case of a little-known small hotel, good salesmanship consists of marketing flair and personal integrity, regardless of specific situations and general competition.

In sum, good salesmanship is neither the art of selling fridges to Eskimos nor the greed of squeezing every penny out of a customer. Rather, it is the proper selling of true quality goods and services with reliability and validity, both personal and business-wise; i.e., with marketing flair and personal integrity. Surely, this is the only way to acquire the kind of reputation and dignity which is the gateway to steady market share and future prosperity, now or ever.

Read More...