Vernadskian Time: Time for Humanity
Abstract
Vladimir Vernadsky’s concept of the distinct phase spaces of abiotic processes, life, and cognition serves to illustrate the errors of trying to build up the entire universe from abiotic processes, with its implicit rejection of the possible existence of uniquely biological or noetic principles. In his 1930 “The Study of Life and the New Physics,” Vernadsky directly addresses the hereditary errors in the Cosmos of Newton: that the concepts of space, time, energy, and matter are final, that they can be determined from purely abiotic experiments, and that there is a fundamental distinction between the studied universe and the minds that study it. Here, I will: (1) demonstrate several ways in which space, time, energy, and matter are not the same in the abiotic, the biological, and the cognitive domains, and do not have the same meanings as they had in 1900, (2) reveal the absurdities of applying the laws of thermodynamics to living and human systems, or to the universe as a whole, and (3) give specific applications of Vernadsky’s outlook to novel types of time, including absolutely unique aspects of human time, which shed light on what can truly be called “universal” principles.
Introduction
Humanity is unique among known physical and living processes: it has characteristics that differentiate it absolutely from lower domains, including an absolutely unique quality of time. Singular expressions of human time are brought into sharper relief by developing the specific inabilities of lower forms of time to comprehend human activity, specifically the concepts of time embodied in the laws of thermodynamics, whose misapplication to domains outside their legitimate scope has resulted in such extrapolations as the “heat death of the universe” and a supposed tendency of the universe towards “disorder.”
Problems arise when a term or concept developed in one context is applied in other contexts without reconsidering its suitability, or its meaning. While this is particularly common when terms have everyday meanings in addition to scientific ones (such as “action”), the laws of thermodynamics, particularly the second, are inappropriately and recklessly applied in fields in which their application remains in doubt until developed anew as a required principle of that field. A glaring error is the explanation of the second law of thermodynamics as mandating a universal increase in “disorder,” and then committing the outright fraud (or extreme laziness) of illustrating this for students through discussions of messy dorm rooms and disorderly desks. While “disorder” is a poor word to use even when discussing the physical concept of entropy in abiotic, micro-scale contexts, it has absolutely no meaning when applied to macro-scale objects.[1]

[1] Entropy, a measurable physical concept developed to express the quantity of energy available to do useful work, was found to increase (or remain constant) in all processes of closed systems. This increasing entropy, which gives a time-direction for physical processes, has frequently been inaccurately described as representing an increase in the “disorder” of a system, a notion then applied to domains far afield from physical chemistry. Many students learn of entropy through analogies of the increasing disorder of a desk or room, unless effort is made to tidy things up. But the objects on a desk do not move on their own or “tend” towards any new configuration of their own accord; they are moved by human beings.
Clear examples of concepts whose meanings change with the development of later insights are given by the Ukrainian-Russian biogeochemist Vladimir I. Vernadsky in his 1930 “The Study of Life and the New Physics,” in which he directs particular attention to the errors of the Newtonian conception of the Cosmos, in which the investigating mind is separated from the investigated world, and to the changing meanings of space, time, energy, and matter—four concepts which he points out have a different meaning to the scientist of 1929 than they had in 1900.
The development of new concepts, inexpressible in terms of the former language, is the most powerful expression of human creativity: an action which cannot be performed by logic or by any form of artificial intelligence. This uniquely human ability to create metaphors (concepts which can be expressed only through specific inabilities to express them in the previous language) is the basis of modern science, and is an increasingly powerful, and ultimately the most powerful, force of nature.
In this article, several failures arising from the misapplication of concepts in inappropriate contexts (the laws of thermodynamics, and the arrow of time) are addressed in light of Vernadsky’s calls for biology and cognition to serve as bases for new developments in physics. This article will: (1) demonstrate several ways in which space, time, energy, and matter are not the same in the abiotic, the biological, and the cognitive domains, and do not have the same meanings as they had in 1900, (2) reveal the absurdities of applying the laws of thermodynamics to living and human systems, or to the universe as a whole, and (3) give specific applications of Vernadsky’s outlook to novel types of time, including absolutely unique aspects of human time, which shed light on what can truly be called “universal” principles.
Vernadsky’s Context
In his 1930 essay “The Study of Life and the New Physics,” Vladimir Vernadsky wrote: “Space, time, matter and energy are clearly distinguished for the naturalist of the year 1929, from the space, time, matter and energy of the naturalist of 1900.” With the work of Max Planck and Albert Einstein on the quantum world and on relativity, these four terms had indeed dramatically changed their meanings. Although the same words were used, the concepts behind them were incompatible with those laid down by Newton, who wrote that “absolute space, in its own nature, without regard to anything external, remains always similar and immovable,” and that “absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external.”
Under Einstein’s theory of relativity, space itself was no longer indifferent to objects in space, and no longer always similar to itself: instead, it scaled along directions of motion, and it curved gravitationally. Although the concept of location had not disappeared, changes arose in the distances between locations, and space now had a measure of curvature, rather than being flat.[2]

[2] For more on “flat” versus “curved” space, see Bernhard Riemann: The Habilitation Dissertation.
Similarly, Einsteinian time is considerably at odds with Newtonian time. Rates of flow of time now varied with relative motions and with gravitational fields. The very notion of a “moment” was shown to be different for different observers: two events might appear simultaneous to one observer, yet not appear simultaneous to another. The notion of a particular moment in time now only had meaning with respect to a particular position in space. Space and time were no longer separable.
Matter and energy were considered distinct in 1900; separate conservation laws of matter and energy described all physical changes as maintaining the same amount of matter and the same quantity of energy, before and after the change. Yet, according to Einstein, these two concepts did not refer to wholly separate domains, but rather were related through the equation E=mc², which relates energy (E) to mass (m) times the speed of light (c), squared. This implies the possibility of mass vanishing, and becoming energy, or energy turning into matter. Work in the nuclear field demonstrated astonishing amounts of energy released from radioactive materials, along with a slight decrease in their mass, in accord with E=mc².
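To get a sense of the scale involved, the equivalence can be computed directly. A minimal sketch in Python (the one-gram figure and the oil comparison are illustrative choices of mine, not from the text):

```python
# Energy equivalent of a small mass, by E = m * c^2.
C = 2.998e8        # speed of light, m/s

mass_kg = 0.001    # one gram of matter
energy_j = mass_kg * C**2

print(f"{mass_kg * 1000:.0f} g of mass = {energy_j:.2e} J")
# ~9e13 J, roughly the chemical energy of burning 2,000 tonnes of oil:
# why slight mass decreases in nuclear reactions release so much energy.
```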
A discrete, rather than continuous, nature of matter and energy was also developed. In 1900, the concept of atoms had not achieved complete acceptance, and physical concepts were generally considered continuous. By 1930, the discrete, atomic nature of matter was almost universally accepted. Another violation of continuity was discovered by Max Planck, and further worked out by Einstein, in the quantum nature of light: light energy could exist only in multiples of a discrete quantity of energy proportional to the frequency of the light. The particle nature of light, which had been rejected for centuries in favor of the wave theory, returned in the photons into which light was now discretely divided.[3]

[3] This did not eliminate the wave understanding of light: while some experimental contexts seemed to demand the particle nature of light, others (indeed, most) required that light act as a wave. For more on the birth of the quantum, and the errors of the Copenhagen Interpretation of quantum mechanics, see A New Quantum Physics: Rejecting Zeus.
Over the course of only three decades, these four basic concepts of physical science were radically transformed, by work on the domains of the very small and the very large. How might work on the domain of life further transform these basic concepts? Vernadsky asks: “Cannot the life sciences effectively change the fundamental representations of the scientific universe—the representations of space, time, energy, matter—in a radical way? And is this list of fundamental elements of our scientific thought complete?”[4]

[4] V. I. Vernadsky, “The Study of Life and the New Physics,” 1930, translation by Meghan Rouillard, 2015, in press.
A Brief History of Thermodynamics
Up to the First Law
Before the 1800s development of thermodynamics, heat, matter, and mechanical energy were considered quite differently. In the 1700s, heat was attributed to a substance known as phlogiston, which was considered to exist in bodies capable of being burned, and to be released as fiery heat during burning. This also explained why ashes weighed less than the substances that were burned: they had lost their phlogiston. But the great chemist Antoine Lavoisier (1743–1794) demonstrated by careful measurement that when metals were burned (or rusted), the total mass increased, overturning the previous theory. Lavoisier introduced the concept of caloric, a “fluid” of heat, which flowed into bodies being heated.[5] By this theory, caloric was a substance, and its total quantity was conserved as it flowed from hot bodies to cold ones, or was released from them chemically. In the early 1800s, Sadi Carnot studied the operation of heat engines (steam engines), and developed specific mathematical relationships between the flow of heat (as caloric) and the potential physical work accomplished, in order to gauge the potential power of such engines.

[5] Lavoisier also developed a powerful conservation principle: that in all chemical and mechanical changes, the total quantity of matter would not change, and that the total mass of each element, considered individually, would also not change.
Heat and mechanical work later came to be understood under a common system of thought. In the 1840s, James Prescott Joule performed experiments in which the falling of a weight caused paddles to agitate and heat a reservoir of water; mechanical work was transformed into heat, meaning that caloric (if it were a real substance) was being created, and mechanical energy was disappearing. Joule unified heat and mechanical energy by determining a mechanical equivalent of heat, relating the calorie (a measure of heat) to mechanical energy (measured today in joules).[6] The principle of the conservation of energy could now be expanded, in a rigorous way, from the domain of mechanical dynamics to the field of thermodynamics. Processes could involve a transformation between (mechanical) energy and heat flow, but never the creation or destruction of total thermodynamic energy.

[6] Before the derivation of the mechanical equivalent of heat, the calorie—a measure of the heat required to change the temperature of a body—was considered as relating to a different domain of nature than the joule (or the foot-pound, used by Joule before the unit bearing his name existed)—which measured mechanical work. Joule determined that there was a direct measurable equivalence between quantities of heat energy and mechanical energy: a calorie is about 4 joules. The Calories in food (often written capitalized) are actually kilocalories, and are about 4 kilojoules each.
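To make the equivalence concrete, Joule’s paddle-wheel arithmetic can be run in a few lines. A minimal sketch in Python (the weight, drop height, and water mass are illustrative values of my choosing):

```python
# Joule's paddle-wheel experiment, in modern units: the mechanical
# work done by a falling weight reappears as heat in the water.
G = 9.81          # gravitational acceleration, m/s^2
CAL_TO_J = 4.184  # mechanical equivalent of heat, joules per calorie

weight_kg = 10.0  # mass of the falling weight
drop_m = 2.0      # distance it falls
water_kg = 1.0    # mass of water being stirred

work_j = weight_kg * G * drop_m        # mechanical work released, joules
heat_cal = work_j / CAL_TO_J           # the same quantity expressed as heat
# One calorie raises 1 g of water by 1 degree C, so:
delta_t = heat_cal / (water_kg * 1000) # temperature rise, degrees C

print(f"{work_j:.0f} J of work = {heat_cal:.0f} cal, "
      f"warming {water_kg} kg of water by {delta_t:.3f} C")
```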
Further research on the relationships between heat and mechanical work revealed that heat did not exist as a substance. Although a gas under certain conditions and at a certain temperature could be said to contain a certain quantity of energy, it could not be said to contain a definite quantity of heat, of caloric. This was demonstrated by considering different ways of changing a gas system from one state to another, which would result in varying amounts of work or heat-flow, depending on the means taken to go from one state to the other. Thus, one state of a gas could not be said to contain a specific excess of heat compared to another state. The caloric theory of heat faded, and heat was considered not as a substance which would flow between bodies, but only as a measure of flow. Heat flow could be measured, but heat content no longer existed as a scientific concept.
The results of this research were expressed in what are known as the laws of thermodynamics, which encompass both heat and mechanics (hence the name, thermodynamics). The first two laws will be considered separately.
The first law of thermodynamics states that the total quantity of energy in a closed system is conserved. The energy may exist, and be transformed, in various forms, such as mechanical work performed, mechanical and gravitational potential, chemical potential (the ability of a fuel to give off heat when burned), heat flow, and the state-energy of a gas under different volume, pressure, and temperature conditions. Among all such energies, transitions could be made, but no process would result in energy being created or lost.[7]

[7] This concept finds its origins in the work of Leibniz, who wrote of the connection between vis viva (living force—today’s kinetic energy) and vis mortua (dead force—today’s potential energy), as in his 1695 Specimen Dynamicum.
A specific maximum efficiency of the transformation of heat-flow into mechanical work was developed, furthering Carnot’s work. Given two heat reservoirs of different temperatures, one hot (TH) and the other cold (TC), the maximum fraction of heat from the hot reservoir converted into work, rather than lost in heating the colder reservoir, is (TH − TC)/TH, known today as the Carnot efficiency.[8] Considered in the opposite direction, this relationship also gives the maximum possible efficiency of a refrigeration cycle: the amount of work required to cause heat to flow from a colder reservoir to a hotter one, as in moving heat from the interior of a refrigerator to the air of the kitchen. This maximal efficiency follows from general considerations of systems of gases and the conservation of energy.[9] The transformation of heat into mechanical work, at the full Carnot efficiency, can be reversed by the application of mechanical work, in order to heat the hot reservoir, and bring the heat from the cold reservoir back to the hot one.

[8] For example, a combustion engine operating at a temperature of 1000 K (about 730°C or 1340°F) as TH, in an environment of temperature 300 K (about 27°C or 80°F) as TC, would have a maximum efficiency of (TH − TC)/TH = (1000 K − 300 K)/1000 K = 700 K/1000 K = 70%. Thus, at most 70% of the heat generated in the engine could be converted into mechanical work, while 30% would heat the colder surrounding environment. This is the maximum theoretical efficiency of an engine operating at these temperatures.
[9] Note that it does not require entropy or the second law of thermodynamics, which will be addressed below.
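The Carnot relationship is simple enough to state as a function. A minimal sketch in Python (the function name and guard conditions are my own):

```python
# Carnot efficiency: the maximum fraction of heat drawn from a hot
# reservoir (t_hot) that can become work, the rest flowing to a cold
# reservoir (t_cold). Temperatures must be absolute (kelvin).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot > t_cold > 0 (kelvin)")
    return (t_hot_k - t_cold_k) / t_hot_k

# The engine of footnote 8: 1000 K source, 300 K surroundings.
print(carnot_efficiency(1000.0, 300.0))  # 0.7, i.e. 70%
```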
When the principles of mechanical physics and this law of thermodynamics are combined, it is possible to describe how a thermomechanical system will change. At each moment, given the state of the system and how it is currently changing, the state and nature of change it will take in the next moment can be determined, as can the state and nature of change it must have had in the preceding moment.[10] For these principles, the past and the future are equivalent, differing by being in opposite directions according to time, but having no fundamental difference otherwise.[11] The future can be determined just as accurately as the past. In this way, past and future, forward and back in time, are analogous to left and right: they can be described as opposites, but there is no fundamental distinction between the two directions themselves.[12] Max Planck writes, in his Treatise on Thermodynamics, “From the point of view of the first law, the initial and final states of any process are completely equivalent.”

[10] For example, knowing the position and speed of a swinging pendulum at one moment allows the determination of its later motion as well as its previous motion.
[11] An ideal heat engine, reversed, is an ideal refrigerator. The transformation of heat into work, with some flow of heat into a cold reservoir, can be reversed by the application of work to add the heat back to the hot reservoir, moving heat from the cold reservoir to the hot one at the same time, in the process of refrigeration. Ideal heat engines and refrigerators are the same process, viewed in opposite directions of time.
[12] This can help explain the difficulty children have in developing a sense of left and right.
These physical principles can be said to be time-symmetrical, or reversible. Nothing in the phenomena described by physical dynamics or the first law of thermodynamics serves as a basis for intrinsically differentiating the past from the future, or before from after—there is nothing to define a particular direction of time.
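This time-symmetry can be demonstrated numerically with the pendulum of footnote 10: integrate it forward, reverse its velocity, and integrate again; it retraces its path exactly. A minimal sketch in Python (the pendulum parameters are illustrative; velocity-Verlet integration is chosen because the scheme itself is time-reversible):

```python
import math

# A frictionless pendulum: integrate 5 s forward, reverse the velocity,
# integrate 5 s more, and it returns exactly to its starting state.
# Nothing in the dynamics distinguishes forward from backward in time.
G_OVER_L = 9.81    # g / length for a 1 m pendulum, 1/s^2
DT = 0.001         # time step, s

def step(theta, omega):
    a = -G_OVER_L * math.sin(theta)          # angular acceleration now
    theta_new = theta + omega * DT + 0.5 * a * DT * DT
    a_new = -G_OVER_L * math.sin(theta_new)  # acceleration after the step
    return theta_new, omega + 0.5 * (a + a_new) * DT

theta, omega = 0.5, 0.0                # initial angle (rad), released from rest
for _ in range(5000):                  # forward in time
    theta, omega = step(theta, omega)
omega = -omega                         # "play the film backward"
for _ in range(5000):
    theta, omega = step(theta, omega)

print(f"angle after reversal: {theta:.9f} rad (started at 0.5 rad)")
```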
The Second Law is Developed
What is known as the second law of thermodynamics was developed in the mid- to late-1800s by Rudolf Clausius, Ludwig Boltzmann, Josiah Willard Gibbs, and Max Planck, among others, based on common observations: heat was never seen to flow of its own accord from a cold object to a hotter one; gases tend to diffuse rather than to concentrate; and friction, found in almost every process, gives rise to a definite direction in time by turning motion into heat.
“Entropy” is required to understand this second law. Clausius proposed entropy as a function of state of a gas, a new function of its mass, volume, temperature, and pressure, and noted that, in a closed system (without external work or heat-flow), this quantity was never found to decrease.[13] Entropy was a new physical concept, not derivable from earlier theories, whose increase accorded with the forward flowing of time. Entropy does not change if a gas is compressed or expanded by outside work, without the flow of heat, as in an ideal gas shock-absorber, making such a process reversible. Entropy can be roughly understood as measuring the amount of energy in a system unavailable to do work: of the total energy in a system, the amount that was “free” to do work would decrease over time. By use of this metric, it was now possible to define the direction of time: the entropy of a closed system can never be lower “after” than it was “before.” Rather than before and after simply being opposites, a specific concept, entropy, now defines an arrow of time.[14]

[13] For readers desiring a more specific treatment, the change in entropy can be briefly stated as ΔS = ∫dQ/T, where the change in entropy (S) is the integral of the heat absorbed (Q) divided by the temperature (T). In his Treatise on Thermodynamics, Planck derives a general function for the entropy of a gas: Φ = M(c_v log T + (R/m) log v + constant), where Φ is the entropy, M the mass of the gas, c_v the specific heat of the gas at constant volume (the amount of heat required to change its temperature), T the temperature, R the ideal gas constant, m the molecular weight, and v the volume per unit mass of gas, from which he shows that the change in entropy is ∫dQ/T under certain conditions.
[14] Note that while increasing entropy gives the direction of time, it does not define a rate of time. Interestingly, the dynamical laws of normal physics do give rates in time, without defining a direction in time.
Under the second law, reversible and irreversible processes are distinguished. Reversible processes involve no change in entropy. Examples include the swinging of a pendulum in a vacuum (without friction), the motion of a planet around a star, or the ideal compression and expansion of a gas shock-absorber.[15] Reversible processes do not have an intrinsic time or a final state that they head towards: both a future state, and a past state, could be reached by the system without external work or heat. Irreversible processes, however, do involve a change in entropy, moving always towards higher entropy as time moves forward. Examples of irreversible processes are the creation of heat by friction, the expansion of a gas without doing work, or heat flowing from a hotter body to a cold one.[16] Since irreversible aspects are present in almost any process, the concept of a truly reversible process has a meaning which is mostly theoretical.

[15] Ideal, in the sense that the compression and expansion occurred very slowly, and the cylinder was so well insulated that no heat flowed to or from the surroundings.
[16] Two brief examples may be useful. First, take two bodies at different temperatures, TH (hotter) and TC (colder), and let heat flow from the hot body to the colder one. Using ΔS = ∫dQ/T, the change in entropy of the hot body, as it loses heat, is −Q/TH, while the change for the colder body, gaining heat, is Q/TC. The quantity Q (heat flow) is the same for both bodies, since we assume that all heat flowing from the hot body went to the cold one. The total change in entropy is therefore ΔS = −Q/TH + Q/TC. Since TH is greater than TC, the negative term −Q/TH is smaller in magnitude than the positive term Q/TC, and the entropy has increased. As a second example, consider the expansion of a gas without doing any work and without external heat flow. Take two gas tanks connected by a valve, one full of gas, and the other totally empty. If the valve is opened just a bit, gas will flow into the empty tank until both have the same pressure. Experiments revealed that the temperature of the gases would not change overall, and the energy of a gas had been shown to depend only on its temperature, not its pressure or volume. Therefore, this experiment results in no change in the internal energy of the gas. But if we use Planck’s entropy, Φ = M(c_v log T + (R/m) log v + constant), before and after the motion of the gas, we see that the only term that has changed is v, the volume per unit mass of the gas, which has doubled from v to 2v. Therefore, ignoring the terms that have not changed, the change in entropy is ΔΦ = M(R/m) log 2v − M(R/m) log v = M(R/m) log 2, which is positive. The entropy of the gas has increased with this expansion, and this is indeed the direction in which the process is always observed to occur: gases expand to fill available volumes, rather than spontaneously concentrating.
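Both irreversible examples of footnote 16 can be checked numerically. A minimal sketch in Python (the heat quantity, temperatures, and mole count are illustrative values, not from the text):

```python
import math

# The two irreversible processes of footnote 16, in numbers.
# 1) Heat Q flowing from a hot body to a cold one: dS = -Q/TH + Q/TC.
Q, T_HOT, T_COLD = 1000.0, 400.0, 300.0      # joules, kelvin
ds_heat_flow = -Q / T_HOT + Q / T_COLD
print(f"heat flow:      dS = {ds_heat_flow:+.3f} J/K")   # +0.833 J/K

# 2) Free expansion of an ideal gas into double the volume. From Planck's
# entropy formula only the volume term changes: dS = n * R * ln(2).
R = 8.314                                     # gas constant, J/(mol K)
n_moles = 1.0
ds_expansion = n_moles * R * math.log(2.0)
print(f"free expansion: dS = {ds_expansion:+.3f} J/K")   # +5.763 J/K

# Both changes are positive: entropy increases, marking "after."
```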
The atomic theory of matter made for a complication in the understanding of entropy, which was resolved through the use of statistics. On the micro-scale, the individual atoms and molecules making up gases were considered to behave according to dynamic (reversible) physical laws, and thus the difficulty arose of reconciling an increase in entropy (irreversibility) on the macro-scale with the reversible nature of the particles which made up the macro-states. How could there be an intrinsic direction of time, if all the particles making up irreversible processes could individually be time-reversed?[17]

[17] Consider, again, the example of gas concentrated in one tank being allowed to fill both tanks. If this entire process were paused, and then played in reverse, and we watched it with an imaginary molecular microscope, the motion of the gas particles in reverse (concentrating in the one tank and emptying the other) would not violate any laws of physics. Yet, gas is never observed to do this. How can the direction of gas motion be explained by entropy in the large, while examination of its particles reveals a process that could go in the other direction? This was the sort of problem addressed by Boltzmann’s treatment of entropy.
Boltzmann developed statistical mechanics to explain entropy from another standpoint. Rather than being a function of the conditions of a gas as a whole, entropy could be understood as a function of the number of possible configurations a gas’s particles could take that would correspond to a given macro-state.[18] For example, the number of ways for gas particles to exist in two connected gas tanks is immensely greater than the number of ways for them all to be in only one tank, explaining the greater entropy of the former distribution. The existence of so many more ways for the gas to be in both tanks explained its expansion to fill both. With these refinements, the disparity between reversible micro-phenomena and irreversible macro-phenomena was bridged.[19]

[18] While the second law is frequently expressed as an increase in “disorder,” with recourse to examples of the disorder of macro-scale objects which do not change on their own (such as messy rooms and disorderly desks), this is the mis-application of the everyday word “disorder” out of its meaningful context of micro-states (where it is still a regrettable word) to the totally different context of macro-states. As Professor Frank Lambert humorously points out, objects in a room do not inherently move or lurch towards disorder. They are not a closed system, since the cause of their motion is the people moving them, and does not lie in the objects themselves. See his excellent sites, available from secondlaw.oxy.edu, for more on the pedagogical disaster of using “disorder” to explain entropy.
[19] Return to the example of footnotes 16 and 17. If the motion of all the gas particles were instantly reversed once they filled the second tank, they would indeed all move back into the first tank. But among all the ways of starting with the general state of two tanks full of gas, how many among the possible configurations of particle positions and momenta in the two tanks would result in all the particles moving into one tank? Only an exceedingly, incredibly tiny number among the impossibly enormous number of potential configurations. This is why, statistical mechanists would say, we do not observe gases to spontaneously concentrate into smaller volumes. Although it is not strictly impossible from the molecular standpoint, it is incredibly unlikely.
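The counting argument of footnote 19 can be made concrete. A minimal sketch in Python (the particle counts are illustrative): each particle is equally likely to be found in either of two equal tanks, so the probability that all N sit in one tank is (1/2)^N.

```python
import math

# Each of N gas particles is independently equally likely to be found in
# tank A or tank B; the chance that ALL N happen to sit in tank A is (1/2)^N.
for n in (10, 100, 1000):
    log10_p = -n * math.log10(2.0)
    print(f"N = {n:4d}: P(all in one tank) = 10^{log10_p:.0f}")

# N = 1000 already gives odds of about 1 in 10^301, and a real tank holds
# on the order of 10^23 molecules. Spontaneous concentration is not
# forbidden by the particle dynamics; it is overwhelmingly improbable.
```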
The laws of thermodynamics, combined with all the other laws of mechanical physics, unify heat and mechanics, and provide an arrow of time. While the physics of reversible processes treats before and after only as opposites, the physics of irreversible processes gives a definite direction in time for the evolution of the systems it applies to. The second law gives an intrinsic metric to differentiate before from after.
The First Law of Thermodynamics is Not Universal
The first law of thermodynamics states that in all physical processes, energy is neither created nor destroyed. While Einstein’s demonstration of the interrelationship of matter and energy shows this principle to be untrue, because energy and mass can be interconverted (as in nuclear processes), the changing nature of “energy” will be the focus here, in two respects: (1) energy is actually created by human beings, and (2) in a case of re-contextualized meanings, economic “energy” is distinct from the energy of physics, as seen when we consider energy flux density.
In the 1800s, as the laws of thermodynamics were developed, any attempt to measure the total energy of the planet would have erred dramatically. Such an estimate would have included such factors as sunlight, chemical compounds (such as those in living matter, and in hydrocarbons in the crust), elevations of physical structures (gravitational potential), and the flow of wind and water. While making such an estimate of the total energy available on the planet as a whole would be quite difficult, this is not the greatest problem with undertaking such an endeavor. Rather, consider what would not have been included at all. Such a survey in the 1800s would not have included the nuclear fission energy potential of the planet’s uranium and thorium, or the fusion potential of its deuterium. The estimate of global energy would have been wildly off, not only for reasons of lack of knowledge of the composition of the body of the Earth, but because the domain of possible sources of energy was incomplete: nuclear processes were unknown.
Did the human species increase the amount of energy on Earth, or only discover already-existing energy? Resisting the urge to answer the question in physical terms which would exclude a specifically human response, the honest answer can be given: we have increased the energy available to the human species; we have increased our economic potential. A useful distinction can be made between the two natures of “energy”: it refers to something we have discovered about the external physical world, and at the same time to a mental tool that we use to advance our thought and power. Human beings create resources, including energy.[20]

[20] Before metallurgy, malachite was used as a cosmetic; it became the main ore for producing copper when the new process of metallurgy turned it into a resource. Petroleum was not a resource before the chemical era—it was a mess.
Another example illustrates the importance of considering the quality of energy. Rather than only the quantity of energy, consider the energy flux density, the concentration of the power applied to a process. This example is that of heating a home. We will consider two ways of providing the heat. The first is the direct creation of heat in the home by burning fuel (oil or natural gas), and the second is using fuel to produce electricity in a power plant, and then using that electricity to power a heat pump. In the first case, the efficiency of the furnace or boiler would be measured by the annual fuel utilization efficiency (AFUE),[21] which is around 85% for a typical modern unit. This means that 85% of the heat in the fuel is delivered to the home.

[21] U.S. Department of Energy, http://energy.gov/articles/energy-saver-101-infographic-home-heating
Now, consider the case of using the natural gas to produce electricity, to then run a heat pump. A typical natural gas power plant converts only about 42% of the gas’s heat into electricity.[22] Yet electricity, being a higher quality, more dense form of power than mere heat, is able to accomplish much more than heat can: running motors, powering electronics, producing metals, etc. And even in heating a home, electricity is more efficient than heat itself.[23] A heat pump moves (“pumps”) heat into the home from the outdoor air, so the electricity from the power plant is multiplied by the pump’s coefficient of performance (COP): the ratio of the heat supplied to the home to the energy supplied to the pump. For common heat pumps, the COP is in the range of 2–4, meaning that several times more heat is brought into the house than the energy used for the pumping. Multiplying the electricity conversion rate (42%) by the COP gives a value of 84–168%: the home receives heat equal to 84–168% of the heat energy in the natural gas fuel. Recall that directly burning the natural gas in a home furnace would have provided 85% of the gas’s heat to the home.

[22] Energy Information Administration, http://www.eia.gov/electricity/annual/html/epa_08_01.html
[23] Except in very cold areas, where heat pump efficiencies are too low to be useful.
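The comparison between the two heating paths reduces to a few multiplications, using the figures above. A minimal sketch in Python (the function names and the 100 kWh fuel quantity are mine):

```python
# Heating a home two ways with the same quantity of natural gas,
# using the efficiency figures cited above.
AFUE = 0.85        # furnace: fraction of the fuel's heat delivered to the home
PLANT_EFF = 0.42   # power plant: fraction of the fuel's heat made into electricity

def furnace_heat(fuel_heat_kwh):
    """Heat delivered by burning the fuel directly in a home furnace."""
    return fuel_heat_kwh * AFUE

def heat_pump_heat(fuel_heat_kwh, cop):
    """Heat delivered by making electricity, then running a heat pump."""
    return fuel_heat_kwh * PLANT_EFF * cop

fuel = 100.0  # kWh of heat contained in the fuel
print(f"furnace:           {furnace_heat(fuel):.0f} kWh")              # 85 kWh
for cop in (2.0, 4.0):
    print(f"heat pump, COP={cop}: {heat_pump_heat(fuel, cop):.0f} kWh")  # 84, 168
```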
Therefore, converting natural gas to electricity and using a heat pump powered by that electricity can provide up to roughly twice the home heating provided by the direct use of natural gas (168% compared to 85%). And this is a case where the effects are comparable: supplying heat. Without being transformed into electricity, natural gas cannot power a telephone, a robot in a factory, or a traffic light system. No amount of natural gas can produce an x-ray image of a broken bone.
This is a simple example of what Lyndon LaRouche refers to as a “curious feature” of technological development in his economics textbook So, You Wish to Learn All About Economics?, whereby
“we tend to accomplish much higher rates of work with the higher energy-flux density of a fraction of the total power supplied to the machine, than with the entire power supplied at relatively much lower energy-flux density. It appears that less power accomplishes more work than a greater amount of power.”[24]

[24] Lyndon H. LaRouche, Jr., So, You Wish to Learn All About Economics?, second edition, EIR News Service, 1995, p. 10.
The “energy” of electricity can be measured in the same physical units as the “energy” of heat, or the chemical “energy” of molecular structure, but these units do not fully express the economic usefulness of that energy. By considering the intensity of the energy, we can differentiate among levels of energy potential, such as the possibilities of: (a) a wood-powered economy, in which energy can be used for heating, cooking, some material treatments, and some metallurgy; (b) a coal-powered economy, in which steam engines can economically be used to transform heat into motion, allowing dramatic changes in production and transportation; (c) an electricity-using economy, where energy can be moved along a wire rather than by transporting fuel, and where the potentials for production are increased by motors, metallurgy is transformed by electrolysis, communication changes fundamentally, and computer automation transforms the nature of productive work; and (d) a quantum-physics and nuclear economy of dramatically increased power capabilities,[25] laser and electron-beam technologies, and, with fusion, the potential to develop control over the inner solar system.[26][27]

[25] Nuclear transformations are five to six orders of magnitude more energy-dense (100,000 to 1,000,000 times greater) than chemical fuel. See Jason Ross, Forging Fusion.
[26] As in the use of fusion-powered spacecraft to reach any part of the inner solar system within a matter of weeks, rather than years. This is a necessity for planetary defense against asteroids and comets, for example.
[27] For more on this series of economic platforms, see Physical Chemistry: The Continuing Gifts of Prometheus.
While the first law of thermodynamics does apply on the physical level, where energy is neither created nor destroyed (excepting subatomic processes, where energy and matter are related by E=mc²), the “economic energy” available to human economy, as qualified by the type of energy, most certainly does increase, by the human process of creative discovery. This occurs even when less total energy is recovered from fuel sources, by using that more concentrated power to greater effect.
Wild Misapplications of the Second Law
The meaning of the second law of thermodynamics has been greatly simplified and the field of its application dramatically extended, giving it an entirely foreign pop-science meaning. The two major misapplications are in the concept of the “heat death of the universe,” and in the notion that the second law indicates a universal increase of “disorder.”
William Thomson (later Lord Kelvin) is credited with first expounding on the inevitable dissipation of all potential energy into mechanical motion and heat, with a result that “would inevitably be a state of universal rest and death, if the universe were finite and left to obey existing laws.” His colleague Hermann von Helmholtz wrote of the “heat death” of the universe as the ultimate state it would reach, in which no more energy would be available for any processes to make use of. This is the ultimate extrapolation: to apply current knowledge (which at the time did not include nuclear processes[28]) to the entire universe, about which it will always be presumptuous to assume anything approaching a complete understanding.

[28] Resulting in Thomson’s inaccurate estimate of the age of the Sun.
The other problem plaguing the second law, disturbing its repose as a legitimate and useful physical principle (in its proper domain), is the notion that it insists that the universe will become more “disorderly” over time. Although Boltzmann did indeed use the word “disorder”[29] in his discussion of entropy, it was in the context of expressing a characteristic of aggregates of gas molecules: the number of states in which the gas particles could be said to be within a certain range of a given macro-scale condition, such as temperature, pressure, and volume. For example, a higher temperature, allowing a greater diversity of particle motions, was thus a state of higher entropy. This use of the word “disorder” applies to aggregates of microscopic particles, moving about and interacting of their own accord. It manifestly does not apply to objects on a desk, dirty clothing heaped on the floor, etc. Clothes do not move around in large groups, colliding with each other and imparting kinetic energy according to statistical rules. The books in a library do not unshelve themselves and become disorderly at night while the librarians are not supervising them. These objects move due to outside causes: people. They do not spontaneously move of their own accord towards states characterized by a greater diversity of possible distributions.

[29] Or rather, the German word Unordnung.
There is no universal commandment that “disorder,” in whatever context anyone might wish to apply the word, increases. The second law of thermodynamics, is, as its appellation indicates, a law of thermodynamics: not of biology, society, or clothing.
The Second Law of Thermodynamics is Not Universal
Beyond these wildly and foolishly irresponsible extrapolations of logic and linguistics, processes of life and human cognition provide further examples of the second law not being universal in scope. Although the second law cannot actually be applied to macro-scale processes to which it has no relevance, the prevalence of thoughts about universal “entropy” makes it worthwhile to address order and complexity on larger scales, with the caveat that this discussion is of the commonly used notions of entropy and disorder, rather than the actual physical concept.
A powerful economic concept helps make this clear—the concept of the necessity of progress. In his economics textbook,[30] Lyndon LaRouche develops a global measure of economic progress: the potential relative population density of a society, as a function of that society’s scientific and cultural practice. Relative to the quality of improvement of land, how many people could potentially be supported per land-area? This is the potential relative population density (PRPD). Economic value lies in increasing the rate of increase of PRPD. LaRouche writes that while it is obvious that technological regression necessarily implies a decrease in PRPD, the implications of simply ceasing to progress are less clear. By the drawdown of more concentrated resources (energetic and raw-material, for example), LaRouche reasons that the physical cost of providing the base resources will necessarily increase over time in a technologically static society (through the necessity of more difficult mining, etc.), and this increasing cost will lead to a reduction of economic capabilities overall. Without continued technological increase, society will regress; it is impossible to stay still.

[30] See chapter 2 of LaRouche’s So, You Wish to Learn All About Economics?
From this context, the increasing entropy, and decreasing free thermodynamic energy of physical systems, can be thought of as analogous to the relative, local regression of a system characterized, more universally, by its discontinuous advancements. Clear examples are seen in the very similar domains of life on evolutionary scales of time, and of human economics.
Consider the development of photosynthesis. Before the development of life capable of using the energy of sunlight, terrestrial life depended on chemical energy, such as that utilized by organisms living around deep-sea vents emitting high-energy molecules such as hydrogen sulfide. The total energy available to life was small, and was generated from deep-earth processes. From the thermodynamic standpoint, the energy potential of these molecules (and the processes generating them) would eventually be used up as the gases escaped the Earth’s crust, and the Earth’s cooling temperature produced less of them. While a modern-day environmentalist may have called for the conservation of scarce hydrogen sulfide, and for measures to be taken for its more efficient use, a different route was taken by life. The development of photosynthesis meant that an entirely new, and immensely vast energy source now became available to life, capable of supporting a great deal more biological material and energy flow, and an increase in the biogeochemical energy of living matter. Later, the move of life to land, and the new structures and processes required for it, unlocked an increasing photosynthetic capability.
While the local tendency may appear to be towards decreasing free energy when examining small physical systems, the characteristic of life as a whole, and of human society, is in precisely the opposite direction. Or rather, not precisely opposite, in that the upward shifts occur as leaps, rather than the continuous decrease of free energy expressed by the second law of thermodynamics. An anthropomorphized system of hot gas may look with dismay and dread at the locally decreasing free energy, but life and cognition are not universally characterized in this way.
Vernadsky demonstrates that the biogeochemical energy of life has increased over time. In a 1926 speech on evolution, Vernadsky developed what he called his “second biogeochemical principle,” which states that the evolution of species has an intrinsic direction, moving towards species which increase their chemical and energetic effect on the surrounding environment (in Vernadsky’s terms, increasing the biogenic migration of atoms). More recent studies strongly support Vernadsky’s second principle, indicating that the evolution of life (as far as currently observed) defines an arrow of time, always moving in one direction: towards greater energy use per species and greater corresponding effects on the biosphere.[31]

[31] See “Biospheric Energy-Flux Density,” 21st Century Science and Technology, Spring 2013. Also, Benjamin Deniston, “Towards Demonstrating Vernadsky’s Second Biogeochemical Principle: The Implications of Metabolic Scaling Laws for Vernadsky’s Views on Evolution and the Spacetime of Living Matter,” forthcoming.
Looking at a larger (cosmic) scale, we see the irrelevance of physical entropy in understanding the development of the universe. As an example, consider the big bang theory, according to which the lowest-entropy, highest free-energy state the universe was ever in occurred some billions of years ago, and everything has gone downhill since, from a thermodynamic point of view. Again, any increase in this physical quantity of entropy is absolutely irrelevant to cosmology when considering the manifest increase in order and complexity: the development of galaxies, stars, planets, and, around our star, Sol, life and cognition. Perhaps a galaxy does have higher entropy than a collection of cosmic dust or the subatomic soup supposedly existing within the first microseconds after the big bang, but this does not in any way indicate that it has more disorder, or is less interesting in its characteristics.
A human being, turning food into bodily motion and biological upkeep, is thermodynamically a drain on free-energy,[32] and yet is the source of creative developments that qualitatively increase the mental tools and the useful energy available to the species: human free-energy increases, without paying any regard to locally decreasing thermodynamic free-energy.

[32] Throughout this report, the use of “free-energy” is in the thermodynamic sense, as developed by Gibbs. It has no relation to “zero-point energy,” “vacuum energy,” or any other form of purported energy which is “free” in the sense of having no cost.
Even the statistical-mechanics interpretation of entropy, according to which increasing entropy is an evolution towards states of greater likelihood, reachable in a greater number of ways, is opposite to the changes seen in life and humanity. Far from moving towards states of greater probability, evolutionary changes in life, and economic changes in humanity, move to states of zero probability, of previous impossibility. A future comes to be, which the physical past could not have created. Bronze Age humanity had available to it an entire array of processes and materials that simply did not exist in the Stone Age, just as mammals have molecules and biological processes that never existed in earlier forms of life such as reptiles, and which could not have existed, because earlier life did not regulate its temperature.
In sum, we need not lose sleep over a physical quantity (entropy) getting larger, fretting that the future will be a disorderly heat-soup; we can instead look at the actual development over time of the universe and of our species, and see that they are characterized by contrary processes. Our understanding of the universe must include these phenomena of life and cognition, or it is appallingly incomplete. Increasing complexity, increasing biological energy, and increasing economic energy may be coherent, locally, with the second law of thermodynamics, but are certainly not explained by it. The second law of thermodynamics is not universal.
Reflections on Time
Comprehension of the characteristics of time in thermodynamics and physics gives a greater pungency to the different kinds of time seen in the biosphere and noosphere. To review: Dynamical laws of physics, with the first law of thermodynamics, govern reversible processes, for which before and after are opposite directions in time, but lack any inherent difference. The irreversible processes covered by the second law of thermodynamics have a direction in time, an inherent distinction between before and after, based on the concept of physical entropy increasing in the direction of the flow of time.
Two prominent differences seen in the times of life and of human thought are their quantized character, and their dependence on context. The quantized, discrete nature of these times is seen in the development of new “technologies,” such that before and after are distinguished not by a scalar quantity increasing in the direction of time-flow, but by the fact that after cannot be reached from before. That is, humanity in the nuclear age creates states of matter which could never have been created in earlier ages of human development, because the requisite technologies did not exist.[33] The time of human development is characterized by increasing our dimensions of action, rather than by any (scalar) measure which has a meaningful value in all contexts.[34]

[33] Time, alone, did not bring about the future state.
[34] Lyndon LaRouche has referred to the work of Bernhard Riemann on Abelian functions as a means of more directly expressing increasing levels of complexity, incommensurable with the previous level. On this topic, see Jason Ross, Bernhard Riemann: Potential and Abelian Functions, part 1 and part 2.
The existence of uniquely biological time is revealed by study of characteristic metabolic rates for classes of animals over evolutionary time. Joint examination of metabolic rates and times for biological periods (such as time of population doubling, or heart rate) reveals characteristic specific metabolic rates for classes of animals. For example, reptiles have an average energy use per gram per lifespan of approximately 230 kJ/g/lifespan. For amphibians it is less, 210 kJ/g/lifespan, while for mammals it is 1,600 kJ/g/lifespan, and for birds, 5,280 kJ/g/lifespan.[35] The increasing values, roughly characteristic of each of these classes of vertebrate life, point to the increasing rate of biogenic energy flow over evolutionary time, and, even more significantly, point to an inherently biological unit of time. These class-specific metabolic rates arise only when the unit of time considered is unique to each species: its average lifespan. Thus, a meaningful empirical generalization is arrived at—a class-wide specific metabolic rate—through the use of a specifically biological unit of time, one related to a biological process (the time of generations), rather than a physical process, and which exists only in the context of the form of life being considered. This is an example of the development of the concept of time, due to the study of life, exactly as Vernadsky called for, and foresaw.[36]

[35] Benjamin Deniston, “Towards Demonstrating Vernadsky’s Second Biogeochemical Principle,” forthcoming.
[36] Within life, Vernadsky differentiated metabolic time, generational time, and evolutionary time, and considered the characteristics specific to each. See “The Problem of Time in Contemporary Science,” unpublished translation.
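The point about lifespan as a biological unit of time can be illustrated with rough numbers: mass-specific metabolic power varies enormously across mammals, but multiplied by each species’ own lifespan it lands within a narrow band. A minimal sketch in Python (the figures are order-of-magnitude physiology estimates, my illustrative assumptions rather than the article’s data):

```python
SECONDS_PER_YEAR = 3.15e7

# name: (metabolic power per gram of body mass, W/g; typical lifespan, years)
ANIMALS = {
    "mouse":    (2.0e-2,  2),
    "human":    (1.3e-3, 70),
    "elephant": (4.6e-4, 60),
}

for name, (watts_per_gram, years) in ANIMALS.items():
    kj_per_g = watts_per_gram * years * SECONDS_PER_YEAR / 1000.0
    print(f"{name:9s} {watts_per_gram:8.1e} W/g -> {kj_per_g:5.0f} kJ/g per lifespan")

# The W/g values span a factor of ~40, but the per-lifespan totals stay
# within a factor of ~3 of each other, near the article's mammalian figure
# (~1,600 kJ/g/lifespan). The lifespan, a biological unit of time, is what
# makes the class-wide regularity visible.
```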
Human Time
An entirely new characteristic of time exists when considering the process of human cognition: the characteristic of now. Recall that: the first law of thermodynamics distinguishes before and after only as opposites; the second law of thermodynamics gives an inherent direction to time; living processes give a quantized nature of time (as in the cyclic nature of generational time); and evolutionary changes in life give an incommensurable direction of time, where after is distinguished from before by the existence of processes that could not have existed before. While the meanings of before and after have developed, none of these kinds of time has yet required a now.
Now is a particular kind of moment, distinguished from a then. While thens exist in physics, and can be distinguished from each other, now does not exist; it is not a concept required by phenomena. Now is not the moment between before and after; it is the moment between the past and the future. The laws of thermodynamics give a distinction between before and after, but these befores and afters can be before or after any particular moment, any then. Before and after are not the same as past and future. When is “right now”, physically? What differentiates, fundamentally, now from an hour ago?
Furthermore, under Einstein’s relativity, the concept of a universal time, of the possibility of a shared moment in time for different observers, has vanished with the disappearance of simultaneity. Recall that one event may appear to follow another to one observer, while both events could appear simultaneous to another observer.[37] Therefore, a single moment in time, a universal simultaneity, cannot exist. There can be no universal “now.” In what way, then, can now have meaning? What sort of phenomenon requires the existence of a “now”?

[37] This is demonstrated in The Extraordinary Genius of Albert Einstein, starting at 1:04:30.
It is by the nature of free will that human beings have a now: the time of decision. The opportunity to willfully create incommensurable shifts, like those seen in life only over evolutionary time, is present at every moment to the human individual: the opportunity in each now to make choices that are not pre-determined, and which, when creative, distinguish the future fundamentally from the past by increasing the possible domain of human action. These nows of discovery, these moments of creative insight, are the reason for the existence of economy as a characteristic of the human species, not seen in other life.
The direction of humanity does not inexorably “tend” in any direction; it is a series of nows, of constant opportunities for decision. When human time is used to make a fundamental discovery, the truest substance of the universe is made visible to the human mind: the substance of incommensurable change itself.
What are “Universal” Principles?
If extending principles beyond the domains of their discovery is fraught with danger, can any truly universal principles ever be known? Consider this warning from Vernadsky:
“In one case, in 1824 the young French engineer Sadi Carnot founded thermodynamics. Carnot’s principle defines the unidirectional course of a process in time. Thirty years later, Rudolph Julius Clausius, then a professor at Zürich, in the principle of entropy, generalized this unidirectional process (which is expressed geometrically in space-time by a polar vector of time) to all of reality, as defining the “end of the world.” In this form, this was an extrapolation of a logical thought, but not a phenomenon of reality.”[38]

[38] Vernadsky, “The Problem of Time in Contemporary Science,” unpublished translation.
If universal “phenomena of reality” exist, how can they be sought? What could be the basis of potentially valid insight into universal principles, if extrapolations from scientific principles cannot be counted on? What can we say about the universe, if our knowledge is always incomplete?
Nicolaus of Cusa, in his work De Docta Ignorantia (On Learned Ignorance), which inspired Johannes Kepler and made modern science possible, begins by setting his eyes on what he saw as the most universal of universals, God, and then goes on to discuss the universe in the context of his insight on the Creator. The central concept of knowledge employed by Cusa, that of educated ignorance, informed by a coincidence of opposites, does not use small parts or small ideas to build up to great ones, but details the specific way that a lack of knowledge is itself an appropriate means to express new concepts. For example, Cusa states that God is that maximum to which nothing is opposed, including the minimum, and that He is the light which is not opposite of darkness.[39] These are contradictions, specific unspeakables, to lead the reader towards an incomprehensible concept, by making the incomprehensibility more specific.

[39] Nicolaus of Cusa, De Docta Ignorantia, translation by Jasper Hopkins, Book I.
Cusa then takes up the universe, again communicating his thoughts in the form of impossibilities arising from attempts at understanding which are below the level required for comprehension. He reasons that that which is less than truth cannot measure truth precisely, and applies this insight to astronomy. Cusa maintains that no planet can move in a circle, as a perfect circle, embodying absolute perfection, cannot exist in the created world, and could not be a cause of motion.[40] Similarly, perfectly uniform (circular) motion was impossible: how could two motions be so equal, as not to be capable of still greater equality? Thus, both circles and uniform motion were rejected as true means of understanding the planets. What remained? Nothing, in the language Cusa sought to surpass. His follower, Johannes Kepler, applied physics to astronomy, providing an affirmative higher level of thought that resolved the impossibility of understanding astronomy geometrically, as brought forth by Cusa. Kepler’s use of physics, of a physical principle whose application varied in every moment, shocked his contemporaries, laid an entirely new basis for astronomy, and opened the path to modern science.

[40] De Docta Ignorantia, Book II.
A general conclusion can be drawn from this astronomical example. Contrary to Aristotle’s view that opposites could not co-exist, or the logician’s view that all conclusions exist inherently in the original premises, Cusa maintained the primacy of the process of discovery itself, whereby contradictions drive the mind to hypothesize a new concept, not derivable from the past—a conclusion that defies the premises, rather than following from them. Cusa held that it was through this process, of knowing through specific ignorance, that one could come the closest to seeing God. Resolving paradoxes through developing new metaphors for understanding is more than a technique for arriving at physical truths: this process is the truest substance of nature.[41]

[41] Jason Ross, Metaphor: an Intermezzo.
Every human being is born with the potential to apply this process of discovery: to exist in the efficient immortality of discovering principles and applying them for the betterment of society, where betterment is seen in increasing the capability of fellow people to participate in this most characteristically human of behaviors.
The creation of such a society, free from the oligarchism that currently threatens global thermonuclear warfare, is the most beautiful, the most human, and the most urgently pressing task facing mankind today.
Recorded on March 22, 2015, these are the opening remarks by Jason Ross on aspects of time unique to biology and mankind, from the standpoint of Vladimir Vernadsky and Lyndon LaRouche, with a particular focus on why the laws of thermodynamics do not apply to humanity.