
Key Discoveries in the History of Science

Well all right!!!

It has been a long time since we had a good science-and-evolution versus theist debate.

Learner sounds like the old argument that science is really faith-based, so science and religion are equally valid. Scientism will likely come up.

The way I see it, on cosmology science, philosophy, and religion converge.

There is no way to experimentally prove any cosmology. To me it is useful, but it is mathematical philosophical speculation with numerous interpretations.

Some 'religiously' believe the BB is an absolute fact; I do not. It is a good model.


He is also touching on the watchmaker argument.

An aborigine in the jungle who never saw any technology finds a watch. It does not look natural, therefore it must have been made by somebody.

One then looks at the universe and concludes somebody must have made it. IOW god did it.

Logical but not scientific.
 
The discovery of the fundamental constituents of the Universe has gone in a cycle:
  • Discovery of a few entities. Category may be poorly defined or not even properly recognized.
  • Discovery of many entities. Well-defined category.
  • Discovery of regularities among the entities.
  • Discovery of underlying simplicity and the causes of the regularities.
These entities have gone through that cycle, with all but the Standard Model completing it:
  • Atoms and chemical elements
  • Atomic nuclei
  • Hadrons
  • Standard-Model particles
 
Atoms and chemical elements

Their first stage started at least two and a half millennia ago, with the first known speculations on the nature of matter dating back to then. Pre-Socratic Greeks proposed earth, water, air, fire, and sometimes aether as the fundamental constituents or elements, and Chinese proposed earth, wood, metal, water, and fire. Speculations on how divisible matter is also go back that far, with some Greeks and Indians proposing a limit: indivisible particles or atoms.

But alongside them were recognized different kinds of metals, with seven metals being recognized in Greco-Roman antiquity: gold, silver, copper, iron, tin, lead, and quicksilver. These got associated with the seven “planets”: the Sun, the Moon, Venus, Mars, Jupiter, Saturn, and Mercury, in that order. This is why quicksilver is more usually known to speakers of many languages as versions of “mercury”, though speakers of some languages use versions of “quicksilver” or “silverwater”.

The elements moved into the second stage with Antoine-Laurent de Lavoisier’s publication of his Elementary Treatise of Chemistry in 1789, which featured the first modern definition of elements, and also a list of them:

Light, heat, O, N, H, S, P, C, Cl, F, B, Sb, As, Bi, Co, Cu, Au, Fe, Pb, Mn, Hg, Mo, Ni, Pt, Ag, Sn, W, Zn, CaO, MgO, BaO, Al2O3, SiO2

Aside from light and heat, that is 31 presently-recognized chemical elements or oxides of them. Lavoisier himself only classified them into metals and nonmetals, but other chemists found additional regularities, and even tried to organize them into a Periodic Table of Elements.

In 1869, Dmitri Mendeleev announced his version, which included gaps for elements that he predicted. He likely felt very justified in doing so, because since Lavoisier’s time, some 31 more elements had been discovered, and he could easily have concluded that there may be more to discover. His predictions were successful; the missing elements, gallium and germanium, were discovered in 1875 and 1886, and their properties were close to his predictions.

This moved the chemical elements into the third stage, and while it was happening, scientists were making progress on their divisibility. Between 1798 and 1804, Joseph Proust did several experiments, and showed that some kinds of mixtures follow a Law of Definite Proportions, while some do not. The definite-proportion mixtures we now call compounds. John Dalton showed that atomism explained this law very well, and even estimated their relative weights. His successors expanded on his work, and showed how various properties of gases could be accounted for by supposing them to be swarms of atoms and molecules (groups of atoms stuck together) bouncing around while seldom colliding. These included the Ideal Gas Law:

(pressure) = (number density) * k * (temperature)
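As a quick sanity check of that formula (my own numbers, not part of the historical record), here it is in Python for air at roughly room conditions:

import scipy.constants as sc

k = sc.k      # Boltzmann constant, 1.380649e-23 J/K
n = 2.5e25    # number density of air molecules at sea level, 1/m^3 (rough)
T = 293.0     # room temperature, K

print(n * k * T)   # ~1.0e5 Pa, i.e. about one atmosphere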

But toward the end of the 19th cy., physicists started discovering evidence that atoms were composite. In 1897, J.J. Thomson showed that “cathode rays” are composed of “electrons”, particles with a charge-to-mass ratio about 1800 times that of the highest value for a charged atom. But what was the positively-charged part like? Distributed across the atom, with the electrons residing in it like plums in a plum pudding, thought many physicists.

In 1909, Ernest Rutherford, Hans Geiger and Ernest Marsden decided to test that hypothesis by shooting alpha particles from radium at some gold foil. Most of the alphas went through, but some were deflected, and a few of them bounced backward. This startling result was like an artillery shell bouncing off of tissue paper, wrote Rutherford. The positively-charged part was a “nucleus” much smaller than an atom, typically around 100,000 times smaller.

But why don’t electrons spiral in to nuclei? Solving that conundrum helped quantum mechanics develop. Physicists worked out that electrons are waves as well as particles, and their wave nature means that if they are confined close to a nucleus, then they must move fast, pushing up their total energy. So in an atom, the electrons have spiraled in as far as they could go.
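One can see how far that is with a back-of-envelope minimization (my own sketch, assuming the usual uncertainty-principle estimate p ~ hbar/r for a confined electron):

import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e  = 9.1093837015e-31  # electron mass, kg
e    = 1.602176634e-19   # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Confinement to radius r costs kinetic energy ~ hbar^2/(2 m r^2);
# the Coulomb attraction contributes -e^2/(4 pi eps0 r).
r = np.linspace(1e-11, 5e-10, 100000)   # trial radii, m
E = hbar**2 / (2 * m_e * r**2) - e**2 / (4 * np.pi * eps0 * r)

i = np.argmin(E)
print(r[i])       # ~5.3e-11 m: the Bohr radius
print(E[i] / e)   # ~ -13.6 eV: hydrogen's ground-state energy

The minimum lands right at the Bohr radius: the electron has indeed spiraled in as far as it can go.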

Elements and atoms now moved into the fourth stage, with quantum chemists working out how their properties are derived from the behavior of their orbiting electrons. It takes a lot of computerized number-crunching to get good numbers, but quantum chemists have risen to that challenge, getting reasonable agreement for individual atoms and small molecules.

Atomic nuclei

They quickly skipped through the first stage of their discovery and entered their second stage, as Rutherford and others showed that having nuclei was not just a quirk of gold atoms. Each element had its own kind of atomic nucleus, and Frederick Soddy showed in 1913 that some elements have several kinds or “isotopes”. Rutherford also discovered in 1919 that smashing alpha particles (helium-4 nuclei) into nitrogen makes hydrogen-1 nuclei, which he named protons.

It was quickly discovered that isotopes’ masses were approximately integral multiples of the hydrogen-1 mass, and in 1920 Ernest Rutherford speculated that most nuclei contain “neutral protons”. This sent nuclei into the third stage, and they moved into the fourth stage with the discovery of these neutral protons or neutrons in 1932 by James Chadwick.

This was soon followed by Carl Friedrich von Weizsaecker’s semi-empirical mass formula, which treats nuclei as liquid drops, and which has been reasonably successful. Several physicists then developed a “shell model” for nuclear structure, in analogy with electrons in atoms; it also has had a fair amount of success. Calculating nuclear structure from the interactions of individual protons and neutrons has been very difficult, requiring a lot of number crunching, but that also has been done.
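To make the liquid-drop formula concrete, here is a minimal sketch in Python, using one common textbook fit for the coefficients (my numbers, not from the post):

def binding_energy(A, Z):
    """Semi-empirical mass formula: binding energy in MeV of a nucleus
    with mass number A and proton number Z (one common coefficient fit)."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18  # MeV
    N = A - Z
    B = (aV * A                       # volume term: bulk nuclear attraction
         - aS * A**(2/3)              # surface term: surface nucleons bind less
         - aC * Z*(Z-1) / A**(1/3)    # Coulomb repulsion between protons
         - aA * (N - Z)**2 / A)       # asymmetry term: prefers N ~ Z
    # pairing term: even-even nuclei bind a bit more, odd-odd a bit less
    if A % 2 == 1:
        delta = 0.0
    elif Z % 2 == 0:
        delta = +aP / A**0.5
    else:
        delta = -aP / A**0.5
    return B + delta

# Iron-56 (Z = 26): about 495 MeV with these coefficients;
# experiment gives about 492 MeV (8.79 MeV per nucleon).
print(binding_energy(56, 26))

The five terms are the liquid-drop picture in miniature: bulk attraction, a surface correction, Coulomb repulsion, an asymmetry penalty, and pairing.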

Hadrons

Hadrons entered the first stage with the discovery of the proton. When neutrons were discovered, it was quickly recognized that they and protons are held together in nuclei by a force much stronger than the protons’ electromagnetic repulsion.

Starting in the late 1940’s, more and more strongly-interacting particles were discovered, and hadrons moved into the second stage. Many of them are so short-lived that they only show up as resonances or spikes in the parent particles’ reaction rates. Enrico Fermi famously complained about this particle zoo that “If I could remember the names of all these particles, I’d be a botanist.”

Hadrons entered the third stage with Murray Gell-Mann’s and George Zweig’s quark model of 1964, though for some years afterwards, a lot of physicists had questions about what kind of entity a quark was. Were quarks real particles or some sort of theoretical abstraction? But in 1968, particle-accelerator experiments showed that protons are made of “partons”, and further experiments on them showed that partons were quarks, thus moving hadrons into the fourth stage by 1973-74.

Calculating the structures and interactions of hadrons from first principles can be done, but it requires dividing space and time into a 4D lattice and then doing a LOT of number crunching. But it has recently been possible to predict the mass of the proton to within 2%.

Standard-Model particles

The Standard Model’s particles spent much longer in the first stage than nuclei or hadrons.

The first one discovered was the photon or electromagnetic field, and its discovery followed a sequence similar to my four stages. The first electromagnetic phenomenon to be discovered was visible light, a discovery that is likely as old as humanity. Electrostatic and magnetic effects were next; one of the first to notice them was Thales of Miletus around 600 BCE. But it was not until the 19th cy. that their interconnections were discovered and mathematical descriptions worked out. Electric currents are moving electric charges. Electric charges make electric fields and interact with them. Electric currents make magnetic fields and interact with them. A changing magnetic field makes an electric field around itself. Light is polarized, and when it travels through a material with a magnetic field applied, its polarization plane can rotate (Faraday rotation).

These descriptions were unified in 1873 by James Clerk Maxwell in his famous equations, which included an additional term, the “displacement current”, in which a changing electric field makes a magnetic field around it the way that an electric current does. He discovered wave solutions, with the waves having polarization and traveling at the speed of visible light in a vacuum.
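For reference, in modern vector notation (the standard SI form; Maxwell's own 1873 notation was quite different), the equations read

\( \nabla \cdot \mathbf{E} = \rho / \epsilon_0 ,\quad \nabla \cdot \mathbf{B} = 0 ,\quad \nabla \times \mathbf{E} = - \frac{\partial \mathbf{B}}{\partial t} ,\quad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t} \)

with the displacement current being the last term of the last equation. In empty space they combine into wave equations with speed \( c = 1 / \sqrt{\mu_0 \epsilon_0} \).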

Heinrich Hertz followed up by making electromagnetic waves with macroscopic currents: radio waves. Over the next half century or so, it was discovered that molecules, atoms, and nuclei can act like miniature antennas when they change state, emitting and absorbing infrared, visible, ultraviolet, X-ray, and gamma-ray spectral lines at strengths that can be predicted.

Electrons were the next Standard-Model particle discovered, in 1897. Protons were discovered in 1919 and neutrons in 1932, but they were not shown to be composite for nearly half a century, and the next Standard-Model particle discovered was the muon in 1936. Wolfgang Pauli speculated about neutrinos in 1930, noting the missing energy and angular momentum of beta decays, and they were discovered in 1956. With speculations about quarks and W particles and the like, the Standard Model entered the second stage in the 1960’s.

From there, it gradually moved into the third stage starting in the late 1960’s, with Sheldon Glashow, Steven Weinberg, and Abdus Salam proposing the electroweak theory in 1968 and quantum chromodynamics (QCD) being developed in the late 1960’s and early 1970’s.

The electroweak theory includes the photon, of course, and the W, to explain weak-interaction decays. It predicted a neutral version of the W, the Z, and the first evidence of the Z appeared in 1973. The W and the Z were discovered more directly in 1983, by seeing decays that fitted what was predicted for those particles.

QCD states that quarks are held together by gluons, which also interact with each other. Quarks and gluons cannot go more than about 10^-15 m from each other (quark/gluon confinement), but smashing them at each other probes their behavior at smaller length scales, where they are more weakly coupled to each other. Energetic quarks and gluons make jets of hadrons as they separate, and quark-quark-gluon jet events were first observed in 1979.

Turning to quarks, the first “flavors” discovered were up, down, and strange. Protons are up-up-down and neutrons up-down-down, and the strange quark got its name because of how particles containing it decayed at weak-interaction rates rather than the much faster strong-interaction rates. The charm quark was proposed in 1965 by Sheldon Lee Glashow and James Bjorken to fit in with the weak interactions, and a particle containing it, the J/psi particle, was found in 1974. A year before, Makoto Kobayashi and Toshihide Maskawa proposed that weak-interaction CP violation implies the existence of at least six quark flavors; the bottom quark was found in 1977 and the top quark in 1995.

The Standard Model has been in the third stage since the mid to late 1970’s, but various physicists have been trying to take it into the fourth stage with Grand Unified Theories and Theories of Everything (GUT’s and TOE’s). The track record with atoms, nuclei, and hadrons suggests that there is likely such a theory, but the details are not very well constrained by the Standard Model, and it will be difficult to do particle-accelerator experiments at GUT and TOE energy scales (10^16 and 10^19 GeV — a proton’s mass is about 1 GeV).

In 2012, the Large Hadron Collider's experimenters discovered the Higgs particle, thus discovering all of the particles of the Standard Model.

I have not mentioned gravity, because it is hard to construct a self-consistent quantum theory of it, and because it has had an entirely separate discovery track. But quantum gravity is an essential part of a TOE, because nongravitational particles and interactions are successfully described by quantum field theories. The most successful TOE candidate to date has been string theory, which incorporates gravity in a natural way, but which does not predict the Standard Model with anything close to uniqueness; one can get oodles of other low-energy limits from it in addition to the Standard Model.

So we will be stuck in the third stage of the Standard Model for the near future.
 
Here is a case where there was no key discovery, just a long series of small discoveries that chipped away at an old paradigm until it was gone.

Vitalism is the theory that living things are alive as a result of some "vital force", as opposed to their being that way being an emergent property of appropriately-arranged nonliving matter (mechanism).

Vitalism is an old and popular hypothesis, perhaps an almost universal hypothesis before modern times. "Soul" or "spirit" essentially meant "vital force" in many cases. The ancient Greek atomists, well-known for their philosophical materialism, believed that there are vital-force (soul) atoms as well as other kinds of atoms. Aristotle even went so far as to identify three kinds of vital force: the vegetable soul, the animal soul, and the rational soul. However, it is nowadays completely discredited in mainstream science, though it survives as the "theoretical justification" for various "alternative medical therapies". Yes, forces like "qi / chi" and "prana" are versions of "vital force".

It's hard for me to find any good histories of that subject online; the most I've found is Carbon Chemistry. Some accounts treat Friedrich Wöhler's 1828 synthesis of urea from ammonium cyanate as a Great Turning Point. However, this feat was hardly noticed at the time, and was celebrated only long afterwards. But it was counterevidence against a common view at the time, notably advocated by chemist Jöns Jakob Berzelius, that many compounds, the "organic" ones, could only be made by living things (the others are "inorganic"). And I suspect that this experiment is remembered because it was followed by numerous other experiments that pointed in the same direction.

In 1845, one of Wöhler's students, Adolph Kolbe, succeeded in making acetic acid from inorganic compounds, and in the 1850's, Pierre Berthelot succeeded in synthesizing numerous organic compounds from inorganic precursors, including methyl alcohol, ethyl alcohol, methane, benzene, and acetylene.

But a vitalist could still claim that this is not really counterevidence, because these substances could be made by that "vital force", in addition to being makable in the lab.

One of the last reputable vitalists in mainstream biology was Hans Driesch, who in 1895 made an odd discovery: he could take a fertilized sea-urchin egg that had started dividing, split it in two, and watch the two halves develop into two complete sea urchins, instead of two halves of one sea urchin. He concluded from this that there was some "vital force" responsible for development, something he called "entelechy" (goal-seeking tendency). But Driesch had a naive concept of "cell fate", as we now call it. In their first few divisions, a sea-urchin embryo's cells are uncommitted to any particular fate, and such uncommitted or partially committed cells are nowadays called "stem cells". That commitment happens later in development, and Driesch had proposed a sort of "vital force of the gaps".

But one of his contemporaries, Eduard Buchner, discovered in 1897 that yeast-cell contents could cause fermentation in the absence of whole yeast cells. He followed up in 1903 by making the first discovery of one of the enzymes responsible (zymase).

And over the twentieth century, molecular biologists continued onward, scoring triumph after triumph, while totally ignoring the vital-force hypothesis. They have finished what Wöhler started, mapping out numerous metabolic pathways, including biosynthesis ones. And they have solved several other biological riddles, like heredity. There are still some things that have resisted molecular biologists' efforts, like how one gets from genes to macroscopic shapes, but from what can be determined about that, a vital force is totally superfluous there also.

I finally note an odd circumstance: present-day vitalists are totally apolitical about their vitalism, in strong contrast with creationists, who are sometimes shamelessly political about their beliefs. There are not many vitalists who want equal time for chi and prana in molecular-biology classes. And molecular biologists devote next to no effort to debunking chi and prana.
 
I'd be interested to know the 22nd or 23rd century perspective on the post-enlightenment era. Right now we worship science; I wonder how that will change.
Who the heck worships science? Citing the science the half of the time it agrees with your ideology, but ignoring or misrepresenting it the half of the time it disagrees, does not count as worshiping science.

I wouldn't take the word worship too seriously, it was mostly tongue-in-cheek. But the larger point is that those who have some semblance of scientific literacy often seem to be wowed by the potential of science, while blind to the harm it's done. Don't get me wrong, I'm not trying to take down science, I just think we're living in an era where we don't fully understand the implications of the past few centuries.

When you're living in the pre-modern era, reason, and the spread of reason, looks incredibly appealing, and for good reason. But I think, as a species, we're yet to understand and realize its limits.
 
When you're living in the pre-modern era, reason, and the spread of reason, looks incredibly appealing, and for good reason. But I think, as a species, we're yet to understand and realize its limits.
I just read (okay, listened to) a fascinating book by David Deutsch you might enjoy called The Beginning of Infinity, making the case that reason has no limits.
 
In 2012, the Large Hadron Collider's experimenters discovered the Higgs particle, thus discovering all of the particles of the Standard Model.

Possibly.

The muon g-2 anomaly is not yet understood.

 
When you're living in the pre-modern era, reason, and the spread of reason, looks incredibly appealing, and for good reason. But I think, as a species, we're yet to understand and realize its limits.
I just read (okay, listened to) a fascinating book by David Deutsch you might enjoy called The Beginning of Infinity, making the case that reason has no limits.

Reminiscent of Fukuyama's End of History. It sounds reasonable, until someone develops a super-powerful weapon that causes a mass extinction event.

Point being that this period in history is unprecedented, and we have absolutely no idea what's coming. There is really no way to predict our future based on what we know about history, beyond us lacking complete control over our fate. We like to think that the future will be incredible, but it's really impossible to know.
 
The old BBC series, I think called Connections, hosted by a guy named Burke, went through history showing how unrelated events, discoveries, inventions, and serendipity combine to create a larger step forward.

Individuals who have no connections with others, working alone, come up with something. Somebody else combines all of it.

An example would be Maxwell. Over centuries, all the small steps in electricity and magnetism and math led to Maxwell's groundbreaking synthesis and prediction of EM waves traveling at c.

Burke showed that this is more the norm than the exception.

If you never saw the series it is probably online. Worth watching.
 
When you're living in the pre-modern era, reason, and the spread of reason, looks incredibly appealing, and for good reason. But I think, as a species, we're yet to understand and realize its limits.
I just read (okay, listened to) a fascinating book by David Deutsch you might enjoy called The Beginning of Infinity, making the case that reason has no limits.

Reminiscent of Fukuyama's End of History. It sounds reasonable, until someone develops a super-powerful weapon that causes a mass extinction event.

Point being that this period in history is unprecedented, and we have absolutely no idea what's coming. There is really no way to predict our future based on what we know about history, beyond us lacking complete control over our fate. We like to think that the future will be incredible, but it's really impossible to know.
I agree with rousseau. Nanotechnology is just one example of coming technology that may have wild unpredicted consequences.

But we needn't worry only about future technologies: present tech is already dangerous. Production of plastics was a huge breakthrough that is already severely impacting ocean ecology. Artificial hormones invade our water supply. And the fear that "smart computers" will rise up and manipulate humans is already with us! For this one need look no further than the algorithms on Facebook which automatically exacerbate partisan divisions.

Much of the new dangerous technologies are needed to cope with the high human population. World population only tripled (from 0.2 billion to 0.6 billion) during the 17 centuries after 1 AD. This is an annual growth rate of less than 0.065% — yet lacking diversions like Netflix those older humans spent more time procreating! The human population was held in check by natural limits. But now, with the population approaching 8 billion, humanity has "painted itself into a corner." We NEED new technologies for this high population to continue, but lack the collective wisdom to choose those technologies wisely.
 
Law of Least Action (part 1)

Like most of the scientific discoveries we've been reviewing, The Law of Least Action was not a single key discovery, but a gradually evolving understanding. Its importance is suggested by the fact that a thread devoted to "the path of least action" is active in this forum as I write. The wiki article  Stationary Action Principle provides both a summary of the Principle and a look at the history of its discovery.

First note that the modern principle is labeled "Stationary Action" in contrast to the historic name "Least Action." But I won't cover this change in understanding in this post, "part 1." (If anyone wants to "pick up the baton" and post a Part 2 before I do, please feel welcome!)

As seen in the Wiki article, two of the earliest concrete statements of a Principle of Least Action were
(1) Pierre Louis Maupertuis's principle — he thought God acts to minimize E·dt (for a particular formulation of Energy)
(2) Leonhard Euler's principle — though not emphasizing the role of a Deity, he points out that physical laws serve to minimize mv·ds

These two principles seem quite different: Maupertuis is minimizing a product which is Energy×Time, while Euler is minimizing Momentum×Distance. Yet both these products have the same units (kg·m²·s⁻¹), the dimensionality of Action. These are also the units of Planck's constant. Action really is central to physics: Does this fact lead to useful insights?
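A quick dimensional check (my own aside, just to make that explicit):

\( [E \cdot t] = \mathrm{J \cdot s} = \mathrm{kg\,m^2\,s^{-2}} \cdot \mathrm{s} = \mathrm{kg\,m^2\,s^{-1}} ,\qquad [mv \cdot s] = \mathrm{kg} \cdot \mathrm{m\,s^{-1}} \cdot \mathrm{m} = \mathrm{kg\,m^2\,s^{-1}} \)

Both are the units of Planck's constant, \( h \approx 6.63 \times 10^{-34} \ \mathrm{J\,s} \).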

The earliest applications of a Least Action principle arose in optics, specifically reflection and refraction. It was Hero of Alexandria, already mentioned in this thread, who first observed that the path light follows, when you catch the reflection of the spy following you in a mirror, minimizes the distance that light beam traverses.

Refraction (i.e. the effect on a light beam when it passes through both air and water or glass) is more difficult. The angles associated with refraction are given by Snell's Law (Willebrord Snellius, 1580-1626); this was partly anticipated by Ptolemy (circa 90-168), the obscure Persian physicist ibn Sahl (ca 940-1000), the great Alhazen (965-1039), and perhaps Thomas Harriot (1560-1621), but none of these guessed that Snell's Law followed from a Principle of Least Action or a Least Time version of Hero's Least Distance law. (Ptolemy seemed to think, like most ancients, that vision involved rays emanating from the eye toward the looked-upon object :confused:. Alhazen wrote that glass presented more resistance to light than air did, but he didn't extend that insight to a Least Time principle.)

It was the great Pierre de Fermat (1601-1665) who first hypothesized that light traveled more slowly in glass or water than it did in air, and that Snell's Law was the mathematical solution that minimized the time a light ray spent traveling from object to eye. Fermat was quite pleased with this insight, writing "The price for my work turned out to be the most extraordinary, the least expected, and the happiest there ever was." A few decades after Fermat's discovery, Christiaan Huygens (1629-1695) achieved the same result as a consequence of his wave theory of light.
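Fermat's least-time idea is easy to verify numerically. A minimal sketch in Python (my own toy setup: light from point A in air to point B in water, assuming an index of 1.33) shows that minimizing the travel time over the crossing point reproduces Snell's Law:

import numpy as np
from scipy.optimize import minimize_scalar

n1, n2 = 1.0, 1.33             # refractive indices of air and water
A, B = (0.0, 1.0), (1.0, -1.0) # source above the surface y=0, target below

def travel_time(x):
    # time = n * path length / c; the constant c is dropped
    t1 = n1 * np.hypot(x - A[0], A[1])
    t2 = n2 * np.hypot(B[0] - x, B[1])
    return t1 + t2

x = minimize_scalar(travel_time, bounds=(0, 1), method='bounded').x
sin1 = (x - A[0]) / np.hypot(x - A[0], A[1])   # sine of incidence angle
sin2 = (B[0] - x) / np.hypot(B[0] - x, B[1])   # sine of refraction angle
print(n1 * sin1, n2 * sin2)   # equal to numerical precision: Snell's Law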

This important discovery by Fermat and Huygens was dismissed by most other 17th-century physicists, including even the great Sir Isaac Newton.  Fermat's principle#Fermat_vs._the_Cartesians

René Descartes (1596-1650) was the premier authority of his time, and even now is more famous than either Fermat or Huygens. But most of his ideas of physics were wrong. Huygens tells us that he read Descartes avidly at age 15 and assumed errors in understanding were his own "but having since discovered ... things which are patently wrong [or] unlikely, ... today in all of his [Descartes'] physics ... I find almost nothing to which I can subscribe as being correct."

Descartes, like Fermat, attempted to come up with a physical explanation for Snell's Law, but he based his on the idea that light travels 1.33 times FASTER in water than in air, rather than Fermat's correct assumption that light travels 1.33 times more SLOWLY in water. Descartes' wrong ideas remained influential for a hundred years. One of the final "nails in the coffin" for Descartes' mistaken ideas came in 1736, when Pierre-Louis de Maupertuis (1698-1759) led an expedition to Lapland in the far North to measure the length of a degree of latitude there. In 1737 Maupertuis returned in triumph (bringing the first skis ever seen in France!) and announced that it was Isaac Newton (whose theory predicted flattened poles) who was correct rather than Descartes (who thought the poles more pointed). Among his other achievements, Maupertuis anticipated part of the Theory of Evolution. Maupertuis' sudden fame enraged the ever-jealous Voltaire. Maupertuis was taken prisoner during the War of the Austrian Succession and released because of his great fame, which Voltaire summarized with "He has been captured by some Moravian peasants, who stripped him naked and emptied his pockets of more than fifty theorems."

As mentioned above, Maupertuis was first to articulate a Least Action Principle. (He got the details wrong when he applied it to the refraction problem, and concluded with Descartes that light traveled FASTER in water or glass.) He extended the idea that God acts to minimize action to the idea that God has created the most perfect world, an idea he borrowed from Gottfried Leibniz (1646-1716). Voltaire ridiculed this idea in Candide where Dr. Pangloss is a stand-in for Maupertuis. A claim was made that Maupertuis had borrowed his Least Action Principle from Leibniz; this turned into a big controversy with Voltaire opposing Maupertuis, who apparently died a broken man. (Leibniz did hint at such a principle in an obscure letter, but never developed the idea.)

These insights, first by Pierre de Fermat and Christiaan Huygens, and then by Pierre-Louis de Maupertuis, eventually evolved into William Rowan Hamilton's Stationary Action Principle, but let's save that for part 2.
 
That's essentially an alternative formulation of Newtonian mechanics. In fact, there are two others.

Newton's original formulation (coordinate q, potential V, mass m, dot is time derivative d/dt):
\( m \ddot{q} + \frac{dV}{dq} = 0 \)

Lagrange's formulation (Lagrangian L(q,q',t)):
\( L = \frac12 m \dot{q}^2 - V\)

\( \delta \int L dt = 0 ,\ \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}} \right) - \frac{\partial L}{\partial q} = 0 \)

Hamilton's formulation (momentum p, Hamiltonian H(q,p,t))
\( p = m \dot{q} ,\ H = \frac{p^2}{2m} + V \)

\( p = \frac{\partial L}{\partial \dot{q}} ,\ H = p \dot{q} - L \)

\(\frac{dq}{dt} = \frac{\partial H}{\partial p} ,\ \frac{dp}{dt} = - \frac{\partial H}{\partial q} ,\ \frac{dH}{dt} = \frac{\partial H}{\partial t}\)

Hamilton-Jacobi formulation (Hamilton principal function S(q,a,t), constants a, b)
\( p = \frac{\partial S}{\partial q} ,\ b = \frac{\partial S}{\partial a} ,\ \frac{\partial S}{\partial t} = - H \)
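As a small illustration (my own sketch, not from the formulations above), Hamilton's equations can be integrated directly. Here is a harmonic oscillator, H = p²/2m + kq²/2, stepped with a simple symplectic-Euler scheme in Python:

m, k = 1.0, 1.0          # mass and spring constant
q, p = 1.0, 0.0          # initial conditions
dt, steps = 0.01, 1000

for _ in range(steps):
    p -= k * q * dt      # dp/dt = -dH/dq = -k q
    q += (p / m) * dt    # dq/dt =  dH/dp = p/m  (using the updated p)

H = p**2 / (2*m) + k * q**2 / 2
print(q, p, H)           # H stays near the initial 0.5 for small dt

The symplectic step is a natural fit here because it respects the phase-space structure that the Hamiltonian formulation makes explicit, so the energy stays bounded instead of drifting.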
 
The q and p here are "generalized coordinates", and in the Hamiltonian formulation, it is easy to swap their roles.

All but the straight Newtonian formulation have quantum-mechanical counterparts.

Hamilton-Jacobi -> Schrödinger (or Schroedinger), with wavefunction
\( \psi \sim e^{iS/\hbar} \)
and equation
\( i \hbar \frac{\partial \psi}{\partial t} = H(q, -i \hbar \frac{\partial}{\partial q}, t) \psi \)

Hamilton -> Heisenberg (operators are functions of time)

Poisson bracket:
\( \{f, g\} = \frac{\partial f}{\partial q} \frac{\partial g}{\partial p} - \frac{\partial f}{\partial p} \frac{\partial g}{\partial q} \)

Classical limit:
\( \frac{df}{dt} = \{f,H\} + \frac{\partial f}{\partial t}\)
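To see the classical bracket in action (a little check of my own, using the harmonic oscillator as the example), \( \{q, H\} \) and \( \{p, H\} \) reproduce Hamilton's equations:

import sympy as sp

q, p, m, k = sp.symbols('q p m k')
H = p**2 / (2*m) + k * q**2 / 2   # harmonic oscillator

def poisson(f, g):
    """Poisson bracket {f, g} for the canonical pair (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

print(poisson(q, H))   # p/m  = dq/dt
print(poisson(p, H))   # -k*q = dp/dt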
Heisenberg formulation:
\( \frac{df}{dt} = - \frac{i}{\hbar} [f,H] + \frac{\partial f}{\partial t}\)

Lagrangian -> path integrals

Action integral:
\( S = \int L dt \)

Path integral:
\( <f> = \frac{1}{Z} \int f(q) e^{iS/\hbar} Dq(t) \)
where Dq(t) = dq(t1) dq(t2) dq(t3) ...
and Z is the integral with f(q) = 1
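For what it's worth, the path integral can also be sampled numerically. A crude sketch of my own (after rotating to imaginary time, the standard trick that turns exp(iS/ℏ) into a real weight exp(−S); units with m = ω = ℏ = 1):

import numpy as np

rng = np.random.default_rng(42)
N, a = 100, 0.1        # time slices and lattice spacing; periodic in time
x = np.zeros(N)        # the path

def dS(i, new):
    """Change in the Euclidean action from setting x[i] = new."""
    ip, im = (i + 1) % N, (i - 1) % N
    old = x[i]
    kin = ((x[ip] - new)**2 + (new - x[im])**2
           - (x[ip] - old)**2 - (old - x[im])**2) / (2 * a)
    pot = a * (new**2 - old**2) / 2
    return kin + pot

samples = []
for sweep in range(20000):
    for i in range(N):                       # Metropolis updates
        new = x[i] + rng.uniform(-0.5, 0.5)
        if rng.random() < np.exp(-dS(i, new)):
            x[i] = new
    if sweep > 2000:                         # skip burn-in
        samples.append(np.mean(x**2))

print(np.mean(samples))   # close to the exact ground-state <x^2> = 0.5

The lattice-QCD calculations mentioned earlier, like the proton-mass one, are at heart industrial-strength versions of this little loop.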
 
Observation says energy distributes itself to minimize potential difference.

Water distributes itself without any peaks. Charge distributes itself analogously to water.

That energy distributes itself minimizing differences I knew. Planets and stars are spherical, not triangular.

I was not aware of the history of it. Good topic.
 
In the 19th century, a question was explaining the action at a distance seen in electric and magnetic experiments.

There were multiple theories, including some kind of mechanical medium.

Fields were used to model cause and effect at a distance.

Apart from the differential equations, a 'field map' is simply a point plot of the force vectors.


For a static electric field, move a theoretical unit charge around the space between two plates with a voltage across them (a capacitor). Plot the direction and magnitude of the force exerted on the charge as it moves around. That maps a static electric field.

Do the same with a magnetic or gravitational field, or flowing water.

Fields are a universal tool. A field maps or models a phenomenon but is not the physical phenomenon itself.
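Here is a toy version of that mapping in Python (my own sketch; I approximate each capacitor plate as a row of point charges):

import numpy as np

k = 8.99e9      # Coulomb constant, N m^2 / C^2
q_test = 1.0    # unit test charge, C

# plates at y = +1 (positive) and y = -1 (negative), as rows of point charges
plate = [(x, 1.0, 1e-9) for x in np.linspace(-1, 1, 21)] + \
        [(x, -1.0, -1e-9) for x in np.linspace(-1, 1, 21)]

def force_at(px, py):
    """Net Coulomb force on the test charge at (px, py)."""
    fx = fy = 0.0
    for (cx, cy, q) in plate:
        dx, dy = px - cx, py - cy
        r3 = (dx*dx + dy*dy) ** 1.5
        fx += k * q * q_test * dx / r3
        fy += k * q * q_test * dy / r3
    return fx, fy

# sample the field map on a coarse grid between the plates
for y in np.linspace(-0.5, 0.5, 3):
    for x in np.linspace(-0.5, 0.5, 3):
        fx, fy = force_at(x, y)
        print(f"({x:+.1f},{y:+.1f}): F = ({fx:+.2e}, {fy:+.2e}) N")

Each printed vector points from the positive plate toward the negative one; plotting them as arrows gives the familiar field map.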

Maxwell developed the propagating EM wave as alternating electric and magnetic fields.
 
Thanks for the refresh. Old brains love this kind of help.

Took me back to frosh physics at UW (Seattle) in '60 where I and about 200 others attended biweekly lectures and learned about such things as Field (physics).

To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. When looking at the motion of many bodies all interacting with each other, such as the planets in the Solar System, dealing with the force between each pair of bodies separately rapidly becomes computationally inconvenient. In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational acceleration which would be felt by a small object at that point. This did not change the physics in any way: it did not matter if all the gravitational forces on an object were calculated individually and then added together, or if all the contributions were first added together as a gravitational field and then applied to an object.[10]

The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields; in 1849 Michael Faraday became the first to coin the term "field".[10]
The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Consequently, the forces on charges and currents no longer just depended on the positions and velocities of other charges and currents at the same time, but also on their positions and velocities in the past.[10]
Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found; the situation was resolved by the introduction of the special theory of relativity by Albert Einstein in 1905. This theory changed the way the viewpoints of moving observers were related to each other. They became related to each other in such a way that velocity of electromagnetic waves in Maxwell's theory would be the same for all observers. By doing away with the need for a background medium, this development opened the way for physicists to start thinking about fields as truly independent entities.[10]

Amazing how much gets buried over 60 years.

Doncha just love the internet? I throw a few shekels Wiki's way every now and then.
 
In summary,

Classical        | Quantum
Newton (original)| (none)
Lagrange         | Path integral
Hamilton         | Heisenberg
Hamilton-Jacobi  | Schroedinger
 