
An Elementary Primer on Elementary Particles and their Interactions


Updated: Dec 30, 2023

Some introductory material on elementary particles
How did we get here?
What is a fundamental particle?
Lepton and quark interactions: the weak and electromagnetic forces
QED in a little more detail
A more modern view of the weak interaction
Quark interactions: the color force
So where does the old "nuclear" force come from?
Symmetry and angular momentum: particles are classified as either fermions or bosons
Internal symmetry and interactions
Tables of leptons and quarks
The four spectroscopies
For further reading

Some introductory material on elementary particles

Just as high school students for the past 60 years have been taught chemistry around the periodic table of the elements, you might expect that a table of quarks and leptons (and their interactions) would have joined inclined planes, Newton's law and magnetic induction as a part of a modern high school physics education. Sadly, this is not generally the case. High school physics is a chance for motivated students, who are learning a little calculus, to see how things work and how mathematics can be used to estimate things they are familiar with. It's also a chance to get introduced to the important discoveries in the past 50 years, and the way physicists think about the world. The following gives that subset of discoveries and models that relate to both the particles that are presently believed to be "elementary" and some of the composite particles that are built from them. The level of description is mixed; I hope there are ideas here that are of interest to people with science backgrounds ranging from zilch to physics majors.

There are many references for information and answers to questions about physics; I list some useful books at the end of this primer.

How did we get here?

We will zip through the 19th and 20th centuries to get quickly to 1940. Nearly 200 years ago, Gay-Lussac found that volumes of gases combine in chemical reactions in integer ratios. The inference was made by Avogadro that the observed integer ratios reflected a simple underlying rule for combination. For example, water molecules are composed of more elementary objects, hydrogen and oxygen, that combine with a ratio of two parts hydrogen to one part oxygen. Water can be broken into oxygen and hydrogen by passing an electric current (electrolysis), and the two gases can be separately collected. At room temperature, oxygen and hydrogen are diatomic molecules in a gas state, and the number of molecules of each can be inferred from the volume they occupy as a gas. (Well, not the actual number, but the relative number. To get the actual number you need to know that 6 x 10^23 molecules occupy 22.4 liters of volume at standard temperature and pressure.)
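
To make the counting concrete, here is a two-line sketch in Python, using the rounded numbers quoted above (6 x 10^23 molecules per 22.4 liters at STP):

    # Convert a measured gas volume at standard temperature and pressure (STP)
    # into an approximate count of molecules, using the numbers quoted above.
    AVOGADRO = 6.0e23          # molecules per mole (rounded)
    MOLAR_VOLUME_STP = 22.4    # liters occupied by one mole of any gas at STP

    def molecules_in(volume_liters):
        return AVOGADRO * volume_liters / MOLAR_VOLUME_STP

    # Electrolysis of water yields two volumes of hydrogen for every volume of
    # oxygen, so the molecule counts are in the same 2:1 ratio.
    print(molecules_in(2.0) / molecules_in(1.0))   # 2.0
    print(molecules_in(22.4))                      # ~6e23 molecules in one molar volume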

In 1869, Mendeleev assembled the known elements into the Periodic Table, grouping them according to their chemical similarities and in order of ascending atomic weight. The table is of great use in predicting the chemical properties of atoms. However, the meaning of the ordering was not well understood until 1913, when Moseley realized that the atoms were ordered by the number of electrons, rather than the weight of the nucleus. You might find it strange that it took 44 years to finally understand the table, but consider: the electron was discovered in 1897, and the nucleus of the atom was discovered by Rutherford in 1911. And Bohr made his simple theory of the hydrogen atom in 1913, which explained much of the absorption and emission spectrum of hydrogen, using a model where the electrons were confined to discrete orbits with angular momentum in discrete units given by Planck's constant divided by 2 pi. (Planck's constant is the mysterious fundamental unit of the quantum world, a tiny number, 6.63 x 10^-27 erg-seconds, which assumed its proper place in the laws of nature only after the quantum mechanics revolution of the mid-1920s. The angular momentum of the orbiting electrons was cleverly assumed by Bohr to be in units of 1.05 x 10^-27 erg-seconds.) In 1913, nobody knew about the neutron. It was assumed that the nucleus was composed of a large number of positively-charged protons and a smaller number of negatively-charged electrons, such that the total number of electrons, both within and outside the nucleus, equalled the number of protons, thus making the atom electrically neutral.
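
To see what Bohr's quantized orbits buy you, here is a small numerical sketch (it assumes the standard Bohr energy formula, which is not written out above):

    # Energy of the n-th allowed orbit of hydrogen, in electron volts (negative = bound).
    RYDBERG_EV = 13.6   # hydrogen ground-state binding energy

    def bohr_energy(n):
        return -RYDBERG_EV / n**2

    # Emission lines correspond to jumps between orbits; the n=3 -> n=2 jump
    # is the red line of the hydrogen (Balmer) spectrum.
    print(bohr_energy(3) - bohr_energy(2))   # ~1.89 eV, i.e. visible red light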

It wasn't until the 1920s, with the advent of quantum mechanics, that the periodic table was really understood. Quantum mechanics describes with great accuracy how electrons behave in an atom. One of its first successes was the enumeration of the energy levels that the electron can occupy in the simplest of all atoms: hydrogen. The properties of the wave functions for a single electron (specifically, its angular momentum and spin quantum numbers) immediately gave the magic numbers that had been observed in the periodic table: 2, 8, 18, ... These are the increments in the atomic number (number of electrons) before the chemical properties tend to repeat. The explanation was simple: the electrons in atoms heavier than hydrogen successively fill the orbits that are allowed in the hydrogen atom. Of course, there is some interference between electrons, which makes the details of the science of chemistry and atomic physics more complicated than this simple picture. But the fundamental fact is that the quantum mechanics of a single electron in the hydrogen atom gives a good first approximation to the chemical properties of the elements!
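
The magic numbers themselves can be generated by a one-line count of the allowed single-electron states, as in this sketch (l and m are the orbital quantum numbers, and each orbital holds two spin states):

    # For principal quantum number n, the orbital quantum number l runs from 0 to n-1,
    # each l has 2l+1 orientations m, and each (l, m) orbital holds two spin states,
    # so a filled shell contains 2 * n**2 electrons.
    def shell_capacity(n):
        return 2 * sum(2 * l + 1 for l in range(n))

    print([shell_capacity(n) for n in (1, 2, 3, 4)])   # [2, 8, 18, 32]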

The neutron was finally discovered, in 1932, by Chadwick. This discovery cleared up some serious puzzles. One such puzzle involved the nucleus of the nitrogen atom. Protons and electrons were known to have "spin 1/2," which means they act as if they are little spinning tops with angular momentum equal to half of Planck's constant (divided by 2 pi). The nitrogen nucleus acted as if it had an even number of spin 1/2 particles in it, whereas the pre-1932 theory required 21 of these particles inside (14 protons and 7 electrons, to give a +7 total charge, which was balanced by 7 negatively charged electrons outside the nucleus that are responsible for all the chemistry of nitrogen). Finally, with Chadwick's discovery, it was understood that there were no electrons in the nucleus: nitrogen has 7 protons and 7 neutrons, for a total of 14, which is an even number, as had been observed. Another puzzle that was resolved by the neutron involved the assumed confinement of electrons within the nucleus. This was problematic due to Heisenberg's uncertainty principle, formulated in 1927. We will get to that later.

In that same year, 1932, the positron, which is the antimatter version of the electron, was discovered by Carl Anderson. The positron dramatically confirmed Dirac's relativistic quantum theory, which he had developed a few years earlier and which predicted the existence of the positron, as well as the fact that both the electron and positron should have spin 1/2. Thus by 1932, there were five particles, all supposedly elementary, that were known: the photon, the electron, the proton, the neutron, and the positron.

We'll talk about the meaning of the spin below; for now, it can be understood as an intrinsic, non-classical (i.e., purely quantum mechanical) property of all particles, whereby they have an internal angular momentum, given in integer or half-integer multiples of Planck's constant (divided by 2 pi).

Further advances were made in the 1930s. Some nuclei are unstable, and they were found to decay in one of three ways: alpha decay, in which a helium nucleus is ejected; beta decay, in which an electron or positron is emitted (together with a neutrino, as discussed below); and gamma decay, in which a high-energy photon is emitted.

It was through this study of nuclear physics that physicists began to understand the usefulness of categorizing these processes in terms of the underlying "force" or interaction. In the 1930s, physics got much more complicated. Whereas previously there were only two known forces, electromagnetism and gravity, the addition of the strong and weak nuclear forces meant that there now were four fundamental interactions. The electromagnetic interaction, which depends on the charges of the particles, was reasonably well understood, and gravity, which depends on the masses of the particles, was well described by Einstein's classical General Theory of Relativity; the two new nuclear interactions, by contrast, were poorly understood.

In weak interactions, it was found that the energy of the emitted electron or positron was not constant. There was a distribution of energies, with nonzero probability for all energies between a lower energy bound and an upper energy bound. Now, a basic postulate of quantum mechanics is that each state of the nucleus, before and after decay, has a definite energy and is labelled uniquely by a set of quantum numbers. Conservation of energy demands that the energy of the emitted particle(s) is equal to the difference of nucleus energies before and after the decay. Thus if the electron can have a variable amount of energy when emitted from a nucleus in a known quantum state, there are only two possibilities: either (1) there is another, unobserved particle that carries away some energy, or (2) energy is not conserved in the weak interactions. Some distinguished physicists wasted time on the second choice. Wolfgang Pauli made the first and more likely choice (more likely because conservation of energy is related to a fundamental assumed symmetry of nature: that the laws of physics do not change over time!). The unobserved particle was later named the neutrino, which means "little neutral one." Enrico Fermi then postulated a universal weak interaction that was responsible for all weak decays. These "weak decays" also tend to be very "slow," by which we mean it can take a relatively long time (seconds, days, years, or millions of years) before the nucleus emits the particle. This is in contrast to the electromagnetic decays, where a photon is typically emitted in a tiny fraction of a billionth of a second. The strong interaction decays, where helium nuclei are thrown out of the nucleus, are typically even faster than the electromagnetic decays, which is one reason they were called "strong." Although these decay times are typically ordered as "strong" << "electromagnetic" << "weak," there are many complicating factors involved, depending on the various quantum states of the nuclei, "selection rules" that determine which transitions are "allowed" between the states, etc.
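
In symbols, the argument is just energy bookkeeping (the labels here are mine):

    E_{\text{initial nucleus}} \;=\; E_{\text{final nucleus}} + E_{e} + E_{\nu}

If the neutrino term were absent, the electron energy would be completely fixed by the two nuclear energy levels; the observed spread in electron energies is therefore the fingerprint of the unseen neutrino.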

A typical heavy nucleus such as uranium will undergo a cascade of decays of different types, shedding protons and neutrons with alpha decay, energy only with gamma decay, and changing neutrons into protons (or v.v.) with beta decay. Uranium ends up as lead, converting about 0.02 percent of its original mass, roughly 50 MeV or about 1/18 the mass of a proton, into energy. The overall decay half-life of uranium is about 4.5 billion years, and essentially all of that time is spent waiting for the initial alpha decay of the uranium nucleus itself. Much of the phenomenology of nuclear physics is contained in a graph called the curve of binding energy. The binding energy per nucleon is a maximum for the most stable nucleus, which is iron. Elements that are heavier than iron will give up energy if they break up to form iron; lighter elements will give up energy if they "fuse" to form iron. For this reason, iron is the "end point" in the cycles of nuclear burning in stars. As might be expected, most of the stellar energy is liberated in the fusion of four hydrogen nuclei to form helium; relatively little additional energy is given off as the helium nuclei combine to form carbon, oxygen and, eventually, iron.
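
As a rough check of those numbers, here is the arithmetic in Python; the atomic masses below are standard reference values (not quoted above), and the chain sheds eight helium nuclei on the way from uranium-238 to lead-206:

    # Mass defect, and hence energy released, when U-238 decays through its whole
    # chain to Pb-206 plus eight alpha particles.  Masses in atomic mass units (u).
    U238, PB206, HE4 = 238.0508, 205.9745, 4.0026
    AMU_MEV = 931.5                            # energy equivalent of 1 u, in MeV

    mass_defect = U238 - (PB206 + 8 * HE4)
    print(round(mass_defect, 4))               # ~0.055 u
    print(round(mass_defect * AMU_MEV, 1))     # ~52 MeV released over the whole chain
    print(round(100 * mass_defect / U238, 3))  # ~0.023 percent of the original mass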

The strong force was particularly puzzling. In analogy with the electromagnetic interaction, which was fairly well understood quantum mechanically in the 1930s, Yukawa guessed that the strong force was due to the exchange of a medium weight particle, which he named a meson, between two nucleons. (The proton and neutron were called "nucleons".) This idea about exchanging unseen particles comes naturally from a description of physical laws that incorporates quantum mechanics and relativity. It is called quantum field theory because it permits particles to interact via a "field" (see below) of "quantized particles", and it requires the generation and destruction of these quanta, according to rules that depend on the specific interaction. From his theory and from measurements of the range of interaction of the strong force, Yukawa predicted that his meson would have a mass of about 200 times that of the electron mass, and about 1/10 the mass of the proton. Soon thereafter, in 1936, a cosmic ray particle was discovered with this approximate mass. (Cosmic rays are charged particles, mostly protons arriving from outside the solar system, which enter the Earth's atmosphere. The cosmic rays that are observed at ground level are primarily particles that are created in the upper atmosphere and travel down from there.) Although this was initially taken to be a verification of Yukawa's theory, it was quickly discovered that the particle, now called a muon, is really a heavy cousin of the electron, and has nothing to do with the strong force. (Its detection at the Earth's surface was, however, a striking confirmation of Einstein's special theory of relativity. The only way a short-lived muon was likely to travel several miles through the Earth's atmosphere was for its internal "clock" -- i.e., a hypothetical clock zipping along with the muon -- to be going very slowly, as observed on the Earth, because the muon was traveling at nearly the speed of light through the Earth's atmosphere.) Soon thereafter, another particle, the pion or "pi meson," was observed in experiments with cyclotron accelerators. It had the predicted mass and interacted with protons and neutrons by the strong interaction. This was the particle that Yukawa had predicted to mediate the strong interaction. And for the next 30 years, from 1940 to 1970, the pion was considered to be one of the fundamental particles responsible for the strong force. Because the negatively charged pion decays "slowly", in a few tens of billionths of a second, via the weak interaction, into a muon and a neutrino, it is a little disconcerting to consider it truly elementary, but in that period physicists were willing to make a liberal interpretation of what constituted an elementary particle. (Roughly speaking, if it lived long enough to decay by the weak interaction, it was elementary; otherwise, if it decayed by the strong interaction it was called a resonance. Resonances were created in particle accelerators, and the existence of a resonance was inferred from an increased probability for scattering when the incoming particles had enough energy to create the "resonance" particle. In fact, there was a distribution of energies in which the probability was larger for making a resonance particle, and the lifetime of the particle was so short that it could only be inferred, via the Heisenberg Uncertainty Principle, through the width of that energy distribution. The shorter the lifetime, the greater the variation in the energy of the resonance -- also sprach Heisenberg.)
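
Yukawa's estimate can be reproduced in a few lines; the 1.4 fm range used here is an assumed input, roughly the nuclear size quoted later in this primer:

    # A force of range R is carried by a virtual particle of mass m ~ hbar / (R * c).
    HBAR_C = 197.3            # hbar * c, in MeV * femtometers
    nuclear_range_fm = 1.4    # assumed range of the nuclear force, in femtometers

    mass_mev = HBAR_C / nuclear_range_fm
    print(round(mass_mev))             # ~141 MeV -- close to the pion mass eventually measured
    print(round(mass_mev / 0.511))     # ~276 electron masses, the same ballpark as Yukawa's estimate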

But as we now understand it, the pion is not fundamental, and the strong force of nuclear physics is not the real strong force of the universe.

We now abandon a chronological overview, and zip forward to 1970, picking up some of the discoveries between 1932 and 1970 retrospectively.

What is a fundamental particle?

The most recent watershed in the understanding of the basic constituents of the universe occurred in the years surrounding 1970. Before 1970, the smallest particles of matter known had been assembled into two families of leptons (the electron and its neutrino; the muon and its neutrino) and two very different types of relatively heavy particles, mesons and baryons, collectively known as hadrons. As it turned out, none of the hadrons is elementary -- they are all made out of particles with fractional charges named quarks by Murray Gell-Mann in 1964. The mesons are made from quark/anti-quark pairs, and the baryons, of which the lightest are the proton and neutron, are composed of 3 quarks. Initially, nobody, including Gell-Mann, believed in the actual existence of quarks. They were a useful construct that explained the properties of several hundred mesons and baryons that had been observed. As mentioned above, many of these hadronic "particles" were called resonances because they existed for a very short period of time, less than a billionth of a trillionth of a second, about the time it takes light to cross the nucleus of an atom! However, in the decade between 1964 and 1974, the evidence built up for the existence of quarks, and since the mid-70s, it has been generally accepted that the best model we have for the fundamental building blocks of matter is that everything is composed of leptons and quarks, plus the particles that provide the interactions between them.

What distinguishes a fundamental particle from one that is not? The leptons and quarks are fundamental because they are believed to be point particles that are not composites of other more fundamental particles. Of course, the history of physics is replete with claims, eventually shown to be false, that the most basic, fundamental and indivisible particles had been found. Like the mesons and baryons, leptons and quarks may eventually be shown to be composites of other particles, but the description we now have appears to be valid to energies several thousand times that of the proton mass.

Leptons and quarks have a lot in common. They are both assumed to obey the same equation of motion (the Dirac equation), and they both have an intrinsic angular momentum ("spin") equal to 1/2 the fundamental quantum unit of spin. They are called fermions and obey the Pauli Exclusion Principle, which keeps the electrons in an atom (and hence the atom itself) from collapsing into a very low energy state. There are three known families for both leptons and quarks. Leptons in the first family are the electron and its neutrino, a chargeless and nearly massless version of the electron, and their anti-particles the positron and the anti-neutrino. There are also two quarks in the first family, designated "up" and "down". But the quarks have properties so strange that it took nearly 10 years from the time they were proposed in 1964 until they were widely accepted as existing. First, they have non-integral charges of -1/3 and +2/3, in units of the charge on the electron, and nobody has ever seen a free particle with fractional charge. Thus, the quarks must be permanently trapped within particles like the proton and neutron. And to keep them trapped, they must interact by an extremely strong force with the unusual property that the force between quarks increases with distance(!), which is unlike any of the better understood forces such as gravity and electromagnetism.

All of this was a lot to understand, but the most amazing thing is that there exists a theory, called the Standard Model, that follows from a relatively small set of constraints called symmetry principles and explains virtually every experiment that has been done -- with a few fitted parameters, of course!

Lepton and quark interactions: the weak and electromagnetic forces

To understand more about the difference between leptons and quarks, and about the properties of quarks, we need to understand how they interact with each other. Not counting gravitation, for which there is today no well-understood quantum theory, and before the Standard Model, it was useful to describe the interactions between particles as falling into three categories: electromagnetic, weak and strong. All charged particles interact electromagnetically, and the theory, called Quantum Electrodynamics (QED), was worked out in detail by 1948. In the basic theory of QED, the photon, which is the quantum of light, plays a major role: two charged particles interact by "exchanging" a photon. This appears to be quite different from the Coulomb force law, which says the force between charged particles is proportional to the product of their charges and varies inversely with the square of the distance between them. In the classical field view, two particles interact at a distance through a field: each particle creates a field around it, and the field interacts with the other particle. But QED is a local quantum field theory (see below). QED makes predictions, such as the size of the magnetic moment of the electron, that have been verified to better than one part in a billion, a stupendous level of agreement between experiment and theory!

The weak interaction was postulated in the 1930s to account for radioactive decay of nuclei. For example, the neutron was observed to decay, with a half-life of about 10 minutes, into a proton and an electron. There was also some missing energy in some nuclear decays, which was very disturbing. Bohr was willing to renounce conservation of energy, but Pauli wisely guessed that a small neutral particle was emitted but not observed. In 1934 Enrico Fermi used this electrically neutral and perhaps massless neutrino as part of a fundamental new force, the weak interaction, that was responsible for the observed slow radioactive decays of nuclei. Not having any charge, the neutrino could only interact with matter via the weak interaction, which was very weak indeed. A neutrino has nearly zero probability of interacting with any particle as it goes through the Earth! Consequently, even though the existence of the neutrino was generally accepted, it took over 20 years before a neutrino was "observed" due to its interaction with another particle.

QED in a little more detail

Now, in QED, the electromagnetic interaction arises from a photon being emitted at one point in space-time and absorbed at another. The photon is virtual, in that it has no real existence as a free particle, but is in a sense bound to the emitting and absorbing particles. In fact, it is not possible to say which particle emitted the photon, because the two events (of emission and absorption) are related in space-time in such a way that two observers in two different "inertial reference frames" (i.e. moving at some constant velocity with respect to each other) could disagree about which event happened first! This is all a consequence of special relativity: the constancy of the speed of light as observed in all reference frames.

For example, here are the two second-order Feynman diagrams for scattering of two electrons:

This is called "second-order" because there are two elementary vertices where a lepton (solid line) enters and leaves, and a photon (wiggly line) either enters or leaves. The virtual photon is emitted by one electron and then absorbed by the other. Time goes forward in the upward direction. There is an indeterminacy about the trajectories of the electrons. Because all electrons are the same, when the electrons are detected after interacting, it is impossible to tell which initially had p1 and which had p2. That is why there are two different diagrams that "contribute" to the scattering. They contribute to the overall event by each having an amplitude for occurring. A basic assumption in quantum mechanics is the superposition principle: each possible way that something can happen has an amplitude, which is just a complex number, and the probability of a measurable event is found by first summing over the amplitudes of all the possible ways the event can occur (this sum over complex numbers gives the wave interference behavior) and then squaring the magnitude of the result to get the probability of the event (a real number not larger than 1).
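
Here is a toy numerical illustration of that rule; the two amplitudes are made up for the purpose (a real QED calculation computes them from the diagrams):

    # Add the complex amplitudes for the two diagrams FIRST, then square the magnitude.
    amp_diagram_1 = 0.3 + 0.4j
    amp_diagram_2 = 0.1 - 0.2j

    probability = abs(amp_diagram_1 + amp_diagram_2) ** 2
    print(round(probability, 3))                                      # 0.2
    print(round(abs(amp_diagram_1)**2 + abs(amp_diagram_2)**2, 3))    # 0.3 -- not the same!

The difference between the two numbers is the interference term, which is what gives particles their wave-like behavior.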

There is a fundamental conservation law: the conservation of leptons. Lepton conservation must happen at each vertex. Because an electron is a lepton, you see at each vertex that an electron goes in and comes out. The net number of leptons in the universe (counting anti-leptons as negative) can never change, because all lepton interactions go through 3-point vertices such as the ones of QED or the electroweak interaction (see below).

Here are the diagrams for electron-positron annihilation:

In the diagram on the left, the electron with momentum p1 appears to emit a photon of momentum k1; sometime later it emits a second photon and goes backwards in time. This may seem strange. But in QED, a positron moving forwards in time is equivalent to an electron moving backwards. So another interpretation of this Feynman diagram is that an electron emits a photon, moves on, hits a positron and they disappear with the emission of a second photon. Now, these photons are real, not virtual. They can be detected away from the interaction. The virtual particle here is the electron that moves horizontally on the diagram. We see two photons emerge from the annihilation. But how do we know which photon was emitted first? We can't know, so there is a second diagram (on the right) where the electron with momentum p1 first emits a photon with momentum k2, and then emits the other photon when it annihilates with the positron. And we could just as easily describe this from the viewpoint of the positron. On the left, the positron of momentum p2 emits the k2 photon, travels a short distance as a virtual positron and annihilates with the electron, emitting the k1 photon. On the right, the positron emits the k1 photon, and travels as a virtual positron until it hits the electron (p1) and emits the k2 photon. So we have many descriptions. And perhaps you've noticed that we can't say which photon was emitted first in either of the diagrams! In the theory of relativity, the two vertices, which are locations in 4-dimensional space-time, are separated by a spacelike distance, which means that light cannot travel from one to the other. It also means that some (moving) observer will see the two vertices as happening at the same time, but other observers will disagree and say that one or the other vertex occurs first! Ah, but if light isn't fast enough to travel from one vertex to the other, and light is faster than all particles that have mass, how can the electron or the positron make the trip? That is one of the mysterious properties of quantum mechanics: virtual particles have some wiggle-room to perform such feats.

And finally, here are the diagrams for Compton scattering of an electron by a photon:

In the diagram on the left, the electron emits a photon, travels as a virtual particle, and finally absorbs the incident photon. On the right, the electron first absorbs the incident photon, travels, and then emits the "scattered" photon. Although I just described the interactions as happening sequentially in time, in fact for each diagram, depending on the observer, the relative time order of photon absorption and emission is not fixed. The Feynman diagrams are so useful because they represent the interactions in a general way that doesn't depend on the motion of the observer. One of the requirements of all theories is that the description is valid for all observers; i.e., that the description is invariant no matter what the velocity of the observer. This property is called Lorentz invariance because Lorentz invented the space-time Lorentz transformation laws, without understanding the deeper significance, before Einstein's 1905 relativity theory. See below for more on Lorentz invariance.

A more modern view of the weak interaction

But Fermi's description of the weak interaction involved 4 fermions interacting at a point, without any intermediary virtual particle like the photon. This was clearly an approximation, but if the weak interaction was due to the exchange of a particle, as in QED, that particle had to be extremely massive -- many times the mass of the proton! By contrast, the photon has zero mass. This can be understood by a basic principle: the mass of the exchanged particle is inversely proportional to the range of the interaction. The force between charged particles, like the force of gravity, falls off with the square of the distance. At any given distance, the force is finite; hence, the force has infinite range, and the photon has zero mass. The weak interaction falls off exponentially with distance, approaching zero in a very short distance, because the particles being exchanged (the W and Z) are nearly 100 times as massive as the proton. In the Standard Model, the weak and electromagnetic interactions are in fact part of a single electroweak interaction. The weakness of the weak part is due to the low probability that the very heavy W and Z particles will come into existence at any instant. This is related to another version of the principle given above: in the transitory existence of a virtual particle, such as the W or Z, which violates conservation of energy, the time that the particle can exist is inversely proportional to the energy of the particle.

These "principles" are just versions of Heisenberg's Uncertainty Principle. Here's what is important: a force is generated between two particles (such as two electrons, or an electron and a neutrino) when virtual particles (such as the photon, or the W or Z) are exchanged (i.e., emitted and absorbed).

Quark interactions: the color force

Yes, but what about the quarks? Here's a table of the first of three families of leptons and quarks, for reference:
Particle                 Charge    Type of interaction(s)       
----------------------------------------------------------------
lepton
    electron               -1      electromagnetic, weak         
    electron neutrino       0      weak                           

quark
    up                     2/3     electromagnetic, weak, color
    down                  -1/3     electromagnetic, weak, color
----------------------------------------------------------------

What is new for the quarks is the color force. As mentioned above, until the 1970s, when it was finally understood that the hadrons (mesons and baryons) were composed of quarks, they were thought to be fundamental. The entire field of nuclear physics was concerned with how the neutrons and protons combined to form the observed nuclei. The nuclear force was observed to be an attractive force between all hadrons that fell off quickly over a distance comparable to the size of an atomic nucleus (about 10^-15 meters). Nuclear physics is concerned with attractive binding energies of about 10 MeV (10 million electron volts) per nucleon, approximately a million times greater than the atomic binding energy per electron of molecules and solids. The nuclear force was also studied with large accelerators, in collisions of protons with very high kinetic energies of up to 100,000 MeV. There were many phenomenological descriptions of the nuclear force over this large range of energies, but it was not understood in a fundamental way. The force was so strong that the interaction between nucleons could not be expanded in a perturbation series, as was done with QED. As a result, even the simplest calculations, such as the bound states of the deuterium nucleus (one proton, one neutron), could not be done with precision from first principles. In analogy with QED, where the force between charged particles is due to exchange of virtual photons, the nuclear force between hadrons was postulated to be due to exchange of virtual mesons, such as pions. Like photons, which have spin 1, the mesons are bosons; the lightest mesons, the pions, have spin 0. But the dissatisfaction with this theory was so great that people came up with some pretty wild ideas in the 1960s, such as the "bootstrap" theory of Geoffrey Chew. In the bootstrap theory, hadron egalitarianism was taken to the extreme: none of the hadrons were fundamental because each baryon and meson was surrounded by a cloud consisting of all the hadrons, and somehow the observed hadrons were the mysteriously allowed composites of these clouds -- the hadrons were generated from all other hadrons by "pulling themselves up by their bootstraps." Likewise, the force between hadrons came out of these clouds by exchange of hadrons (presumably both mesons and baryons). Unfortunately, this scheme was so complex that little could be calculated.

I give you this background in some detail so that you can appreciate the breakthrough that occurred with quarks. The force between quarks is much stronger than the nuclear force between nucleons. And yet it is possible, at least in principle, to calculate the forces between quarks within a meson or baryon. (In the lingo of quantum field theory, the interaction between quarks is renormalizable, just as QED and the electroweak interactions are. The reason these interactions are renormalizable is very deep, and depends on some abstract symmetries.) The basic interaction between quarks is called quantum chromodynamics (QCD), in analogy to QED. But unlike QED, where an uncharged particle (the photon) is responsible for the force between charged particles, in QCD, "charged" gluons are exchanged between quarks. But these gluons do not have electric charge; they have a different type of charge that comes in three different values, called "colors." These colors are whimsically called "red", "green" and "blue." Each quark has one of these color charges. The analogy to colors is as follows: it is observed that all observable hadrons are "colorless." There are two ways a hadron can be colorless. For a meson, with a quark-antiquark pair, one quark can be (say) "red" and the antiquark can be "antired", giving no net color. For a baryon with 3 quarks, each quark can be a different color, and, in analogy to color theory where a combination of three primary colors results in white (no color), the baryon is colorless. No other combination of colored quarks is colorless, and in fact, no other combination of quarks is actually observed. Now, a peculiar thing about QCD is that the gluons have color charge; in fact, each gluon has one color charge and one anticolor charge. The fundamental gluon interaction between quarks causes a transfer of color charge from one quark to another. For example, a red and a blue quark exchange colors. The Feynman diagram for the color exchange looks like this:

     ^                     ^
    b \                   / r
       \                 /
        \               /
         | = = = = = = |
        /   (r, anti-b) \
       /                 \
    r /                   \ b

This is a picture of the gluon exchange process. Think of time going upward. On the left, a red quark turns into a blue quark, emitting a (red, antiblue) gluon to conserve color charge at the "vertex." On the right, a blue quark absorbs the (red, antiblue) gluon to become a red quark. No statement is made about which of these processes happens first; the two events (gluon creation and absorption) occur in such a way that either one of them can happen first. (Note the close analogy to QED, where a virtual photon is transferred between two electrons.) In QCD, the virtual gluon is labeled as (red, antiblue), but it could just as well be labelled (antired, blue).

The consequences of having colored gluons mediate the color force between quarks are startling. The force is short range, like the weak force, but it increases with distance. The analogy is to a tube of energy: because the gluons are charged, and color charges attract, the gluons (and the field energy) are confined to a narrow tube between the quarks. If you try to pull two quarks apart, the tube gets longer, and the energy increases with the length. If the tube gets too long, it breaks, with the generation of a quark/antiquark pair at the break; each member of the new pair binds to one of the original quarks at the ends of the tube, resulting in the creation of mesons (the newly created pair does not form a meson by itself). Try to draw this, using the diagram above, and with the assignment of the correct color to each of the quarks generated in the break in the gluon line. In a very high energy collision, it is common for the strong interaction to generate many mesons and baryons in this manner. For example, when electrons and their anti-particles (called positrons) collide at high energy at the Stanford Linear Accelerator (SLAC), they annihilate to form jets of hadrons. A jet is a set of particles all moving in roughly the same direction from the same initial vertex. These jets are in fact a verification of the unusual property of the color force. The electron and positron annihilate to form a virtual photon, which immediately forms a very energetic quark/anti-quark pair. If the quark and anti-quark were to separate and become free, they would have naked color, which is not allowed. So as they separate, each one creates a jet of hadrons moving generally in its original direction, by the QCD process described above. There are even 3-jet events, where, additionally, a gluon tries to escape and gets converted into a jet of hadrons.

So where does the old "nuclear" force come from?

We've seen that the strong force is really a color force between quarks. How does this give rise to the old "nuclear" force between hadrons? After all, if the baryons and mesons are all colorless, how can there be a force between them?

The answer is interesting. It turns out that the nuclear force is a very weak effect due to fluctuations in the color fields. Even though each hadron is colorless in some time-averaged sense, at any moment in time if you are very close to it, it will appear to have a small amount of color charge. Suppose you are right next to a proton. At some instant of time, the red quark might be closer to you than the blue and green quarks, so the proton will appear to be slightly reddish. Now, that net red color will attract the red quark in a neighboring proton, so that it will appear slightly red to the first proton as well. Another way of describing this is that a fluctuation in the color charges of one proton causes a corresponding color polarization in the second, and the two slightly polarized protons will then experience an attractive force. There is an analogous situation with the electromagnetic force: two neutral atoms will weakly attract each other at very close distances, from mutual polarization due to coherent charge fluctuations. This is called the van der Waals effect, after the physicist who described the weak electrostatic force between electrically neutral gas molecules that causes a real gas to behave slightly differently from an ideal, non-interacting gas. van der Waals modified the pV = nRT equation of state for an ideal gas to account for such interactions, as well as for the small volume occupied by each molecule.
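
For reference, van der Waals' modified equation of state in its standard textbook form; a and b are constants fitted separately for each gas (a measures the weak intermolecular attraction, b the volume occupied by the molecules):

    \left( p + \frac{a n^2}{V^2} \right) \left( V - n b \right) = n R T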

Symmetry and angular momentum: particles are classified as either fermions or bosons

We've talked about the leptons and quarks, and about the particles like the photon, W, Z and gluons that are responsible for the interactions between the leptons and quarks. Let's go back and look at these particles in a new way, depending on the amount of intrinsic angular momentum (also called spin) that they possess.

There is something very deep and important about these groups of particles. As mentioned above, the leptons and quarks are fermions. They have intrinsic angular momentum. The classical analogy is to a spinning top, but the angular momentum of these particles is a quantum mechanical property, and defies our classical intuition in a number of ways. Planck's constant is the fundamental unit of angular momentum (or spin). All leptons and quarks have 1/2 unit of spin; all the interaction particles have 1 unit of spin. Particles with half-integral spin are called fermions; particles with integral spin are called bosons. Unlike fermions, bosons such as photons have a tendency to get into the same quantum state. A laser is a device for putting trillions of photons into exactly the same quantum state, which can make a very intense, monochromatic and collimated light wave. These interaction particles are referred to as intermediate vector bosons. They mediate between the fermions to cause the interaction; and they are vector particles because they have spin 1 and transform in a particular way under a Lorentz transformation (see below). They are bosons because the spin is an integral multiple (namely, 1) of Planck's constant. There is one more fundamental boson expected: the quantum of gravity (the graviton), with spin 2. It has not been observed because its interaction with matter is far too weak at energies attainable (now or in the future) in particle accelerators.

To get an idea of how fundamental the fermion spin is, consider this. Suppose you have a particle with this spin 1/2 property. In quantum mechanics, if you try to measure the spin, you will find either +1/2 or -1/2 along any direction you choose: you choose a direction and it will be either "up" or "down." Nothing in between! In non-relativistic quantum mechanics, a particle with spin 1/2 will generally exist in a mixture of these two pure spin states. In a relativistic description, the electron wave function will have not 2, but 4 components. The great triumph of the Dirac equation was that it not only explained the electron spin but also predicted the existence of the positron. The wave function needed 4 components to describe both electrons and positrons! And Dirac needed 4 components to describe a wavefunction that satisfied the basic symmetry requirements of Einstein's relativity: energy conservation, momentum conservation, and something called Lorentz invariance, which just means a consistent description in different moving frames of reference (called Lorentz frames) of a spin 1/2 particle. So you see that there is a kind of inevitability: the symmetry requirements and special relativity forced Dirac to come up with his equation -- he really didn't have any latitude. There was only one unspecified parameter, the mass of the particle. And thus the Dirac equation describes the six leptons and the six quarks, all of which have different masses, plus their anti-particles.
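
A small numerical check of the "up or down along any axis" statement (standard quantum mechanics, using the Pauli spin matrices; nothing here is specific to this primer):

    import numpy as np

    # Spin-1/2 operators (Pauli matrices divided by 2), in units of hbar.
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2

    n = np.array([1.0, 2.0, 2.0])             # an arbitrary measurement direction
    n = n / np.linalg.norm(n)
    spin_along_n = n[0] * sx + n[1] * sy + n[2] * sz

    print(np.linalg.eigvalsh(spin_along_n))   # [-0.5  0.5], whatever direction you pick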

Thus, it turns out that characterizing particles by their intrinsic angular momentum is very useful. The spin 1/2 fermions (leptons and quarks) are called spinors, and their wave function, which is the solution to the Dirac equation, has 4 components. A scalar particle has spin 0 and only one component. For decades no fundamental scalar particle was known; the long-sought Higgs boson was finally observed at CERN in 2012. The Higgs particle is postulated to give mass to all the particles by interacting with them. All the particles responsible for interactions (except for gravity), such as the photon, the electroweak W and Z, and the color gluons, have spin 1. They are vectors in 4-dimensional space-time. The graviton, which is the quantum particle responsible for the gravitational interaction, is a tensor boson, having spin 2. These terms -- scalar, vector, tensor and spinor -- are well-defined mathematically, and describe how the wave functions for the particles change under the basic symmetry operations of translation, rotation, and boost. A boost is where you change the relative speed of the reference frames. The rotations and boosts are combined into a set called the Lorentz transformations; combining these with symmetry under translation in space and time (which are equivalent to momentum and energy conservation) gives you a larger set of symmetry operations called the Poincaré symmetry group. All the fundamental equations of physics must be invariant (i.e., not change in form) under all changes described by this group: translation, rotation and boosts.

The Lorentz transformations act on particles, but they were initially discovered as a transformation on space and time in 1900 by Lorentz (who else?). However, their proper (and much simpler) interpretation was given in 1905 by Einstein, who at the same time derived the famous relation E=mc2, and also showed that Maxwell's equations of electromagnetism obey the basic symmetries required by special relativity.
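
For reference, here is a boost along the x axis in the standard notation (not written out above):

    x' = \gamma \, (x - v t), \qquad
    t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad
    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

At everyday speeds gamma is essentially 1 and this reduces to the familiar x' = x - vt, t' = t; the mixing of space and time only shows up as v approaches c.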

Did Einstein get a Nobel prize for the special theory of relativity? No, he received a Nobel prize nearly 20 years later for his explanation of the photoelectric effect (electrons are emitted from a metal with an energy proportional to the inverse wavelength of the light, but independent of the intensity of the light), which he also produced in 1905. Einstein's starting point for relativity was to accept the results of two American scientists in Cleveland, Ohio. In the late 1880s, Michelson (who founded the physics department at Case and was the first American to receive a Nobel Prize in science) and Morley (a chemistry professor at Western Reserve) attempted to measure the speed of the Earth through the postulated "ether" that filled the universe. As the Earth goes around the sun, the speed of the Earth relative to this ether should change. They used an interferometer to compare the speed of light in two perpendicular directions, but to their surprise, they were unable to detect any difference regardless of the Earth's location in its orbit around the sun. Einstein accepted these measurements at face value and used their result in his basic postulate: that all frames of reference are equivalent. A corollary of this is that the speed of light is measured to be a constant independent of the motion of the source or the receiver. And the consequences of that are that space and time cannot be described independently. This is far from intuitive, because we live in a world where everything we see (except for light) goes much slower than the speed of light. Einstein's special theory of relativity was so different from the previous picture, where space and time were independent, that it took many years to become fully accepted. The greatest irony is that as late as the 1930s, there was still one prominent hold-out in the United States: Dayton C. Miller, the person who had been the chair of the physics department at Case Western Reserve for over 40 years! Miller spent his career trying to prove that both Einstein and (especially) his great predecessor Michelson were wrong; that there was an ether; that there was an absolute frame of reference in which the ether is stationary; and that the speed of light depends on the motion relative to the stationary ether. One might say that Miller was to physics as Florence Foster Jenkins was to opera.

Internal symmetry and interactions

Just as the conservation of energy and momentum comes out of basic symmetry requirements on the laws of physics in general (namely, the same laws should apply at any time and any location), and the basic equation for leptons, the Dirac equation, comes almost entirely out of the symmetry requirements of Lorentz invariance (namely, the same laws should apply in any non-accelerated frame of reference) -- you might wonder if the interactions between particles (the electroweak, the strong and gravity) also can be derived out of a symmetry principle.

And the answer is: yes! The vector bosons that provide the electromagnetic force (the photon), the weak force (the Z0, W+ and W-), and the strong force (the gluons) all can be generated by requiring that the laws of physics are invariant to some new, internal symmetries. This is why the Standard Model has been such a success: it shows that (1) those symmetries generate the observed forces and quanta and (2) with such special symmetry-generated forces, the amplitudes and probabilities for all interactions are finite and properly behaved. Most theories of interacting particles are not well-behaved: they give infinities when you try to calculate energies and probabilities.

These internal symmetries are called gauge symmetries, and in 1971 't Hooft and Veltman proved that gauge symmetric theories are renormalizable; i.e., well-behaved when you try to calculate measurable things such as scattering cross-sections. (A cross-section is the area that a particle appears to have in a collision with another particle.)

It had been known for many years that the theory of QED satisfied two gauge invariances. There is a global invariance that works like this. Suppose you multiply every lepton wavefunction in the universe by a constant phase, which is a complex number of unit length. (You can think of this complex number as a vector that points from the origin to somewhere on a circle of radius 1.) If you require that the laws of physics are independent of this global phase (or global gauge), you get the law of conservation of charge! And there is a local gauge invariance that is even stranger. If you assume that the laws of physics must not change if you multiply the wavefunction of every lepton in the universe by a phase that is an arbitrary function of position and time, you must have some new field to compensate for the rate of change of this phase with position or time, and that field is exactly the field of electromagnetism! (The electric and magnetic fields, which are directly measurable, are space and time derivatives of this gauge field.) So this internal symmetry that the wavefunction must satisfy automatically leads to a modification of the Dirac equation to include the interaction of leptons with photons. Everything seems to come out of symmetry.
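
Schematically, and in one common convention (the notation is standard but not used elsewhere in this primer), the local phase change of the lepton wavefunction must be accompanied by a compensating shift of the gauge field A_mu, whose space and time derivatives are the measurable electric and magnetic fields:

    \psi(x) \;\rightarrow\; e^{\,i \alpha(x)} \, \psi(x), \qquad
    A_\mu(x) \;\rightarrow\; A_\mu(x) - \frac{1}{e} \, \partial_\mu \alpha(x)

Demanding that the Dirac equation be unchanged under this pair of substitutions is what forces the photon field, and its coupling to the electron, into the theory.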

What about the weak and strong interactions? The answer is that here, too, the Standard Model prescribes a local gauge symmetry (though more complicated than the one for electromagnetism) for both the weak and strong force. It also explains how the electromagnetic and weak forces are really part of a higher symmetry, the electroweak symmetry. The electromagnetic and weak forces only appear to be two separate forces because at low energies ("low" being less than about 100 GeV -- 100 times the energy of the proton, which is about 100 billion times the energy of a photon of visible light!) the symmetry is broken by the field of a particle called the Higgs boson, a very strange beast because it is the only known elementary particle of spin 0. The Higgs boson was long postulated but not observed; it was finally produced and detected in 2012 at CERN's Large Hadron Collider, which collides protons at several trillion electron volts (TeV), several thousand times the rest energy of the proton.

The strong interaction with its 3 colors and 8 gluons comes from a yet more complicated gauge symmetry, called SU(3) color, that is derived by assuming that the color force is independent of color. Then the red-red force is the same as the red-green force, etc. Historically, SU(3) was the symmetry group that Gell-Mann and Ne'eman had invoked in 1961 to explain in an approximate way the huge zoo of mesons and baryons that had been produced by that time within accelerators. In his colorful fashion, Gell-Mann called it the Eightfold Way, because of the way the symmetry grouped some of the mesons and baryons into octets of related families. The Eightfold Way theory was a mathematical description of the observed particles. Three years later Gell-Mann realized that the octets could be interpreted as a result of having the hadrons made up of fractionally charged entities, which he named quarks. The SU(3) symmetry came from the assumption that you had three types of quarks in the hadrons, labeled by their three flavors (up, down and strange), and the interactions between the quarks were assumed to be independent of their flavor. This was similar to the old "isospin" symmetry in the nuclear force between neutrons and protons. In this case, a force that is independent of flavor is said to have an SU(3) flavor symmetry. The SU(3) flavor symmetry is only approximate, but the Standard Model takes this same symmetry group to explain the strong force. It is called SU(3) color, and in the Standard Model it is an exact symmetry.

Tables of leptons and quarks

For reference, here are tables of the three families of leptons and the three related families of quarks. Around 2002, neutrino oscillation experiments showed that the neutrinos have a small but finite mass. The muon and tau both decay to the electron by the electroweak interaction. (*) The stability of the three neutrinos is problematic, because the different flavors can change into each other. (This is the observation from which the neutrino mass has been inferred.)
   Lepton               Charge    Spin     Mass (MeV)  Lifetime (sec)
=======================================================================
   electron               -1      1/2      0.511        stable
   electron neutrino       0      1/2      < 0.00002    stable? (*)
-----------------------------------------------------------------------
   muon                   -1      1/2      105.7        2.2 x 10^(-6)
   muon neutrino           0      1/2      < 0.16       stable? (*)
-----------------------------------------------------------------------
   tau                    -1      1/2      1777         2.9 x 10^(-13)
   tau neutrino            0      1/2      < 18         stable? (*)
-----------------------------------------------------------------------

The charm, strange, top and bottom quarks all decay to up and down quarks by the electroweak interaction. The quantum numbers for the quarks -- S(strangeness), C(charm), T("truth" or topness), B("beauty" or bottomness) -- are conserved in all strong interactions, but they are not conserved in weak interactions. For strangeness, this "explains" (1) the associated production of strange hadrons -- hadrons with at least one strange quark -- by the strong interaction (i.e., they're always produced in pairs, having opposite S quantum numbers, to conserve total S), and (2) the weak decay of strange hadrons to protons, neutrons and pions, which are composed of up and down quarks only. The weak decay causes all quarks, which are always trapped in mesons or baryons, to eventually decay to up and down flavors. An isolated neutron (1 up, 2 down) is unstable because the down quark has slightly more mass than the up quark, allowing it to decay with a lifetime of 15 minutes to a proton (2 up, 1 down) plus an electron and an electron antineutrino.

  Quark flavor   Quantum #   Charge    Spin      Mass (GeV)    Lifetime
=========================================================================
   down            --        -1/3       1/2        0.31         stable
   up              --         2/3       1/2        0.31         stable
-------------------------------------------------------------------------
   strange        S = -1     -1/3       1/2        0.5          decays
   charm          C = +1      2/3       1/2        1.6          decays
-------------------------------------------------------------------------
   bottom         B = -1     -1/3       1/2        4.6          decays
   top            T = +1      2/3       1/2        180          decays
-------------------------------------------------------------------------

What about the quark mass? Because quarks cannot be isolated, the quark mass cannot be measured. We give here the so-called constituent mass, which allows an estimate of the mass of the ground state hadron composites (i.e., the baryons and mesons with the lowest energy that are composites of these quarks) as the sum of these constituent values.
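
As a sketch of how the estimate works, here are two of the lightest baryons, using the constituent values from the quark table above; the lambda baryon and the measured masses are brought in only for comparison:

    # First-order hadron mass estimates: just add the constituent quark masses (in GeV).
    CONSTITUENT_MASS = {"u": 0.31, "d": 0.31, "s": 0.5, "c": 1.6, "b": 4.6}

    def estimate_mass(quark_content):
        return sum(CONSTITUENT_MASS[q] for q in quark_content)

    print(round(estimate_mass("uud"), 2))   # proton (uud):  0.93 GeV vs. the measured 0.938 GeV
    print(round(estimate_mass("uds"), 2))   # lambda (uds):  1.12 GeV vs. the measured 1.116 GeV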

The four spectroscopies

Why did I qualify the use of constituent mass to estimating the ground states of the hadrons? Remember the particle zoo, the proliferation of particles that were observed in the 1950s and 1960s? Most of those particles are resonances (i.e., excited states) of quark composites. For example, the lowest energy of the delta resonance is a set of four particles that are excited states of the nucleon, having charges -1 (ddd), 0 (ddu), 1 (duu) and 2 (uuu). In the delta, all three quarks have spin in the same direction, giving them a net spin of 3/2, unlike the proton and neutron which have net spin of 1/2. The delta has a mass of about 1230 MeV, considerably larger than the proton mass of 938 MeV. There are even higher energy delta resonances where the quarks have orbital angular momentum as well as their intrinsic spin.

We are talking here about spectroscopy, the quantized energy levels of composite systems: systems composed of two or more components that are bound together. There are four distinct spectroscopies in matter, starting at lower energies and proceeding to higher ones.

  1. Molecular spectroscopy. This is the study of the energy levels of molecules and the transitions between them. Molecules, being composites of atoms, can rotate and vibrate in many ways, giving rise to a discrete set of excited states. The excitations can often be induced and detected by (infrared) light at specific wavelengths corresponding to the differences in energy levels of the molecules; hence the name "spectroscopy".
  2. Atomic spectroscopy. This probes the excited states of atoms. Energy is absorbed when the electrons within an atom are excited above the ground state (i.e., the state of lowest energy). When they make the transition in the other direction, giving up energy, photons (typically visible, ultraviolet or X-ray) are emitted.
  3. Nuclear spectroscopy. This describes excited states of the nucleons (neutrons and protons) within an atomic nucleus, and the transitions between these states. These transitions are mediated by high-energy photons (gamma rays), electrons or positrons (beta rays) and helium nuclei (alpha rays). (This quaint Greek terminology has been preserved since the discoveries of these mysterious rays 100 years ago!) Nuclear physics attempts to estimate and explain these excited states, as well as the stable states of atomic nuclei.
  4. Hadron spectroscopy. This describes the excited states of quarks within (3 quark) baryons or (quark/anti-quark) mesons. An example of these excited states is the set of resonances of the nucleon, such as the delta described above. Hadron spectroscopy is analogous to atomic spectroscopy, but much harder to compute because the color force of QCD is much more complicated than the electromagnetic force of QED.

At each level, we go to smaller length scales and higher energies. The following table summarizes this semi-quantitatively. The range of transition energies is typical of the spectroscopy, and the photon wavelength corresponds to those energies.

   Spectroscopy | Particles of |   Transition    |      Photon
       Type     |  Composite   |    Energies     |  Wavelength (cm)
=======================================================================
   Molecular    |  atoms       |  0.01 - 0.1 eV  |   10^-2 - 10^-3
   Atomic       |  electrons   |  0.001 - 10 KeV |   10^-4 - 10^-8
   Nuclear      |  nucleons    |  0.1 - 10 MeV   |   10^-9 - 10^-11
   Hadron       |  quarks      |  10 - 1000 MeV  |   10^-11 - 10^-13
-----------------------------------------------------------------------
Note: there exist transitions in atomic and molecular spectroscopy at very much lower energies than given above. These come from the weak magnetic coupling between the field produced by moving electrons (electrons with orbital angular momentum) and the magnetic moments of the electrons and of the nuclei, called fine structure and hyperfine structure, respectively. It is also useful to apply static electric and magnetic fields to atomic systems. For example, nuclear magnetic resonance is due to transitions between energy levels of the nuclear magnetic moment that are split by the application of an external magnetic field.
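
To connect the last two columns of the table, the photon wavelength follows from lambda = hc/E, with hc approximately 1.24 x 10^-4 eV-cm. Here is a minimal sketch, in Python, of that conversion; the sample energies are simply representative values chosen from the ranges in the table.

    # Minimal sketch: photon wavelength (cm) for a transition energy (eV),
    # using lambda = h*c/E with h*c ~ 1.24e-4 eV*cm.
    HC_EV_CM = 1.24e-4

    def wavelength_cm(energy_ev):
        return HC_EV_CM / energy_ev

    # One representative transition energy per spectroscopy:
    for label, energy_ev in [("Molecular", 0.05),        # 0.05 eV
                             ("Atomic",    1.0e2),       # 0.1 keV
                             ("Nuclear",   1.0e6),       # 1 MeV
                             ("Hadron",    1.0e8)]:      # 100 MeV
        print(f"{label:10s} E = {energy_ev:9.3g} eV   lambda = {wavelength_cm(energy_ev):.1e} cm")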

For further reading

The primer was intended to quickly familiarize you with both the basic concepts and the terminology of particle physics. You may now want a more leisurely approach, set within a broader historical background. There exist several good elementary descriptions, mostly without equations, of the physics of fundamental particles and interactions. I recommend the following:
  1. The discovery of subatomic particles by Steven Weinberg. Published by W. H. Freeman, 1990. A nice history, by a co-discoverer of the electroweak unification in the Standard Model, and one of the great physicists of the century. Weinberg saves most of the equations for the appendix, which is kept at an elementary level.

  2. Interactions: A journey through the mind of a particle physicist and the matter of this world by Sheldon Glashow. Published by Warner Books, 1988. An enjoyable first-person romp through particle physics in the second half of the twentieth century, very well written, from the viewpoint of a theoretician. Glashow is, with Weinberg and Salam, a co-discoverer of electroweak unification in the Standard Model, and I highly recommend this book for anyone with an interest in physics.

  3. The God particle by Leon Lederman. Published by Delta, 1993. No, this is not one of those silly books that pretend to use physics to prove that God exists. It is an excellent description of the excitement of experimental physics by a co-discoverer of the muon neutrino. This book was written partly as an argument for continuing to build the Superconducting Super Collider in Texas. (The SSC was cancelled.) The "God" particle is the elusive Higgs, and Lederman does his best to tell why it is important. He also talks about the connection between particle physics and the universe (cosmology).

  4. The hunting of the quark by Michael Riordan. Published by Simon & Schuster, 1987. A personal account of the search for quarks within protons, from the point of view of a young experimentalist at Stanford. Gell-Mann had predicted quarks but didn't believe in them (well, now he says he always did, but ...). Feynman and Bjorken made predictions on what experiments to do and how to interpret them. The physics is a bit muddy at times, but the cross-continental rivalries between MIT and Stanford are fascinating, and the personalities come alive. Riordan came out from MIT to do the experiments. The bi-coastal competition is carried on through the "November 1974 revolution" -- the discovery of charmed mesons -- and into the mid-1980s, with the excitement of challenging the new Standard Model.

  5. QED, The strange theory of light and matter by Richard Feynman. Published by W. H. Freeman, 1985. Four lectures on the quantum theory of light, without equations, in Feynman's unique style. In the last chapter he brings in the quark families and their interactions.

  6. In search of the ultimate building blocks by Gerard 't Hooft. Published by Cambridge Univ. Press, 1997. This book is difficult to characterize, because it is a first-hand account written at many different levels, somewhat like the blurb above on this web page. It describes 't Hooft's discovery of the renormalizability of interactions that are "gauge invariant." (That means there is a special abstract symmetry, which should perhaps be called "phase invariant," that underlies all interactions, and all the vector bosons responsible for the interactions can be generated from this symmetry.) This discovery in 1971, with his teacher Veltman, led directly to the revolution known as the Standard Model, because it made gauge theories much more interesting, and soon thereafter the electroweak theory (with photons, W and Z intermediating the combined electromagnetic and weak force) and QCD (with gluons intermediating the quark color force) were formulated.
There is a fine introduction to particle physics at a conceptual level requiring some physics background and using very little mathematics:
  1. The ideas of particle physics: an introduction for scientists by Guy Coughlan and James Dodd. Published by Cambridge University Press, 2nd edition, 1991. A unique introduction to particle physics, covering the Standard Model and including more recent speculation on strings, supersymmetry, and inflation in big-bang cosmology. If you want to go beyond the descriptive accounts in the lay books in the previous category, I can't recommend this highly enough. There is a 3rd edition (2006) that I have not seen.
There exists a wonderful introduction to particle physics at the senior or first-year graduate level, that will enable you to understand and calculate many of the predictions of the Standard Model:
  1. Introduction to elementary particles by David Griffiths. Published by Wiley, 1987. Griffiths takes you on a quantitative romp through the Standard Model, without the formal machinery of quantum field theory, and requiring as prerequisites only quantum mechanics, electromagnetism and special relativity at the undergraduate level. Delightfully informal, this book is a masterpiece. If you read only one technical book on particle physics, this should be the one; if you read more than one book, this should be the first. It has been updated in 2008 with a second edition.
Quantum field theory, the inevitable marriage of quantum mechanics and special relativity, is difficult. There are many fine texts on QFT, such as Ryder's introduction and the Weinberg trilogy, but there is a special one that I must recommend. In 2003, A. Zee produced a book on quantum field theory that is a masterpiece of clear technical writing. Zee brings both a broad view and deep perspective to the subject. This nutshell is not for the faint of heart, but it brings the most insight into QFT that I've seen.
  1. Quantum field theory in a nutshell by A. Zee. Published by Princeton UP, 2003 (updated with 2nd edition in 2010). With delightful informality and humor, Zee explores the basics and the path integral formalism of QFT, including collective phenomena in solids and liquids, and even the connection with gravity (supersymmetry, string theory). It takes you to the frontier as easily as possible.
Here are some technical historical reviews of particle physics, written with equations included, that can be recommended:
  1. The rise of the standard model, edited by Lillian Hoddeson, Laurie Brown, Michael Riordan and Max Dresden. Published by Cambridge University Press, 1997. This has contributions from many of the physicists who built the Standard Model, describing ideas, events, puzzles and discoveries from their personal viewpoints. It gives a good selection of the experimental and theoretical activities that proved fruitful.

  2. Inward bound by Abraham Pais. Published by Oxford University Press, 1986. A fine summary of twentieth century particle physics, from the discoveries of X-rays in 1895 and the electron in 1897 through the development of the Standard Model. Includes a synopsis of the book in the form of a detailed chronology.

  3. QED and the men who made it: Dyson, Feynman, Schwinger, and Tomonaga by Silvan Schweber. Published by Princeton University Press, 1994. Like Pais, Schweber is a theorist who has recently done work as an historian. This is a detailed history of the development of quantum field theory, and the events that led to simultaneous but different formulations of QED in 1948, including the extensive contributions of Freeman Dyson.
There are some excellent historical accounts of scientists and their contributions, written for non-physicist audiences. I recommend highly the following:
  1. Genius, the life and science of Richard Feynman by James Gleick. Published by Pantheon, 1992. Finest non-technical biography of this most interesting and unusual physicist.

  2. Quantum Man by Lawrence Krauss. Published by Norton, 2011. Subtitled "Richard Feynman's Life in Science", this is a technical biography that has a few Feynman diagrams and no equations. For beautiful explanations of Feynman's contributions to many areas of physics, this book has no equal.

  3. Surely you're joking, Mr. Feynman! and What do you care what other people think? by Richard Feynman and collected by Ralph Leighton. Published by Norton, 1985 and 1988. Leighton writes in the preface to the first, "That one person could have so many wonderfully crazy things happen to him in one life is sometimes hard to believe. That one person could invent so much innocent mischief in one life is surely an inspiration!" I've included these here because the anecdotes in these two books include the most enjoyable stories I've ever read. If you haven't seen them, you're in for a treat.

  4. Strange Beauty: Murray Gell-Mann and the Revolution in Twentieth-Century Physics by George Johnson. Published by Knopf, 1999. Johnson has written a definitive history of Gell-Mann's contributions to our understanding of quarks, weak interactions and the Standard Model. Even though he didn't really believe in quarks as anything more than a mathematical abstraction, Gell-Mann's "Eightfold Way" organized the jumble of mesons and baryons into symmetry classes, and predicted the highly "strange" omega minus. His "Current Algebra" approach to understanding interactions was used by many theoreticians in the turbulent sixties and beyond.

  5. Niels Bohr's Times by Abraham Pais. Published by Oxford University Press, 1991. Pais is a theoretical physicist who became a fine scientific biographer in his later years. Pais writes technical biographies, but this one has little mathematics because Bohr's great contributions were in concepts, such as quantizing the angular momentum of electron orbits to give a simple model of the hydrogen atom, and the "Copenhagen" interpretation of the quantum theory of measurement, an interpretation that Einstein never accepted. An interesting story of physics in the first half of the 20th century and the difficulties that physicists had in understanding the new physics. You won't find any quarks here.

  6. Adventures of a Mathematician by Stanislaw Ulam. Published by Scribners, 1976. Ulam was quite a character, and he inhabited the worlds of both academic mathematics and practical physics. The book is a series of anecdotes and pensées about mathematicians, physicists and ideas. Ulam knew as well as anyone how important it is to "pose problems and ask the right kinds of questions."

  7. The Matter of Everything by Suzie Sheehy. Published by Knopf, 2023. This is unusual in that it is a history of experimental physics from Roentgen to the present, answering the question "How did we find out what we understand about the structure of matter?" It takes you without equations into the world of the experimental physicist, describing the historical setting, motivations, experimental methods and discoveries over the past 125 years -- leading up to our present understanding of the Standard Model and the open, unanswered questions. This is a complement to Pais's Inward Bound, which emphasizes the theoretical view including the mathematical framework.

  8. The Double Helix by James Watson. Published by Atheneum, 1968. This is not about physics, but I couldn't resist including it. This is a very honest and exciting first-hand account of one of the greatest scientific discoveries of all time. If you like it, you should also read What Mad Pursuit by Francis Crick, published by Basic Books, 1988. Crick is the co-discoverer, with Watson, of the DNA structure. In this short book, Crick takes you through that discovery and then through the next 12 years when the basic outline of the transcription machinery was worked out. Crick was in the thick of it, and he describes the blind alleys they went through figuring out how the information from the DNA gets outside the cell nucleus and directs the manufacture of proteins.



This documentation is licensed by Dan Bloomberg under a Creative Commons Attribution 3.0 United States License.

© Copyright 2001-2023, Leptonica