  Steady State theory

  In the Steady State cosmological model, advanced by Gold, Hoyle, Bondi, and Narlikar (amongst others), the Universe is expanding but nevertheless has the same properties at all times. The principle behind this theory is called the Perfect Cosmological Principle: it extends the Cosmological Principle, which holds that the Universe is homogeneous and isotropic in space, to include homogeneity with respect to time as well.

  Because all the properties of Steady State cosmology have to be constant in time, the expansion rate of this model is also a constant. It is possible to find a solution of the Einstein equations that corresponds to this: it is called the de Sitter solution. But if the Universe is expanding, the density of matter must decrease with time. Or must it? The Steady State theory postulates the existence of a field, called the C-field, which creates matter at a steady rate to counteract the dilution caused by cosmic expansion. This process, called continuous creation, has never been observed in the laboratory, but the rate of creation required is so small (about one atom of hydrogen per cubic metre over the age of the Universe) that it is difficult to rule out continuous creation as a possible physical process by direct observation.
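
  The required rate is easy to estimate. If the density is to stay constant while every volume grows at the fractional rate 3H (three times the Hubble constant, one factor for each dimension of space), matter must be created at a rate of 3H times the density, per unit volume. The following sketch, in Python, assumes a Hubble constant of about 70 km/s/Mpc and a density close to the critical value; both figures are illustrative assumptions rather than numbers quoted above:

    import math

    # Illustrative assumptions (not figures from the text above)
    H0 = 70 * 1000 / 3.086e22   # Hubble constant, converted from km/s/Mpc to 1/s
    G = 6.674e-11               # Newton's gravitational constant, m^3 kg^-1 s^-2
    m_H = 1.67e-27              # mass of a hydrogen atom, kg

    # Critical density: rho_c = 3 H0^2 / (8 pi G)
    rho_c = 3 * H0**2 / (8 * math.pi * G)

    # Creation rate needed to hold the density constant: 3 * H0 * rho_c
    rate = 3 * H0 * rho_c / m_H          # hydrogen atoms per m^3 per second
    per_hubble_time = rate / H0          # atoms per m^3 per Hubble time

    print(f"{per_hubble_time:.0f} atoms of hydrogen per cubic metre per Hubble time")

  The answer comes out at around ten atoms per cubic metre per Hubble time: not precisely the one atom quoted above, since the true mean density is uncertain, but of the same extravagantly small order, far below anything measurable in a laboratory.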

  The Steady State was a better theory in the eyes of many theorists because it was easier to test than its rivals. In particular, any evidence at all that the Universe was different in the past from what it is now would rule out the model. From the late 1940s onward, observers attempted to see whether the properties of distant galaxies (which one sees as they were in the past) differed from those of nearby ones. Such observations were difficult, and problems of interpretation led to acrimonious disputes between advocates of the Steady State theory and its rival, exemplified by a bitter feud between the radio astronomer Martin Ryle and Fred Hoyle when the former claimed to have found significant evolution in the properties of radio sources. It was not until the mid-1960s that an accidental discovery shed independent and crucial light on the argument.

  The smoking gun

  In the early 1960s two physicists, Arno Penzias and Robert Wilson, were using a curious horn-shaped microwave antenna, left over from telecommunications satellite tests, to study the emission produced by the Earth’s atmosphere. The telescope had been designed to study possible sources of interference that might cause problems for planned satellite communication systems. Penzias and Wilson were surprised to find a uniform background of noise that would not go away. Eventually, after much checking and the removal of pigeons that had been nesting in the antenna, they accepted that the noise was real. Coincidentally, just down the road in Princeton, New Jersey, a group of astrophysicists including Dicke and Peebles had been trying to put together an experiment to detect radiation produced by the Big Bang. They realized they had been beaten to it. Penzias and Wilson published their result in the Astrophysical Journal in 1965, alongside a paper from the Dicke group explaining what it meant. Penzias and Wilson were awarded the Nobel Prize in 1978.

  Since its discovery the microwave background has been subject to intense scrutiny, and we now know much more about it than was the case in 1965. Penzias and Wilson had noticed that their noise did not depend on the time of day, as it would if it were an atmospheric phenomenon. Indeed, the very high degree of uniformity of the microwave background radiation shows that it is not even associated with sources within our galaxy (which would not be distributed evenly on the sky). It is definitely an extragalactic background. More importantly, it is now known that this radiation has a very particular kind of spectrum, called a black body. Black-body spectra arise whenever the source is both a perfect absorber and a perfect emitter of radiation. The radiation produced by a black body is often called thermal radiation, because the perfect absorption and emission bring the source and the radiation into thermal equilibrium.
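
  For reference, a black body at temperature $T$ emits with the Planck spectrum (a standard result of physics, not something quoted above):

  $$ B_\nu(T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/kT}-1}, $$

  where $h$ is Planck’s constant, $k$ is Boltzmann’s constant, and $\nu$ is the frequency of the radiation. The entire curve is fixed by the single number $T$. Wien’s displacement law, $\lambda_{\max} \approx (2.9\times 10^{-3}\ \mathrm{m\,K})/T$, then puts the peak of emission for $T \approx 2.7$ K near a wavelength of one millimetre, squarely in the microwave band.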

  13. The spectrum of the cosmic microwave background. This graph shows the measured intensity of the cosmic microwave background as a function of wavelength. Both theory and measurement are plotted here; the agreement is so good that the two curves lie on top of one another. This perfect black-body behaviour is the strongest evidence that the Universe began with a hot Big Bang.

  The characteristic black-body spectrum of this radiation demonstrates beyond all reasonable doubt that it was produced in conditions of thermal equilibrium in the very early stages of the primordial fireball. The microwave background is now very cold: its temperature is just less than three degrees above absolute zero. But this radiation has gradually been cooling as an effect of the expansion of the Universe, as each constituent photon suffers a redshift. Turning the clock back to earlier stages of the Universe’s evolution, these photons get hotter and carry more energy. Eventually one reaches a stage where the radiation starts to have a drastic effect on matter. Ordinary gas is made from atoms that consist of electrons orbiting around nuclei. In an intense radiation field, however, the electrons are stripped off to form a plasma in which the matter is said to be ionized. This would have happened about 300,000 years after the Big Bang, when the temperature was several thousand degrees and the Universe was about one thousand times smaller and a billion times denser than it is today. At this period the entire Universe was as hot as the surface of the Sun (which, incidentally, also produces radiation of near black-body form). Under conditions of complete ionization, matter (especially the free electrons) and radiation undergo rapid collisions that maintain thermal equilibrium, and the Universe is therefore opaque to light while it is ionized. As it expands and cools, the electrons and nuclei recombine into atoms. When this happens, photon scattering becomes much less efficient; in fact the Universe becomes virtually transparent after recombination, so what we see as the microwave background today is the cool relic radiation that was last scattered by electrons at the epoch of recombination. When it was finally released from scattering processes, this radiation would have been in the optical or ultraviolet part of the spectrum, but since that time it has been progressively redshifted by the expansion of the Universe and is now seen at infrared and microwave wavelengths.
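
  The figures in this account all follow from one simple scaling: the temperature of the radiation varies inversely with the size of the Universe, while the density of matter varies as the inverse cube of the size. A minimal sketch in Python, assuming a present temperature of about 2.7 degrees and a recombination temperature of 3,000 degrees (a specific choice within the ‘several thousand degrees’ above):

    # Temperature scales as 1/a and matter density as 1/a^3,
    # where a measures the size (scale factor) of the Universe.
    T_now = 2.7      # present temperature of the microwave background, kelvin
    T_rec = 3000.0   # assumed temperature at recombination, kelvin

    shrink = T_rec / T_now      # factor by which the Universe was smaller
    denser = shrink**3          # factor by which it was denser

    print(f"At recombination the Universe was {shrink:.0f} times smaller")
    print(f"and {denser:.1e} times denser than today")

  This gives a factor of about 1,100 in size and somewhat over a billion in density, matching the ‘one thousand times smaller and a billion times denser’ of the text.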

  Because of its near-perfect isotropy on the sky, the cosmic microwave background provides some evidence in favour of the Cosmological Principle. It also provides clues to the origin of galaxies and clusters of galaxies. But its importance in the framework of the Big Bang theory far exceeds these. The existence of the microwave background allows cosmologists to deduce the conditions present in the early stages of the Big Bang and, in particular, helps account for the chemistry of the Universe.

  Nucleosynthesis

  The chemical composition of the Universe is basically very simple. The bulk of known cosmic material is in the form of hydrogen, the simplest of all chemical elements, whose nucleus consists of a single proton. More than 75 per cent of the matter in the Universe is in this simple form. Aside from the hydrogen, about 25 per cent of the material constituents (by mass) of the Universe is in the form of helium-4, a stable isotope of helium which has two protons and two neutrons in its nucleus. About one hundred thousand times rarer than this come two more exotic nuclei. Deuterium, or heavy hydrogen as it is sometimes called, has a nucleus consisting of one proton and one neutron. The lighter isotope of helium, helium-3, is short of one neutron compared to its heavier version. And finally there comes lithium-7, produced as a tiny trace element with an abundance of one part in ten billion of the abundance of hydrogen. How did this chemical mix come about?

  It has been known since the 1930s that stars work by burning hydrogen as a kind of nuclear fuel. As part of this process, stars synthesize helium and other elements. But we know that stars alone cannot be responsible for producing the cocktail of light elements I have just described. For one thing, stellar processes generally destroy deuterium more quickly than they produce it, because the strong radiation fields in stars break deuterium up into its component protons and neutrons. Elements heavier than helium-4 are made rather easily in stellar interiors, but the percentage of helium-4 observed is too high to be explained by the usual predictions of stellar evolution.

  It is interesting that the difficulty of explaining the abundance of helium by stellar processes alone was recognized as early as the 1940s by Alpher, Bethe, and Gamow, who themselves proposed a model in which nucleosynthesis occurred in the early stages of cosmological evolution. Difficulties with this model, in particular an excessive production of helium, persuaded Alpher and Herman in 1948 to consider the idea that there might have been a significant radiation background at the epoch of nucleosynthesis. They estimated that this background should have a present temperature of around 5 K, not far from the value it is now known to have, although some fifteen years were to intervene before the cosmic microwave background radiation was discovered.

  The calculation of the relative amounts of light nuclei produced in the primordial fireball requires a few assumptions to be made about the properties of the Universe at the relevant stage of its evolution. In addition to the normal assumptions going into the Friedmann models, it is also necessary to require that the early Universe went through a stage of thermal equilibrium at temperatures of more than a billion degrees. In the Big Bang model this would have happened very early on indeed, within a few seconds of the beginning. Other than that, the calculations are fairly straightforward, and they can be performed using computer codes originally developed for modelling thermonuclear explosions.

  Before nucleosynthesis begins, protons and neutrons are continually interconverting by means of weak nuclear interactions (the nuclear interactions are described in more detail a bit later on). The relative numbers of protons and neutrons can be calculated as long as they are in thermal equilibrium and, while the weak interactions are fast enough to maintain equilibrium, the neutron-proton ratio continually adjusts itself to the cooling surroundings. At some critical point, however, the weak nuclear reactions become inefficient and the ratio can no longer adjust. What happens then is that the neutron-proton ratio is ‘frozen out’ at a particular value (about one neutron for every six protons). This ratio is fundamental in determining the eventual abundance of helium-4. To make helium by adding protons and neutrons together, we first have to make deuterium. But I have already mentioned that deuterium is easily disrupted by radiation. If a deuterium nucleus gets hit by a photon, it falls apart into a proton and neutron. When the Universe is very hot, any deuterium is destroyed as soon as it is made.

  This is called the deuterium bottleneck. While this nuclear traffic jam persists, no helium can be made. Meanwhile, the neutrons that froze out earlier begin to decay, with a lifetime of around ten minutes. The result of the delay is therefore that slightly fewer neutrons are available for the subsequent cooking of helium.
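
  These two steps, freeze-out followed by decay, can be put into rough numbers. In equilibrium the neutron-to-proton ratio is just a Boltzmann factor, exp(−Δmc²/kT), where Δmc² ≈ 1.29 MeV is the neutron-proton mass difference. A minimal sketch in Python, assuming a freeze-out temperature of about 0.75 MeV and a delay of roughly three minutes before the bottleneck breaks (standard textbook figures, assumed here rather than quoted above):

    import math

    delta_mc2 = 1.293    # neutron-proton mass difference, MeV
    kT_freeze = 0.75     # assumed freeze-out temperature, MeV
    tau_n = 880.0        # mean lifetime of a free neutron, seconds
    delay = 180.0        # assumed duration of the deuterium bottleneck, seconds

    # Equilibrium ratio at freeze-out: a Boltzmann factor
    r0 = math.exp(-delta_mc2 / kT_freeze)
    print(f"n/p at freeze-out: 1 in {1/r0:.1f}")

    # Neutrons decay into protons while the bottleneck persists
    n = r0 * math.exp(-delay / tau_n)   # surviving neutrons (per proton at freeze-out)
    p = 1 + (r0 - n)                    # protons gain the decayed neutrons
    print(f"n/p when helium forms: 1 in {p/n:.1f}")

  The ratio freezes out at roughly one neutron for every five or six protons, and decay during the bottleneck lowers it to about one in seven, the figure from which the helium abundance follows.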

  When the temperature of the radiation bath falls below a billion degrees, the radiation is not strong enough to dissociate deuterium and it lingers long enough for further reactions to occur. Two deuterium nuclei can weld together to make helium-3, with the ejection of a neutron. Helium-3 can capture a deuterium nucleus and make helium-4 and eject a proton. These two reactions happen very quickly with the result that virtually all neutrons end up in helium-4, and only traces of the intermediate deuterium and helium-3 are produced. The abundance by mass of helium-4 that comes out naturally is about 25 per cent, just as required. Likewise, the amounts of intermediate nuclei are also close to the observations. All this is done in the first few minutes of the primordial fireball.
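
  The 25 per cent follows directly from the neutron-proton ratio. If essentially every neutron ends up bound into helium-4, and the neutron and proton masses are taken as equal, then for a ratio $r = n/p$ the mass fraction of helium-4 is

  $$ Y \simeq \frac{2r}{1+r}, $$

  since each neutron drags one proton with it into a nucleus of mass four. With $r \approx 1/7$, the value left after the bottleneck, this gives $Y \approx 2/8 = 0.25$: the 25 per cent abundance by mass quoted above.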

  This seems like a spectacular success for the theory, and indeed it is. But agreement between detailed calculations of the nuclear fallout from the Big Bang and the observed element abundances is only reached for a particular value of one crucial parameter, the baryon-to-photon ratio of the Universe. The whole thing only works if this number is around one in ten billion: one proton or neutron for every ten billion photons. We can use the known temperature of the microwave background to work out how many photons there are in the Universe, and this can be done very accurately. Since we know the baryon-to-photon ratio required to make nucleosynthesis work, we can use it to calculate the number of baryons. The result is tiny. The amount of matter in the form of baryons can only be a few per cent of the amount of mass required to close the Universe.
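
  This chain of reasoning can be followed numerically. The number density of photons in black-body radiation scales as the cube of the temperature, and at 2.7 degrees it comes to about 400 million photons per cubic metre. A minimal sketch in Python, assuming a baryon-to-photon ratio of six parts in ten billion (a specific choice within the ‘around one in ten billion’ above) and a Hubble constant of 70 km/s/Mpc:

    import math

    k = 1.381e-23      # Boltzmann constant, J/K
    hbar = 1.055e-34   # reduced Planck constant, J s
    c = 2.998e8        # speed of light, m/s
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    m_p = 1.673e-27    # proton mass, kg

    T = 2.725                    # temperature of the microwave background, K
    eta = 6e-10                  # assumed baryon-to-photon ratio
    H0 = 70 * 1000 / 3.086e22    # assumed Hubble constant, 1/s

    # Photon number density of a black body: (2 zeta(3) / pi^2) * (kT / hbar c)^3
    n_gamma = (2 * 1.20206 / math.pi**2) * (k * T / (hbar * c))**3

    n_b = eta * n_gamma          # baryons per cubic metre
    rho_b = n_b * m_p            # baryon mass density, kg/m^3
    rho_crit = 3 * H0**2 / (8 * math.pi * G)   # density needed to close the Universe

    print(f"photons per cubic metre: {n_gamma:.2e}")    # about 4e8
    print(f"baryons per cubic metre: {n_b:.2f}")        # about 0.25
    print(f"fraction of closure density: {rho_b / rho_crit:.2f}")

  The baryons come out at about a quarter of a particle per cubic metre, a few per cent of the closure density, just as the text states.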

  Turning back the clock

  The release of the microwave background at the epoch of recombination and the synthesis of the light elements during the nuclear fireball are two major successes of the Big Bang theory. The way observations tally with detailed calculations provides firm support for the model. Buoyed by these successes, cosmologists have since tried to use the Big Bang to explore other consequences of matter at very high density and temperature. In this activity, the Big Bang theory exploits the connection between the world of the very large and that of the very small.

  The further into the past we travel, the smaller and hotter the Universe becomes. We are living now at an epoch about 15 billion years after the Big Bang. The microwave background was produced about 300,000 years after the Big Bang. The nuclear furnace did its cooking in the first few minutes. Pushing our understanding of the Universe to earlier times requires knowledge of how matter behaves at energies above those achieved in nuclear reactors. Experiments that can probe such phenomenal scales of energy can only be constructed at enormous cost. Particle accelerators such as those at CERN in Geneva can recreate some aspects of the primeval inferno, but our knowledge of how matter behaves under these extreme conditions is still fragmentary and does not extend to much earlier periods than the epoch of nucleosynthesis.

  In the early days physicists saw the Big Bang as a place where they could apply their theories. Now, with theories of particle physics still largely untested elsewhere, it has become a testing-ground. To see how this has happened, we have to understand the development of particle physics over the last forty years.

  14. Looking back in time. As we look further out into space, we look further back in time. Relatively nearby we see galaxies. Further away we can see highly active galaxies known as quasars. Beyond that there are the ‘dark ages’: the look-back time is so great that we are viewing the Universe before galaxies formed. Eventually we look so far that the Universe was so hot that it was an opaque fireball much like the central parts of a star. The fireball radiation comes to us through the expanding Universe and arrives as the microwave background. If we could see further than this, we would see nuclear reactions happening, as they do in stars. At earlier times the energies become so high that we have to rely on guesswork. Finally, we reach the edge of the Universe … when quantum gravity becomes important we know nothing.

  The four forces of nature

  Armed with the new theories of relativity and quantum mechanics, and in many cases further spurred on by new discoveries made possible by advances in experimental technology, physicists in this century have sought to expand the scope of science to describe all aspects of the natural world. All phenomena amenable to this treatment can be attributed to the actions of the four forces of nature. These four fundamental interactions are the ways in which the various elementary particles from which all matter is made interact with each other. I have already discussed two of these, electromagnetism and gravity. The other two concern the interactions between the constituents of the nuclei of atoms: the weak nuclear force and the strong nuclear force. The four forces vary in strength (gravity is the weakest, and the strong nuclear force is the strongest) and also in the kinds of elementary particles that take part in the interactions they control.

  The electromagnetic force holds electrons in orbit around atomic nuclei and is thus responsible for holding together all the material with which we are familiar. However, it was realized early in the twentieth century that, in order to apply Maxwell’s theory in detail to atoms, ideas from quantum physics and relativity would have to be incorporated. It was not until the work of Richard Feynman and others, building on that of Dirac, that a full quantum theory of the electromagnetic force, called quantum electrodynamics, was developed. In this theory, usually abbreviated to QED, electromagnetic radiation in the form of photons carries the electromagnetic interaction between charged particles.

  Before discussing the interactions any further, it is worth mentioning some of the properties of the elementary particles between which these forces act. The basic building blocks of matter are particles called fermions. These are distinguished from the force carriers, the bosons (such as the photon), by their spin. The fermions are divided into two classes, the leptons and the quarks; each of these classes is divided into three generations, and each generation contains two particles. Altogether, therefore, there are six leptons (arranged in three pairs). One of each lepton pair is charged (the electron is an example), while the other carries no charge and is called a neutrino. While the electron is stable, the other two charged leptons (called the muon and the tau) decay very rapidly and are consequently much more difficult to detect.

  The quarks are all charged, and the three families of them are also arranged in pairs. The first family contains the ‘up’ and the ‘down’; the second the ‘strange’ and the ‘charmed’; the third the ‘bottom’ and the ‘top’. Free quarks are not observed, however. They are always confined in composite particles called hadrons. These particles include the baryons, which are combinations of three quarks, the most familiar examples of which are the proton and the neutron. There are many other hadron states, but most of them are very unstable. They might be produced in accelerator experiments (or in the Big Bang) but do not hang around for long before decaying. On our current understanding, it seems that at times earlier than about one millionth of a second after the beginning, quarks had sufficient energy to tear themselves free. Before that time, the familiar hadronic particles would have been dissolved into a ‘soup’ of free quarks.