Cosmology: A Very Short Introduction

  But in fact the theory helps here too. Thinking about velocities and distances of sources is unnecessarily complicated. While the redshift is usually thought of as a Doppler shift, there is another way of picturing this effect, which is much simpler and actually more accurate. In the expanding universe, separations between any points increase uniformly in all directions. Imagine an expanding sheet of graph paper. The regular grid on the paper at some particular time will look like a blown-up version of the way it looked at an earlier time. Because the symmetry of the situation is preserved, one only needs to know the factor by which the grid has been expanded in order to recover the past grid from the later one. Likewise, since a homogeneous and isotropic universe remains so as it expands, one only needs to know an overall ‘scale factor’ to obtain a picture of the past physical conditions from present data. This factor is usually given the symbol a(t) and its behaviour is governed by the Friedmann equations discussed in the previous chapter.

  Remember that light travels with a finite speed. Light arriving now from a distant source must have set out at some finite time in the past. At the time of emission the Universe was younger than it is now and, since it has been expanding, it was smaller then too. If the Universe has expanded by some factor between the emission of light and its detection at a telescope, the light waves will have been stretched by the same factor as they travelled through space. For example, if the Universe expanded by a factor of three then the wavelength would triple. This is a 200 per cent increase, and the source is consequently observed to have a redshift of 2. If the expansion were only 10 per cent (i.e. a factor of 1.1) then the redshift would be 0.1, and so on. The redshift is due to the stretching of space-time caused by cosmic expansion.
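
  To make the rule concrete, here is a minimal sketch of the arithmetic, assuming only the relation just described: one plus the redshift equals the factor by which the Universe has expanded while the light was in flight. The numbers are those from the examples in the text.

# Redshift from the cosmic expansion factor: 1 + z = (size now) / (size at emission).
def redshift_from_expansion(expansion_factor):
    return expansion_factor - 1.0

print(redshift_from_expansion(3.0))             # expansion by a factor of three -> z = 2
print(round(redshift_from_expansion(1.1), 2))   # expansion by 10 per cent       -> z = 0.1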

  8. Redshift. As light travels from a source galaxy to the observer it gets stretched by the expansion of the Universe, eventually arriving with a longer wavelength than when it started.

  This interpretation is so simple that it eluded physicists for many years. In 1917, Willem de Sitter had published a cosmological model in which he found that light rays would be redshifted. Because he had expressed his results in an unusual coordinate system, he didn’t realize that his model represented an expanding Universe, and instead he sought to explain what he had found as some kind of weird gravitational effect. There was considerable confusion about the nature of the ‘de Sitter effect’ for many years, but it is now known to be extremely simple.

  It is important also to stress that not everything takes part in the expansion. Objects that are held together by forces other than gravity do not participate. This includes elementary particles, atoms, molecules, and rocks. These instead remain at a fixed physical size as the Universe swells around them. Likewise, objects in which the force of gravity is dominant also resist the expansion. Planets, stars, and galaxies are bound so strongly by gravitational forces that they are not expanding with the rest of the Universe. On scales even larger than galaxies, not all objects are moving away from each other either. For example, the Andromeda galaxy (M31) is actually approaching the Milky Way because these two objects are held together by their mutual gravitational attraction. Some massive clusters of galaxies are similarly held together against the cosmic flow. Objects larger than this may not necessarily be bound (as individual galaxies are), but their gravity may still be strong enough to cause a distortion of Hubble’s Law. Although the linearity of the Hubble Law is now well established out to quite large distances, there is considerable ‘scatter’ about the straight line. Part of this represents statistical errors and uncertainties in the distance measurements, but this is not the whole story. Hubble’s Law is only exactly true for objects moving in an idealized homogeneous and isotropic Universe. Our Universe may be roughly like this on large enough scales, but it is not exactly homogeneous. Its clumpiness deflects galaxies from the pure ‘Hubble flow’, causing the scatter in Hubble’s plot.

  But on the largest scales of all, there are no forces strong enough to counteract the global tendency of the Universe to expand with time. In a broad-brush sense, therefore, ignoring all these relatively local perturbations, all matter is rushing apart from all other matter with a velocity described by Hubble’s Law.

  9. The Hubble diagram updated. A more recent compilation of velocities and distances based on work by Allan Sandage. The distance range covered is much greater than in Hubble’s original diagram. The small black rectangle in the bottom left of the diagram would entirely cover Hubble’s 1929 data.

  The quest for H0

  So far I have concentrated on the form of the Hubble Law, and how it is interpreted theoretically. There is one other important aspect of Hubble’s Law to discuss, and that is the value of the constant H0. The Hubble constant H0 is one of the most important numbers in cosmology, but it is also an example of one of the failings of the Big Bang model. The theory cannot predict what value this important number should take; it is part of the information imprinted at the beginning of the Universe where our theory breaks down. Obtaining a true value for H0 using observations is a very complicated task. Astronomers need two measurements for each galaxy. First, spectroscopic observations reveal its redshift, and hence its velocity. This part is relatively straightforward. The second measurement, that of the distance, is by far the more difficult to perform.
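
  Once both numbers are in hand for a set of galaxies, H0 is simply the slope of the straight line relating velocity to distance. The short sketch below shows that final step of the arithmetic; the velocity and distance values in it are invented purely for illustration, not real measurements.

# H0 as the slope of recession velocity against distance (a line through the origin).
galaxies = [
    # (distance in Mpc, velocity in km/s) -- illustrative values only
    (10.0,  700.0),
    (20.0, 1300.0),
    (50.0, 3400.0),
]
H0 = sum(v * d for d, v in galaxies) / sum(d * d for d, v in galaxies)
print(f"H0 is roughly {H0:.0f} km/s per Mpc")   # ~68 for these made-up numbers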

  Suppose you were in a large dark room in which there is a light bulb placed at an unknown distance from you. How could you determine its distance? One way would be to attempt to use some kind of triangulation. You could use a surveying device such as a theodolite, moving around in the room, measuring angles to the bulb from different positions, and using trigonometry to work out the distance. An alternative approach is to measure distances using the properties of the light emitted by the bulb. Suppose you knew that the bulb was, say, a 100-watt bulb. Suppose also that you were equipped with a light meter. By measuring the amount of light you receive using the light meter, and remembering that the intensity of light falls off as the square of the distance, you could infer the distance to the bulb. If you didn’t know in advance the power of the bulb, however, this method would not work. On the other hand, if there were two identical bulbs in the room with unknown but identical wattage then you could tell the relative distances between them quite easily. For example, if one bulb produced a reading on your light meter that was only a quarter of the reading produced by the other bulb, then the first bulb must be twice as far away as the second. But you still don’t know in absolute terms how far it is to either of the bulbs.
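
  The inverse-square reasoning in this example can be written down in a couple of lines. The sketch below simply inverts the square law for two bulbs of equal but unknown wattage; the meter readings are the illustrative ones used above.

# Relative distance of two identical bulbs from the ratio of meter readings:
# brightness falls as 1/distance^2, so distance ratio = sqrt(brightness ratio).
def relative_distance(reading_near, reading_far):
    return (reading_near / reading_far) ** 0.5

print(relative_distance(4.0, 1.0))   # a reading four times smaller -> twice as far away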

  Putting these ideas into an astronomical setting highlights the problems of determining the distance scale of the Universe. Triangulation is difficult because it is not feasible to move very much relative to the distances concerned, except in special situations (see below). Measuring absolute distances using stars or other sources is also difficult unless we can find some way of knowing their intrinsic luminosity (or power output). A feeble star nearby looks the same as a very bright star far away, since stars, in general, cannot be resolved even by the most powerful telescopes. If we know that two stars (or other sources) are identical, however, then measuring relative distances is not so difficult. It is the calibration of these relative distance measures that forms the central task of work on the extragalactic distance scale.

  To put these difficulties into perspective, one should remember that it was not until the 1920s that there was even a rough understanding of the scale of the Universe. Prior to Hubble’s discovery that the spiral nebulae (as they were then called) were outside the Milky Way, the consensus was that the Universe was actually very small indeed. These nebulae, now known to be spiral galaxies like the Milky Way, were usually thought to represent the early stages of formation of structures like our Solar System. When Hubble announced the discovery of his eponymous law, the value of H0 he obtained was about 500 kilometres per second per Megaparsec (the usual units in which the Hubble constant is measured). This is about eight times larger than current estimates. Hubble had made a mistake in identifying a kind of star to use as a distance indicator (see below) and, when his error was corrected in the 1950s by Baade, the value dropped to about 250 in the same units. Sandage, in 1958, revised the value still further to between 50 and 100, and present observational estimates still lie in this range.

  Modern measurements of Ho use a battery of distance indicators, each one taking one step upwards in scale, starting with local estimates of distances to stars within the Milky Way, and ending at the most distant galaxies and clusters of galaxies. The basic idea, however, is still the same as that pioneered by Hubble and Sandage.

  First, one exploits local kinematic distance measures to establish the scale of the Milky Way. Kinematic methods do not rely upon knowledge of the absolute luminosity of a source, and they are analogous to the idea of triangulation mentioned above. To start with, distances to relatively nearby stars can be gauged using the trigonometric parallax of a star, i.e. the change in the star’s position on the sky in the course of a year due to the Earth’s motion in space. The usual astronomers’ unit of distance – the parsec (pc) – stems from this method: a star one parsec away has a parallax of one second of arc, the parallax being the apparent shift in its position for a baseline equal to the distance between the Earth and the Sun. For reference, one parsec is about 3.26 light years. The important astrometric satellite Hipparcos was able to obtain parallax measurements for more than a hundred thousand stars in our galaxy.
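
  The parallax rule amounts to a one-line calculation: the distance in parsecs is just one divided by the parallax angle measured in seconds of arc. A minimal sketch:

# Distance from trigonometric parallax: d (parsecs) = 1 / p (arcseconds).
def distance_parsecs(parallax_arcsec):
    return 1.0 / parallax_arcsec

print(distance_parsecs(1.0))     # a parallax of 1 arcsec    -> 1 pc
print(distance_parsecs(0.01))    # a parallax of 0.01 arcsec -> 100 pc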

  Another important class of distance indicators contains variable stars of which the most important are the Cepheid variables. The variability of these objects gives clues about their intrinsic luminosity. The classical Cepheids are bright variable stars known to display a very tight relationship between the period of variation P and their absolute luminosity L. The measurement of P for a distant Cepheid thus allows one to estimate its L, and hence its distance. These stars are so bright that they can be seen in galaxies outside our own and they extend the distance scale to around 4 Mpc (4,000,000 pc). Errors in the Cepheid distance scale, due to interstellar absorption, galactic rotation, and, above all, a confusion between Cepheids and another type of variable star, called the W Virginis variables, were responsible for Hubble’s large original value for H0. Other stellar distance indicators allow the ladder to be extended slightly to around 10 Mpc. Collectively, these methods are given the name primary distance indicators.
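
  The logic of the Cepheid method is the light-bulb argument again: the period tells you the wattage, and the apparent brightness then tells you the distance. The sketch below goes through those two steps with a made-up period-luminosity calibration and a made-up measured flux, chosen only to illustrate the arithmetic; obtaining the real calibration is precisely the hard part described above.

import math

def luminosity_from_period(period_days):
    # Hypothetical period-luminosity calibration (placeholder numbers, in watts).
    return 1.0e29 * period_days ** 1.2

def distance_from_flux(luminosity_watts, flux_watts_per_m2):
    # Inverse-square law: flux = L / (4 pi d^2), solved for the distance d.
    return math.sqrt(luminosity_watts / (4.0 * math.pi * flux_watts_per_m2))

L = luminosity_from_period(10.0)              # a 10-day Cepheid (illustrative)
d_metres = distance_from_flux(L, 1.5e-17)     # illustrative measured flux
print(f"distance ~ {d_metres / 3.086e22:.1f} Mpc")   # 1 Mpc ~ 3.086e22 metres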

  The secondary distance indicators include HII regions (large clouds of ionized hydrogen surrounding very hot stars) and globular clusters (clusters of around one hundred thousand to ten million stars). The former have diameters, and the latter absolute luminosities, that show only a small scatter about the mean for objects of each class. With such relative indicators, calibrated using the primary methods, one can extend the distance ladder out to about 100 Mpc. The tertiary distance indicators include brightest cluster galaxies and supernovae. Clusters of galaxies can contain up to about a thousand galaxies. One finds that the brightest elliptical galaxy in a rich cluster has a very standard total luminosity, probably because these objects are formed in a special way, by cannibalizing other galaxies. With the brightest galaxies one can reach distances of several hundred Mpc. Supernovae are stars that explode, producing a luminosity roughly equal to that of an entire galaxy. These stars are therefore easily seen in distant galaxies. Many other indirect distance estimates have also been explored, such as correlations between various intrinsic properties of galaxies.

  10. The Hubble Space Telescope. This photograph was taken as the telescope was deployed from the Space Shuttle in 1990. One of the most important projects the Hubble Telescope has undertaken has been to measure distances to stars in distant galaxies in order to measure Hubble’s constant.

  So there seems to be no shortage of techniques for measuring H0. Why is it then that the value of H0 is still known so poorly? One problem is that a small error in one rung of the distance ladder also affects higher levels of the ladder in a cumulative way. At each level there are also many corrections to be made: the effect of galactic rotation in the Milky Way; telescope aperture variations; absorption and obscuration in the Milky Way; and observational biases of various kinds. Given the large number of uncertain corrections, it is perhaps not surprising that we are not yet in a position to determine H0 with any great precision. Controversy has surrounded the distance scale ever since Hubble’s day. An end to this controversy seems to be in sight, however, because of the latest developments in technology. In particular, the Hubble Space Telescope (HST) is able to image stars, particularly Cepheid variables, directly in galaxies within the Virgo cluster of galaxies, an ability which bypasses the main sources of uncertainty in the calibration of traditional steps in the distance ladder. The HST key programme on the distance scale is expected to fix the value of Hubble’s constant to an accuracy of about 10 per cent. This programme is not yet complete, but latest estimates are settling on a value of H0 in the range 60 to 70 kilometres per second per Megaparsec.
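
  The reason errors pile up is that each rung of the ladder is calibrated against the one below it, so fractional errors multiply rather than simply add. The sketch below shows the effect for three rungs each uncertain at the 10 per cent level; these figures are illustrative, not actual error budgets.

# Compounding of calibration errors up the distance ladder (worst case, with all
# errors pushing in the same direction).
rung_errors = [0.10, 0.10, 0.10]    # fractional uncertainty of each rung (illustrative)

compounded = 1.0
for error in rung_errors:
    compounded *= (1.0 + error)
print(f"possible overall error ~ {compounded - 1.0:.0%}")   # about 33 per cent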

  11. Cepheids in M100. These pictures were taken with the Hubble Telescope; the three images indicate the presence of a variable star now known to be a Cepheid. Hubble has been able to measure the distance to this galaxy directly, bypassing the indirect methods in use prior to the launch of this telescope.

  The age of the Universe

  If the expansion of the Universe proceeded at a constant rate then it would be a very simple matter to relate the Hubble constant to the age of the Universe. All the galaxies are now rushing apart, but in the beginning they must have all been in the same place. All we need to do is work out when this happened; the age of the Universe is then the time elapsed since this event. It’s an easy calculation, and it tells us that the age of the Universe is just the inverse of the Hubble constant. For current estimates of H0, the age of the Universe works out at around 15 billion years.
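
  That easy calculation amounts to a unit conversion: with H0 quoted in kilometres per second per Megaparsec, 1/H0 becomes a time once the Megaparsec is expressed in kilometres. A minimal sketch, taking H0 = 65 in the usual units as a representative value:

# Hubble time 1/H0, converted from (km/s per Mpc) into billions of years.
KM_PER_MPC = 3.086e19           # kilometres in one Megaparsec
SECONDS_PER_GYR = 3.156e16      # seconds in a billion years

def hubble_time_gyr(H0_km_s_per_Mpc):
    seconds = KM_PER_MPC / H0_km_s_per_Mpc
    return seconds / SECONDS_PER_GYR

print(f"{hubble_time_gyr(65.0):.0f} billion years")   # ~15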

  This calculation would only be true, however, in a completely empty universe that contained no matter to cause the expansion to slow down. In the Friedmann models, the expansion is decelerated by an amount depending on how much matter there is in the Universe. We don’t really know exactly how much deceleration needs to be allowed for, but it’s clear the age will always be less than the value we just calculated. If the expansion is slowing down, it must have been faster in the past, so the Universe must have taken less time to get to where it is. The effect of deceleration is, however, not particularly large: for a flat, matter-dominated universe the age turns out to be exactly two-thirds of the Hubble time, or about 10 billion years.
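
  The two-thirds factor is a standard result for the simplest flat, matter-dominated Friedmann model, and the arithmetic is correspondingly short. A minimal sketch, carrying on from the previous one:

# Age of a flat, matter-dominated universe: two-thirds of the Hubble time.
hubble_time = 15.0                              # billions of years, for H0 ~ 65
flat_universe_age = (2.0 / 3.0) * hubble_time
print(f"age of a flat universe ~ {flat_universe_age:.0f} billion years")   # ~10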

  An independent method for estimating the age of the Universe is to date objects within it. Obviously, since the Big Bang represents the origin of all matter as well as of space-time, there should be nothing in the Universe that is older than the Universe. Dating astronomical objects is, however, not easy. One can estimate ages of terrestrial rocks using the radioactive decay of long-lived isotopes, such as uranium-235, which have half-lives measured in billions of years. The method is well understood and similar to the archaeological use of radiocarbon dating; the only difference is that the vastly longer timescales involved in the cosmological application require elements with much longer half-lives than carbon-14. The limitation of such approaches, however, is that they can only be used to date material within the Solar System. Lunar and meteoritic rocks are older than terrestrial material, but they may have formed very recently indeed during the history of the Universe, so they are not useful in the cosmological setting.
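
  The principle behind such radiometric dating can be captured in a few lines. If a rock starts out containing only the parent isotope, the ratio of daughter atoms to surviving parent atoms fixes how many half-lives have elapsed. The sketch below uses the half-life of uranium-235 (about 0.7 billion years); the daughter-to-parent ratio is an illustrative number, not a real measurement.

import math

def radiometric_age_gyr(daughter_per_parent, half_life_gyr):
    # Elapsed time from the daughter/parent ratio, assuming no daughter
    # atoms were present to begin with.
    return half_life_gyr * math.log(1.0 + daughter_per_parent) / math.log(2.0)

age = radiometric_age_gyr(85.0, 0.704)
print(f"{age:.1f} billion years")   # ~4.5, about the age of the Solar System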

  12. The age of the Universe. Whether they are open, flat, or closed, the usual Friedmann models are always slowing down. This means that the Hubble time, 1/H0, always exceeds the actual time elapsed since the Big Bang (t0).

  The most useful method for measuring the age of the Universe is less direct. The strongest constraints come from studies of globular star clusters. The stars in these clusters are thought to have all formed at the same time, and the fact that they are generally stars of very low mass suggests that they are quite old. Because they all formed at the same time, a collection of these stars can be used to calculate how long they have been evolving. This puts a lower limit on the age of the Universe, because one must allow some time since the Big Bang to form the clusters in the first place. Recent studies suggest that such systems are around 14 billion years old, though this has become controversial in recent years. One can see that this poses immediate problems for the flat universe model. Globular cluster stars are simply too old to fit in the short lifetime of such a universe. This has lent some support to the argument that we in fact live in an open universe. More recently, and more radically, the ages of old stars also seem to fit neatly with other evidence suggesting the Universe might have been speeding up rather than slowing down. I’ll discuss this more in Chapter 6.

  Chapter 5

  The Big Bang

  While the basic theoretical framework of the Friedmann models has been around for many years, the Big Bang has emerged only relatively recently as the likeliest broad-brush account of how the contents of the Universe have evolved with time. For many years, most cosmologists favoured an alternative model called the Steady State model. Indeed, the Big Bang itself had a number of variants. A more precise phrasing of the modern theory is to call it the ‘hot Big Bang’ to distinguish it from an older rival (now discarded), which had a cold initial phase. As I have mentioned already, it is also not entirely correct to call this a ‘theory’. The difference between theory and model is subtle, but a useful definition is that a theory is usually expected to be completely self-contained (it can have no adjustable parameters, and all mathematical quantities are defined a priori) whereas a model is not complete in the same way. Owing to the uncertain initial stages of the Big Bang, it is difficult to make cast-iron predictions and it is consequently not easy to test. Advocates of the Steady State theory have made this criticism on many occasions. Ironically, the term ‘Big Bang’ was initially intended to be derogatory and was coined in a BBC radio programme by Sir Fred Hoyle, one of the model’s most prominent dissidents.