In the late 1970s and 1980s, when Sandage had settled on a Hubble constant close to 50 and an age of the universe of 15 to 20 billion years, Gerard de Vaucouleurs of the University of Texas, for one, took serious issue with those numbers. Shortly before Freedman made her findings public in October 1994, there were other studies whose results implied that current estimates of the universe’s expansion rate and age might be headed for another revision. A team led by Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics, using the Cerro Tololo Inter-American Observatory in Chile, measured the expanding debris from five supernovae and judged the universe might be from 9 to 14 billion years old. But Freedman’s team’s calculations, based on data from the Hubble telescope, were more convincing than any of these other challenges to the older numbers.
Astronomers leave a wide margin for error in calculations like these, and the immediate temptation is to wonder whether the numbers are sufficiently fuzzy to allow the universe to be just barely old enough and the oldest stars just barely young enough. However, stars didn’t pop into existence the instant the universe began. An estimate that made the oldest stars exactly the same age as the universe would therefore be unsatisfactory: there must be a cushion of at least a billion years after the beginning to leave comfortable time for them to form. The leeway in Freedman’s numbers and in current estimates of the age of the oldest stars is not enough. For reference: a Hubble constant of around 50 indicates an age for the universe of around 15 billion years; a Hubble constant of around 70 or 80, a much younger universe – about 10 billion years or less.
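For readers who want to see where such figures come from, here is a minimal sketch of the arithmetic – my own illustration, not any team’s actual calculation. The age of the universe scales as the inverse of the Hubble constant; how far below 1/H0 the true age falls depends on how much matter has been slowing the expansion.

```python
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def hubble_time_gyr(h0):
    """Hubble time 1/H0 in billions of years, for H0 in km/s/Mpc."""
    return KM_PER_MPC / h0 / SECONDS_PER_YEAR / 1e9

for h0 in (50, 70, 80):
    t = hubble_time_gyr(h0)
    # 1/H0 would be the age of an empty, coasting universe; a universe
    # with the critical density of matter is only two-thirds that old.
    # Realistic estimates fall between these two brackets.
    print(f"H0 = {h0}: between {2 * t / 3:.1f} and {t:.1f} billion years")
```

Run as written, this brackets a Hubble constant of 50 between roughly 13 and 20 billion years, and a Hubble constant of 80 between roughly 8 and 12 billion – which is why the higher measured values collide so badly with 14-billion-year-old stars.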
A negative reaction to Freedman’s team’s announcement came almost immediately, and not unexpectedly, from Sandage, whose office was right down the hall from Freedman’s at the Carnegie Observatories. Sandage had served under Hubble himself at this same observatory when it was known as Mount Wilson. According to Sandage, the discrepancy was being grossly overpublicized and its importance exaggerated – mostly media hype. There were plenty of possibilities of error in the Hubble team’s results: in their measurements of the apparent magnitudes of the Cepheids, for instance, and in their assumption that the galaxy containing these Cepheids actually lies at the centre of the Virgo cluster. Perhaps instead it is in the foreground, nearer our own Galaxy. Arguing for that is the fact that M100 is a spiral galaxy, and it is elliptical galaxies, not spirals, that are more commonly found in the centres of clusters like Virgo. Freedman countered that her team had already taken that possibility into account in assigning a wide margin of error to their estimate. What’s more, as they had also reported in their original paper, the relationship of Virgo to a more distant cluster, the Coma cluster, had made it possible to step out to that greater distance and calculate the Hubble constant there – a calculation that gave the same result.
The question also arose whether the rate at which Virgo is moving away is a dependable indicator of the recession rate of the universe as a whole. Tammann reported that his studies indicated that Virgo is actually moving away more rapidly than the rest of the universe. Here, again, was the perennial difficulty of sorting out the actual ‘Hubble flow’ from all the other movement that’s going on among and within clusters and superclusters. How to extract from this complicated picture the part of all that motion that is directly attributable to the expansion of the universe? Any sample of the universe is likely to give a faulty reading unless it is an extremely large sample indeed. No one knows for certain how large a sample that would have to be.
Freedman and her team hadn’t claimed that their result settled once and for all the value of the Hubble constant, but neither were they convinced by the opposition. Sandage’s own measurements had recently been challenged on another front. He had been using Type Ia supernovae for making his distance measurements. Some of the measurements that gave Sandage and his collaborators Hubble constants of around 50 were based on the assumption that these supernovae all reach the same maximum brightness and are good standard candles. However, Mark M. Phillips, an astronomer at the Cerro Tololo Inter-American Observatory in Chile, had recently found that not all Type Ia supernovae have the same brightness characteristics. Brighter ones appeared to occur in spiral galaxies or galaxies with many bright stars. Phillips had developed a technique for analysing the light curves (how the supernova brightens and dims) to recognize these differences and make allowances for them, but, as of 1994, Sandage had not corrected his data. Ominously for Sandage, Robert Kirshner and his colleagues at the Harvard-Smithsonian Center for Astrophysics had corrected theirs, and their estimate of the Hubble constant had risen from 55 to around 67.
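The essence of Phillips’s technique is that a supernova’s rate of dimming betrays its true peak brightness: the more luminous ones fade more slowly. Here is a minimal sketch of a correction of this kind; the functional form is the now-standard one, but the coefficients are illustrative placeholders, not Phillips’s published fit.

```python
def corrected_peak_magnitude(dm15, a=-19.3, b=0.8):
    """Estimate a Type Ia supernova's peak absolute magnitude from its
    decline rate, instead of assuming every event reaches the same
    brightness.

    dm15 -- magnitudes of dimming in the 15 days after peak
    a, b -- calibration constants (illustrative placeholders)
    """
    # Slow decliners (small dm15) come out more luminous (more negative
    # magnitude); fast decliners come out fainter. 1.1 is a conventional
    # reference decline rate.
    return a + b * (dm15 - 1.1)
```

Skipping a correction of this sort, as Sandage had as of 1994, means treating the faint fast-decliners and the luminous slow-decliners as identical candles, which skews the inferred distances.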
At the American Astronomical Society meeting in January 1995, just two months after Freedman’s announcement, the new measurements and the controversy they’d stirred up about the value of the Hubble constant and the age of the universe were centre stage. Mark Phillips and Mario Hamuy, Phillips’s colleague in Chile, were there reporting their measurement of 25 supernovae, some as far away as one billion light years. Compensating for the differences in maximum brightness that Phillips had discovered, they’d arrived at a Hubble constant of 60 to 70, in the middle range between Sandage’s measurements and those of Freedman’s team. Freedman announced that the Hubble telescope had now measured distances to 40 more Cepheids in M100 and distances to two other galaxies, M101 and NGC925. The new data were consistent with her team’s earlier results.
Eight months passed, and in September 1995, Nial Tanvir at Cambridge University, with colleagues at Durham University in England and the Space Telescope Science Institute in Baltimore, estimated – based on fresh Hubble observations of Cepheids – a distance of 38 million light years to the M96 galaxy, in the direction of the constellation Leo. From this they inferred a distance to the much more remote Coma cluster. The team’s calculation gave the universe an age of 9.5 billion years, give or take a billion.
In March of 1996, a year and a half after Freedman’s initial announcement, Sandage and colleagues had rallied and were ready to report the results of their ongoing supernova study. Sandage, whistling into the wind, it seems, told an interviewer, ‘We believe that this marks the end of the “Hubble wars”.’ In 1990 there had been a Type Ia supernova in the galaxy NGC4639. Sandage’s team had been observing the light curve of this supernova since 1992. What was particularly significant about this investigation was that the Hubble telescope had been able to see individual stars in this same galaxy and 20 of them were Cepheids. From their brightness, Sandage’s group had been able to calculate the distance of the NGC4639 galaxy as 82 million light years, and from that they knew the absolute magnitude of the Type Ia supernova in a way that didn’t depend on comparing that supernova’s brightness with the brightness of other Type Ia’s. Applying this fresh knowledge to previous measurements of the apparent peak brightnesses of six other Type Ia supernovae, Sandage recalculated their distances and came up with a Hubble constant of 57, in the range he had been insisting on all along.
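The logic of Sandage’s calculation is the classic distance-modulus ladder. A minimal sketch follows; only the Cepheid distance of roughly 25 megaparsecs (82 million light years) to NGC4639 comes from the account above, while every magnitude and velocity is a made-up placeholder, chosen so that this toy example lands near his published 57.

```python
import math

def absolute_magnitude(m, d_pc):
    """Invert the distance modulus: M = m - 5*log10(d / 10 pc)."""
    return m - 5 * math.log10(d_pc / 10)

def distance_pc(m, M):
    """Distance implied by apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m - M + 5) / 5)

# Step 1: the Cepheids fix the distance to NGC4639, turning the apparent
# peak magnitude of its 1990 supernova into an absolute peak magnitude.
M_peak = absolute_magnitude(m=12.7, d_pc=25e6)        # placeholder m

# Step 2: assume another Type Ia peaks at the same absolute magnitude,
# convert its apparent peak magnitude into a distance, and divide its
# recession velocity by that distance to get the Hubble constant.
d_mpc = distance_pc(m=15.0, M=M_peak) / 1e6           # placeholder m
v = 4100.0                                            # placeholder, km/s
print(f"H0 ~ {v / d_mpc:.0f} km/s/Mpc")               # ~57 with these inputs
```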
Sandage considered the case closed. Freedman and her colleagues were not won over. Sandage’s new results, compelling as they seemed, didn’t make the team’s own Hubble findings go away or point up any flaw in their calculations. The Freedman team’s numbers did come down a little. At the Princeton ‘250th Birthday Conference’ in June 1996, she reported a value of 73, based on a combination of the distance measurements to Cepheids, the study of Type Ia and Type II supernovae, the Tully-Fisher relation and surface-brightness fluctuations. Freedman said that with so much data accumulating so rapidly, the debate would be settled in the next three years. Sandage was sure that it would finally be settled close to his own number, but perhaps not until well into the next century, too late for him personally to enjoy his victory. He said of the death in 1995 of de Vaucouleurs, his old critic: ‘Very unfortunate. Anybody in the middle of a crisis should live to see the resolution of the crisis.’
What about the age of stars? There has been less controversy about that than there has been about the age of the universe. In the winter of 1996, study of some of the most distant galaxies ever observed showed them to be as much as 14 billion years old, shoring up faith in earlier calculations. However, in the late summer and fall of 1997, physicists at Case Western Reserve University in Ohio, led by Lawrence M. Krauss, re-examined the age of some of the oldest and most distant stars, using measurements from the Hipparcos satellite. They recalculated the age of globular clusters previously thought to be as much as 15 billion years old or even older. Hipparcos’s measurements, made with unprecedented precision, revealed that these globular clusters are further away than earlier estimates had put them, and so, in order for them to appear as bright as they do, they must also be brighter than previously thought. And if they are brighter, that means they are burning faster and have evolved more quickly, making them younger – perhaps 11 billion rather than 15 billion years old.
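The step from ‘further away’ to ‘intrinsically brighter’ is just the inverse-square law. In symbols (my notation, not the researchers’), the flux F that we measure, the luminosity L of the star, and its distance d are related by

\[ F = \frac{L}{4\pi d^{2}} \qquad\Longrightarrow\qquad L = 4\pi d^{2}F, \]

so for a fixed measured flux, a larger distance forces a larger luminosity – and more luminous stars burn through their fuel sooner.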
Hipparcos cannot directly measure the distance to these globular clusters. They are in the Milky Way’s halo, outside the Galaxy’s main disc, too far away for parallax measurements even with Hipparcos. Instead, researchers used Hipparcos to measure the distance and brightness of nearer stars and compared them with stars of similar composition in globular clusters. Catherine Turon of the Paris-Meudon Observatory, who along with others has calculated 12.8 to 15.2 billion years for the age of the globular cluster known as M92, has admitted there are difficulties with such measurements: the stars used for comparison are often dim stars almost devoid of metals and other heavy elements, and getting stellar models adapted to such extreme, metal-poor objects is problematic. Processes such as fast rotation, or metals sinking out of view into a star’s interior, could skew the conclusions. Michael Perryman, project scientist for Hipparcos, is even more sceptical: Hipparcos’s own data have shown that some of the stellar models are spectacularly wrong. These new calculations of the age of stars did not put the age-of-the-universe paradox to rest.
However, also in the autumn of 1997, astronomers using the Hubble telescope to watch the collision of two galaxies called the Antennae observed at least 1,000 clusters of newborn stars forming from giant hydrogen clouds in the centre of the merging galaxies, indicating that globular clusters are not all so ancient. Some at least are emerging out of more recent galactic collisions.
Perhaps old assumptions about the history and ages of stars are not unshakeable after all. But Freedman points out that though not all globular clusters are as old as previously thought, a great many in our Galaxy are. Also, continuing studies of faint galaxies in the Hubble Deep Field argue for an older universe: they show that some elliptical galaxies were already well advanced in years at a red shift of 1.2. Discovering that some things are younger than previously thought does nothing to resolve the paradox posed by those that remain stubbornly old.
Sandage, Freedman and their associates were treading on the frontiers of science – very recent and ongoing science. The Cepheids in the Virgo cluster that stirred up the controversy couldn’t be seen at all before the Hubble Space Telescope, in fact not before its faulty optics were corrected in December of 1993, less than a year before Freedman’s team’s discovery. Some of the measuring techniques being used were barely past the experimental stage, yielding data whose implications no one fully understood. It would have been the stuff of tabloids to declare anything settled. Those who are uncomfortable when science yields paradoxes rather than certainties would have to go on being uncomfortable for a while.
But suppose the Hubble constant does end up indicating that the universe is younger than some of its stars. It is tempting to think of the rate at which the universe expands as the outcome of nothing more than a two-way tug-of-war between gravity (which is working to make the universe contract) and the expansion energy resulting from the Big Bang (which is working to make it expand). But there may, or may not, be another player involved, and that player is Einstein’s old ‘mistake’.
Popular science books and articles habitually describe the cosmological constant as a repulsive force that might counter the effect of gravity. It would be well to acquire a little more sophistication and realize that the cosmological constant can actually work either way.
If the cosmological constant is a positive number, then it will indeed counter gravity, joining in the struggle on the side of expansion. However, if it is a negative number, then the effect will be to weigh in on the side of gravity. If it is zero, then it will do neither. Think of it then as a theoretical property of the vacuum of space that, if it exists and isn’t zero, might act to stretch space and thus counteract gravity’s contracting power, or might do the opposite. To put that another way, imagine yourself facing a dial. If the arrow is pointing to zero, then the cosmological constant is having no effect at all. If you move it to the minus side of the zero, the further you turn it the more it will contribute to the contraction of the universe. If you move it to the plus side of the zero, the further you turn it the more it will contribute to the expansion of the universe.
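In the standard notation of the relativists (a textbook form, given here for reference rather than derived), the dial is the Λ term in the equation governing the acceleration of the cosmic scale factor a(t):

\[ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}. \]

With Λ positive, the last term drives the expansion; with Λ negative, it reinforces gravity’s pull toward contraction; with Λ zero, it drops out of the equation entirely – exactly the dial just described.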
Even this slightly more sophisticated view of the cosmological constant does not begin to do justice to the complications involved when physicists and astrophysicists play with its value. The cosmological constant can seem to be working both ways at the same time, allowing us to have our cake and eat it too. However, what seems a contradiction is not, because of the way the cosmological constant fits into the equation for omega.
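The reason it is not a contradiction is that Λ also contributes a density term of its own to omega. In the standard notation (again mine, not spelled out elsewhere in this book),

\[ \Omega = \Omega_{\text{matter}} + \Omega_{\Lambda}, \qquad \Omega_{\Lambda} = \frac{\Lambda c^{2}}{3H_{0}^{2}}, \]

so a positive cosmological constant can push omega up toward 1, helping to ‘close’ the geometry of the universe, at the very same time that it speeds the expansion.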
Theories of quantum mechanics – the study of the very small (atoms, molecules and particles) – have it that everywhere in the universe particles are spontaneously popping into and out of existence. Their life spans are unimaginably short. Nevertheless, ‘empty space’ seethes with this energy, and ‘empty space’ does not mean only what is out there dark and remote between the stars. This quantum energy fills the enormous amount of empty space within the atoms that make up chairs, tables, human bodies and all other things familiar and unfamiliar. ‘Emptiness’ is full of energy. Theory suggests that the energy of the cosmological constant might be this energy of virtual particles which wink in and out of existence at all times and everywhere in the universe.
For Einstein, the cosmological constant was only a mathematical device, and not long after he put it into his equations in order to avoid the implication that the universe must be either expanding or contracting, he decided it had been a mistake – for of course the universe is expanding. After visiting Hubble at Mount Wilson in 1931, Einstein rejected the whole idea, calling it ‘theoretically unsatisfactory anyway’. But the cosmological constant didn’t go away. Lemaître in particular enjoyed fooling around with it and adjusting its value, discovering that by fiddling with this theoretical dial he could construct universes that started out very slowly and then sped up, or universes that started out fast and then slowed down, or universes that began expanding, stopped and then expanded again. Something like that stop-and-start version was invoked in the 1940s as a possible remedy when new discoveries indicated that the universe was younger than the solar system. When it then turned out that the Hubble constant had been overestimated, the cosmological constant wasn’t required after all and was packed away once again.
In 1948, researchers detected the effects, on atoms, of the vacuum energy decreed by quantum mechanics, but no one went on to study its possible influence on the universe as a whole until 19 years later when Zel’dovich, the Soviet theorist, realized that this vacuum energy would enter into Einstein’s equations in just the same way that the old cosmological constant did. Before long it became evident that if Einstein had been right about mass causing spacetime to curve, and if all this vacuum energy really does exist, then the vacuum energy ought long ago to have curled the universe up into a tiny ball or something even smaller, or else driven the expansion so that even atoms – much less galaxies – could never have formed. Even by making the cosmological constant extremely small, Zel’dovich couldn’t show how the universe could have turned out to be the way it is. So it seemed the value must be zero, and that is what most theorists since have been assuming. That zero does not, by the way, mean that there is no vacuum energy, only that by some truly remarkable coincidence, all the positives and negatives in that vacuum energy cancel out exactly.
As we’ve seen, the cosmological constant is still with us, hovering like a ghost in the equation for omega. If its value is zero we could write it off and forget it, but the symbol for it would still be sitting there. I am reminded of a recipe one of my more eccentric friends gave me for broccoli soup. The recipe had been passed from cook to cook, many times. Always the list of ingredients included a can of won ton soup, with parenthetical instructions: ‘Don’t put this in.’
Even before the recent discoveries that the universe may be expanding much faster than previously supposed, late-20th-century astrophysicists had been feeling once again an itch to reach for the cosmological constant dial. There was a possibility it might offer solutions to some intractable problems such as the still-missing dark matter. Nearly everyone was approaching the idea super-cautiously, having been burned twice before. John Noble Wilford commented in a New York Times article that one thing that makes physicists particularly uneasy about assigning the cosmological constant a value other than zero is that this reminds them too much of the way medieval astronomers designed increasingly complicated celestial mechanisms to explain the planets’ motions, in order to preserve their beloved earth-centred Ptolemaic universe. As was true with those mechanisms, there was nothing to indicate that a cosmological constant value other than zero was wrong. But there was also nothing to indicate it was right. The best argument for it was that it allowed physicists to cling to theories in which they had a vested interest! Being able to turn the cosmological constant dial to a number (negative or positive) of their choice that ended up supporting the currently favoured version of the Big Bang theory was just a little too easy and allowed for too much leeway. With a friend like the cosmological constant, did a theory need enemies?
