  Solving the equation for omega requires knowing four numbers, three of which are currently not known with certainty. The four are the speed of light, the cosmological constant (the theoretical constant Einstein suggested and that he hoped would allow the universe not to expand or contract), the Hubble constant and the deceleration parameter. The third and fourth need introduction:

  The Hubble constant, or H0 (pronounced ‘H nought’), denotes the rate at which the universe is expanding. However, it isn’t a direct indication of the speed at which everything out there is rushing away. To give a specific example: if the Hubble constant is 50, that indicates that there is an increase in the recession velocity of 50 kilometres per second for every megaparsec of distance from the observer doing the measuring. Taking the simplest case and remembering that there are no receding galaxies this close to Earth, if the Hubble constant is 50 and Galaxy A is one megaparsec away from Earth, Galaxy A should be receding at a velocity of 50 kilometres per second. If Galaxy B, out beyond Galaxy A, is two megaparsecs away from Earth, Galaxy B should be receding at a velocity of 100 kilometres per second. A galaxy three megaparsecs away . . . 150 kilometres per second, and so forth. This also means that if the Milky Way, Galaxy A and Galaxy B were all lined up in a straight line, and if you and I were in Galaxy A, we would find Galaxy B receding at a rate of 50 kilometres per second in one direction and the Milky Way Galaxy receding at 50 kilometres per second in the other. Notice that this does follow the raisin bread rule of twice as far, twice as fast, no matter what value the Hubble constant has, and no matter where you and I stand to watch other galaxies recede.
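  The arithmetic of the Hubble law is simple enough to sketch in a few lines of Python. The relation v = H0 × d and the example values (H0 = 50, galaxies at one, two and three megaparsecs) are the ones used above; the galaxy names are the text’s illustrative labels.

```python
# Sketch of the Hubble law v = H0 * d, using the chapter's example numbers.

H0 = 50  # km/s per megaparsec (the illustrative value used in the text)

def recession_velocity(distance_mpc, h0=H0):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return h0 * distance_mpc

for name, d in [("Galaxy A", 1), ("Galaxy B", 2), ("a galaxy at 3 Mpc", 3)]:
    print(f"{name}: {d} Mpc -> {recession_velocity(d)} km/s")

# Twice as far, twice as fast: seen from Galaxy A (1 Mpc out), Galaxy B
# recedes at the difference of the two velocities, again 50 km/s.
print(recession_velocity(2) - recession_velocity(1))  # -> 50
```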

  The deceleration parameter measures the rate at which the expansion is slowing down due to the gravitational attraction among all the clusters of galaxies. It might seem astrophysicists ought already to know what that is. If it’s true that when we observe very distant galaxies and galaxy clusters we are seeing them as they were billions of years ago, why isn’t it possible to compare the rate at which they are receding with the rate at which nearer galaxies are receding, find out whether the expansion has slowed down and, if so, how much? Astrophysicists are trying to do just that, but it isn’t easy. Some have suggested that one reason why it’s difficult may be that the universe is perfectly balanced between the mass density that would allow it to expand forever and the mass density that would cause it to collapse to a ‘Big Crunch’. In other words, perhaps the very difficulty of making that determination is a clue that omega equals one and the universe is expanding at precisely the right rate to go on forever, always slowing down its expansion but never stopping the expansion and collapsing.

  One source of complication is that the mass density of the universe is changing over time. Unless new matter or energy is appearing on the scene (which the Steady State theory proposed but most physicists don’t believe is happening) things inevitably grow less and less dense in an expanding universe. They thin out.

  It is the interrelatedness of these numbers, or values, that’s laid out concisely in the equation for omega. It doesn’t take much expertise to see that there are relationships, that one thing depends on another. The equation shows precisely in what manner and to what degree they are related. At the risk of sending a great many readers running for cover, here (Figure 8.1) is the formula for omega. Consider it a souvenir, something a patient reader is owed for having made it so far with this book! We will not proceed to solve it.

  How far have researchers got in the process of discovering the unknown numbers in that equation? They know the speed of light. What a pleasure to be able to plug in one actual number here! No one yet knows the value for the deceleration parameter or the cosmological constant. There is disagreement over the Hubble constant. Modern scholars find themselves in very much the same situation their forebears were in when they had Kepler’s laws but not Cassini’s and Flamsteed’s measurements of the distance to Mars. Here is a formula – but not all the numbers to put into it.

  Figure 8.1 The Formula for Omega
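  The figure itself does not survive in this text-only copy. As a stand-in, a standard textbook form of the relation, consistent with the four quantities named above (the speed of light c, the cosmological constant Λ, the Hubble constant H0 and the deceleration parameter q0), is

\[
\Omega_0 \;=\; 2q_0 \;+\; \frac{2\Lambda c^{2}}{3H_0^{2}},
\]

where Ω0 is the ratio of the universe’s actual mass density to the critical density, \( \rho_{\mathrm{crit}} = 3H_0^{2}/8\pi G \). This is a reconstruction offered for orientation, not the book’s own figure.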

  One serious problem in estimating how much matter there is in the universe is that there actually doesn’t seem to be enough of it around. In the 1930s, Swiss astronomer Fritz Zwicky discovered that galaxies in the ‘great cluster’ in the constellation Coma Berenices were moving too rapidly, relative to one another, to be bound together by their mutual gravitational attraction. Given the way gravity works, and what can be seen of this cluster of galaxies, the arrangement should be flying apart. Searching for an explanation, Zwicky thought of two possibilities. What appeared to be a cluster might instead be a short-term random grouping of galaxies; or there might be more to these galaxies than met the eye or the telescope. In order to provide the amount of gravitational attraction required to bind the cluster together, it would have to contain much more matter than we observe. No one was willing to consider the third possibility that physicists might have made an egregious error in figuring out how gravity operates, or in assuming that it operates the same everywhere.

  With Zwicky’s discovery was born the puzzling notion that it may be impossible to observe more than a tiny fraction of all the matter in the universe. In the years since he first speculated about it, plenty of support has emerged for the existence of ‘dark matter’. That support has been both observational and theoretical. For everything to work as it appears to do, there has got to be much more matter in the universe than present technology is able to detect. By some calculations, from 90 to 99 per cent of the matter in the universe is not radiating at any wavelength in the entire electromagnetic spectrum. While other pieces of the Big Bang picture fell into place, the missing matter remained a mystery.

  There is an example much closer to home than the constellation Coma Berenices: the mass and distribution of observable matter in the Milky Way Galaxy isn’t sufficient to account for the way the Galaxy rotates. What would it take to cause the Milky Way to rotate as it does? The matter should be mostly outside the visible disc of the Galaxy, it ought to extend well beyond the edge of the observable disc, and much of it should not be level with the disc but ‘above’ and ‘below’ it. If all that were the case, then the rotation would make sense. The suspicion is that the Galaxy must be surrounded by a halo of dark matter that is much larger than the observable mass of the Galaxy. The total diameter of the Galaxy might be four or five times what it is possible to observe in any range of the spectrum. Dark matter might also provide an explanation for the ‘hat brim’ tilt of the Galaxy’s thin gas disc.

  There is no way to investigate dark matter directly, only by watching how it affects other things – that is, what its gravitational effect is on other matter and radiation. Sometimes it gives its presence away by the manner in which it bends the paths of light. Such paths through spacetime are bent by the presence of massive objects (‘benders’) such as stars, planets, galaxies and galaxy clusters. This happens regardless of whether or not the benders are themselves detectable at any wavelength. When the distortion is too great to be caused by the observable matter in the bender, or when there is no observable bender at all, researchers know they are not observing everything that’s out there between them and the background. They suspect the presence of dark matter.

  The mystery of dark matter lies at the heart of the problem of measuring the age and the future of the universe. Calculating roughly whether there is sufficient observable matter in the universe to produce the gravitational attraction necessary to keep the universe at critical density, omega-equals-one, shows that the amount of matter observed directly with present technology falls far short. But the discussion doesn’t end there, because dark matter does exist and because no one is yet certain how much there is or what it is.
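  The critical density itself follows from the Hubble constant alone, via the standard Friedmann result ρ_crit = 3H0²/8πG. A short sketch, using the two disputed values of H0 that figure in this chapter:

```python
import math

# Critical density rho_crit = 3 * H0^2 / (8 * pi * G) -- the mass density
# at which omega equals one.

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22   # metres per megaparsec

def critical_density(h0_kms_per_mpc):
    """Critical mass density in kg/m^3 for H0 given in km/s per Mpc."""
    h0_si = h0_kms_per_mpc * 1e3 / MPC   # convert H0 to 1/s
    return 3 * h0_si**2 / (8 * math.pi * G)

# A few times 10^-27 kg/m^3 -- a handful of hydrogen atoms per cubic metre.
for h0 in (50, 80):
    print(f"H0 = {h0}: rho_crit ~ {critical_density(h0):.1e} kg/m^3")
```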

  Big Bang theory in its most straightforward form has it that even a microscopic deviation from omega-equals-one would have caused the universe very early on to recollapse or made it expand so rapidly that stars could never have formed. Inflation theory has proposed a solution to that fine-tuning problem, but the question right now is: Is the universe all that fine-tuned? It seems things would be dramatically different if it weren’t, and yet it isn’t at all obvious that it is. No measurement of existing mass density comes anywhere near critical density, which means that the universe should have expanded too fast for stars to form. It didn’t. What is it we don’t know yet?

  Candidates for dark matter range from still-hypothetical mysterious exotic particles to black holes a billion times more massive than the sun. Primordial black holes (tiny ones formed in the early universe), planets, dwarf stars too dim to have been observed, massive cold gas clouds, comets and asteroids, and an assortment of dead or failed stars make up a broad middle ground of possibilities. Some physicists insist on tossing in a few copies of the Astrophysical Journal.

  In 1998, hard-to-detect particles known as neutrinos moved to the short list. The existence of neutrinos is not a new idea. They were first suggested in 1930 by Wolfgang Pauli as a way to explain a mysterious loss of energy in some nuclear reactions, but it was not until 1956 that observations by Frederick Reines and Clyde Cowan at the Los Alamos National Laboratory in New Mexico confirmed their existence outside of theory.

  No one now questions the existence of neutrinos, but they remain notoriously difficult to study. They rarely interact with any kind of matter. A typical neutrino can pass through a piece of lead a light year thick without hindrance. Clues to their existence come on those rare occasions when a neutrino does happen to collide with an atom, but even then the evidence is indirect.

  Whether neutrinos have any mass at all has been in question, and of course if they have no mass they cannot be contributing to the mass density of the universe. There have been a number of claims in the last few years of the discovery of neutrino mass, but much stronger evidence came in June of 1998, from a team of Japanese and American physicists at an observatory in Takayama, Japan.

  Their detector was a tank the size of a cathedral containing 12.5 million gallons of ultra-pure water, in a deep zinc mine a mile inside a mountain. The rationale for the experiment ran like this: one of the ways neutrinos are produced is when cosmic ray particles from deep space slam into the Earth’s upper atmosphere. Experimenters hoped to compare neutrinos arriving from the upper atmosphere directly over the detector (a short distance) with those coming up from under the detector after having passed through the Earth (a long distance). Neutrinos from both sources, moving through the water, would occasionally collide with an atom. Such a collision scatters debris, and the particles of that debris race through the water creating cone-shaped flashes of blue light called Cherenkov radiation. The light is recorded by 11,200 twenty-inch light amplifiers that line the inside of the tank. Researchers analyse the cones of light, finding the proportions of different sorts of neutrinos coming from each direction, and attempt to determine whether the neutrinos, which come in three types, change type on their journey from the upper atmosphere. If neutrinos can make this change, they must have mass.
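  The up/down comparison works because the probability that a neutrino changes type oscillates with the distance it has travelled. A hedged sketch of the standard two-flavour formula; the mixing parameters below are illustrative, roughly those later associated with atmospheric neutrinos, and do not come from the text.

```python
import math

# Two-flavour oscillation probability:
#   P(change type) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km and E in GeV (illustrative parameter values).

def oscillation_probability(L_km, E_gev, dm2=2.5e-3, sin2_2theta=1.0):
    """Chance that a neutrino of energy E has changed type after L km."""
    return sin2_2theta * math.sin(1.27 * dm2 * L_km / E_gev) ** 2

# Neutrinos from directly overhead travel ~20 km; those coming up from
# below have crossed the Earth, ~13,000 km -- hence the up/down asymmetry.
for label, L in (("from above", 20), ("through the Earth", 13000)):
    print(f"{label} (L = {L} km): P ~ {oscillation_probability(L, 1.0):.3f}")
```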

  Dr Yoji Totsuka, leader of the team and director of the Kamioka Neutrino Observatory, the site of the detector, announced that the evidence for neutrino mass was strong. ‘We have investigated all other possible causes of the effects we have measured,’ he reported, ‘and only neutrino mass remains.’

  Calculations based on these findings show neutrinos might (not everyone agrees they do) make up a significant part of the mass of the universe. Not that a single neutrino amounts to much. The mass of a neutrino turns out to be almost infinitesimal – a minute fraction of the mass of an electron. But neutrinos nevertheless pack considerable clout by dint of their numbers, for there are about 300 of them in every teaspoonful of space. They outnumber other particles in the universe by a billion to one. In fact, the discovery of that tiny mass by the team in Takayama adds considerably to the mass density of the universe, by some calculations more than doubling it at a stroke.

  As promising as this might appear, the discovery that neutrinos have mass can’t account for all the missing matter that calculations show ought to exist. And though the combined mass of all those neutrinos may be enough to slow the expansion of the universe, it isn’t likely to be enough to stop it or turn it around. The puzzle still has not been solved. The discovery of neutrino mass hasn’t revealed the future of the universe.

  In the mid-1980s, a panel of astronomers reviewing plans for use of the Hubble Space Telescope decided that the determination of an absolute distance scale outside the Galaxy and the discovery of the expansion rate of the universe – the Hubble constant – should be among the highest-priority projects undertaken by the telescope. A team of astronomers from the United States, Canada, Great Britain and Australia, led by Wendy Freedman of the Carnegie Observatories in Pasadena, California, received the largest allocation of time on the Hubble Space Telescope for a period of five years. The Extragalactic Distance Scale Key Project, as they call their work, involves trying to determine distances to nearby galaxies more accurately than ever before. These galaxy distances will then form the underlying basis for a number of other methods that can be applied at more remote distances, making possible several independent measurements of the Hubble constant.

  Wendy Freedman is a native of Toronto, Canada, and one of her most vivid childhood memories is of a trip with her father to northern Canada, where they watched the stars and he explained to her how long it takes their light to reach us on the Earth. When Freedman entered the University of Toronto in 1975 she intended to study biophysics, but she soon switched to astronomy. She got her doctorate in astronomy and astrophysics from Toronto in 1984, then received a Carnegie Fellowship at the Carnegie Observatories, and in 1987 was the first woman to join the permanent faculty there, where she remains to this day.

  At the heart of Freedman’s Extragalactic Distance Scale Key Project lies the effort to measure Cepheid distances to 20 galaxies with the Hubble telescope. It is these distances that are then expected to provide an absolute scale for other methods which give only relative distances (Type Ia supernovae, Type II supernovae, the Tully-Fisher relation, and surface-brightness fluctuations – see here).

  In 1994, Freedman and her team were attempting to measure more precisely the distance to the centre of the Virgo supercluster. They found twenty Cepheids in the spiral galaxy M100 in the Virgo cluster, at the core of the supercluster, the first sure identification of Cepheids that far away. The Hubble data indicated that these Cepheids are approximately 56 million light years from Earth. That was nearer than earlier estimates had placed the centre of the Virgo supercluster.

  From this new distance measurement and M100’s recession velocity (learned from its red shift), Freedman and her colleagues calculated a new value for the Hubble constant, about 80 kilometres per second per megaparsec. Experts led by Allan Sandage had previously calculated that its value was about 50, nowhere near 80. Thus began one of the most heated debates in modern astronomy – either an extremely significant controversy or much media hype about nothing, depending on where you stand on the issue.
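  The arithmetic behind the new value is easy to check: H0 is just recession velocity divided by distance. The 56-million-light-year distance is from the text; the recession velocity of about 1,400 kilometres per second is an illustrative figure consistent with the result, not a number quoted here.

```python
# H0 = recession velocity / distance, with distance converted to megaparsecs.

LY_PER_MPC = 3.26e6   # light years per megaparsec

def hubble_constant(v_kms, distance_mly):
    """H0 in km/s/Mpc, from velocity in km/s and distance in millions of ly."""
    d_mpc = distance_mly * 1e6 / LY_PER_MPC
    return v_kms / d_mpc

# M100 at ~56 million light years, receding at an assumed ~1,400 km/s,
# gives a Hubble constant of roughly 80.
print(f"H0 ~ {hubble_constant(1400, 56):.0f} km/s/Mpc")
```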

  Freedman’s announcement came as a shock. The Hubble findings were an embarrassment. Deciding between values of 50 and 80 was not mere nit-picking: if the universe is expanding so much more rapidly than previously thought, it follows that less time has elapsed since the Big Bang than the 10 to 20 billion years most experts had settled on. Depending on the density of matter in the universe, a Hubble constant of 80 means the universe must be only eight to twelve billion years old, probably nearer eight.
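  The link between H0 and age can be sketched directly: 1/H0 gives the age of a universe that has always coasted at its present expansion rate, and a flat, matter-dominated (omega-equals-one) universe, which has been decelerating, is younger by a factor of two-thirds.

```python
# Hubble time 1/H0, converted from km/s/Mpc into billions of years.

MPC_KM = 3.086e19          # kilometres per megaparsec
SECONDS_PER_GYR = 3.156e16

def hubble_time_gyr(h0_kms_per_mpc):
    """1/H0 in billions of years, for H0 in km/s per Mpc."""
    return MPC_KM / h0_kms_per_mpc / SECONDS_PER_GYR

# H0 = 50 allows nearly 20 billion years (13 if matter-dominated);
# H0 = 80 allows only about 12 (8 if matter-dominated).
for h0 in (50, 80):
    t = hubble_time_gyr(h0)
    print(f"H0 = {h0}: coasting age ~ {t:.1f} Gyr, "
          f"matter-dominated age ~ {2 / 3 * t:.1f} Gyr")
```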

  It isn’t uncommon in science for new findings to challenge earlier thinking, sometimes eventually undermining what nearly everyone has been assuming was virtually unassailable scientific knowledge. But this challenge was one of the most disquieting so far in the 20th century, for astronomers were fairly certain, based on what they considered sound understanding of nuclear physics and the rate at which hydrogen converts to helium in stars, that some of the oldest stars in the Milky Way are 14 billion years old, probably even older. The universe can’t be younger than the stars in it.

  The glitch made newspaper front pages all over the world. Scientists ground their teeth. Since the future of astrophysics and astronomy depends on massive public spending, these branches of science have an enormous stake in maintaining their credibility. Researchers wonder what will motivate the allocation of funds for science now that the old Cold War rivalry is history. If public opinion is to favour continuing support for this wondrous but expensive adventure, it really doesn’t do to have announcements indicating that tax money is buying nonsense! Like the Church in Galileo’s day, the modern scientific establishment has much to lose if simple faith – in science this time round – is undermined.

  Freedman and her team were young astronomers. Those whose numbers they were questioning were some of the most highly – and deservedly – respected older members of the astronomy community. Sandage had spent the best part of a lifetime developing new measuring techniques and making careful observations to arrive at the Hubble constant value of 50. But iconoclastic as the team’s announcement was, it didn’t come entirely out of the blue, nor was such a conundrum unprecedented. In 1929, Hubble himself calculated the Hubble constant to be 500 kilometres per second per megaparsec, making the universe younger than geologists knew the Earth was. Baade refigured H0 at 250. Sandage reduced it still further to 180, then to 75, then (in the mid-1970s) to 55 plus or minus 10 per cent. These corrections succeeded in making the universe old enough to allow for the formation of even the most ancient stars and globular clusters, but not before discussions had taken place that resembled those that now followed Freedman’s announcement. Nor had Sandage and Tammann’s value for the Hubble constant previously gone unchallenged.
