


  Mind the gap

  Suddenly you lurch forward. The brakes are on and the ride is over almost as quickly as it began. As you disembark you try not to look too disheveled in front of the people queuing up for their turn. But in reality, your internal organs feel like they’ve been through a food mixer, your head is pounding and you could swear you have bruised ribs from strapping yourself in too tightly. You vow to have another go before the day is out.

  CHAPTER 2

  How to predict the weather

  • Weather watching

  • How to read a weather map

  • Predicting the weather

  • Number crunching

  • Climate modeling

  • Chaos theory

  • Strange attractors

  • Super crunchers

  On the night of October 15, 1987, the worst storm in 284 years tore across the south of England, battering homes and property and causing damage totaling £2 billion. Winds reached hurricane force and downed an estimated 15 million trees. And yet, just 24 hours before it struck, weathermen were laughing off suggestions that we might be in for a rough night. They predicted that the storm would fail to make landfall, and would bluster harmlessly up the English Channel. Way-off weather forecasting seems an all-too-common occurrence. But why is it so hard? And what can be done to improve it?

  Weather watching

  Human beings are obsessed with the weather. It dominates our small talk, stops us getting to work in the winter and regularly ruins public holidays in the summer. Hardly surprising then that our best brains have been trying to distinguish the makings of a balmy Sunday from those of a wet weekend for thousands of years.

  In 1849, US scientist Joseph Henry used the newly invented electric telegraph to set up a network of weather-monitoring stations across the United States, the readings from which were wired instantaneously to a central office at the Smithsonian Institution in Washington DC. Weather-monitoring stations use a variety of instruments to gather data such as temperature, air pressure, wind speed, humidity and rainfall. Today, the findings of ground stations are supplemented by ships, together with a host of eyes in the sky such as weather balloons, aircraft and satellites, which scan the state of the planet’s atmosphere from all angles to get a handle on what the weather is doing now—and what it’s going to do next.

  How to read a weather map

  Sometimes it is easy to draw up a basic picture of how the weather is going to behave. For example, if a ground station in Florida is registering low pressure and a ship off the coast in the Atlantic is reading high pressure, it’s a good bet that Florida is due for strong winds as air rushes from the area of high pressure to the low. (Over larger scales winds are deflected by the Coriolis effect, caused by the planet’s rotation—see How to stop a hurricane.)

  Lines of constant pressure on a weather map are called isobars. They can be thought of as rather like contour lines on the 3D landscape you get by graphing the pressure at every point on Earth’s surface. Pressure differences can sometimes be predicted from thermal effects. Hot air rises and the updrafts act to lower the pressure over warm regions, while cool downdrafts tend to create regions of high pressure. Warm updrafts carry with them moisture that forms clouds as it condenses at high altitude. Temperature differences sometimes appear on weather maps as warm fronts, denoted by a line of red semicircles, and cold fronts, marked out by blue triangles. The arrival of a cold front can cause rainfall—or “precipitation,” as meteorologists like to call it. Warm, moist air rises up above the advancing cold front, where its moisture condenses into clouds and then falls back to the ground as rain. When conditions are exceptionally cold the water can fall instead as snow or hail.
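  To make the contour-line analogy concrete, here is a minimal Python sketch (my own illustration, not from the book) that invents a toy pressure field with one low and one high and draws its isobars with matplotlib; all the numbers are made up for the example.

```python
# A toy pressure field and its isobars (illustration only; all values
# are invented for the example).
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))

# Synthetic sea-level pressure in hectopascals: a low near (3, 5) and
# a high near (7, 5), sitting on a 1013 hPa background.
pressure = (1013
            - 25 * np.exp(-((x - 3)**2 + (y - 5)**2) / 4)
            + 20 * np.exp(-((x - 7)**2 + (y - 5)**2) / 4))

contours = plt.contour(x, y, pressure, levels=np.arange(985, 1040, 4))
plt.clabel(contours, fmt="%d")  # label each isobar with its pressure
plt.title("Isobars: lines of constant pressure (hPa)")
plt.show()
```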

  Predicting the weather

  This broad-brush analysis is fairly straightforward, and allows forecasters to provide the public with a very general impression of the weather along the lines of “tomorrow’s going to be windy.” But what if we need more details—such as how fast the gusts will be in each area, what time of day they’ll be at their worst, or indeed whether hurricane-force winds will plow up the English Channel or veer inland to wreak havoc? Predicting the weather in this much detail means solving the mathematical equations that govern the physics of the planet’s atmosphere. These equations are fiendishly complicated, coupling together processes such as the fluid dynamics of the air and oceans, heat transfer, atmospheric chemistry and the physics describing the radiation arriving from the sun. In fact, they are so abstruse they’re nigh on impossible to solve—at least by the conventional methods most of us used to solve equations in math classes at school. Worse still, the equations are highly non-linear, meaning that small variations in the input variables can bring about wholesale shifts in the outputs, which makes it hard to even solve them approximately.

  Number crunching

  Physicists attack mathematical problems such as this using the only option left at their disposal: brute force. Or in other words, solving the equations “numerically.” This works by shoving best-guess numbers into the formulae and then tweaking their values by trial and error until the equations all balance up. The first person to suggest doing this for the weather was the British physicist and mathematician Lewis Fry Richardson. In 1922, he published a book called Weather Prediction by Numerical Process. In it, he imagined a vast hall filled with “human computers”: people armed with pen and paper all busily grinding out numerical solutions to the equations describing the weather. A central “conductor” would collate their results and then issue them with new instructions. There was just one snag. Richardson calculated that keeping up with the world’s weather in real time would require 64,000 of these mathematical drones—equivalent to the entire population of Palo Alto, California. It seemed the only way to realize Richardson’s vision was to come up with a machine that could carry out the calculations automatically. And so it was that numerical weather prediction was put on hold for 20 years, pending the invention of the electronic computer.
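  To get a feel for what solving an equation “numerically” means, here is a tiny Python sketch, my own toy example rather than anything Richardson’s human computers would have tackled: the equation x = cos(x) has no tidy pen-and-paper solution, but a guess, repeatedly refined, balances it.

```python
import math

def solve_by_iteration(guess=1.0, tolerance=1e-10):
    """Keep feeding the output back in as the new guess until the
    equation x = cos(x) balances to within the tolerance."""
    while abs(guess - math.cos(guess)) > tolerance:
        guess = math.cos(guess)
    return guess

x = solve_by_iteration()
print(f"x = {x:.6f}, cos(x) = {math.cos(x):.6f}")  # both print 0.739085
```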

  Climate modeling

  The first computer-based weather simulation was run in 1950 on a computer called ENIAC (Electronic Numerical Integrator and Computer) at the US Army Ballistic Research Laboratory in Maryland, where it had initially been used for working out artillery-shell trajectories. ENIAC’s early weather models used an extremely simplified picture of the atmosphere, where the air pressure at any point is determined simply by the density. Gradually meteorologists built more sophistication into their models to account for the processes of heating and atmospheric circulation that generate our complex real-world weather phenomena.

  Computer weather models are set up by dividing the atmosphere into a three-dimensional grid. British mathematician Ian Stewart, in his book Does God Play Dice?, likens it to a 3D chess board. The weather at each precise moment in time is determined by assigning each cube in the grid a set of parameters defining the temperature, pressure, humidity and so on within that cube. These numbers can be thought of as rather like the chess pieces. The computer then evolves the board forward according to the rules of the game, encoded in the physics equations describing the weather. The results amount to moving the pieces around on the board rather like moves in a game.

  In each cube, the computer takes the values of all the weather parameters and crunches them through the equations to work out the rate of change of each parameter at that instant in time. The rate of change allows all of the parameters to be evolved forward by a short interval, known as the “time step.” Now the new values for all the parameters can be fed back into the computer again and used to work out a new set of rates of change, which can then be used to evolve the whole system forward by the next time step, and so on. The process repeats iteratively until enough time steps have been accumulated to reach the point in the future for which the forecast is needed. For a model of global weather systems, the time steps might be ten minutes or so, but for simulations of the weather over small regions they can be as small as a few seconds. After each time step, the parameter values in each cell are meshed together to ensure continuity. The result is a model of Earth’s weather that can be advanced as far into the future as needed.
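  The loop below is a skeletal Python sketch of this time-stepping scheme. Everything in it is a stand-in assumption: a tiny 8 × 8 × 8 grid, a single parameter (temperature) per cube, and simple heat diffusion in place of the real, vastly more complicated weather equations, advanced with the most basic (Euler) stepping rule.

```python
import numpy as np

nx = ny = nz = 8                            # the 3D "chess board"
temp = 15 + 10 * np.random.rand(nx, ny, nz) # initial temperatures, deg C
dt = 600.0                                  # time step: 10 minutes, in seconds
mixing = 1e-5                               # made-up diffusion coefficient

def rate_of_change(t):
    """Crunch the current state through the (toy) physics to get the
    rate of change of every cell: here, discrete heat diffusion."""
    change = np.zeros_like(t)
    for axis in range(3):                   # neighbors along x, y and z
        change += np.roll(t, 1, axis) + np.roll(t, -1, axis) - 2 * t
    return mixing * change

# Evolve forward one time step at a time until the forecast horizon.
for step in range(int(24 * 3600 / dt)):     # a one-day "forecast"
    temp = temp + dt * rate_of_change(temp) # the basic Euler update

print("forecast mean temperature:", temp.mean())
```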

  Chaos theory

  Or at least, that is the theory. In practice, something was still missing: the predictions of early computer weather models were only good for a few days, after which they became hopelessly inaccurate. The reason why was uncovered in the 1960s by the US mathematician Edward Lorenz. What he found would revolutionize not just how we think about the weather, but pretty much the whole of math and physics.

  In 1963, Lorenz carried out a detailed study of the equations describing a key element of how the weather behaves: convection. This is the process that makes hot air rise and cold air sink. The same process happens in a pan of cold water that’s heated from below on a stove. Even this small subset of weather math was too difficult to solve on paper, so Lorenz put the equations on a computer. But when he did this he found something curious. If he stopped his simulation halfway through and wrote down the values of all the parameters, and then fed these back in manually to finish the simulation off, he got an answer wildly different from the one he got by just letting the simulation carry on running in the first place. Lorenz eventually isolated the problem. Although the computer’s memory was storing the numbers to an accuracy of six decimal places, it was only displaying its results to three decimal places. So, for example, if a number in the memory was 0.876351, the computer would only display 0.876. When Lorenz fed this truncated number back in, the loss of accuracy brought about by sacrificing those last three digits was skewing his results. So sensitive are the equations of convection to the initial conditions of the system that changing these conditions by just a few hundredths of a percent was bringing about wildly different behavior.

  Lorenz had discovered a phenomenon known as “chaos”: extreme sensitivity of a system to its initial state, meaning that tiny differences in that initial state become magnified over time. This is the main reason why forecasting tomorrow’s weather is so difficult: we cannot measure today’s weather accurately enough. Lorenz even coined a term to describe the phenomenon—the “butterfly effect,” the idea that the tiny perturbations caused one day by a butterfly beating its wings could be amplified over time to create dramatic shifts in the weather days down the line.
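  Lorenz’s accident is easy to replay. The Python sketch below integrates his three convection equations twice, once from a six-decimal starting state and once from the same state rounded to three decimals, mimicking the truncated printout; the parameter values are his classic choices, while the initial state and the crude stepping method are my own simplifications.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance Lorenz's convection equations by one crude Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

full = np.array([0.876351, 1.0, 1.0])   # what the memory held
truncated = np.round(full, 3)           # what the printout showed: 0.876

for step in range(3001):
    if step % 500 == 0:                 # watch the two runs drift apart
        print(step, abs(full[0] - truncated[0]))
    full = lorenz_step(full)
    truncated = lorenz_step(truncated)
```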

  Strange attractors

  Today, chaos is known to crop up in all kinds of physical systems—including quantum mechanics, relativity, astrophysics and economics. Mathematicians spot the presence of chaos by drawing a diagram called a “phase portrait,” which shows how the system evolves with time. They look for areas of the phase portrait called “attractors,” to which the system’s behavior converges. Non-chaotic systems have simple, well-defined attractors. For instance, the phase portrait of a swinging pendulum is just a plot of the pendulum bob’s position against its speed, and the attractor takes the form of a circle.
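  As a quick illustration, the Python sketch below traces the phase portrait of a frictionless pendulum; it is my own toy example, with arbitrary constants. Plotting the bob’s position against its speed yields the closed loop described above (close to a circle for small swings).

```python
import numpy as np
import matplotlib.pyplot as plt

dt, g, length = 0.001, 9.81, 1.0        # step size and pendulum constants
theta, omega = 0.1, 0.0                 # small initial swing, released at rest
angles, speeds = [], []

for _ in range(20000):                  # simulate about 20 seconds
    omega -= (g / length) * np.sin(theta) * dt
    theta += omega * dt
    angles.append(theta)
    speeds.append(omega)

plt.plot(angles, speeds)                # position against speed: a closed loop
plt.xlabel("position (angle)")
plt.ylabel("speed (angular velocity)")
plt.show()
```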

  Chaotic systems have attractors with bizarre, convoluted forms known as “fractals”—disjointed shapes that appear the same no matter how closely you zoom in on them. The simplest fractal is made by removing the middle third from a straight line and then repeating the process ad infinitum on the remaining segments. Edward Lorenz found that the attractor in the phase portrait of convection was indeed a fractal—a kind of distorted figure 8, which has since become known as the “Lorenz attractor.”

  The simplest fractal is obtained by removing the middle third from a straight line and repeating the process.
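  The middle-thirds construction is simple enough to write down directly. Here is a short Python sketch of it (this repeated-removal fractal is known as the Cantor set):

```python
def remove_middle_thirds(segments):
    """Replace each segment (a, b) by its outer two thirds."""
    result = []
    for a, b in segments:
        third = (b - a) / 3
        result.append((a, a + third))
        result.append((b - third, b))
    return result

segments = [(0.0, 1.0)]                  # start with one straight line
for generation in range(5):              # repeat the removal five times
    segments = remove_middle_thirds(segments)

print(len(segments), "segments remain")  # 2**5 = 32 ever-shorter pieces
```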

  Super crunchers

  Improved computing power is now enabling the future evolution of chaotic systems to be predicted more reliably by storing the system parameters to a greater number of decimal places. The most powerful scientific computer is a modified Cray XT5, known as Jaguar, at the National Center for Computational Sciences in Tennessee. It has the same number-crunching capacity as about 10,000 desktop PCs. In truth, it’s unlikely the weathermen will ever be able to tell us with 100 percent certainty whether it’s going to be sunny at the weekend. But disastrous misforecasts such as those that were issued prior to the Great Storm of ’87 should at least become a thing of the past. Or so they tell us.

  CHAPTER 3

  How to survive an earthquake

  • What is an earthquake?

  • The magnitude scale

  • Tsunamis

  • Quake-proof buildings

  • Mass dampers

  • Earthquake prediction

  Earthquakes are one of the most destructive forces in the natural world, equivalent in power to an atomic bomb. The quake that struck Haiti in 2010 killed over 200,000 people, and as cities in earthquake zones grow larger, it is becoming increasingly likely that a future quake could claim not thousands but millions of lives. Or is it? Are new technologies to mitigate the effects of earthquakes, ranging from giant pendulums inside skyscrapers to rubber feet under buildings, finally about to tame this awesome force of nature?

  What is an earthquake?

  Earthquakes occur when the tectonic plates that make up Earth’s crust grate and grind against one another as they move. Tectonic plates are vast interlocking slabs of rock that float on the liquid layers of molten metal and rock that lie below them. As these liquids roll and froth, stirred up by the heat of the planet’s interior, they drag on the plates above, pulling them this way and that. There are seven major tectonic plates—African, Antarctic, Eurasian, Indo-Australian, North American, Pacific and South American—and very many smaller ones. The boundaries where two plates meet are known as “fault lines” and they come in a variety of different forms, depending on the relative motion of the two plates.

  When the two plates are slipping past one another horizontally, the boundary is referred to by geologists as a “transform fault.” As the plates jostle together, friction at the fault prevents them from slipping by smoothly. Instead they move in a jerking, juddering motion known as “stick-slip.” First, the rock at the fault sticks because of friction. It deforms as the plates move, as if it were made of rubber. Over time the stress on the fault increases until eventually friction is overcome and the plates quickly slip past each other as the rock suddenly snaps back into shape.
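  Stick-slip motion can be caricatured with the classic spring-and-block toy model. The Python sketch below is my own illustration, with made-up numbers: a block (the fault) is dragged by a spring (the elastically deforming rock) whose far end creeps along steadily (the moving plate); the block sticks until the spring force beats friction, then slips all at once.

```python
plate_speed = 1.0        # how fast the far plate creeps (arbitrary units)
stiffness = 1.0          # spring constant of the elastically deforming rock
friction = 5.0           # force needed to unstick the fault
dt = 0.1

block_pos = plate_pos = 0.0
for step in range(500):
    plate_pos += plate_speed * dt                 # the plate creeps on
    stress = stiffness * (plate_pos - block_pos)  # elastic stress builds...
    if stress > friction:                         # ...until friction gives way
        block_pos = plate_pos                     # sudden slip: a "quake"
        print(f"t = {step * dt:5.1f}: slip of {stress / stiffness:.1f} units")
```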

  An earthquake results when millions of tons of rock, all rebounding in this way, unleash a violent mechanical wave that spreads out through the land, a bit like the ripple on the surface of a pond when you’ve dropped a rather large stone in it. This wave, called a “seismic wave,” can have the power to bring down bridges and buildings, cause landslides and induce “soil liquefaction”—where agitated soil assumes a liquid-like consistency, into which buildings and other structures can sink. Transform faults can spawn some truly destructive earthquakes, including the 1906 quake that devastated San Francisco, a city that lies next to the San Andreas Fault at the boundary between the Pacific and North American plates.

  This is the view from above a geological fault line. Over many years, movement of tectonic plates deforms the landscape at the fault. When the build-up of elastic energy in the rock becomes great enough, it suddenly slips. This is an earthquake.

  The magnitude scale

  Seismic waves generated during an earthquake come in two different forms, called P waves and S waves. P waves are compression waves, rather like the waves you get on a stretched spring. The disturbance caused by P waves is parallel to their direction of motion. S waves, on the other hand, are more like water waves, where the disturbance is at right angles to the wave’s motion, creating an S-shaped pattern of peaks and troughs as the wave passes. P waves travel roughly 1.7 times faster than S waves and scientists can use this fact to determine the distance to the earthquake’s source, called the “hypocenter.” Roughly speaking, eight times the time gap in seconds between the arrival of P waves and S waves gives the distance to the hypocenter in kilometers. By triangulating measurements made at a number of observing stations, the location of the hypocenter can be pinpointed. Most quakes happen within a few tens of kilometers of the surface, but the deepest ones can be located hundreds of kilometers down. The point on Earth’s surface directly above the hypocenter is known as the “epicenter.”
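  The rule of thumb translates directly into code. Here is a minimal Python sketch (the function name and example arrival times are my own):

```python
def hypocenter_distance_km(p_arrival_s, s_arrival_s):
    """Distance from one station: roughly eight times the P-to-S
    arrival gap in seconds gives kilometers to the hypocenter."""
    return 8.0 * (s_arrival_s - p_arrival_s)

# Example: S waves arrive 12.5 seconds after the P waves.
print(hypocenter_distance_km(0.0, 12.5), "km")  # 100.0 km away
```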

  Seismologists gauge the power of an earthquake by taking its “moment magnitude,” which is a measure of the amount of energy the earthquake releases. This is an updated version of the Richter magnitude scale, first put forward by US physicist Charles Richter in 1935. Each increment in the scale corresponds to an increase in the energy of the quake by a factor of 10^1.5 (about 31.6). In other words, an earthquake with a moment magnitude of 6 is 1,000 (31.6^2) times more powerful than a magnitude-4 quake. The 1906 San Francisco quake had a moment magnitude of 7.8, while Haiti in 2010 was magnitude 7. The most powerful earthquake on record, in Chile in 1960, measured a colossal 9.5. By comparison, the largest nuclear bomb ever detonated, the Russian Tsar Bomba in 1961, gave out energy equivalent to a magnitude-8 quake.
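  The magnitude arithmetic is easy to check for yourself. A short Python sketch (my own, using the chapter’s figures):

```python
def energy_ratio(m1, m2):
    """How many times more energy a magnitude-m1 quake releases than
    a magnitude-m2 quake: a factor of 10**1.5 (about 31.6) per unit."""
    return 10 ** (1.5 * (m1 - m2))

print(energy_ratio(6, 4))      # 1000.0, i.e. 31.6 squared
print(energy_ratio(9.5, 7.8))  # Chile 1960 vs San Francisco 1906: ~355
```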