Weather Data, Physics and Alien Hunters: How Attribution Science Works

An Excerpt from Angry Weather

THE HUMAN FACTOR
Calculating the Influence of Climate Change on the Weather

What would the world look like today if climate change had never happened? This might sound like the concept for a utopian novel, but it is, in fact, the basis for our work. Only by simulating a world without climate change can we determine how it influences the weather of today. To do this, we compare the weather that can occur in a world without climate change with the weather possible in our modern world. It’s like taking a template of the weather possible in one world, placing it over a template of another world, and looking for bits that don’t match—that is, whether the weather has become more or perhaps less extreme.

The starting point—the world untouched by climate change—is a purely hypothetical world that has never existed. After all, we are not asking how likely a weather event would have been in a world without humans. We are asking how likely a weather event would have been in a world like ours but without human-caused climate change. In a world without humans, the atmosphere would be free from all the additional greenhouse gases we have pumped into it over the centuries. The vegetation would also be totally different; most of the Earth’s land surface would be covered in forests or in jungle very different from what we see today. For thousands of years, we have felled trees and replanted forests and, by traveling widely, have helped tree species spread from one part of the world to another. And the larger a forest is, the more it influences the climate and, by extension, the weather.

A world without climate change is therefore not the same as the world that existed centuries ago. Our model is no untouched, primeval world; it is a world of humans, just without additional greenhouse gases. Our hypothetical world remains in the Anthropocene era, so it’s a world in which humanity has at least made its mark—“Anthropocene lite,” if you will. In a simulated world like this, the Industrial Revolution based largely on the burning of fossil fuels never took place. This is, of course, unrealistic; our planet would have developed differently without the Industrial Revolution—whether for better or worse is anyone’s guess. But at least we wouldn’t be grappling with the issue of global warming. And that is the sole focus of our counterfactual model.

Extreme weather attribution is based on the concept of simulating possible weather in this fictional world and comparing it with possible weather in the real world. The definition of an event (like a drought or hurricane) in the real world acts as a blueprint. We then search both worlds to determine how many weather events fit this pattern. What precisely does this mean? First, we ascertain what weather is possible in a particular region under current conditions—for example, how much rain does Houston get on average? How intense is the extreme rain that falls there every ten years? Every fifty years? Every hundred years? How likely is the torrential rain we have observed? We then ask the exact same questions again, this time simulating possible weather for a world without climate change. If a weather event is more (or less) likely in one scenario than in the other, then the difference must be due to climate change, because that is the only respect in which the two simulated worlds differ. If, for example, an extreme event occurs every ten years on average in our current climate but only every hundred years in the world without climate change, then climate change has made the event ten times as likely.
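
To make the arithmetic concrete, here is a minimal sketch in Python using the return periods quoted above (illustrative numbers, not the result of any particular study):

```python
# A minimal sketch of the probability ratio that attribution studies
# report, using the return periods quoted in the text (assumed values).
p_actual = 1 / 10           # event expected every 10 years with climate change
p_counterfactual = 1 / 100  # every 100 years in the world without it

probability_ratio = p_actual / p_counterfactual
print(f"climate change made the event {probability_ratio:.0f}x as likely")
```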

First we need to reconstruct the weather in the real world, which will allow us to simulate potential weather in the modern world and then in the fictional world. To do this, we need to know about the weather occurring in the real world—that is, we need as many weather observations as possible dating back as far as possible, including temperatures, wind strength, rainfall amounts, and more. On the one hand, we need these data to know what has actually happened: When and where did it rain, and how much, before floods occurred like those in Houston? On the other hand, we need weather data to calculate the probability of the weather event in question. These data then serve as the basis for defining the event; for the Russian heat wave of 2010, we used the highest July average temperatures throughout a large area around Moscow (35° to 55° east longitude and 42° to 60° north latitude). Once we have clearly outlined the event, we can simulate it in the climate models—once in the modern world and once in the hypothetical world without climate change. We also use weather data to check our models. Not all models can realistically simulate the weather events in which we are interested. In short, weather data are the link between climate models and the real world. Without weather data, we would have only a vague idea of our weather’s characteristics. Without many years of weather data, nobody would know that Kiel gets 27.5 inches of rain per year on average or how long the high-pressure areas that cause heat in the summer and cold in the winter persist in a particular region. Observed weather data are indispensable and precious.
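
As an illustration, here is a hypothetical sketch of how an event definition like the one used for the Russian heat wave might be reduced to a single number per year. The data below are randomly generated stand-ins, and a real study would also weight grid cells by area:

```python
import numpy as np

# Hypothetical gridded July-mean temperatures with dimensions
# (year, latitude, longitude); values in degrees Celsius.
years = np.arange(1950, 2011)
lats = np.arange(42, 60.5, 0.5)  # 42°N to 60°N, the box from the text
lons = np.arange(35, 55.5, 0.5)  # 35°E to 55°E
rng = np.random.default_rng(0)
july_mean = 18 + 3 * rng.standard_normal((years.size, lats.size, lons.size))

# The event index: the average July temperature over the whole box.
# (A real study would weight grid cells by area; omitted here.)
event_index = july_mean.mean(axis=(1, 2))
print(f"hottest July in the box: {event_index.max():.1f} °C "
      f"in {years[event_index.argmax()]}")
```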

However, even the best observation data reflect only the weather that has actually occurred. The data don’t reflect all possible weather scenarios. But how can we talk meaningfully about an event to be expected every ten thousand years when we only have observation data from the last hundred years or so? We can’t. It’s as simple as that. It would be like rolling a die ten times, getting a six five times, and using this information alone to calculate the probability of rolling a six. So what do we do? If we know the distribution of the weather data, we can effectively extend the observed data. We may not know exactly how the data are distributed, but we can at least make justified assumptions and use them to gauge the likelihood of the event. And so we need statistics. Whether extreme or normal, our daily weather is only ever one of the many possibilities within our climatic conditions. To determine the likelihood of an event, we need to consider both the actual and the possible weather. If we want to estimate the probability of rolling a six purely from the numbers we observe, we need to roll the die far more often. In principle, this is exactly what we do with our climate models: we roll the weather dice. To simulate possible weather under the given climatic conditions, we need statistical models and climate models.
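
The excerpt does not name a particular distribution; in practice, extreme value distributions are a standard choice for annual maxima. A minimal sketch, assuming synthetic stand-in observations and the generalized extreme value (GEV) distribution from SciPy:

```python
# A sketch of "extending" limited observations with a fitted
# distribution to reach events never seen in the record.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Stand-in for ~100 years of observed annual maximum temperatures (°C)
annual_max = 32 + 2.5 * rng.gumbel(size=100)

shape, loc, scale = genextreme.fit(annual_max)

# Probability of exceeding a value hotter than anything observed
threshold = annual_max.max() + 1.0
p_exceed = genextreme.sf(threshold, shape, loc=loc, scale=scale)
print(f"estimated return period: {1 / p_exceed:.0f} years")
```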

Now it gets even more complicated. Our next step is to leave the world in which we live and enter a world without climate change. This means we have nothing with which to compare our models. This is where attribution actually begins. To gauge the likelihood of a weather event in a world without climate change, we need to measure possible weather under climatic conditions we have never observed. While we know for certain that we have a one in six chance of rolling a six, we don’t know at the outset what weather is possible in an atmosphere that hasn’t been manipulated. To find out, it is not enough to take a climate model and simulate possible weather just once or twice for every single day of the past ten years. Instead, we need to simulate possible weather several hundred times. Simulating just two possible summers, for example, would be like picking two clovers, one with three leaves and one with four. If you had never seen a clover before, you wouldn’t know that three-leaf clovers are normal and four-leaf clovers are the extreme event. This shows why we need to simulate possible weather so many times. These massive simulations would be impossible without the rapid development of computer technology, with its more efficient processors and larger memories; without it, we simply couldn’t afford to calculate such “ensembles” of model simulations. The British climate portal Carbon Brief has calculated that “a global climate model typically contains enough computer code to fill 18,000 pages of printed text . . . and it can require a supercomputer the size of a tennis court to run.” In fact, we still can’t afford the enormous processing power required for our attribution studies. Essentially, we are only able to do what we do because of a very special, adventurous community of people who hunt for aliens in the vast expanse of space. I’ll come back to them later. First, we need to immerse ourselves in two fictional worlds: a world with the potential weather of today and the world as it would have been without climate change. To enter these worlds, we need climate models—and physics.
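
A small numerical sketch shows why a handful of simulations is not enough. Assuming a 1-in-10 event and purely illustrative ensemble sizes, the estimated probability only becomes trustworthy once the ensemble is large:

```python
# Estimating the probability of a 1-in-10 event from ensembles of
# different sizes; repeating the whole experiment reveals the spread.
import numpy as np

rng = np.random.default_rng(2)
true_p = 0.1  # the event occurs in 10% of possible summers (assumed)

for n_members in (2, 20, 200, 2000):
    # Repeat the experiment 1,000 times to see how estimates scatter
    estimates = rng.binomial(n_members, true_p, size=1000) / n_members
    print(f"{n_members:5d} members: estimate spans "
          f"{estimates.min():.3f} to {estimates.max():.3f}")
```

With two members, the estimate can only be 0, 0.5, or 1, just as two clovers tell you nothing about how rare four leaves are; with thousands of members, it settles near the true value.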

 

A World Without Climate Change

A climate model is a mathematical representation of the climate system. We can use it to recreate the climate system, a sort of artificial Earth on which we can run experiments—after all, medical students practice on dummies before operating on actual people. Like all physical systems, the climate system is governed by the laws of conservation of energy, mass, and momentum. In accordance with these laws, energy, mass, and momentum are neither created nor destroyed within a closed system, but merely converted into different forms. The Earth’s climate system is not a closed energy system; its energy comes from outside, from the sun. But because its energy is not destroyed, equal amounts of energy must enter and leave the system. This is the first key equation on which every climate model is based: the law of energy conservation. A simple climate model of this type can be used to calculate how much the average global temperature changes when more greenhouse gases enter the atmosphere or a volcanic eruption adds more sulfur particles. While the average global temperature is crucial, we want to know more; when consulting with a patient, a physician has to do more than simply take their temperature.
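
A minimal sketch of such a model, the standard zero-dimensional energy balance found in textbooks (not code from the book): absorbed sunlight must equal emitted thermal radiation, and that balance alone pins down a global temperature:

```python
# Zero-dimensional energy-balance sketch: incoming solar energy
# equals outgoing thermal radiation at equilibrium.
SOLAR = 1361.0   # W/m^2, solar constant
ALBEDO = 0.3     # fraction of sunlight reflected back to space
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = SOLAR * (1 - ALBEDO) / 4         # averaged over the sphere
t_equilibrium = (absorbed / SIGMA) ** 0.25  # emitted = absorbed
print(f"equilibrium temperature: {t_equilibrium:.0f} K")  # ~255 K
# The ~33 K gap to the observed ~288 K average is the greenhouse effect.
```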

The next, more complex stage of climate modeling also incorporates mass conservation. The Earth can be viewed as a closed system for mass; the mass of the particles that enter and leave the atmosphere is so low compared to the mass of the atmosphere itself that we can ignore it. The conservation of mass means that if the pressure in the ocean or atmosphere decreases in one region, thus altering the number of water or air molecules there, the pressure must increase in another area of the system, because molecules can’t disappear. Mass conservation, too, can be expressed in a simple equation, which lets us divide the climate system into very large sections. The Earth is separated into boxes: one box for the tropical atmosphere, one for the temperate zone, one for the poles, and one for the oceans. Each box has a pressure and a temperature that change if, for example, more energy enters from outside. We can calculate how the differences in pressure between the boxes balance each other out—in other words, air circulation and oceanic currents on huge scales.
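
As a toy illustration (not a real model configuration), two boxes exchanging air until their pressures balance, with total mass conserved throughout:

```python
# A two-box sketch of mass conservation: air flows from the
# higher-pressure box to the lower-pressure one, and the total
# mass of the system never changes.
mass = {"tropics": 1.10, "poles": 0.90}  # arbitrary mass units

for step in range(50):
    # Flow proportional to the pressure (here: mass) difference
    flow = 0.1 * (mass["tropics"] - mass["poles"])
    mass["tropics"] -= flow
    mass["poles"] += flow

total = mass["tropics"] + mass["poles"]
print(mass, f"total mass: {total:.2f}")  # total stays 2.00
```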

These (still very simple) models can be run quickly and often to calculate the different speeds at which the land, atmosphere, and oceans are warming up. However, we still cannot use these models to simulate weather. For that we need the conservation of momentum, Newton’s second law, which states that force is equal to mass times acceleration. If you know the force to which air molecules are subjected at every point on Earth, you will know how these air molecules will move and, therefore, how the wind will blow. In deriving what would later be known as the Navier-Stokes equations, the physicists Claude-Louis Navier and George Gabriel Stokes (working independently) described the four forces that act upon and accelerate air molecules: the Earth’s rotation (Coriolis force), the differences in atmospheric pressure (pressure-gradient force), the force of friction, and the force of gravity.
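
In standard textbook notation (a shorthand of mine, not a formula quoted from the excerpt), the resulting momentum equation per unit mass of air combines exactly these four forces:

\[
\frac{D\mathbf{v}}{Dt}
= \underbrace{-\frac{1}{\rho}\nabla p}_{\text{pressure-gradient force}}
\;\underbrace{-\,2\,\boldsymbol{\Omega}\times\mathbf{v}}_{\text{Coriolis force}}
\;+\;\underbrace{\mathbf{g}}_{\text{gravity}}
\;+\;\underbrace{\mathbf{F}}_{\text{friction}}
\]

where \(\mathbf{v}\) is the wind velocity, \(\rho\) the air density, \(p\) the pressure, \(\boldsymbol{\Omega}\) the Earth’s rotation vector, and \(\mathbf{F}\) the friction term.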

We now have a complete general circulation model with which to simulate weather—at least in principle. We need to simplify these equations describing the conservation of energy, mass, and momentum in the climate system because models cannot solve all equations for every point in the atmosphere or ocean; that would require infinite computing time. Statistician George Box is considered the first to have said that all models are wrong, but some are useful—and he was right. While all climate models are simplified representations of the actual weather, they are certainly useful if they correctly reproduce key characteristics.

The greatest simplification in climate models is to consider the atmosphere not as a continuum, but as divided into noncontinuous three-dimensional pieces, forming a grid or network covering the globe. The simplified equations are solved at every grid point. In old climate models, the distance between grid points could be several hundred miles; in newer models, the distance is around 62 miles at the equator and less at the poles. Meanwhile, the vertical distance between grid points increases the higher they are in the atmosphere. This is because our weather takes place in the lower atmosphere, where a lot happens in a small amount of space; farther up, the pressure is so low that there are very few molecules whirling around and changes play out on a larger scale. The greater the distance between horizontal and vertical grid points, the faster the models can calculate—but the less realistic the results. At every grid point, the model calculates the temperature, pressure, wind speed, and wind direction (along with many other meteorological variables) at a prescribed time step—for example, every fifteen or thirty minutes. To make this happen, we need to feed the equations in the model with initial values for all variables before starting the model. Starting from scratch, with every variable set to zero, would force the model to spend immense computing time settling into a realistic state, so instead we provide it with observed initial values for variables such as temperature and wind speed and let it calculate the changes from there. In addition to these initial conditions, the model still needs to know what factors are driving the climate system outside its scope of calculation: current solar radiation, the concentration of greenhouse gases in the atmosphere, and the concentration of tiny dust particles (“aerosols”)—what scientists call the forcing conditions. If the model’s grid points were just a few feet apart, the model would now be complete: physical equations, initial conditions, forcing conditions.
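
To make the grid-and-time-step idea tangible, here is a deliberately tiny sketch: a single ring of grid points around the globe, a fifteen-minute time step, and one transported variable. Everything about it (grid size, wind speed, initial anomaly) is an assumption for illustration:

```python
# A toy transport equation solved on a coarse grid, stepped forward
# in time, to illustrate how a model updates values at every grid
# point each time step.
import numpy as np

nx = 72            # grid points around a circle of latitude
dx = 40075e3 / nx  # grid spacing in meters (~557 km, a coarse grid)
dt = 900.0         # time step: 15 minutes, as in the text
u = 10.0           # assumed constant westerly wind, m/s

# Initial condition: a warm anomaly centered on one region
x = np.arange(nx) * dx
temp = 280.0 + 5.0 * np.exp(-((x - x[nx // 2]) / (5 * dx)) ** 2)

# Upwind finite-difference step for dT/dt = -u dT/dx (periodic domain)
for step in range(96):  # simulate one day in 15-minute steps
    temp = temp - u * dt / dx * (temp - np.roll(temp, 1))

print(f"max temperature after one day: {temp.max():.2f} K")
```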

However, the grid points are often tens of miles apart, yet weather occurs on a much smaller scale. We’ve all visited a city and found it to be raining at one end and sunny at the other. A coarse climate model of this kind therefore cannot resolve rainfall. As a result, rainfall does not form part of the equations based on physical laws—just like cloud coverage and many other variables that exhibit small-scale changes. There is, however, a solution: parameterizing these variables. This means that whether and how much it rains at a grid point is determined not by a physical equation, but by an empirical one; we look for a link between the variables that the model can calculate by applying physics and the variables we need that cannot be calculated using physics. Models used to predict economic developments are based solely on such empirical equations. There is no physical law proving that unemployment drops when the gross national product rises. Empirically, however, this is certainly what happens, which is why economists use current unemployment figures as a parameter when predicting economic growth.
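
A hypothetical toy parameterization makes the idea concrete: rainfall at a grid point is computed from resolved variables via an empirical rule with tunable parameters, not from physical law. The function, threshold, and rate below are invented for illustration:

```python
# An invented empirical rule: rainfall at a grid point as a function
# of model-resolved humidity and temperature, with tunable parameters.
def rainfall_mm_per_step(rel_humidity, temp_k, threshold=0.8, rate=2.5):
    """No rain below a humidity threshold; above it, rain grows with
    the excess. 'threshold' and 'rate' are the kind of tunable
    parameters the text describes."""
    if rel_humidity <= threshold:
        return 0.0
    excess = rel_humidity - threshold
    # Warmer air holds more water, so scale weakly with temperature
    return rate * excess * (temp_k / 273.15)

print(rainfall_mm_per_step(0.95, 290.0))  # some rain
print(rainfall_mm_per_step(0.60, 290.0))  # dry grid cell
```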

In principle, we follow exactly the same procedure when calculating rainfall in climate models—comparing climate models with weather records and testing which parameters we need to modify to better simulate the actual weather. For example, we need to test how large a drop of water needs to be in a cloud before it falls as rain. It is rare for parameters to realistically represent the weather all around the world. A parameter value will often simulate rainfall very realistically over Germany, but also turn the rainforest into a desert. We might be able to tolerate this if we only wanted to forecast European weather, but if we want to learn about changes to the global climate, then we need compromise parameters—parameters that generate fairly realistic weather for the entire world but do not necessarily fit in with the physics (for example, raindrops that are much larger than possible in the real world but generate realistic volumes of rain in the model). Even without these simplifications, models would not be perfect, because the climate system is chaotic. Even a minor change in the initial conditions can completely alter how the weather develops—the proverbial butterfly effect. Despite these uncertainties, climate models can achieve a great deal. Global warming itself has shown that climate models work: forecasts from the early 1990s predicted the average global temperature from 1990 to the present very accurately, especially considering the difference between actual and predicted greenhouse gas emissions, and despite the climate models of the time working with much coarser spatial resolutions. What our models cannot do is precisely predict the weather in ten, twenty, or thirty years. The climate system is too chaotic for that. But it doesn’t actually matter whether it will rain in Oxford on January 30, 2050. All we want to know is how likely it is that January 30, 2050, will see as much rain as fell in an average January over the last two hundred years. Has this probability changed, and if so, by how much? Our answers are constantly improving.
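
The butterfly effect itself is easy to demonstrate. The sketch below uses the classic Lorenz system, a standard chaos demonstration rather than anything from the book: two runs differing by one part in a million soon disagree completely:

```python
# Sensitivity to initial conditions in the Lorenz system.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([
        sigma * (y - x),
        x * (rho - z) - y,
        x * y - beta * z,
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # a tiny change in initial conditions

for step in range(3000):  # 30 time units of the model
    a, b = lorenz_step(a), lorenz_step(b)

print(f"separation after 30 time units: {np.linalg.norm(a - b):.2f}")
```

The two trajectories start indistinguishably close, yet end up as far apart as any two random states on the attractor, which is exactly why individual weather forecasts decades ahead are hopeless while statistics remain meaningful.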

To answer this question, we need more than one climate model, so that we can identify errors in individual simulations caused by the model parameters. The more models return the same result, the more confident we are that this result reflects reality. And the more simulations we run in a model, the better we can gauge the likelihood of a weather event. For a good attribution study, we therefore need models that translate the physical and empirical equations into computer code in as many different ways as possible. This, however, requires enormous computing power and produces files of several terabytes, often in different formats.
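
As a sketch with invented numbers: each model yields its own probability ratio, and the spread across models is part of the reported result:

```python
# Hypothetical probability ratios from five structurally different
# models; agreement across them raises confidence in the result.
import statistics

ratios = [8.5, 11.2, 9.8, 10.4, 12.1]  # invented for illustration

print(f"best estimate: {statistics.median(ratios):.1f}x as likely")
print(f"model range:   {min(ratios):.1f}x to {max(ratios):.1f}x")
```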

Ultimately it’s all about the numbers, lots and lots of numbers—so many that it can take up to two hours to import files containing one year of simulations from just one simple model. Unless we are studying something like heat waves that span a whole continent, we need models with a high spatial resolution—a dense grid—to calculate weather. We obtain these model simulations from the climate models of major meteorological computing centers like the European Centre for Medium-Range Weather Forecasts (ECMWF), the U.S. National Center for Atmospheric Research (NCAR), and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). It can take two weeks to simulate one model year. We would need plenty of patience and money to run these models on just one server. And we don’t have either of those things.

 

Learning From Alien Hunters

This dilemma was solved by alien hunters, of all people. In the 1990s, UC Berkeley found itself with a problem: huge quantities of audio recordings from space that had been collected via radio telescopes and might contain proof of extraterrestrial life. Since nobody knows what this proof might sound like, the laboratory team couldn’t let a machine listen to the recordings because they had no examples with which to teach it. Human labor was required. Since the scientific team was far too small to trawl through the masses of data, they started looking for helpers. They developed a piece of software, known as BOINC, that sent audio recordings to private computers all around the world owned by people who volunteered to listen and report anything interesting they found. The SETI@home project was born, and it’s still going on today.

We follow a similar pattern. We even use the same software to solve our modeling problem with the aid of thousands of volunteers around the world. There’s just one difference: instead of donating their time and actively searching for aliens, our volunteers provide us with time on their computers and, essentially, money by paying slightly higher electricity bills. This gives us access to by far the largest server in the world. Thanks to the dedicated volunteers who support the climateprediction.net project, we don’t pay a cent to use this computing power. In 2015 alone, these computers racked up 120,000 years of processing time. Even the cheapest cloud service would charge us $6 billion for that.

Our program runs in the background on volunteers’ computers. Anyone who essentially uses their computer like a typewriter—giving the processor very little to do—can lend us their processing power. If a computer reaches capacity, our model stops. Participants don’t need any technical or scientific knowledge; all they do is download the BOINC software, link up with the climateprediction.net project, and let their computer calculate weather for a model year. When the results are ready, the file is sent back to us for analysis. For security reasons, we never have access to the volunteer’s computer. We also offer a screen saver for most of our experiments, so anyone interested can watch the calculations run and see the changes in temperature, pressure, or rainfall. Some of our roughly forty thousand volunteers keep hold of their old computers just so climateprediction.net can use them. Participants come and go over the years, some joining in when we run an experiment focusing on their region of the world. Over the project’s life span, 700,000 people have volunteered. Without this project, attribution science for extreme events would probably have come to fruition several years later.

