NOAA ENSO Blog: How Does El Niño Influence Snowfall Over the United States?

U.S. winter (Dec-Feb) precipitation compared to the 1981-2010 average for the past 7 strong El Niño events. Details differ, but most show wetter-than-average conditions across some part of the South. NOAA Climate.gov image, based on data from NOAA Physical Science Lab online tool.

After the last three winters of La Niña conditions (weren’t we all ready for a change!), the tropical Pacific is looking much different this year, with a strong El Niño likely this winter (1). Historically, how has El Niño shaped precipitation (rainfall + snowfall) over the U.S.? Let’s dig in and find out!

What happened during December-February for previous strong El Niños?

For the 7 strongest El Niño events since 1950, wetter-than-normal conditions occurred along the West Coast and southern tier of the U.S., especially in the Southeast. This is expected because El Niño causes the jet stream to shift southward and extend eastward over the southern U.S. However, there are clearly some differences among the events if you look at the details in the maps. For instance, the 2015-16 and 1957-58 strong El Niños were not as wet as expected over the southern U.S. and were even dry in some locations. What is the story there?

The devil is in the details

When forecasters put together a prediction, one consideration is the forecasts generated by climate models, such as from the North American Multi-Model Ensemble (NMME). You might think that the NMME produces a single forecast map for the upcoming winter, but nope! Each month, the NMME produces hundreds of forecast maps from several different models. Why so many maps? Well, the short version of the story is that the chaos of weather can have big consequences for our seasonal predictions (head over to footnote #2 if you would like a few more details). We cannot possibly say what the weather will be like on January 1st based on model forecasts that were made in early November. So, by running models many times, we are simulating a lot of different possible “weather outcomes” that can occur over a season.

The easiest way to examine these model predictions is not by staring at hundreds of maps (trust me, we’ve tried) but rather by examining the average of all the maps (this is called an “ensemble mean”). The average isolates the seasonal forecast signal (like El Niño) while removing the noise of chaotic weather. This map of forecast averages (3) is shown in the top left panel below, and because we are expecting a strong El Niño this winter, the map, not surprisingly, bears a striking resemblance to the expected winter El Niño precipitation pattern to its right (4).
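
To make the “ensemble mean” idea concrete, here is a minimal sketch in Python using synthetic data (the grid size, member count, and all variable names are illustrative, not actual NMME output): many noisy forecast maps that share a common signal are averaged, and the member-to-member weather noise largely cancels.

```python
import numpy as np

# Synthetic stand-in for an ensemble of seasonal forecast maps:
# a shared, predictable "signal" (think: the El Niño response) plus
# member-specific "weather noise" that differs run to run.
rng = np.random.default_rng(42)
n_members, n_lat, n_lon = 324, 50, 100

signal = rng.normal(size=(n_lat, n_lon))                       # one true pattern
noise = rng.normal(scale=3.0, size=(n_members, n_lat, n_lon))  # chaotic weather
forecasts = signal + noise                                     # (members, lat, lon)

# The ensemble mean: averaging across members shrinks the noise
# roughly like 1/sqrt(N) and leaves the seasonal signal behind.
ensemble_mean = forecasts.mean(axis=0)

# The mean matches the signal far better than any single member does.
print(np.corrcoef(ensemble_mean.ravel(), signal.ravel())[0, 1])  # near 1
print(np.corrcoef(forecasts[0].ravel(), signal.ravel())[0, 1])   # much lower
```

This is exactly why a single member’s map can look very different from the averaged map shown above.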

(top, left to right) The precipitation forecast for this coming winter (Dec-Feb 2023-24) based on the average of all the individual models in the North American Multi-Model Ensemble forecast system. The geographic pattern of precipitation we’d expect based on averaging past El Niño winters from 1952-2022. (bottom, left to right) An individual model forecast that is a reasonably good match to the expected pattern. An individual model forecast that deviates significantly from the typical pattern. NOAA Climate.gov image, based on analysis by Nat Johnson.

However, this is a bit misleading because, as just noted, there are actually hundreds of forecasts, and this map is just an average of all of them! We know, because of chaotic weather, that the upcoming reality could more closely mimic any of the hundreds of individual forecasts. And these forecasts can differ considerably from each other.

For example, the map in the bottom left represents one forecast that looks quite similar to the NMME average. On the other hand, the forecast to its right, which was taken from the same model from the same starting month with basically the same El NiƱo, has almost the opposite pattern! And we cannot rule out either outcome actually happening for the upcoming winter!

This, in a nutshell, is the curse of internal variability. Basically, a single model, run forward with slightly different initial states, can lead to very different forecasted outcomes for the upcoming El Niño winter.

So, what’s the point of making winter predictions?

If I’m basically saying that anything can happen this winter, then why do we bother to produce seasonal predictions? Well, as we have emphasized on the blog, although almost anything can happen in a given winter, El Niño or La Niña can tilt the odds in favor of a particular outcome, meaning that those hundreds of predictions may lean in a certain direction. Additionally, the stronger the El Niño, the more likely the U.S. winter precipitation pattern will match both the average of the computer model forecasts and the typical El Niño precipitation pattern. Because there are higher chances of certain outcomes (e.g., a wetter winter), the presence of El Niño can help users assess risk and make plans.
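
As a toy illustration of what “tilting the odds” means in practice (all numbers here are made up, not real NMME output), a probabilistic outlook is essentially the fraction of ensemble members that lean a particular way:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical winter precipitation totals (mm) from 300 ensemble members
# at one location; El Niño shifts the whole distribution slightly wetter.
climatological_mean = 100.0
members = rng.normal(loc=115.0, scale=30.0, size=300)

# The "odds" are just the share of members on the wet side of average.
prob_wetter = (members > climatological_mean).mean()
print(f"Chance of a wetter-than-average winter: {prob_wetter:.0%}")

# Plenty of members still come in dry: a lean, not a guarantee.
print(f"Chance of a drier-than-average winter: {1 - prob_wetter:.0%}")
```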

One way to evaluate forecast skill is pattern matching: an overall correlation “score” that describes how well the actual winter precipitation (such as the precipitation for winter 1997-98, a strong El Niño year, at left) matched individual model forecasts (top right), the NMME average (middle right), or the typical El Niño pattern (lower right). A score of 1 means a perfect match, a score of 0 means no match at all, and a score of -1 means an inverse match, or a mirror image, such as you might expect to see during a La Niña winter. NOAA Climate.gov image, adapted from original by Michelle L’Heureux and Nat Johnson.

Not convinced yet? We can put my claim to the test by assessing how well the typical or expected El Niño winter precipitation pattern matched up with what actually occurred for past winters. We can do that by examining all previous U.S. precipitation forecasts produced by the NMME, the hundreds of individual forecasts, and the multi-model average for all past winters from 1983-2022. The schematic above breaks down these evaluations.

I have taken every winter precipitation pattern from this period (like the 1997/98 winter pattern shown on the left) and calculated how well that pattern matched the individual NMME forecasts for that winter (top right), the NMME average forecast (middle right), and the expected El Niño precipitation pattern (bottom right) (5). The values in this evaluation (6) range from -1 to +1, with values closer to +1 indicating a good match with the actual observed pattern, values near 0 indicating no match, and negative values closer to -1 indicating an inverse match (“mirror image”). All these calculations for all 40 winters are presented in a single plot and arranged from left to right according to the strength of the La Niña (strongest farthest left) or El Niño (strongest farthest right), as shown in the bottom left of the schematic.
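
For readers who want to see the metric spelled out, here is a minimal sketch of a centered pattern correlation of the kind described above, applied to synthetic anomaly maps (a real evaluation would also weight grid points by area, which is omitted here):

```python
import numpy as np

def pattern_correlation(obs, fcst):
    """Centered pattern correlation between two anomaly maps (-1 to +1)."""
    obs = obs.ravel() - obs.mean()
    fcst = fcst.ravel() - fcst.mean()
    return float(obs @ fcst / np.sqrt((obs @ obs) * (fcst @ fcst)))

# Synthetic example: a forecast sharing the observed pattern scores
# near +1; its sign-flipped "mirror image" scores exactly -1.
rng = np.random.default_rng(1)
observed = rng.normal(size=(50, 100))
good_forecast = observed + 0.5 * rng.normal(size=(50, 100))

print(pattern_correlation(observed, good_forecast))  # close to +1
print(pattern_correlation(observed, -observed))      # exactly -1
```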

Show me the data!

There’s a lot to take in from these comparisons, but there are three main takeaways. We’ll break it down into a sequence of three steps, starting with a focus on the NMME forecast performance.

Correlations (pattern match “scores”) between forecasts from the North American Multi-Model Ensemble (NMME) and observed precipitation for all winters (Dec-Feb) from 1983-2022. Each column shows the scores for the individual models (small gray dots) and the score for the ensemble average (dark gray dot). Instead of being arranged chronologically, winters are placed from left to right based on the strength of the sea surface temperature (SST) anomaly in the tropical Pacific Niño-3.4 region that winter. Sorting this way shows the overall linear pattern: forecast scores get better (closer to 1, a perfect match) the stronger the La Niña or El Niño. NOAA Climate.gov figure, based on analysis by Nat Johnson.

The first plot in this sequence reveals two of the main takeaways.

  • The stronger the El Niño or La Niña, the more likely that the actual winter pattern will match the average model forecast pattern. This is why seasonal predictions work and why we care so much about ENSO! This point is made by the upward slope of the green dashed line in the figure, which represents the tendency for the average model forecast to perform better at stronger Niño-3.4 index values (a minimal sketch of this kind of fit appears after this list). In fact, by this metric, the forecasts have performed quite well for most (but not all! More on that below) moderate-to-strong El Niños.
  • For a given winter forecast, chaotic weather causes a wide range of performance among individual model forecasts. This second takeaway, which causes the most wailing and gnashing of teeth among forecasters and their users, is brought out by the vertical stripes that represent the performance of individual model forecasts. In fact, for a given winter, there are usually some forecasts that perform quite well and some that perform quite poorly, even though there are no major differences in the models’ ENSO forecast between the high- and low-performing forecasts. Instead, the main difference is what we saw in those two forecast maps above: unpredictable, chaotic weather. Unfortunately, it’s likely impossible to distinguish those high- and low-performing model forecasts well in advance. Again, that’s the curse of internal variability.
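
Here is the minimal sketch promised above: a straight-line fit of each winter’s pattern-match score against the strength (absolute value) of its Niño-3.4 index. The synthetic scores below stand in for the real 1983-2022 values; a positive slope corresponds to the figure’s upward-sloping green dashed line.

```python
import numpy as np

rng = np.random.default_rng(2)

# One Niño-3.4 index value and one (synthetic) pattern-match score per
# winter: scores improve with ENSO strength, plus some scatter.
nino34 = rng.uniform(-2.5, 2.5, size=40)
scores = 0.25 * np.abs(nino34) + rng.normal(scale=0.2, size=40)

# Fit score as a linear function of ENSO strength (|Niño-3.4|).
slope, intercept = np.polyfit(np.abs(nino34), scores, deg=1)
print(f"slope = {slope:.2f}")  # positive: stronger ENSO, better match
```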

In the second step of this sequence, we now include with red and blue diamonds how well the observed precipitation pattern matched the expected El Niño or La Niña precipitation pattern.

Correlations (pattern match “scores”) between winter (Dec-Feb) precipitation and the geographic pattern we’d expect during El Niño (red diamonds) and La Niña (blue diamonds) based on composites of past events. The overall linear pattern is similar to the pattern shown in the figure above: the match between observations and the expected pattern gets better (closer to 1, a perfect match) the stronger the La Niña or El Niño. NOAA Climate.gov figure, based on analysis by Nat Johnson.

This addition reveals the third takeaway.

  • The average model forecast closely resembles the “expected” El Niño/La Niña precipitation pattern for most winters. This point comes out when we consider that the dark gray dots representing the average model forecasts are usually close to the red or blue diamonds that represent the El Niño (right) or La Niña (left) precipitation pattern for a given Niño-3.4 index value. This is the models’ way of agreeing with what we’ve been claiming at the ENSO Blog for years: ENSO is the major player for predictable seasonal climate patterns over the U.S. If there were another more important source of predictability, we would expect a bigger separation between those colored diamonds and the dark gray dots.

The comparison between the two biggest previous El Niños in this record, the winter of 1997/98 (a forecasting success) and 2015/16 (widely regarded as a forecast “bust,” forecasters’ shorthand for a badly missed forecast), is a great illustration of this final point. Check out footnote #7 for the details, but the upshot is that the influence of chaotic weather variability could have reduced the 1997/98 forecast performance much more than it did, and it likely was a factor in why the 2015/16 forecast performed so much worse.

Finally, letā€™s put these comparisons in the context of the forecast for the upcoming winter.

Two kinds of correlation scores: gray dots show how well the NMME forecasts (light gray dots=individual models, dark gray=average) matched actual winter precipitation; diamonds show how well the observed precipitation matched the geographic pattern we’d expect from averaging past La Niña (blue) or El Niño (red) winters. The overall linear pattern is similar for both: pattern matches get better (closer to 1, a perfect match) the stronger the La Niña or El Niño. This winter’s forecast for a strong El Niño means the winter has a higher chance of matching, on average for the U.S., the typical El Niño pattern. NOAA Climate.gov figure, based on analysis by Nat Johnson.

The likelihood of a strong El Niño increases the chance that the precipitation pattern for the upcoming winter will match both the NMME average and the expected El Niño pattern reasonably well, but, as I have been emphasizing, we cannot rule out the possibility that reality will have other plans.

That’s awfully convenient!

At this point, you might be saying, “Hold on, Nat, you’re telling me that ENSO is the main driver of the winter precipitation outlook, and if it busts, we can just blame it on the noise of chaotic, unpredictable weather. That sounds like a cop-out (and a little suspicious coming from a writer for the ENSO Blog).” That’s a fair point! As scientists, we need to continuously reevaluate our assumptions, check for blind spots, and tirelessly strive to improve our understanding of our forecast models. I can assure you that these efforts are being made, especially when the seasonal conditions deviate from expectations, and hopefully, they will lead to better seasonal predictions with sharper, more confident probabilities.

The main point I’m trying to make, however, is that when a forecast busts, it isn’t necessarily because there is a clear reason, a model bug, or a misunderstanding of the drivers. It could just be because there is a certain amount of chaotic weather that we cannot predict in advance. That means that we must remember that seasonal outlooks are always expressed as probabilities (no guarantees!) and that we need to play the long game when evaluating seasonal outlooks: a single success or bust is not nearly enough.

The official CPC seasonal outlook for this upcoming 2023-24 winter has been updated, and you can find it here. We need to keep in mind that other climate phenomena (e.g., MJO, polar vortex) that could shape conditions this winter are mostly unpredictable for seasonal averages but are more predictable on the weekly to monthly time horizons (8). That means you may want to consider shorter-range forecasts, like CPC’s Monthly, Week 3-4, 8 to 14 Day, and 6 to 10 Day outlooks. We at the ENSO Blog will be closely monitoring how conditions evolve this winter, and we’ll be sure to keep you updated!

Footnotes

  1. Following the description in Emily’s post, we consider El Niño to be strong when the Oceanic Niño Index for the season exceeds 1.5°C. As of CPC’s November ENSO forecast update, the probability that the Oceanic Niño Index will exceed 1.5°C for December-February 2023/24 is 73%.
  2. One of the main sources of uncertainty in our seasonal predictions stems from the state of our climate system (ocean, land, and atmosphere) at the start of a forecast (“initial conditions”). Why does this matter? Well, we do not perfectly observe and understand the current oceanic, land, and atmospheric conditions over the entire globe at any single moment in time. Because of this uncertainty at the very start of a forecast, we run prediction models from slightly different starting conditions, making hundreds of different predictions at any one time (“an ensemble of many different members”). Those tiny differences in starting conditions can lead to very distinct seasonal predictions through the chaos of weather (as Emily wrote, think of a difference equating to a flap of a butterfly’s wings in Brazil at the start of a forecast leading to a tornado forming in Texas several weeks later). Another source of uncertainty is the imperfect way that climate models represent physical processes relevant to our weather and climate. Although that is not the focus of this post, it’s the main reason why the NMME is a combination of many different models: the multi-model average tends to filter out the errors of individual models and produce a more accurate forecast.
  3. Specifically, I averaged all the forecasts produced in September, October, and November of this year from 7 different models. Each model has a set of forecasts (ranging from 10 to 30) with slightly different initial conditions to sample the different possible realizations of chaotic weather variability. By taking the average of all forecasts, we tend to average out the effects of chaotic weather variability and isolate the predictable seasonal forecast signal. The NMME precipitation map was produced by averaging 324 individual forecast maps (108 for each of September, October, and November). You might ask, why don’t I just use the latest (November) forecast? Well, the more maps that I average, the more I can average out that chaotic weather and perhaps some of the individual model errors. It turns out that the average of these three months produces a precipitation forecast that performs slightly better than the November forecast.
  4. I calculated the “typical winter El Niño precipitation pattern” as the linear regression of December-February precipitation anomalies on the Niño-3.4 index from 1952-2022 (a minimal code sketch of this calculation appears after these footnotes). Because of the method that was used, we can just flip the sign of the precipitation anomalies in the map to get the “typical winter La Niña precipitation pattern.” The map is scaled by the December-February Niño-3.4 index, so we would multiply this map by the Niño-3.4 index to get the expected precipitation anomaly amplitudes for that winter (i.e., the stronger the El Niño, the stronger the expected precipitation anomalies).
  5. If the Niño-3.4 index was less than zero for that winter, then the forecast would be compared with the expected La Niña precipitation pattern, which would be the same spatial pattern as the El Niño pattern but with anomalies of opposite sign. The method for calculating this pattern is described in the previous footnote.
  6. Specifically, I am evaluating the forecasts with a pattern correlation, which is a metric we have used before on the blog (like here).
  7. The U.S. precipitation pattern correlation analysis shows that the average 1997/98 NMME forecast, which featured a classic El Niño precipitation pattern, performed very well (pattern correlation exceeding 0.8, as shown in the schematic) and likely helped to shape expectations for the major El Niño that occurred in 2015/16. The precipitation pattern in 2015/16, however, didn’t materialize as expected, especially in California, and that winter precipitation forecast was widely panned as a bust for that region. The analysis confirms that the average NMME forecast in 2015/16, which again resembled the classic El Niño pattern, performed unusually poorly for a strong El Niño, with a pattern correlation near zero. There are many studies that have attempted to address what contributed to the unusual pattern in 2015/16, but this figure reveals that chaotic weather may have been a major factor. In particular, we see that many individual forecasts in 2015/16 performed much better than the average (check out some of the individual forecasts with pattern correlations greater than 0.5), and even better than many individual forecasts in 1997/98. This suggests that the influence of chaotic weather variability did not harm the 1997/98 forecast nearly as much as it could have, but it likely was a factor in why the 2015/16 forecast performed so much worse. For a very good, detailed, and technical discussion of the challenges of seasonal precipitation prediction over California, I recommend Kumar and Chen (2020). Also, check out Tom’s previous post on the topic.
  8. We have covered many of these phenomena on the ENSO Blog, including sudden stratospheric warmings, the Madden-Julian Oscillation (MJO), the Arctic Oscillation/North Atlantic Oscillation, the North Pacific Oscillation-West Pacific teleconnection, and the Pacific/North American pattern.
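
As mentioned in footnote 4, here is a minimal sketch of that per-grid-point regression, with synthetic data standing in for the 1952-2022 record (the grid size and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# 71 winters (1952-2022) of Dec-Feb precipitation anomaly maps and one
# Niño-3.4 index value per winter (both synthetic here).
n_winters, n_lat, n_lon = 71, 50, 100
nino34 = rng.normal(size=n_winters)
precip = rng.normal(size=(n_winters, n_lat, n_lon))

# Least-squares regression slope at every grid point:
# pattern[y, x] = cov(precip[:, y, x], nino34) / var(nino34)
idx = nino34 - nino34.mean()
centered = precip - precip.mean(axis=0)
pattern = np.tensordot(idx, centered, axes=(0, 0)) / (idx ** 2).sum()

# Because the fit is linear, the La Niña map is just -pattern, and
# scaling by a winter's Niño-3.4 value gives that winter's expected
# anomaly amplitudes (e.g., +1.8 for a strong El Niño winter).
expected_anomalies = 1.8 * pattern
print(pattern.shape)  # one regression slope per grid point: (50, 100)
```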

This post first appeared on the climate.gov ENSO blog and was written by Nat Johnson.

