
Roads and trains were shut down across the New York area Monday night and into Tuesday, and for what? It snowed in New York, but only 9.8 inches fell in Central Park after predictions of a foot and a half or more. What went wrong? Forecasters, including yours truly, decided to go all-in on one weather model: the European model (or Euro).

And the Euro was way off. Other models had this storm pegged.

Update after update, the Euro (produced by the European Centre for Medium-Range Weather Forecasts) kept predicting very high snow totals in New York. As of Monday morning's run, the Euro was still projecting a foot and a half in the city. That consistency was too great for forecasters to ignore, especially because the Euro had been the first to jump on events such as the blizzard of 1996 and Hurricane Sandy. It was also one of the first to predict that a March 2001 storm would, like this one, be a bust. The Euro had a good track record.

That consistency, though, masked a great deal of uncertainty. The SREF (Short-Range Ensemble Forecast), produced by the National Weather Service, is an ensemble of 21 model runs. And Sunday night, the SREF indicated that the storm could turn out very differently. Five of the 21 members had (at a 10:1 snow-to-liquid ratio) less than 10 inches of snow falling. Nine of the 21 predicted a foot or less. Only eight could be said to support 18 or more inches of snow in New York City.
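Those member counts are just threshold tallies over the ensemble. Here is a minimal sketch of the arithmetic, using hypothetical liquid-equivalent values chosen to reproduce the quoted counts (the actual Sunday-night SREF member outputs are not listed here):

```python
# Hypothetical liquid-equivalent forecasts (inches) for 21 SREF members.
# These values are illustrative, picked to match the counts quoted in the text,
# not the real operational output.
members_liquid = [0.4, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 1.1, 1.2,
                  1.3, 1.4, 1.5, 1.6, 1.8, 1.8, 1.9, 2.0, 2.1,
                  2.2, 2.4, 2.6]

RATIO = 10  # 10:1 snow-to-liquid ratio

# Convert each member's liquid to a snow total.
snow = [liquid * RATIO for liquid in members_liquid]

# Tally members against the thresholds discussed in the text.
under_10 = sum(1 for s in snow if s < 10)
foot_or_less = sum(1 for s in snow if s <= 12)
big_snow = sum(1 for s in snow if s >= 18)

print(f"{under_10} of {len(snow)} members under 10 inches")
print(f"{foot_or_less} of {len(snow)} members at a foot or less")
print(f"{big_snow} of {len(snow)} members at 18+ inches")
```

With these illustrative inputs the tallies come out 5, 9, and 8, matching the breakdown above; the point is that an ensemble gives you a countable spread, not a single number.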


In other words, 57 percent of the SREF members Sunday night suggested the forecasts were far too gung-ho. By Monday afternoon, 11 of the 21 members were on the 10-inches-or-less train. Eight of the 21 still supported big-time snow, but they were a minority.

The SREF members were not alone in being suspicious of so much snow. In Sunday’s 7 p.m. run, all of the other major models were against the Euro.

  • The American Global Forecast System (GFS), which was recently upgraded, had only about 20 millimeters of liquid (or 8 inches of snow at a 10-to-1 ratio) falling for the storm. Although many meteorologists consider the GFS inferior to the Euro, the difference is probably overrated. Both models perform fairly well over the long term, as The New York Times pointed out this week. The GFS showed the storm stalling too far northeast for New York to get the biggest snows. Instead, as we are seeing, the larger totals would be concentrated over Boston.
  • The GFS solution probably shouldn't have been ignored, given that it was joined by Canada's global model, which had only 25 millimeters of liquid (or about 10 inches at a 10-to-1 ratio) falling as snow. Canada's short-range model was slightly more pessimistic than the global one, predicting only about 20 to 25 millimeters (or 8 to 10 inches at a 10-to-1 ratio) of snow.
  • The United Kingdom’s model, which typically rates as the second-most accurate behind the Euro, was also on the little-snow train in New York. It had only 20 millimeters (or 8 inches on a 10-to-1 ratio) falling as snow.
  • Even the United States’ short-range North American Mesoscale (NAM) model was on board with smaller accumulations, though it would change its tune in later runs and agree with the Euro for a time. On Sunday night, the NAM went with about 20 millimeters of liquid (roughly 8 inches of snow).
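The millimeter-to-inch conversions quoted throughout the list above all follow from the same 10-to-1 snow-to-liquid rule of thumb: 20 mm of liquid is about 0.79 inches, which at 10:1 gives roughly 8 inches of snow. A quick check of that arithmetic:

```python
MM_PER_INCH = 25.4  # exact definition

def snow_inches(liquid_mm: float, ratio: float = 10.0) -> float:
    """Convert liquid-equivalent precipitation (mm) to snow depth (inches)
    at a given snow-to-liquid ratio."""
    return liquid_mm / MM_PER_INCH * ratio

# The "about 8 inches" quoted for the GFS, UK, and NAM solutions:
print(round(snow_inches(20), 1))
# The "about 10 inches" from Canada's global model:
print(round(snow_inches(25), 1))
```

Note that the 10:1 ratio is itself only a convention; in colder storms the real ratio can run 15:1 or higher, which is another source of spread the single-number forecasts hid.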

Put it all together, and there was plenty of evidence this storm wouldn’t be record-setting in New York. Of course, forecasters are going to miss on occasion. Forecasting weather is very difficult. Models aren’t perfect, and forecasters should be practicing meteorology and not “modelology.”

That said, there are a few lessons to be learned:

  1. I’m not sure forecasters (including amateurs like me) did a good enough job communicating the great uncertainty in this forecast. That has long been a problem for media forecasters, who have historically been overconfident in predicting precipitation events. A study of TV meteorologists in Kansas City found that when they predicted rain with 100 percent certainty, it didn’t rain one-third of the time. Forecasters typically communicate margin of error by giving a range of outcomes (10 to 12 inches of snow, for example). In this instance, I don’t think the range adequately conveyed the disagreement among the models. A probabilistic forecast might serve better.
  2. No model is infallible. Forecasters would have been better off averaging all the model data together, even the models that don’t have a stellar record. The Euro is king, but it’s not so good that we should ignore all other forecasts.
  3. There’s nothing wrong with changing a forecast. When the non-Euro models (except for the NAM) stayed consistent in showing about an inch or less of liquid precipitation (or 10 inches of snow on a 10-to-1 ratio) reaching New York and the Euro backed off its biggest predictions Monday afternoon, it was probably time for forecasters to change their stance. They waited too long; I’m not sure why.
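Lessons 1 and 2 point the same way: treat each model as one vote, average the votes, and report the spread as a probability rather than a single range. A minimal sketch of both ideas, using illustrative (not exact operational) Sunday-night liquid values for the models discussed above:

```python
# Illustrative liquid forecasts (mm) for the models discussed in the text;
# the Euro value is a stand-in for its foot-and-a-half solution.
model_liquid_mm = {
    "Euro": 45,
    "GFS": 20,
    "Canadian global": 25,
    "Canadian short-range": 22,
    "UK": 20,
    "NAM": 20,
}

RATIO = 10.0       # 10:1 snow-to-liquid ratio
MM_PER_INCH = 25.4

# Convert each model's liquid to a snow total in inches.
snow = {m: mm / MM_PER_INCH * RATIO for m, mm in model_liquid_mm.items()}

# Lesson 2: a simple, unweighted multi-model blend.
blend = sum(snow.values()) / len(snow)

# Lesson 1: express the disagreement as an exceedance probability.
p_over_12 = sum(1 for s in snow.values() if s >= 12) / len(snow)

print(f"Blend: {blend:.1f} in; P(>= 12 in): {p_over_12:.0%}")
```

With these inputs the blend lands near 10 inches with only about a 1-in-6 chance of a foot or more — a very different message from "a foot and a half," even though the Euro is one of the six votes. An operational version would weight models by skill, but even this crude average beats leaning on one model.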

Meteorology deals in probabilities and uncertainty. Models, and the forecasters who use those models, aren’t going to be perfect. In this case, there was a big storm. It just so happened to be confined to eastern Long Island and southern New England. But that’ll do little to satisfy New Yorkers who expected a historic blizzard.


Crossposted from WattsUpWithThat. If you are a science buff, and a weather/climate buff especially, you should be visiting WUWT regularly; it is the world’s most widely read climate site.

Stunning map of NOAA data showing 56 years of tornado tracks sheds light on the folly of linking “global warming” to severe weather
by Anthony Watts, WUWT

…has been turned into a stunning image of the United States. Each line represents an individual tornado, while the brightness of the line represents its intensity on the Fujita Scale. The result, rendered by John Nelson of the IDV User Experience, shows some interesting things, especially the timeline bar graph that accompanies the map, which shows that the majority of US tornado-related deaths and injuries (prior to the 2011 outbreak, which isn’t in this dataset) happened in the 1950s to the 1970s. This is a testament to NEXRAD Doppler radar, improved forecasting, and better warning systems combined with improved media coverage.

Here’s the data description, the big map of the CONUS follows below.

The National Weather Service (NWS) Storm Prediction Center (SPC) routinely collects reports of severe weather and compiles them with public access from the database called SeverePlot (Hart and Janish 1999) with a Graphic Information System (GIS). The composite SVRGIS information is made available to the public primarily in .zip files of approximately 50MB size. The files located at the access point contain track information regarding known tornados during the period 1950 to 2006. Although available to all, the data provided may be of particular value to weather professionals and students of meteorological sciences. An instructional manual is provided on how to build and develop a basic severe weather report GIS database in ArcGis and is located at the technical documentation site contained in this metadata catalog.

It is also worth noting that the distribution of strong versus weaker tornadoes (rated by the Fujita scale) is greatly lopsided, with the weakest tornadoes outnumbering the strong killer F5 tornadoes (such as we saw in 1974 and 2011, both cooler La Niña years) by at least an order of magnitude:

And here’s the entire map, click for a very hi-resolution version:

Mike Smith covers a lot of the history contained in this data set in his book Warnings: The True Story of How Science Tamed the Weather.

He talks about the vast improvements we’ve witnessed since the early days of severe weather forecasting, and the book is well worth a read if you want to understand severe weather in the USA and how detection and warning methods have evolved. He has another book just out (reviewed by Pielke Sr.) that explains the failure of this system in Joplin in 2011.

In Mike Smith’s first book, “Warnings: The True Story of How Science Tamed the Weather,” we learned the only thing separating American society from triple-digit fatalities from tornadoes, weather-related plane crashes, and hurricanes is the storm warning system that was carefully crafted over the last 50 years. That acclaimed book, as one reviewer put it, “made meteorologists the most unlikely heroes of recent literature.” But, what if the warning system failed to provide a clear, timely notice of a major storm? Tragically, that scenario played out in Joplin, Missouri, on May 22, 2011. As a wedding, a high school graduation, and shopping trips were in progress, an invisible monster storm was developing west of the city. When it arrived, many were caught unaware. One hundred sixty-one perished and one thousand were injured. “When the Sirens Were Silent” is the gripping story of the Joplin tornado. It recounts that horrible day with a goal of insuring this does not happen again.

Of course, alarmists like Peter Gleick (who knows little about operational meteorology and is prone to law-breaking) like to tell us severe weather (and days like Joplin) is a consequence of global warming, saying at the Huffington Post:

“More extreme and violent climate is a direct consequence of human-caused climate change (whether or not we can determine if these particular tornado outbreaks were caused or worsened by climate change).”

But in this story, a tornado researcher says otherwise:

“If you look at the past 60 years of data, the number of tornadoes is increasing significantly, but it’s agreed upon by the tornado community that it’s not a real increase,” said Grady Dixon, assistant professor of meteorology and climatology at Mississippi State University.

“It’s having to do with better (weather tracking) technology, more population, the fact that the population is better educated and more aware. So we’re seeing them more often,” Dixon said.

But he said it would be “a terrible mistake” to relate the up-tick to climate change.

Again, for a full understanding I urge readers to click, read, and to distribute these two WUWT essays:

The folly of linking tornado outbreaks to “climate change”

Why it seems that severe weather is “getting worse” when the data shows otherwise – a historical perspective


Copyright Notice

All material, text, images, graphics and video, is ©2013 P. Coppin. All Rights Reserved. No reproduction by any means is permitted without explicit authorization.