“If you want to understand today, you have to search yesterday.” ~ Pearl S. Buck
One of the lesser-known but important functions of the NHC is to maintain a historical hurricane database that supports a wide variety of uses by the research community, the private sector, and the general public. This database, known as HURDAT (short for HURricane DATabase), documents the life cycle of each known tropical or subtropical cyclone. In the Atlantic basin, this dataset extends back to 1851; in the eastern North Pacific, the records start in 1949. The HURDAT includes 6-hourly estimates of position, intensity, cyclone type (i.e., whether the system was tropical, subtropical, or extratropical), and in recent years also includes estimates of cyclone size. Currently, after each hurricane season ends, a post-analysis of the season’s cyclones is conducted by NHC, and the results are added to the database. The Atlantic dataset was created in the mid-1960s, originally in support of the space program to study the climatological impacts of tropical cyclones at Kennedy Space Center. It became obvious a couple of decades ago, however, that the HURDAT needed to be revised because it was incomplete, contained significant errors, or did not reflect the latest scientific understanding regarding the interpretation of past data. Charlie Neumann, a former NHC employee, documented many of these problems and obtained a grant to address them under a program eventually called the Atlantic Hurricane Database Re-analysis Project. Chris Landsea, then employed by the NOAA Hurricane Research Division (HRD) and now the Science and Operations Officer at the NHC, has served as the lead scientist and program manager of the Re-analysis Project since the late 1990s.
In response to the re-analysis effort, NHC established the Best Track Change Committee (BTCC) in 1999 to review proposed changes to the HURDAT (whether originating from the Re-analysis Project or elsewhere) to ensure a scientifically sound tropical cyclone database. The committee currently consists of six NOAA scientists, four of whom work for the NHC and two of whom do not (currently, one is from HRD and the other is from the Weather Prediction Center).
Over the past two decades, Landsea, researchers Andrew Hagen and Sandy Delgado, and some local meteorology students have systematically searched for and compiled any available data related to each known storm in past hurricane seasons. This compilation also includes systems not in the HURDAT that could potentially be classified as tropical cyclones. The data are carefully examined using standardized analysis techniques, and a best track is developed for each system, many of which differ from the existing tracks in the original dataset. Typically, a season’s worth of proposed revised or new tracks is submitted for review by the BTCC. Fig. 1 provides an example set of data that helped the BTCC identify a previously unknown tropical storm in 1955.
The BTCC members review the suggested changes submitted by the Re-analysis Project, noting areas of agreement and proposed changes requiring additional data or clarification. The committee chairman, Dr. Jack Beven, then assembles the comments into a formal reply from the BTCC to the Re-analysis Project. Occasionally, the committee presents its own analysis, along with any relevant documentation, to help Landsea and his group of re-analyzers understand the differing interpretation. The vast majority of the suggested changes to HURDAT are accepted by the BTCC. In cases where the proposed changes are not accepted, the BTCC and members of the Re-analysis Project attempt to resolve any disagreements, with the BTCC having final say.
In the early days of the Re-analysis Project, the amount of data available for any given tropical cyclone or even a single season was quite small, and so was the number of suggested changes. This allowed the re-analysis of HURDAT to progress relatively quickly. However, since the project has reached the aircraft reconnaissance era (post-1944), the amount of data and the corresponding complexity of the analyses have rapidly increased, which has slowed the project’s progress during the last couple of years.
The BTCC’s approved changes have been significant. On average, the BTCC has approved the addition of one to two new storms per season. One of the most highly visible changes was made 14 years ago, when the committee approved Hurricane Andrew’s upgrade from a category 4 to a category 5 hurricane. This decision was made on the basis of (then) new research regarding the relationship between flight-level and surface winds from data gathered by reconnaissance aircraft using dropsondes.
The figures below show the revisions made to the best tracks of the 1936 hurricane season, and give a flavor of the type, significance, and number of changes being made as part of the re-analysis. More recent results from the BTCC include the re-analysis of the New England 1938 hurricane, which reaffirmed its major hurricane status in New England from a careful analysis of surface observations. Hurricane Diane in 1955, which brought tremendous destruction to parts of the Mid-Atlantic states due to its flooding rains, was judged to be a tropical storm at landfall after re-analysis. Also of note is the re-analysis of Hurricane Camille in 1969, one of three category 5 hurricanes to have struck the United States in the historical record. The re-analysis confirmed that Camille was indeed a category 5 hurricane, but revealed fluctuations in its intensity prior to its landfall in Mississippi that were not previously documented.
The most recent activity of the BTCC was an examination of the landfall of the Great Manzanillo Hurricane of 1959. It was originally designated as a category 5 hurricane landfall in HURDAT and was the strongest landfalling hurricane on record for the Pacific coast of Mexico. A re-analysis of ship and previously undiscovered land data, however, revealed that the landfall intensity was significantly lower (140 mph). Thus, 2015’s Hurricane Patricia is now the strongest landfalling hurricane on record for the Pacific coast of Mexico, with an intensity of 150 mph.
The BTCC is currently examining data from the late 1950s and hopes to have the 1956-1960 re-analysis released before next hurricane season. This analysis will include fresh looks at Hurricane Audrey in 1957 and Hurricane Donna in 1960, both of which were classified as category 4 hurricane landfalls in the United States. As the re-analysis progresses into the 1960s, the committee will be tackling the tricky issue of how to incorporate satellite images into the re-analysis, including accounting for the irregular frequency and quality of satellite imagery during that decade. The long-term plan is to extend the re-analysis until about the year 2000, when current operational practices for estimating tropical cyclone intensity became established using GPS dropsonde data and flight-level wind reduction techniques.
For more reading on this topic:
- Atlantic hurricane database reanalysis website – http://www.aoml.noaa.gov/hrd/data_sub/re_anal.html
- Documentation about the HURDAT database format – http://www.aoml.noaa.gov/hrd/Landsea/landsea-franklin-mwr2013.pdf
— Todd Kimberlain and Eric Blake
Last weekend’s blizzard along the East Coast of the United States caused significant flooding along the coasts of Maryland, Delaware, New Jersey, and New York. Even though this system was not a tropical cyclone, the mechanics of storm surge flooding are essentially the same whether the cause is a hurricane or extratropical storm. The blizzard provides us an excellent opportunity to delve into the topic of vertical datums, which we promised to tackle in a previous blog post anyway!
You Say MLLW, I Say MHHW (and Undoubtedly Someone Else Says NAVD88)
Simply put, a vertical datum is a reference level. Whenever you talk about water levels related to tides or storm surge, that water level needs to be referenced to some datum to provide essential context. For example, a water surface 2 feet above the floor means something very different than a water surface 2 feet above the roof.
There are many vertical datums out there. Some are based on tide levels (tidal datums), while some are based on the general shape of the Earth (geodetic datums). More technically savvy users generally prefer geodetic datums such as the North American Vertical Datum of 1988 (NAVD88) because they’re more precise and applicable to a large area, such as an entire continent. For most of us, however, we see water levels referenced to tidal datums such as Mean Lower Low Water (MLLW) or Mean Higher High Water (MHHW).
Some locations along the coast have two high tides and two low tides per day (e.g., the U.S. East Coast), while some areas only have one high tide and one low tide per day (e.g., the U.S. Gulf Coast). Mean Lower Low Water (MLLW) is simply the lower of the two low tides per day (or the one low tide) averaged over a 19-year period. This 19-year period is called the National Tidal Datum Epoch, which currently runs from 1983 through 2001. So to calculate MLLW for a particular tide station, the National Ocean Service (NOS) took the levels of all the lowest low tides from 1983 to 2001 and averaged them. Similarly, NOS calculates Mean Higher High Water (MHHW) by averaging the higher of the two high tides per day (or the one high tide) over the same 19-year period.
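The averaging procedure described above is simple enough to sketch in a few lines of code. The station records below are made up purely for illustration; real NOS computations use the full 1983-2001 epoch of observed tides.

```python
# Sketch (with hypothetical data) of how NOS derives MLLW and MHHW:
# average the lower low tide and the higher high tide of each day over
# the National Tidal Datum Epoch (1983-2001).
from statistics import mean

# Hypothetical daily tide extremes for one station, in feet
daily_lows = {
    "1983-01-01": [1.2, 0.4],   # two low tides: the lower low is 0.4
    "1983-01-02": [0.9, 0.1],
    "1983-01-03": [0.7],        # some locations see only one low tide per day
}
daily_highs = {
    "1983-01-01": [4.8, 5.6],
    "1983-01-02": [5.1, 5.9],
    "1983-01-03": [5.3],
}

mllw = mean(min(lows) for lows in daily_lows.values())     # average of lower lows
mhhw = mean(max(highs) for highs in daily_highs.values())  # average of higher highs
print(round(mllw, 2), round(mhhw, 2))  # → 0.4 5.6
```

With only three days of fake data the result is meaningless as a datum, but the structure of the calculation is the same over a 19-year record.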
Historically, MLLW has been used for navigational purposes in the marine waters of the United States and its territories. Navigational charts from the NOAA Office of Coast Survey show water depths relative to MLLW, or how far the ocean bottom extends below the MLLW line. If boaters know the tide forecast relative to MLLW, the depth of the ocean bottom relative to MLLW, and the draft of their boat or ship (the distance between the waterline and the bottom of the hull), then they can deduce if the vessel will hit the sea floor. Since this is the most common way that tides have been referenced, the National Weather Service (NWS) has generally used MLLW as a reference for its water level forecasts, and most tide gauge data is referenced to MLLW by default. People who have lived along the same stretch of coastline for many years have become accustomed to knowing what type of coastal flooding will occur when water levels reach specific thresholds above MLLW.
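The "downward-looking" navigation check described above is just an addition and a comparison. The function and numbers below are hypothetical, but they follow the reasoning in the paragraph: charted depth is measured down from MLLW and the tide prediction is measured up from MLLW, so the two add together.

```python
# Sketch of the under-keel check a boater might make (all values hypothetical).
def will_clear_bottom(charted_depth_ft, predicted_tide_ft, draft_ft,
                      safety_margin_ft=2.0):
    """Return True if total water depth exceeds the vessel's draft by a margin.

    charted_depth_ft: depth of the bottom below MLLW (from the nautical chart)
    predicted_tide_ft: forecast tide level above MLLW
    draft_ft: distance from the waterline to the bottom of the hull
    """
    total_depth = charted_depth_ft + predicted_tide_ft
    return total_depth - draft_ft >= safety_margin_ft

print(will_clear_bottom(charted_depth_ft=10.0, predicted_tide_ft=3.0,
                        draft_ft=8.0))   # 13 ft of water, 5 ft clearance → True
print(will_clear_bottom(charted_depth_ft=10.0, predicted_tide_ft=0.5,
                        draft_ft=9.5))   # 10.5 ft of water, 1 ft clearance → False
```

Because charted depths are referenced to MLLW, the lowest typical tide, this check is conservative: at most stages of the tide there is more water than the chart shows.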
But what about people who don’t know those relationships between MLLW–or any other datum for that matter–and coastal flooding (which change from location to location along the coast, by the way)? For this reason, NHC has moved toward providing tropical cyclone-related storm surge forecasts in terms of inundation, or how much water will be on normally dry ground. You can go here for more information on the Potential Storm Surge Flooding Map, issued by NHC when tropical cyclones are forecast to affect the East or Gulf Coasts of the United States. For the purposes of using water level observations to get an idea of how much inundation is occurring during a storm, NHC uses MHHW.
Why Does NHC Use MHHW When Looking at Water Level Observations?
To answer this question, it’s probably helpful to look at a cross-section of a typical coastline. Shown below is such a schematic, which depicts both the Mean Lower Low Water line and the Mean Higher High Water line. Anything seaward of the MLLW line is typically submerged under water. The region between the MLLW and MHHW lines is called the intertidal zone, and it is the region that is submerged at high tide and exposed at low tide. Intertidal zones include rocky shorelines, sandy beaches, or wetlands (marshes, mudflats, swamps, and mangroves). Because intertidal zones are submerged during a typical high tide, people don’t generally live there.
NHC and NOS consider anything landward of the MHHW line (marked as the supratidal zone in the graphic) as normally dry ground. Only in the most extreme high tide cycles and during storm surge or tsunami events does that region become submerged under water. Seawater that rises past the MHHW line is considered inundation, and therefore water level measurements relative to MHHW can be considered as proxies for measurements of inundation. NOS has deemed MHHW as the best approximation of the threshold at which inundation can begin to occur. While safe navigation of boats is a downward-looking problem that requires the use of MLLW, coastal flooding is an upward-looking problem that is best communicated using MHHW.
Dr. J. D. Boon, Professor Emeritus of the Virginia Institute of Marine Science, probably puts it best in his book Secrets of the Tide:
…we require the MHHW datum in order to isolate and evaluate storm surge risk
in a conservative way by removing the effect of tidal range – an independent factor
that varies from place to place.
…US nautical charts use MLLW to reference charted depths conservatively so that
a mariner will know that the water depths shown on the chart can be counted on for
safe passage even at the lowest levels of the astronomical tide…
Reversing direction and looking upward instead of downward, MHHW can be used to
conservatively reference storm tides so that coastal residents will know how much
additional rise to expect above the highest levels of the astronomical tide.
These levels are generally familiar to the waterfront resident who witnesses signs of
their presence in wrack lines, marsh vegetation zones and high water marks…
We should mention that use of other vertical datums is in no way wrong. There are some very good uses for datums such as MLLW or NAVD88, but NHC uses MHHW when referencing storm tide observations to put things into a frame of reference that is understood by the majority of people at risk for coastal flooding. If we see a water level observation of 7 feet above MHHW, there’s a pretty good chance that some location in that area is being or was inundated by as much as 7 feet of water on ground that would normally be dry. This relationship worked quite well during Hurricane Sandy in 2012. Peak water levels measured by NOS tide gauges at the Battery in Manhattan and Sandy Hook, New Jersey, were between 8 and 9 feet above MHHW, and high water marks surveyed by the US Geological Survey after the storm indeed supported inundations of 8 to 9 feet above ground level in places like Sandy Hook and Staten Island.
MHHW and the January 2016 East Coast Blizzard
Since we said the recent blizzard provides a great case for us to explain vertical datums, let’s take a look at some of the water level observations during the event and how they compared to documented flooding.
Some of the worst storm surge flooding from the event occurred in extreme southern New Jersey and Delaware. So let’s look at the area around Cape May, New Jersey. The NOS tide gauge at Cape May measured a peak water level of about 9 feet above MLLW (8.98 feet to be exact). But does that mean that residents of Cape May and surrounding communities had as much as 9 feet of water on their streets? No, it just means that the water surface got about 9 feet higher than the “imaginary” line that marks the average of the lower of the two low tides per day.
At the Cape May gauge, the difference between MLLW and MHHW is 5.45 feet, which means that the peak water level was only about 3.53 feet above MHHW (8.98 minus 5.45). Nearby, the peak water level observation from the NOS gauge in Atlantic City, New Jersey, was 3.42 feet above MHHW. So does that mean that residents of Cape May, Atlantic City, and surrounding communities had as much as 3 to 4 feet of water on their streets? Actually, yes it does. Pictures obtained via Twitter from West Wildwood, North Wildwood, and Atlantic City appear to support an estimate of 3 to 4 feet of inundation. See below for the evidence.
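The Cape May conversion above is a single subtraction, but it is worth writing out because it is the step most often skipped when people read a gauge. The MLLW-to-MHHW offset is a per-station value published by NOS; the 5.45-foot figure comes from the Cape May example in the text.

```python
# Re-referencing a water level from MLLW to MHHW, a rough proxy for inundation.
# The per-station offset (height of MHHW above MLLW) is published by NOS.
def mllw_to_mhhw(level_above_mllw_ft, mhhw_above_mllw_ft):
    """Convert a water level above MLLW to a level above MHHW."""
    return level_above_mllw_ft - mhhw_above_mllw_ft

# Cape May during the January 2016 blizzard: 8.98 ft above MLLW,
# with MHHW sitting 5.45 ft above MLLW at that station.
print(round(mllw_to_mhhw(8.98, 5.45), 2))  # → 3.53 ft above MHHW
```

The same function works for any station once you look up its datum offset, which is why the CO-OPS datum tables are worth bookmarking.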
Incidentally, if you’re ever watching water level observations during a storm from the NOS Center for Operational Oceanographic Products and Services (CO-OPS) website, you can choose which vertical datum you’d like to use. The default will come up as MLLW, but you can change it to MHHW (as we do at NHC) or another datum such as NAVD88 or Mean Sea Level. Alternatively, NOS CO-OPS also provides a real-time “Storm QuickLook” website during coastal flooding events, and the default vertical datum on this page is MHHW. Below is a comparison of the water level data from Lewes, Delaware, during the blizzard using MLLW (top) and MHHW (bottom) as reference levels. Notice that the curves don’t change, only the reference numbers on the left vertical axis.
And finally, if you’re ever looking at storm surge forecast guidance online, make sure you know which vertical datum you’re looking at! For example, the NWS’s Extratropical Storm Surge (ETSS) model is available on the Meteorological Development Laboratory website, and although data shown is relative to Mean Sea Level (MSL), the vertical datum can be changed to MHHW or MLLW.
— Robbie Berg
Thanks go out to Cody Fritz, Shannon Hefferan, and Jamie Rhome from the NHC Storm Surge Unit, as well as the folks over at the National Ocean Service, for their assistance in putting together this blog post.
Tropical Storm Erika, coming as it did so close to the beginning of the new college and professional football seasons, is a reminder that Monday-morning quarterbacking is nearly as popular an activity as the sport itself. And we at NHC do it too. After every storm we review our operations with an eye toward improving our products and services. Erika is no different, though there’s been more questioning and criticism than usual, with few components of the weather enterprise spared. Some in the media were accused of overinflating the threat, numerical models were bashed, and some public officials were charged with overreacting. NHC’s forecasts were questioned while others lamented that NHC’s voice wasn’t strong enough amid all the chatter. So, in the spirit of searching for a tropical storm eureka, in this blog entry we present some of our own post-storm reflections.
How good were NHC’s forecasts for Erika?
The NHC official forecast errors were larger than average. A preliminary verification shows that the average 72-hr track error for Erika was 153 n mi, about 30% larger than the 5-year average of 113 n mi. And nearly all of this error was a rightward (northward) bias – that is, Erika moved consistently to the left of the NHC forecast track. As for intensity, Erika ended up being weaker than forecast; the 72-hr intensity forecasts were off by about 20 kt on average, and the official 5-day forecasts called for a hurricane over or near Florida until as late as 2 AM Friday, when it became clear that Erika was going to have to deal with the high terrain of Hispaniola.
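Track errors like the 153 n mi figure above are great-circle distances between the forecast position and the verifying best-track position. A minimal sketch of that calculation, using the standard haversine formula and hypothetical positions, looks like this:

```python
# Sketch of a track forecast error calculation: the great-circle distance
# between a forecast position and the verifying (best-track) position,
# in nautical miles. Positions below are hypothetical.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles

def track_error_nmi(lat_fcst, lon_fcst, lat_verif, lon_verif):
    """Haversine distance between forecast and verifying points, in n mi."""
    phi1, phi2 = radians(lat_fcst), radians(lat_verif)
    dphi = radians(lat_verif - lat_fcst)
    dlam = radians(lon_verif - lon_fcst)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_NMI * asin(sqrt(a))

# A forecast point one degree of latitude north of the verifying position
# should come out near 60 n mi (1 degree of latitude ≈ 60 n mi).
print(round(track_error_nmi(26.0, -70.0, 25.0, -70.0)))  # → 60
```

Averaging these distances over all of a storm's verified 72-hr forecasts gives the kind of mean track error quoted in the paragraph; the north-of-track bias is diagnosed by decomposing each error into along- and cross-track components, which is not shown here.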
Why was Erika so difficult to get right?
Erika was a weak and disorganized tropical cyclone, and weak systems generally present us (and the computer simulations we consider) with tougher forecast challenges. In fact, the average 72-hr track error for all tropical depressions and weak tropical storms is around 155 n mi – just about exactly what the errors for Erika were. So the track issues weren’t really a surprise to us. Of course, knowing whether such errors were going to occur and how to reduce them in real time wasn’t obvious. If it had been obvious, we would have called an audible and changed the forecast.
What made this particular situation so challenging was wind shear, mainly due to strong upper-level westerly winds in Erika’s environment. These winds tended to displace the storm’s thunderstorms away from its low-level circulation, causing the storm to lack a coherent vertical structure. When this happens, it’s very difficult to tell just how much influence those upper-level winds will have on the storm track. Sometimes storms hold together against wind shear (Andrew of 1992 is a good example), and there were times when Erika seemed to be winning its battle. If it had held together better, it would have taken a track more to the north and ended up being a much stronger system. Obviously, it didn’t play out that way, but that was an outcome far easier to see with the benefit of hindsight.
An additional complication was Puerto Rico and Hispaniola. If Erika had been able to avoid those land masses, it would have been better able to withstand the disruptive effects of wind shear. And early on, we expected Erika to mostly avoid land. In this case, not getting the track right made it much harder to get the intensity right, which made the track forecasts harder yet.
Is there too much reliance on numerical models, and did they fail for Erika?
The improvements in track forecasting over the past few decades are directly attributable to improvements in numerical models, and to the data used to initialize them, to the point where it’s become almost impossible for a human forecaster to consistently outperform the guidance. The modeling community deserves our praise for the tremendous progress they’ve made, not criticism for missing this one. While we approach each forecast attempting to diagnose when and how the models might fail, doing so is exceedingly difficult, and it’s not something we build into our public forecasts unless our confidence is very high.
Human forecasters (including those at NHC) are still able to occasionally outperform the intensity models, mainly because satellite depictions of storm structure can often be used by forecasters more effectively than by models, giving us an edge in certain circumstances. But neither human forecasters nor the models are particularly good at anticipating when thunderstorms in the cyclone core are going to develop and how they’re going to subsequently evolve, especially for weaker cyclones like Erika.
Because the atmosphere is a “chaotic” physical system, small differences in an initial state can lead to very large differences in how that state will evolve with time. This is why the model guidance for Erika was frustratingly inconsistent – sometimes showing a strong hurricane near Florida, while at other times showing a dissipating system in the northeastern Caribbean. It’s going to take a large improvement in our ability to observe in and around the tropical cyclone core (among other things), to better forecast cases like Erika for which storm structure is so important to the ultimate outcome. But our best hope for better forecasts lies in improved modeling–a major goal of the Hurricane Forecast Improvement Program (HFIP).
Given the overall model guidance we received during Erika, it’s hard even now, well after the storm, to see making a substantially different forecast with current capabilities and limitations. In fact, had our first few advisories called for a track much farther south at a much weaker intensity, or even a forecast of dissipation due to interaction with Hispaniola, our partners and the public might rightly have questioned our rationale to go firmly against many model forecasts of a stronger system farther north.
Was the message from NHC muddled?
We think that there might be some ways for NHC to make key aspects of our message easier to find. Although NHC’s Tropical Cyclone Discussions (TCDs) repeatedly talked about the uncertainty surrounding Erika’s future beyond the Caribbean, including the possibility that the cyclone could dissipate before reaching Florida, it does not appear that this was a prominent part of the media’s message to Florida residents. Making key “talking points” more distinctly visible in the TCD and the Public Advisory are options we are considering, as well as enhanced use of NHC’s Twitter and Facebook accounts. Having said that, consumers and especially re-packagers of tropical cyclone forecast information, like our media partners, should take some responsibility for making use of the information that is already available.

We also invite our media partners to take more advantage of the numerous training sessions we offer, mostly during the hurricane offseason. Reaching anyone in the television industry with such training, except for on-camera meteorologists, has proven over the years to be very difficult. We would like to train more reporters, producers, news department staff, executives, etc. so they are more sensitized to forecast uncertainty and how to communicate it with the help of our products, but we realize that a more focused “talking points” approach as described above will probably be needed to assist these busy folks in conveying a consistent message.
An NHC advisory package contains a variety of products, each geared to providing a certain kind of information or to serving a particular set of users. Some of our media partners have expressed concerns over the increasing number of NHC products, but the various wind and water hazards posed by a tropical cyclone cannot be boiled down to one graphic, one scale, or one index. We are, in fact, still in the process of intentionally expanding our product and warning suite to focus more on the individual hazards and promote a more consistent message about those hazards. Even so, two of our products that have been around for many years are still crying out for greater visibility.
We’ve already mentioned the Tropical Cyclone Discussion, a two- to four-paragraph text product that is the window into the forecaster’s thinking and provides the rationale behind the NHC official forecast. In the TCD we talk about the meteorology of the situation, indicate our level of confidence in the forecast, and when appropriate, discuss alternative scenarios to the one represented by the official forecast. Anyone whose job it is to communicate the forecast needs to make the TCD mandatory reading on every forecast cycle.
Some users may not understand the amount of uncertainty that is inherent in hurricane forecasts (although we suspect Florida residents now have a greater appreciation of it than they had two weeks ago). We need to continue to emphasize, and ask our media partners to emphasize, NHC’s Wind Speed Probability Product, available in both text and graphical form, which describes the chances of hurricane- and tropical-storm-force winds occurring at individual locations over the five-day forecast period. Someone looking at that product would have seen that at no point during Erika’s lifetime did the chance of hurricane-force winds reach even 5% at any individual location in the state of Florida, and that the chances of tropical-storm-force winds remained 50-50 or less. In addition, in some of the number crunching we did after the storm, we calculated that the chance of hurricane-force winds occurring anywhere along the coast of Florida never got above 21%.
We realize that at first it seems counterintuitive that we are forecasting a hurricane near Florida while no one in that state has even a 5% chance of experiencing hurricane-force winds. That, however, is the reality of uncertainties in 5-day forecasts, especially for weaker systems like Erika, and the wind speed probabilities reliably convey the wind risk in each community. We did notice a couple of on-air meteorologists referencing the Wind Speed Probabilities, which is great – and the more exposure this product gets, the better. We would like to work with our television partners to help them take advantage of existing ways for many of them to easily bring the wind speed probabilities into their in-house graphics display systems.
A very nice training module exists for folks interested in learning about how to use the Wind Speed Probabilities: https://www.meted.ucar.edu/training_module.php?id=1190. You will need to register for a free COMET/MetEd account in order to access the training module.
Did Floridians over-prepare for Erika?
Anyone who went shopping for water and other supplies once they were in the five-day cone did exactly the right thing (of course, it’s much better to do that at the beginning of hurricane season!). Being in or near the five-day cone is a useful wake-up call for folks to be prepared in case watches or warnings are needed later. But as it turned out, no watches or warnings were ever issued for Florida. In fact, at the time when watches would normally have gone up for South Florida, NHC decided to wait, knowing that there was a significant chance they would never be needed – and that turned out to be the right call.
Watches (which provide ~48 hours notice of the possibility of hurricane or tropical storm conditions) and warnings (~36 hours notice that those conditions are likely) seem to be getting less attention these days, with the focus on a storm ramping up several days in advance of the first watch. While the additional heads-up is helpful, we worry about focusing in on specific potential targets or impacts that far in advance. The watch/warning process begins only 48 hours in advance because that’s the time frame when confidence in the forecast finally gets high enough to justify the sometimes costly actions that an approaching tropical cyclone requires. (Although, we recognize that some especially vulnerable areas sometimes have to begin evacuations prior to the issuance of a watch or a warning, and we and the local Weather Forecast Offices of the National Weather Service directly support emergency management partners as they make such decisions.)
What can NHC do better next time?
While we’d like to make a perfect forecast every time, we know that’s not possible. Failing that, we’re thinking about ways we can improve the visibility of our key messages, particularly those that will help users better understand forecast uncertainty. As noted above, we’re considering adding a clearly labeled list of key messages or talking points to either the TCD or the Tropical Cyclone Public Advisory, or both. We’d also like to try to make increased use of our Twitter accounts (@NHC_Atlantic, @NWSNHC, and @NHCDirector), which so far have been mostly limited to automated tweets or to more administrative or procedural topics. We’re also looking at whether the size of the cone should change as our confidence in a forecast changes (right now the cone size is only adjusted once at the beginning of each season), and thinking about ways to convey intensity uncertainty on the cone graphic. But most of all, we need to keep working to educate the media and the public about the nature of hurricane forecasts generally, and how to get the information they need to make smart decisions when storms threaten.
— James Franklin
The development of Tropical Storm Bill so close to the Texas coast, with the posting of a formal tropical storm warning only about 12 hours before winds of that intensity came ashore on Tuesday June 16th, highlighted a long-standing and well-known limitation in the tropical cyclone program of the National Weather Service (NWS). An ongoing NHC initiative to improve service for such systems is the subject of today’s blog post.
Although there’s nothing new about a tropical cyclone forming on our doorstep, what is new is an increased ability to anticipate it. NHC has greatly enhanced its forecasts of tropical cyclone formation over the past several years, introducing quantitative 48-hr genesis forecasts to the Tropical Weather Outlook in 2008, and extending those forecasts to 120 hours in 2013. In 2014, we introduced a graphic showing the locations of tropical disturbances and the areas where they could develop into a depression or storm over the subsequent five days. Thirty-six hours in advance of Bill’s formation, NHC gave the precursor disturbance a 60% chance of becoming a tropical cyclone, and increased that probability to 80% about 24 hours in advance. While nothing was guaranteed, we were pretty confident a tropical storm was going to form before the disturbance reached the coast. And although we weren’t issuing specific track forecasts for the disturbance, NHC’s new graphical Tropical Weather Outlook (example below) showed where the system was generally headed.
We wouldn’t have had such confidence 20 years ago, or even 5 years ago. Our tropical cyclone warning system, however, was developed over several decades when that confidence didn’t exist, and it doesn’t allow for a watch or warning until a depression or storm actually forms and NHC’s advisories begin; by both policy and software, warning issuances are tied to cyclone advisories. If we had wanted to issue a tropical storm watch for Bill on the morning of Sunday the 14th (48 hours prior to landfall), or a warning that evening (36 hours ahead of landfall), we would have had to pretend that the disturbance in the Gulf of Mexico was a tropical cyclone. Even during the day on Monday, data from an Air Force Reserve Hurricane Hunter aircraft showed that the disturbance had not yet become a depression or storm.
Couldn’t NHC have called the disturbance a tropical storm anyway, in the interest of enhanced preparedness? Yes, but what if the disturbance never becomes a tropical storm – remember, even an 80% chance of formation means the system will fail to develop one time in every five. So naming it early risks the credibility of the NWS and NHC, and endangers a trust we’ve worked for decades to establish. In addition, there are legal and financial consequences to an official designation of a tropical cyclone – consequences that obligate us to call it straight. And finally, as custodians of the tropical cyclone historical record, we have a responsibility to ensure the integrity of that record.
When systems that have the potential to become tropical cyclones pose hazards to life and property, NHC’s best avenue for highlighting those hazards currently is the Tropical Weather Outlook. Ahead of Bill’s formation, the possibility of tropical storm conditions along the middle and upper Texas coast was included in the Sunday evening Outlook, and by early Monday afternoon the Outlooks were saying those conditions were likely. Products issued by NWS local forecast offices (WFOs) carried similar statements.
Although most folks seemed to have gotten the message that a tropical storm was coming, it’s widely thought that the Tropical Weather Outlook and local WFO products don’t carry the visibility and weight of an NHC warning, or of an NHC advisory package with its attendant graphics. In addition, some institutions have preparedness plans that are tied to the presence of warnings. We agree that warnings during the disturbance stage could improve community response, and we’ve been working toward that goal since 2011. In that season, NHC initiated an internal experiment in which the Hurricane Specialists prepare track and intensity forecasts for disturbances with a high likelihood of development, and use these forecasts to determine where watches and warnings would have been appropriate. These internal disturbance forecasts have had some successes and failures, but may now be good enough to make public.
With our colleagues across the NWS, we’re now working through the logistics of expanding the tropical cyclone product and warning suite to accommodate disturbances. One plan under consideration calls for NHC to produce a five-day track and intensity forecast for those disturbances having a high chance of becoming a tropical cyclone, and which pose the threat of bringing tropical-storm-force winds to land areas. The forecasts would be publicly issued through the standard NHC advisory products, including the Public Advisory, Discussion, and Wind Speed Probability Product, along with the forecast cone and the other standard graphics. These advisory packages would be issued at the normal advisory times, and continue until the threat of tropical-storm-force winds over land had diminished. If and when the disturbance became a tropical cyclone, advisory packages would simply continue.
We are still evaluating these and other options for getting tropical cyclone warnings out for potential tropical cyclones. If we do begin issuing forecasts for these systems, we know from our experimental forecasts that they won’t be as accurate as our current public forecasts for tropical cyclones are – and we’ll want to make sure users know about those uncertainties. There are many details to iron out and much technical work to do, but we hope to have this service enhancement in place for the 2017 hurricane season.
— James Franklin
A previous blog entry described the new NHC five-day tropical cyclone formation (or genesis) products. In this blog entry, we discuss the factors that go into these predictions.
The primary tool used at NHC for five-day tropical cyclone genesis forecasts is global numerical modeling. Global models can predict many of the environmental factors that influence tropical cyclone formation, and the skill of these models has been improving with time. More tropical cyclone formations are being forecast with longer lead times, and weather prediction models show fewer “false alarms” than in the past. Recent studies suggest, and forecaster experience seems to confirm, that a consensus of the available model guidance usually outperforms any single model. This “two heads are better than one” approach works as long as the models (or heads) are somewhat independent of one another. In addition, NHC is currently evaluating a few statistical techniques that use the global model output to produce objective guidance designed to assist hurricane specialists in developing the probabilities of formation issued in the Tropical Weather Outlook.
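The consensus idea described above can be illustrated with a toy sketch. To be clear, the model names and probabilities below are invented for illustration, and the simple average is just one way a consensus could be formed, not a description of NHC's actual procedure:

```python
# Toy illustration of a genesis-probability consensus: average the
# probabilities from several (assumed independent) models.
# Model names and numbers are hypothetical, for illustration only.

def consensus_probability(model_probs):
    """Average genesis probabilities from a dict of model guidance."""
    if not model_probs:
        raise ValueError("need at least one model probability")
    return sum(model_probs.values()) / len(model_probs)

guidance = {"ModelA": 0.55, "ModelB": 0.70, "ModelC": 0.60}
print(round(consensus_probability(guidance), 2))  # 0.62
```

The "two heads are better than one" benefit comes from the averaging: individual model errors that are somewhat independent tend to partially cancel when the guidance is combined.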
Kelvin Waves and the Madden-Julian Oscillation
Global model guidance is not the only tool available to NHC forecasters, however. Researchers have learned that a majority of lower-latitude tropical cyclone formations are associated with waves in the atmosphere moving through the global Tropics from west to east. Two particularly important wave types are the Convectively Coupled Kelvin Wave (CCKW), which circumnavigates the equator in about 15 to 20 days, and the Madden-Julian Oscillation (MJO), which transits the globe in 30 to 60 days. These waves are normally initiated by large areas of thunderstorm activity over tropical regions, especially near India and southeastern Asia. These waves are different in both frequency and direction of motion from the better-known tropical waves that originate over Africa and often spawn tropical cyclones as they move westward across the Atlantic and eastern North Pacific basins.
Tropical cyclone formation often accompanies the passage of the “active phase” of either the faster-moving CCKWs or the slower-moving MJO. Figure 1 shows tropical cyclone tracks over a 37-year period in active and inactive phases of the MJO as the wave moves around the globe, along with increased or decreased rainfall anomalies associated with the two phases of the MJO (Zhang 2013). In the figure, the active phase of the MJO for the Atlantic occurs in panel (a), while for the eastern Pacific the active phase occurs in panel (d). The less active phases for these two basins fall in panels (c) and (b), respectively.
This concentration of tropical cyclone activity occurs because each type of wave temporarily makes large-scale environmental conditions, such as vertical wind shear or atmospheric moisture, more conducive for tropical cyclone formation. Although not every wave causes a tropical cyclone to form, pre-existing disturbances have a greater likelihood of developing into tropical cyclones after the passage of a CCKW or the MJO. High-activity periods can last as long as a week or more with the MJO, but are generally followed by days to possibly weeks of little to no activity during the inactive phases of these waves, when large-scale conditions become unfavorable for tropical cyclone formation. Forecasters use real-time atmospheric data and other tools to diagnose the location and motion of these important catalysts for tropical cyclone formation.
Here is an example from the 2014 hurricane season of how forecasters used these atmospheric signals. The graphic below, called a Hovmöller diagram, shows where large areas of rising air (cool colors) and sinking air (warm colors) exist near the equator as a function of time. The dashed black contours depict the active phase of successive CCKWs, and the solid red contours show the inactive phases. In this particular case, forecasters noted that there was a strong CCKW moving through the eastern Pacific in the middle part of October. Extrapolating the wave forward in time, along with numerical model forecasts of the wave’s location and strength, suggested that a tropical cyclone could form within a few days over the far eastern Pacific from a disturbance that was already in the area. The green dot indicates where Tropical Storm Trudy formed, a day or two after the CCKW passed the disturbance. Although CCKW tracking is only a secondary factor in determining a Tropical Weather Outlook forecast, a basic knowledge of this atmospheric phenomenon is an important part of the process.
Forecasters consider many factors when preparing the five-day genesis probabilities for the Tropical Weather Outlook, including explicit forecasts from the global models and knowledge of any ongoing CCKWs or the MJO. In addition, the final NHC forecast also reflects the current trends of the disturbance, which are weighted much more heavily in the two-day outlook but can affect the five-day forecast as well. There are several ongoing research projects that will hopefully yield objective probabilities and other tools designed to help better predict tropical cyclone formation. These tools, in combination with the dynamical guidance from numerical models, should improve the quality of genesis forecasts and perhaps in the next five years extend reliable tropical cyclone formation forecasts from five days to one week.
— Eric Blake and Todd Kimberlain
Thanks to Chidong Zhang and David Zermeno, University of Miami RSMAS, for Figure 1.
Teenagers today seem to enjoy taking words and employing them as a new part of speech, especially if it results in the use of fewer syllables. Thus we have the verb fail used as a noun in place of failure, the verb invite used as a replacement noun for invitation, and so on. This has given us such linguistic classics as what an epic fail or where’s my invite. My teenage daughter has managed the inverse transformation, telling me that she has “no time to piano today”. Texting and Twitter can be blamed for much of this, of course, but the hurricane community’s gift to the Lexicon of Lazy Locutions originated nearly two decades ago. The noun that represents our particular role in the decline and fall of the English language is the subject of today’s blog post.
If you lurk around the dark meteorological corners of the Internet, or even if you just watch weather broadcasts during hurricane season, you’ve probably come across expressions like Invest AL94. With the accent on the first syllable (IN-vest rather than in-VEST), this is not an insider’s instruction to sell your AAPL stock at $100, but rather it’s a reference to a specific “investigative area” – a weather system for which a tropical cyclone forecast center is interested in collecting specialized data sets or running model guidance.
Accounting for Invests
NHC has responsibility for identifying these invests, or disturbances of interest, in the Atlantic basin. NHC and the Joint Typhoon Warning Center (JTWC) have shared responsibility for designating invests in the eastern Pacific, while the Central Pacific Hurricane Center (CPHC) has this responsibility in the central Pacific. The rest of the globe (for this purpose at least) belongs to JTWC.
NHC, CPHC, and JTWC prepare their forecasts and advisories on a computer platform known as the Automated Tropical Cyclone Forecast system (or ATCF). Tropical cyclones are followed on the ATCF using identifiers such as AL032014; this particular identifier would decode as the Atlantic basin’s third tropical cyclone of the 2014 season. Invests are given identifiers using the numbers 90 through 99 in place of the cyclone number, so the first Atlantic invest of 2014 was AL902014, or AL90 for short. After AL992014 is used, we would cycle around and reuse AL902014, so unlike the ATCF identifiers for true tropical cyclones, invest identifiers are not unique.
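The identifier conventions above lend themselves to a short sketch. The function names here are our own, not part of any NHC or ATCF software:

```python
# Sketch of ATCF-style identifier handling, following the conventions
# described above (e.g., AL032014 = Atlantic, cyclone 03, season 2014).
# Helper names are hypothetical, for illustration only.

def decode_atcf_id(atcf_id):
    """Split an identifier like 'AL032014' into (basin, number, year)."""
    basin = atcf_id[:2]
    number = int(atcf_id[2:4])
    year = int(atcf_id[4:])
    return basin, number, year

def next_invest_number(current):
    """Invests cycle through 90-99, then wrap back around to 90."""
    return 90 if current == 99 else current + 1

print(decode_atcf_id("AL032014"))  # ('AL', 3, 2014)
print(next_invest_number(99))      # 90
```

The wraparound in `next_invest_number` is why invest identifiers, unlike true cyclone identifiers, are not unique within a season.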
Once NHC or one of the other forecast centers “opens an invest”, data collection and processing is initiated on several government and academic websites, including those at the Naval Research Laboratory (see example to the right) and the Cooperative Institute for Meteorological Satellite Studies at the University of Wisconsin. Information on these sites, along with the standard suite of models run on the invest, then helps NHC prepare the genesis forecasts that appear in the Tropical Weather Outlook.
It’s important to recognize that the designation of a disturbance as an invest does not correspond to any particular likelihood of development of that system into a tropical cyclone. Indeed, we will open an invest in part to help us determine what that likelihood is. Also, and particularly near the beginning of the season, it’s not uncommon for NHC to create one or more invests solely to test data flow or model processing scripts. The Tropical Weather Outlook should always be consulted to determine the significance or potential threat of an invest disturbance.
No Insider Trading
ATCF databases have traditionally been posted to NHC’s public FTP server to facilitate the exchange of information with modelers and other quasi-operational groups, and to make model guidance available to the private sector. Unfortunately, posting of the ATCF data to the FTP site allowed some pre-decisional information to leak out to those who knew exactly where to look. For example, one could find the command that renumbers the invest system AL902013 to the tropical cyclone AL012013. Renumbering an invest is a process the NHC Hurricane Specialist needs to invoke in order to prepare the first advisory on a tropical cyclone, even though the final decision to release that advisory might not be made for another hour or two. In the early years of the FTP site, these leaks seemed to fly under the community’s radar, but over time they became increasingly well known. Anticipation of new cyclones began to cause problems for us and for our partners in emergency management and the media, some of whom would infer or even prematurely announce that we were going to start advisories. (And indeed, occasionally we’ve had to change our minds and not initiate advisories on a renumbered invest.)
In 2014, NHC made some changes to how data from the ATCF get publicly posted. The most significant of these is the establishment of a blackout period, during which changes made to the ATCF storm ID and some other parameters will not flow to the FTP server. The blackout period begins 90 minutes prior to the nominal advisory release time (e.g., 9:30 a.m.) and ends at the nominal advisory release time (e.g., 11:00 a.m.). Quasi-operational websites that make use of ATCF data will now draw their data from the FTP server rather than NHC’s internal databases. In this way, everyone will be able to learn about an NHC advisory, and know for sure that it’s coming, all at the same time when that advisory is released. We want to emphasize that while the blackout period will restrict the release of pre-decisional information, it will not restrict the distribution of model guidance used by private-sector forecasters.
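The blackout rule is simple enough to sketch. This is an illustrative reading of the rule as described above, not NHC's actual implementation:

```python
# Sketch of the blackout-window rule: updates made within 90 minutes
# of a nominal advisory release time are withheld until that time.
# Times and function name are illustrative only.
from datetime import datetime, timedelta

def in_blackout(now, nominal_advisory_time):
    """True if 'now' falls within the 90 minutes before the advisory."""
    start = nominal_advisory_time - timedelta(minutes=90)
    return start <= now < nominal_advisory_time

advisory = datetime(2014, 8, 1, 11, 0)                       # 11:00 a.m.
print(in_blackout(datetime(2014, 8, 1, 9, 30), advisory))    # True
print(in_blackout(datetime(2014, 8, 1, 9, 29), advisory))    # False
```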
The Closing Bell
One final thought on invests. NHC knows that lots of folks, including non-meteorologists, look at the tropical cyclone models. While we make model data available on our FTP site for use by the meteorological community, and the public can find these data displayed all over the Internet, we deliberately avoid enhancing their visibility or prominence by posting model plots on our own website. This is particularly important for invests, where the model guidance is notoriously poor and erratic, partly because many of these models were never designed to be run on disturbances. NHC’s Hurricane Specialists work hard, based on their knowledge and experience, to interpret all the available models and other data in the formulation of their official forecasts and warnings, and in so doing help NHC continue to be America’s calm, clear, and trusted voice in the eye of the storm. And hopefully stay out of Weird Al Yankovic’s sequel to “Word Crimes.”
— James Franklin
In our last storm surge post, we talked about the need for a storm surge graphic and why we use “above ground level” to communicate storm surge forecasts. Now we’ll discuss how we create the new storm surge graphic.
But first, we need to touch on how forecast uncertainty relates to storm surge forecasting.
Putting All Your Eggs in One Basket
The exact amount of storm surge that any one particular location will get from a storm is dependent on a number of factors, including storm track, storm intensity, storm size, forward speed, shape of the coastline, and depth of the ocean bottom just offshore. Needless to say, it’s a complex phenomenon. Although we’re getting better on some aspects of hurricane forecasting, we still aren’t able to nail down the exact landfall of the storm or exactly how strong and big the storm will be when it reaches the coast. This means that there is a lot of uncertainty involved in storm surge forecasting. Here’s an illustration showing why all of this is important.
Here’s the forecast track for a Category 4 hurricane located southeast of Louisiana and only about 12 hours away from reaching the northern Gulf Coast:
Here’s the question: how much storm surge could this hurricane produce in Mobile, Alabama, and Pensacola, Florida (marked on the map)? If we take this forecast and run it through SLOSH (the National Weather Service’s operational storm surge model), here’s what you get:
The forecast has this hurricane making landfall near Dauphin Island, with the center moving northward just west of Mobile Bay along the black line. You can see from this map that water levels will rise to at least 14 ft. above NGVD29 (the particular reference level we are using in this scenario) in the upper reaches of Mobile Bay while they will rise to about 2 ft. above NGVD29 in the Pensacola area. What’s the problem with this storm surge forecast? It assumes that the track, intensity, and size forecasts of the hurricane will all be perfect. This is rarely, if ever, the case.
Here’s what actually happened with this hurricane. The storm turned ever so slightly toward the east and made landfall about 30 miles east of where the earlier forecast had shown it moving inland. Despite the shift, this was a good track forecast–30 miles is more or less typical for a 12-hour error. So, what kind of storm surge resulted from the actual track of this hurricane? If we take the actual track of the storm and run it through SLOSH, here’s what we get:
Since the center of the hurricane actually moved east of Mobile Bay, winds were pushing water out of the bay, and the water was only able to rise about 4-5 ft. above NGVD29 near Mobile. On the other hand, significantly more water was pushed toward the Pensacola area, with values as high as 12 ft. above NGVD29 in the upper reaches of Pensacola Bay.
This scenario was an actual storm–Hurricane Ivan in 2004. If emergency managers in Pensacola at the time had relied on that single SLOSH map that was based on a perfect forecast (or, put all their eggs in one basket), they would have been woefully unprepared and may not have evacuated enough people away from the coast. Granted, such decisions would have been made more than 12 hours away from landfall, but at those longer lead times, forecast errors are even larger, making storm surge forecasting even more difficult.
If you’re going to put all your eggs in one basket, you might as well scramble them beforehand so that they don’t break when you drop the basket. In a sense, that’s what we do when trying to assess an area’s storm surge risk before a tropical cyclone. Instead of assuming one perfect forecast, we generate many simulated storms weighted around the official forecast–some to the left, some to the right; some faster, some slower; some bigger, some smaller–and then run each of those storms through SLOSH. We then “scramble” the SLOSH output from all storms together and derive statistics that tell us the probability of certain storm surge heights at given locations along the coast.
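The "scramble the eggs" approach can be illustrated with a toy Monte Carlo sketch. Here `run_surge_model` is a hypothetical stand-in for SLOSH (which is a full hydrodynamic model), and the falloff rate, track-error spread, and all other numbers are invented for illustration:

```python
# Toy sketch: run many track-perturbed storms through a (stand-in)
# surge model and compute the probability of exceeding a threshold.
# All names and numbers here are hypothetical, not NHC's actual setup.
import random

def run_surge_model(track_offset_miles):
    # Hypothetical stand-in for SLOSH: surge at one fixed location
    # falls off as the track shifts away from the worst-case point.
    return max(0.0, 14.0 - 0.2 * abs(track_offset_miles))

def surge_exceedance_probability(threshold_ft, n_storms=1000, seed=42):
    """Fraction of simulated storms reaching at least threshold_ft."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_storms):
        offset = rng.gauss(0.0, 30.0)  # perturb the track (miles)
        if run_surge_model(offset) >= threshold_ft:
            hits += 1
    return hits / n_storms

print(surge_exceedance_probability(8.0))
```

In the real product, the perturbations cover intensity, size, and forward speed as well as track, and each simulated storm is run through SLOSH at every coastal grid point; the statistical idea, though, is the same pooling of many plausible outcomes.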
If we go back to our example from Hurricane Ivan, we can see the value of this method in assessing storm surge risk. The image below shows the probability that the storm surge would reach at least 8 ft. above the reference level (NGVD29) for Ivan from the NHC Tropical Cyclone Storm Surge Probability product. The first thing that should jump out at you is that the probability of at least 8 ft. of surge was just about equal in Mobile Bay (60-70% chance) and Pensacola Bay (50-60% chance). The probabilistic approach indicates that both areas were at a significant risk of storm surge, and both areas should have been preparing similarly for the arrival of the storm. Because we accounted for the uncertainty in the official forecast, we were able to assess the true storm surge risk for all areas near the coast.
The Tropical Cyclone Storm Surge Probability product provides the data that are used to create the Potential Storm Surge Flooding map that will be available experimentally beginning in the 2014 hurricane season. In other words, the Potential Storm Surge Flooding map accounts for the uncertainties associated with NHC’s tropical cyclone forecasts. In Part 3 of this storm surge series, we’ll talk more about the map itself and how it should be interpreted.