Skill or Luck?
There’s one thing that many of us are missing right now while we’re occupying ourselves at home: sports. We should have been all set for the playoffs in major league hockey and basketball, and we would be excited about the beginning of the major league baseball and soccer seasons. We also would have been eagerly anticipating some of this spring and summer’s major sporting events, including the Olympics. So let’s dream a little…
When we set out to write this blog post for Inside the Eye, we wanted to show how National Hurricane Center (NHC) forecasters use their skill and expertise to predict the future track of a hurricane. And then it got us thinking, how does luck factor into the equation? In other words, when meteorologists get a weather forecast right, how much of it is luck, and how much of it is forecasters’ skill in correctly interpreting, or even beating, the weather models available to them?
Investment strategist Michael Mauboussin created a “Skill-Luck Continuum” where individual sports, among other activities in life, are placed on a spectrum somewhere between pure skill and pure luck (Figure 1). Based on factors such as the number of games in a season, number of players in action, and number of scoring opportunities in a game or match, athletes and their teams in some sports might have to rely on a little more luck than other sports to be successful. On this spectrum, a sport like basketball would be closest to the skill side (there are a lot of scoring opportunities in a basketball game) whereas a sport like hockey would require a little more luck (there are fewer scoring opportunities in a hockey match, and sometimes you just need the puck to bounce your way). Fortunately for hockey fans, there are enough games in a season for their favorite team’s “unlucky” games to not matter so much.
Figure 1. The Skill-Luck Continuum in Sports, developed by investment strategist Michael Mauboussin.
Where would hurricane forecasting lie on such a continuum? There’s no doubt that luck plays at least some part in weather forecasting too, particularly in individual forecasts when random or unforeseen circumstances could either play in your favor (and make you look like the best forecaster around) or turn against you (and make you look like you don’t know what you’re doing!). But luck is much less of a factor when you consider a lot of forecasts over longer periods of time, where the good and bad circumstances should cancel each other out and true skill shines through (just as in sports). At NHC, we routinely compare our forecasts with weather models over these long periods of time to assess our skill at predicting, for example, the future tracks of hurricanes.
An International Friendly?
From our experience of talking to people about hurricanes and weather models, it seems to be almost common “knowledge” that only two models exist – the U.S. Global Forecast System (GFS) and the European Centre for Medium-Range Weather Forecasts (ECMWF) model. It’s true that those two models are used heavily at NHC and the National Weather Service in general, but there are many more weather models that can simulate a hurricane’s track and general weather across the globe. (Here’s a comprehensive list showing all of the available weather models that are used at NHC today, if you’re interested: https://www.nhc.noaa.gov/modelsummary.shtml.) We’ve also heard and seen people compare the GFS and ECMWF models and talk about which model scenario might be more correct for a given storm. This blog entry summarizes the performances of those models and discusses how, on the whole, NHC systematically outperforms them on predicting the track of a storm.
Below are the most recent three years of data (2017, 2018, and 2019) of Atlantic basin track forecast skill from NHC and the three best individual track models: the GFS, ECMWF, and the United Kingdom Meteorological Office model (UKMET) (Figure 2). Track forecast skill is assessed by comparing NHC’s and each model’s performance to that of a baseline, which in this case is a climatology and persistence model. This model makes forecasts based on a combination of what past storms with similar characteristics–like location, intensity, forward speed, and the time of year–have done (the climatology part) and a continuation of what the current storm has been doing (the persistence part). This model contains no information about the current state of the atmosphere and represents a “no-skill” level of accuracy.
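Skill measured against a no-skill baseline is usually expressed as the percentage by which a forecast reduces the baseline’s error. As a rough sketch of the idea (the error values below are made up for illustration, not actual verification statistics):

```python
def track_skill(forecast_error_nmi: float, baseline_error_nmi: float) -> float:
    """Percent improvement of a forecast's track error over the
    no-skill climatology-and-persistence baseline.
    Positive values mean the forecast beat the baseline."""
    return 100.0 * (baseline_error_nmi - forecast_error_nmi) / baseline_error_nmi

# Hypothetical numbers: a 72-hour forecast with a mean error of
# 110 n mi, verified against a baseline mean error of 220 n mi,
# would have 50% skill; matching the baseline would be 0% skill.
print(track_skill(110.0, 220.0))  # → 50.0
print(track_skill(220.0, 220.0))  # → 0.0
```

Plotting this quantity at each forecast time (12, 24, 36 hours, and so on) is what produces skill diagrams like Figure 2.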
Figure 2. NHC and selected model track forecast skill for the Atlantic basin in 2017, 2018, and 2019.
On the skill diagrams above, lines for models or forecasts that are above other lines are considered to be the most skillful. It can be seen that in each year shown, NHC (black line) outperforms the models and has the greatest skill at most, if not all, forecast times (the black line is above the other colored lines most of the time). Among the models, the ECMWF (red line) has been the best performer, with the GFS (blue line) and UKMET (green line) trading spots for second place.
Yet another metric to estimate how often NHC outperforms the models is called “frequency of superior performance.” Based on this metric, over the last 3 years (2017-19), NHC outperformed the GFS 65% of the time, the UKMET 59% of the time, and the ECMWF 56% of the time. This means that more often than not, NHC is beating these individual models. So the question is, how do the NHC forecasters beat the models?
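In essence, this metric just counts head-to-head wins over all forecast cycles where both the official forecast and the model verified. A minimal sketch of the bookkeeping, using hypothetical error values:

```python
def frequency_of_superior_performance(nhc_errors, model_errors):
    """Fraction of matched forecast cycles in which NHC's track error
    was smaller than the model's. Cycles where either error is missing
    (None) are excluded from the comparison."""
    pairs = [(n, m) for n, m in zip(nhc_errors, model_errors)
             if n is not None and m is not None]
    wins = sum(1 for n, m in pairs if n < m)
    return wins / len(pairs)

# Hypothetical track errors (n mi) for five matched forecast cycles:
nhc = [80, 95, 120, 60, 150]
gfs = [90, 90, 140, 75, 160]
print(frequency_of_superior_performance(nhc, gfs))  # NHC wins 4 of 5 → 0.8
```

A value above 0.5 means the official forecast is beating that model more often than not, which is what the 2017–19 numbers show against the GFS, UKMET, and ECMWF.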
Keep Your Eyes on the Ball
Forecasters at NHC are quite skilled at assessing weather models and their associated strengths and weaknesses. It is that experience and a methodology of using averages of model solutions (consensus) that typically help NHC perform best. If you ever read an NHC forecast discussion and see statements like “the track forecast is near the consensus aids,” or “the track forecast is near the middle of the guidance envelope,” the forecaster believed that the best solution was to be near the average of the models. Although this strategy often works, NHC occasionally abandons this method when something does not seem right in the model solutions. One recent example of this was Tropical Storm Isaac in 2018. The figure below (Figure 3) shows the available model guidance, denoted by different colors, at 2 PM EDT (1800 UTC) on September 9 for Isaac, with the red-brown line representing the model consensus (TVCA).
Figure 3. NHC forecast (dashed black line) and selected model tracks at 2 PM EDT (1800 UTC) September 9, 2018 for then-Tropical Storm Isaac. The solid black line represents the actual track of Isaac and the red-brown line represents the model consensus.
Although the models were in fair agreement that the storm would head westward for some time, a few models diverged by the time Isaac was expected to be near the eastern Caribbean Islands, mostly because they disagreed on how fast Isaac would be moving at that time. Instead of being near the middle of the guidance envelope, NHC placed the forecast on the southern side of the model suite (dashed black line) at the latter forecast times since the forecaster believed that the steering flow would continue to force Isaac westward into the central Caribbean. Indeed, NHC was correct in this case, and in fact, for the entire storm, NHC had very low track errors.
In some cases all of the models turn out to be wrong, which usually causes the official forecast to suffer as well. That was the case for a period during Dorian in 2019. Figure 4 shows many of the available operational models at 8 PM EDT on August 26 (0000 UTC August 27) for then-Tropical Storm Dorian. As you can see by noting the deviation of the colored lines from the solid black line (Dorian’s actual track), none of the models or the official forecast (colored lines) anticipated that Dorian would turn as sharply as it did over the northeastern Caribbean Sea, and no model showed a direct impact to the Virgin Islands, where Dorian made landfall as a hurricane.
Figure 4. NHC forecast (dashed black line) and selected model tracks at 8 PM EDT on August 26 (0000 UTC 27 August), 2019 for then-Tropical Storm Dorian. The solid black line represents the actual track of Dorian.
Figure 5 shows many of the operational models at 2 AM EDT (0600 UTC) on August 30 when Dorian, a major hurricane at the time, was approaching the Bahamas. You can see that all of the models showed Dorian making landfall in south or central Florida in about four days from the time of the model runs, and none of them captured the catastrophic two-day stall that occurred over Great Abaco and Grand Bahama Islands. NHC’s forecast followed the consensus of the models in this case and thus did not initially anticipate Dorian’s long, drawn-out battering of the northwestern Bahamas.
Figure 5. NHC forecast (dashed black line) and selected model tracks at 2 AM EDT (0600 UTC) on August 30, 2019 for Hurricane Dorian. The solid black line represents the actual track of Dorian.
The Undervalued Player? A Consistently Good Field-Goal Kicker
In American football, probably one of the most undervalued players on the field is the kicker. They don’t see much action during the majority of the game. But at the end of close games, who has the best chance to win the game for a team? A dependably accurate field goal kicker. In that vein, it’s not just accuracy that can make NHC’s forecasts “better” than the individual models. Another important factor is how consistent NHC’s predictions are from forecast to forecast compared to those from the models. We looked at consistency by comparing the average difference in the forecast storm locations between predictions that were made 12 hours apart. For example, by how much did the 96-hour storm position in the current forecast change from the 108-hour position in the forecast that was made 12 hours ago (which was interpolated between the 96- and 120-hour forecast positions)? Figure 6 shows this 4-day “consistency,” as well as the 4-day error, plotted together for the GFS, ECMWF, UKMET, and NHC forecasts for the Atlantic basin from 2017-19. It can be seen that NHC is not only more accurate than these models (it’s farthest down on the y-axis), but it is also more consistent (it’s farthest to the left on the x-axis), meaning the official forecast holds steady more than the models do from cycle to cycle. We like to say that we’re avoiding the model run-to-run “windshield wiper” effect (large shifts in forecast track to the left or right) or “trombone” effect (tracks that speed up or slow down) that are often displayed by even the most accurate models.
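The consistency calculation described above boils down to two steps: interpolate the previous cycle’s track to the matching lead time, then measure the great-circle distance to the new forecast position. Here is a simplified sketch of that computation; the positions are invented for illustration, and the midpoint interpolation in latitude/longitude is an approximation of how forecast positions are actually interpolated:

```python
import math

EARTH_RADIUS_NMI = 3440.065  # mean Earth radius in nautical miles

def great_circle_nmi(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in nautical miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_NMI * math.asin(math.sqrt(a))

def consistency_96h(curr_96h, prev_96h, prev_120h):
    """Distance between the current cycle's 96-h position and the
    previous cycle's 108-h position, taken here as the simple midpoint
    of its 96- and 120-h positions. Inputs are (lat, lon) tuples."""
    interp_lat = (prev_96h[0] + prev_120h[0]) / 2
    interp_lon = (prev_96h[1] + prev_120h[1]) / 2
    return great_circle_nmi(curr_96h[0], curr_96h[1], interp_lat, interp_lon)

# Hypothetical positions: the previous cycle had the storm at 25N 70W
# (96 h) and 27N 68W (120 h); if the new 96-h point is 26N 69W, it
# matches the interpolated 108-h position exactly, so the cycle-to-cycle
# change is essentially 0 n mi.
print(round(consistency_96h((26.0, -69.0), (25.0, -70.0), (27.0, -68.0)), 1))
```

Averaging these distances over many forecast cycles gives the x-axis of Figure 6: the smaller the average, the steadier the forecasts are from cycle to cycle.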
Figure 6. 96-hour NHC and model forecast error and consistency for 2017-2019 in the Atlantic basin (change from cycle to cycle).
NHC’s emphasis on consistency is so great that there are times when we knowingly accept that we might be sacrificing a little track accuracy to achieve consistency and a better public response to the threat. An example would be for a hurricane that is forecast to move westward and pose a serious threat to the U.S. southeastern states. Sometimes, such storms “recurve” to the north and then the northeast and move back out to sea before reaching the coast. When the models trend toward such a recurvature, NHC’s forecast will sometimes lag the models’ forecast of a lower threat to land. In these cases, NHC does not want to prematurely take the southeastern states “off the hook”, sending a potentially erroneous signal that the risk of impacts on land has diminished, only to have later forecasts ratchet the threat back up after the public has turned their attention and energies elsewhere if the models, well, “change their mind”. That would be the kind of windshield wiper effect NHC wants to prevent in its own forecasts. Now, there are times when the recurvature does indeed occur. Then, NHC’s track forecasts, which have hung back a little from the models, could end up having larger errors than the models. But NHC can accept having somewhat larger track forecast errors than the models in such circumstances at longer lead times if in doing so it can provide those at risk with a more effective message–achieved in part through consistency.
The superior accuracy and higher levels of consistency of the NHC forecasts are both important characteristics since emergency managers and other decision makers have to make challenging decisions, such as evacuation orders, based on that information. It is not surprising to us that NHC’s forecasts are more consistent than the global models, since forecasters here take a conservative approach and usually make gradual changes from the forecast they inherited from the previous forecaster. Conversely, the models often bounce around more and are not constrained by their previous prediction. And, unlike human forecasters, the models also bear no responsibility or feel remorse when they are wrong!
Filling Out Your Bracket
Accuracy, consistency, and luck are important factors in one particular favorite sport: college basketball. We just passed the time of year when we should have been crowning champions in the men’s and women’s college basketball tournaments. But before those tournaments would have kicked off, “bracketologists” (no known relation to meteorologists!) would have made predictions on which teams would make it into the tournaments and which teams would have been likely to win.
Think of it this way: a team can be accurate in that they have a spectacular winning record during the regular season, but does that mean they are guaranteed to win the tournament, or even advance far? Nope. As is often said, that’s why they play the game. An inconsistent team—one whose performance varies wildly from game to game—has a higher risk of having a bad game and losing to an underdog in the first few rounds, even if their regular season record by itself suggests they should have no problem winning. The problem is, they could have been very lucky in the regular season, winning a lot of close games that could have easily swung the other way. If that luck runs out, the inconsistent team could have an early exit from the tournament. With a consistent team, on the other hand, you pretty much know what kind of performance you’re going to get—good or bad—and that increases confidence in knowing how far in the tournament the team would advance. You’d want to hitch your wagon to a good team that is consistent and hasn’t had to rely on too much luck to get where they are.
The same can be said for hurricane forecasts from NHC and the models. NHC’s track forecasts are more accurate and more consistent than the individual models in the long run, and that fact should increase overall user confidence in the forecasts put out by NHC. Even still, there is always room to improve, and it is hoped that forecasts will continue to become more accurate and consistent in the future. It is always a good idea to read the NHC Forecast Discussion to understand the reasons behind the forecast and to gauge the forecaster’s confidence in the prediction. For more information on NHC forecast and model verification, click the following link: https://www.nhc.noaa.gov/verification/
— John Cangialosi, Robbie Berg, and Andrew Penny
Semper Paratus (Always Ready): A Shared Mission of Watching Over a Vast Blue Ocean
The National Hurricane Center (NHC) has the responsibility for issuing weather forecasts and warnings for a wide expanse of the Atlantic and eastern North Pacific Oceans. Within NHC, the Hurricane Specialist Unit (HSU) issues forecasts for tropical storms and hurricanes in these regions, issues associated U.S. watches and warnings, and provides guidance for the issuance of watches and warnings for international land areas. NHC’s Tropical Analysis and Forecast Branch (TAFB) makes forecasts of wind speeds and wave heights and issues wind warnings year-round for the eastern North Pacific Ocean north of the equator to 30°N, and for the Atlantic Ocean north of the equator to 31°N and west of 35°W (including the Gulf of Mexico and Caribbean Sea). These wind warnings include tropical storms and hurricanes as well as winter storms, tradewind gales, and severe gap-wind events (for example, the “Tehuantepecers” south of Mexico).
The United States Coast Guard (USCG) has areas of responsibility (AORs) that extend well beyond those of NHC, with potential weather hazards affecting the fleet and their missions over the ocean, inland U.S. waterways, and flood-prone U.S. land areas. Although the USCG is responsible for search and rescue missions that may occur due to weather hazards, they are also vulnerable to severe weather and must also protect their own fleet and crews from these hazards.
One of the USCG’s oldest missions and highest priorities is to render aid to save lives and property in the maritime environment. To meet these goals, the United States’ area of search-and-rescue responsibility is divided into internationally recognized inland and maritime regions. There are five Atlantic USCG Search and Rescue Regions (SRRs) (Boston, Norfolk, Miami, New Orleans, and San Juan) and two Pacific USCG SRRs (Alameda and Honolulu) that overlap with NHC’s hurricane and marine areas of responsibility. The other eastern Pacific regions north of the Alameda SRR rarely, if ever, experience hurricane activity. The multi-million square mile area of the agencies’ overlap allows NHC to provide weather hazard Decision Support Services (DSS) for the USCG.
Building Partnerships with the Districts
The National Weather Service (NWS) signed a Memorandum of Agreement (MOA) with the USCG to provide them with weather support. Over the past couple of years, staff at NHC have had numerous discussions with several of the USCG districts in order to build stronger partnerships. These discussions, primarily involving how NHC can better serve the USCG, established criteria for requiring TAFB to provide weather briefings to key decision makers within the USCG. When criteria are met, TAFB provides the relevant USCG District with once- or twice-a-day briefing packages detailing the weather impacts on their area of responsibility. This information provides the USCG districts with the details necessary to make efficient and effective decisions about potential mobilization of their fleet.
2018 Hurricane Season Briefing Support
During the 2018 hurricane season, TAFB provided 30 briefings to USCG Districts 5 (Norfolk), 7 (Miami), 8 (New Orleans), and 11 (Alameda) for the several tropical storms and hurricanes that affected them. These interactions helped to build the relationships between NHC and the USCG districts and aided the districts in making decisions regarding fleet mobilization, conducting search and rescue missions, and preparation for USCG’s land-based assets and personnel. Some of these briefings occurred during rapidly evolving high impact scenarios, including Hurricane Michael. Michael was forecast to become a hurricane within 72 hours of developing into a tropical depression and was forecast to make landfall within 96 hours of its formation. Ultimately, Michael rapidly intensified into a category 5 hurricane only 3½ days after formation, before making landfall on the Florida Panhandle. Hurricane Michael’s track across the east-central Gulf of Mexico straddled the border of USCG Districts 7 (Miami) and 8 (New Orleans), leading to both Districts taking action in advance of the hurricane.
Support for District 5 (Norfolk)
The NWS’s Ocean Prediction Center, the NHC (through TAFB), and the NWS National Operations Center have worked together to provide weekly high-level coordination briefings to USCG District 5 on upcoming hazards focused on the Atlantic Ocean north of 31°N over the following seven days. Each Monday (or Tuesday if Monday is a holiday) by noon Eastern Time, the NWS provides a briefing that covers the mid-Atlantic region from New Jersey through North Carolina. Typically, the briefing covers the area to roughly 65°W, though the exact area covered can vary based on the week’s expected weather hazards. The USCG, in turn, has been sharing the information with mariners, port partners, and industry groups for situational awareness and critical decision-making.
NHC’s TAFB is ready to provide decision support services to the USCG Districts for the 2019 hurricane season. Plans are being developed to continue this type of support for many years to come.
— Andy Latto
Tropical Storm Erika, coming as it did so close to the beginning of the new college and professional football seasons, is a reminder that Monday-morning quarterbacking is nearly as popular an activity as the sport itself. And we at NHC do it too. After every storm we review our operations with an eye toward improving our products and services. Erika is no different, though there’s been more questioning and criticism than usual, with few components of the weather enterprise spared. Some in the media were accused of overinflating the threat, numerical models were bashed, and some public officials were charged with overreacting. NHC’s forecasts were questioned while others lamented that NHC’s voice wasn’t strong enough amid all the chatter. So, in the spirit of searching for a tropical storm eureka, in this blog entry we present some of our own post-storm reflections.
How good were NHC’s forecasts for Erika?
The NHC official forecast errors were larger than average. A preliminary verification shows that the average 72-hr track error for Erika was 153 n mi, about 30% larger than the 5-year average of 113 n mi. And nearly all of this error was a rightward (northward) bias – that is, Erika moved consistently to the left of the NHC forecast track. As for intensity, Erika ended up being weaker than forecast; the 72-hr intensity forecasts were off by about 20 kt on average, and the official 5-day forecasts called for a hurricane over or near Florida until as late as 2 AM Friday, when it became clear that Erika was going to have to deal with the high terrain of Hispaniola.
Why was Erika so difficult to get right?
Erika was a weak and disorganized tropical cyclone, and weak systems generally present us (and the computer simulations we consider) with tougher forecast challenges. In fact, the average 72-hr track error for all tropical depressions and weak tropical storms is around 155 n mi – just about exactly what the errors for Erika were. So the track issues weren’t really a surprise to us. Of course, knowing whether such errors were going to occur and how to reduce them in real time wasn’t obvious. If it had been obvious, we would have called an audible and changed the forecast.
What made this particular situation so challenging was wind shear, mainly due to strong upper-level westerly winds in Erika’s environment. These winds tended to displace the storm’s thunderstorms away from its low-level circulation, causing the storm to lack a coherent vertical structure. When this happens, it’s very difficult to tell just how much influence those upper-level winds will have on the storm track. Sometimes storms hold together against wind shear (Andrew of 1992 is a good example), and there were times when Erika seemed to be winning its battle. If it had held together better, it would have taken a track more to the north and ended up being a much stronger system. Obviously, it didn’t play out that way, but that was an outcome far easier to see with the benefit of hindsight.
An additional complication was Puerto Rico and Hispaniola. If Erika had been able to avoid those land masses, it would have been better able to withstand the disruptive effects of wind shear. And early on, we expected Erika to mostly avoid land. In this case, not getting the track right made it much harder to get the intensity right, which made the track forecasts harder yet.
Is there too much reliance on numerical models, and did they fail for Erika?
The improvements in track forecasting over the past few decades are directly attributable to improvements in numerical models, and to the data used to initialize them, to the point where it’s become almost impossible for a human forecaster to consistently outperform the guidance. The modeling community deserves our praise for the tremendous progress they’ve made, not criticism for missing this one. While we approach each forecast with an attempt to diagnose when and how the models might fail, it is exceedingly difficult, and it’s not something we do in our public forecasts unless our confidence is very high.
Human forecasters (including those at NHC) are still able to occasionally outperform the intensity models, mainly because satellite depictions of storm structure can often be used by forecasters more effectively than by models, giving us an edge in certain circumstances. But neither human forecasters nor the models are particularly good at anticipating when thunderstorms in the cyclone core are going to develop and how they’re going to subsequently evolve, especially for weaker cyclones like Erika.
Because the atmosphere is a “chaotic” physical system, small differences in an initial state can lead to very large differences in how that state will evolve with time. This is why the model guidance for Erika was frustratingly inconsistent – sometimes showing a strong hurricane near Florida, while at other times showing a dissipating system in the northeastern Caribbean. It’s going to take a large improvement in our ability to observe in and around the tropical cyclone core (among other things) to better forecast cases like Erika, for which storm structure is so important to the ultimate outcome. But our best hope for better forecasts lies in improved modeling–a major goal of the Hurricane Forecast Improvement Program (HFIP).
Given the overall model guidance we received during Erika, it’s hard even now, well after the storm, to see making a substantially different forecast with current capabilities and limitations. In fact, had our first few advisories called for a track much farther south at a much weaker intensity, or even a forecast of dissipation due to interaction with Hispaniola, our partners and the public might rightly have questioned our rationale to go firmly against many model forecasts of a stronger system farther north.
Was the message from NHC muddled?
We think that there might be some ways for NHC to make key aspects of our message easier to find. Although NHC’s Tropical Cyclone Discussions (TCDs) repeatedly talked about the uncertainty surrounding Erika’s future beyond the Caribbean, including the possibility that the cyclone could dissipate before reaching Florida, it does not appear that this was a prominent part of the media’s message to Florida residents. Making key “talking points” more distinctly visible in the TCD and the Public Advisory are options we are considering, as well as enhanced use of NHC’s Twitter and Facebook accounts. Having said that, consumers and especially re-packagers of tropical cyclone forecast information, like our media partners, should take some responsibility for making use of the information that is already available. We also invite our media partners to take more advantage of the numerous training sessions we offer, mostly during the hurricane offseason. Reaching anyone in the television industry with such training, except for on-camera meteorologists, has proven over the years to be very difficult. We would like to train more reporters, producers, news department staff, executives, etc. so they are more sensitized to forecast uncertainty and how to communicate it with the help of our products, but we realize that a more focused “talking points” approach as described above will probably be needed to assist these busy folks in conveying a consistent message.
An NHC advisory package contains a variety of products, each geared to providing a certain kind of information or to serving a particular set of users. Some of our media partners have expressed concerns over the increasing number of NHC products, but the various wind and water hazards posed by a tropical cyclone cannot be boiled down to one graphic, one scale, or one index. We are, in fact, still in the process of intentionally expanding our product and warning suite to focus more on the individual hazards and promote a more consistent message about those hazards. Even so, two of our products that have been around for many years are still crying out for greater visibility.
We’ve already mentioned the Tropical Cyclone Discussion, a two- to four-paragraph text product that is the window into the forecaster’s thinking and provides the rationale behind the NHC official forecast. In the TCD we talk about the meteorology of the situation, indicate our level of confidence in the forecast, and when appropriate, discuss alternative scenarios to the one represented by the official forecast. Anyone whose job it is to communicate the forecast needs to make the TCD mandatory reading on every forecast cycle.
Some users may not understand the amount of uncertainty that is inherent in hurricane forecasts (although we suspect Florida residents now have a greater appreciation of it than they had two weeks ago). We need to continue to emphasize, and ask our media partners to emphasize, NHC’s Wind Speed Probability Product, available in both text and graphical form, which describes the chances of hurricane- and tropical-storm-force winds occurring at individual locations over the five-day forecast period. Someone looking at that product would have seen that at no point during Erika’s lifetime did the chance of hurricane-force winds reach even 5% at any individual location in the state of Florida, and that the chances of tropical-storm-force winds remained 50-50 or less. In addition, in some of the number crunching we did after the storm, we calculated that the chance of hurricane-force winds occurring anywhere along the coast of Florida never got above 21%.
We realize that at first it seems counterintuitive that we are forecasting a hurricane near Florida while no one in that state has even a 5% chance of experiencing hurricane-force winds. That, however, is the reality of uncertainties in 5-day forecasts, especially for weaker systems like Erika, and the wind speed probabilities reliably convey the wind risk in each community. We did notice a couple of on-air meteorologists referencing the Wind Speed Probabilities, which is great – and the more exposure this product gets, the better. We would like to work with our television partners to help them take advantage of existing ways for many of them to easily bring the wind speed probabilities into their in-house graphics display systems.
A very nice training module exists for folks interested in learning about how to use the Wind Speed Probabilities: https://www.meted.ucar.edu/training_module.php?id=1190#.VenooLQ2KfQ. You will need to register for a free COMET/MetEd account in order to access the training module.
Did Floridians over-prepare for Erika?
Anyone who went shopping for water and other supplies once they were in the five-day cone did exactly the right thing (of course, it’s much better to do that at the beginning of hurricane season!). Being in or near the five-day cone is a useful wake-up call for folks to be prepared in case watches or warnings are needed later. But as it turned out, no watches or warnings were ever issued for Florida. In fact, at the time when watches would normally have gone up for South Florida, NHC decided to wait, knowing that there was a significant chance they would never be needed – and that turned out to be the right call.
Watches (which provide ~48 hours’ notice of the possibility of hurricane or tropical storm conditions) and warnings (~36 hours’ notice that those conditions are likely) seem to be getting less attention these days, with the focus on a storm ramping up several days in advance of the first watch. While the additional heads-up is helpful, we worry about focusing in on specific potential targets or impacts that far in advance. The watch/warning process begins only 48 hours in advance because that’s the time frame when confidence in the forecast finally gets high enough to justify the sometimes costly actions that an approaching tropical cyclone requires. (Although, we recognize that some especially vulnerable areas sometimes have to begin evacuations prior to the issuance of a watch or a warning, and we and the local Weather Forecast Offices of the National Weather Service directly support emergency management partners as they make such decisions.)
What can NHC do better next time?
While we’d like to make a perfect forecast every time, we know that’s not possible. Failing that, we’re thinking about ways we can improve the visibility of our key messages, particularly those that will help users better understand forecast uncertainty. As noted above, we’re considering adding a clearly labeled list of key messages or talking points to either the TCD or the Tropical Cyclone Public Advisory, or both. We’d also like to try to make increased use of our Twitter accounts (@NHC_Atlantic, @NWSNHC, and @NHCDirector), which so far have been mostly limited to automated tweets or to more administrative or procedural topics. We’re also looking at whether the size of the cone should change as our confidence in a forecast changes (right now the cone size is only adjusted once at the beginning of each season), and thinking about ways to convey intensity uncertainty on the cone graphic. But most of all, we need to keep working to educate the media and the public about the nature of hurricane forecasts generally, and how to get the information they need to make smart decisions when storms threaten.