
Page 1: How Good Does a Forecast Really Need To Be?

NFUSE Conference Call 4/11/07
David Myrick
Western Region Headquarters
Scientific Services Division

Page 2: Motivating Question

• Can we use uncertainty information as a threshold for gauging when a forecast is good enough?
– This is an informal talk!
– Lots of examples
– Approach the question from the viewpoint of observational uncertainty

Page 3: Points → Area (Grid Boxes)

• No longer forecasting for 8-10 CCF points

• Each CWA – 1000’s of 2.5 or 5 km grid boxes

• Twofold need for grid-based verification:
– Forecaster feedback across the entire grid
– Identifying ways to evolve our services to focus more attention on high impact events

Page 4: WR Service Improvement Project

• Initially began as a grid-based verification project using BOIVerify

• Morphed into learning how we can evolve our services to focus more effort on high impact events

• Project got us thinking about: “What is a good forecast for a small area?”

Page 5: Observations

• Grid-based verification requires an objective analysis based on ASOS & non-ASOS observations

• Lots of known problems with surface & analysis data

• Ob = Value ± Uncertainty

Page 6: Observational Errors

• Instrument errors

• Gross errors

• Siting errors

• Errors of “representativeness”

Photo: J. Horel

Page 7: Errors of “representativeness”

• Observation is accurate
– Reflects synoptic & microscale conditions
• But… the microscale phenomena it captures are not resolvable by the analysis or model
• Example: cold pool in narrow valley
– Observation on valley floor may be correct
– Not captured by analysis system

Page 8: Representativeness Error: Temperature (°C) Example

[Map (www.topozone.com) of the adjacent Tooele and Rush Valleys with observed temperatures of -1°C and +9°C, illustrating how nearby sites can legitimately differ]

Page 9: Variability in Observations

• Examples - WR/SSD RTMA Evaluation

• Comparing analysis solutions along a terrain profile near SLC, UT

• ~70 mesonet obs in a 60 x 60 km area

[Figure: terrain profile spanning ~60 km from the Great Salt Lake to the Wasatch Mountains]

Page 10: Large Spread in Observations

>11°C spread between 1400–1700 m

How do we analyze this?

Page 11: Objective Analysis 101

• Analysis Value = Background Value + Observation Corrections

• Analysis Errors come from:
– Errors in the background field
– Observational errors
• A “good” analysis takes into account the uncertainty in the obs & background
– A “best fit” to the obs
– Won’t always match the obs (see the sketch below)
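A minimal sketch of this blending for a single grid point, assuming simple optimal-interpolation-style weights (the function, variable names, and error variances are illustrative, not from the talk):

```python
def analyze_point(background, ob, bg_var, ob_var):
    """Blend a background value with one observation.

    The correction is weighted by the background error variance relative
    to the total error variance, so an uncertain ob nudges the analysis
    only part of the way toward the observed value.
    """
    weight = bg_var / (bg_var + ob_var)      # 0 = trust background, 1 = trust ob
    analysis = background + weight * (ob - background)
    analysis_var = (1.0 - weight) * bg_var   # analysis uncertainty shrinks
    return analysis, analysis_var

# Example: 5 degC background (error variance 4), 8 degC ob (error variance 1)
value, var = analyze_point(5.0, 8.0, bg_var=4.0, ob_var=1.0)
print(value, var)  # 7.4 0.8 -- pulled toward the ob, but deliberately not matching it
```

This is exactly why a “good” analysis won’t always match the obs: with nonzero observation error, the best blend stops short of the observed value.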

Page 12: Forecast Verification

• Forecasters are comfortable with:
– Verification against ASOS obs
– Assessing forecast skill vs. MOS
• But is judging a forecast against a few points without any regard for observational and representativeness errors really the scientific way to verify forecasts?
• Is there a better way?
• Can we define a “good enough” forecast?

Page 13: Proposal

• Evaluate grid-based forecasts vs. RTMA

• Use RTMA to scientifically assign uncertainty

• Reward forecasts that are within the bounds of analysis uncertainty

Page 14: RTMA Uncertainty Estimates

• RTMA/AOR provides a golden opportunity to revamp the verification program
• Analysis uncertainty varies by location
• Techniques under development at EMC to assign analysis uncertainty to RTMA:
– Backing out an estimate of the analysis error by taking the inverse of the Hessian of the analysis cost function (sketched in standard notation below)
– Cross validation (expensive)
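For reference, a sketch of the inverse-Hessian idea in textbook 3D-Var notation (this is the standard variational result, not a formula taken from the slides). The analysis minimizes a cost function J, and the curvature of J at the minimum measures how tightly the analysis is constrained:

```latex
J(x) = \tfrac{1}{2}(x - x_b)^{\mathrm{T}} \mathbf{B}^{-1} (x - x_b)
     + \tfrac{1}{2}(Hx - y)^{\mathrm{T}} \mathbf{R}^{-1} (Hx - y)

\nabla^2 J = \mathbf{B}^{-1} + H^{\mathrm{T}} \mathbf{R}^{-1} H
\qquad \Longrightarrow \qquad
\mathbf{A} \approx \bigl(\mathbf{B}^{-1} + H^{\mathrm{T}} \mathbf{R}^{-1} H\bigr)^{-1}
```

Here x_b is the background, y the observations, H the observation operator, and B and R the background and observation error covariances; the diagonal of the analysis error covariance A gives the gridpoint uncertainty. Cross validation instead withholds each observation, re-runs the analysis, and checks the misfit at the withheld site, which is why it is expensive.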

Page 15: Example

• Verify forecasts based on the amount of uncertainty that exists in an analysis

• Example:
– Forecast = 64°F
– Analysis Value = 66°F
– Analysis Uncertainty = ±3°F
– No penalty for forecasts between 63–69°F (the forecast fell in the “good enough” range; see the sketch after this list)
– This is a “distributions-oriented” approach…
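A minimal sketch of this pointwise check (the function name is mine, and the grid values below are made up for illustration); with numpy it applies elementwise to whole grids as well as single points:

```python
import numpy as np

def good_enough(forecast, analysis, uncertainty):
    """True where |analysis - forecast| <= uncertainty, i.e. the forecast
    cannot be distinguished from the analysis given the analysis error."""
    return np.abs(np.asarray(analysis) - np.asarray(forecast)) <= np.asarray(uncertainty)

# The slide's numbers: 64 F forecast vs. 66 F analysis +/- 3 F -> no penalty
print(good_enough(64.0, 66.0, 3.0))   # True

# Illustrative grid of forecasts, analyses, and per-point uncertainties
fcst = np.array([64.0, 58.0, 71.0])
rtma = np.array([66.0, 63.0, 72.0])
unc  = np.array([3.0, 2.0, 2.0])
print(good_enough(fcst, rtma, unc))   # [ True False  True]
```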

Page 16: “Distributions-oriented” forecast verification

• Murphy and Winkler (1987) – original paper

• Brooks and Doswell (1996) – reduced the dimensionality problem by using wider bins

Page 17: Problem with “distributions” approach

• Brooks and Doswell (1996) example used 5°F bins

• Set up bins: -5 to 0°F, 0 to 5°F, 5 to 10°F, etc.

• Forecast = 4.5°F

• Verification = 0.5°F = good forecast (same bin, even though the error is 4°F)

• Verification = 5.5°F = bad forecast (adjacent bin, even though the error is only 1°F)

Page 18: Myrick and Horel (2006)

• Verified NDFD grid-based forecasts using floating bins whose width was based on the observational uncertainty (~2.5°C); the contrast with fixed bins is sketched below
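A small sketch contrasting the two binning approaches with the numbers from the previous slide (the function names are mine; I treat the ~2.5-degree bin width as the allowable absolute departure, and the units are illustrative since the slides quote ~2.5°C while the bin example is in °F):

```python
import math

def same_fixed_bin(forecast, verification, width=5.0):
    """Fixed bins with edges at ..., -5, 0, 5, 10, ...: the pair counts as
    'good' only when both values land in the same bin, regardless of error."""
    return math.floor(forecast / width) == math.floor(verification / width)

def within_floating_bin(forecast, verification, tolerance=2.5):
    """Floating bin centered on the verifying value: 'good' whenever the
    absolute error is within the stated tolerance."""
    return abs(forecast - verification) <= tolerance

# Forecast of 4.5 against the two verifying values from the previous slide:
print(same_fixed_bin(4.5, 0.5), within_floating_bin(4.5, 0.5))  # True False
print(same_fixed_bin(4.5, 5.5), within_floating_bin(4.5, 5.5))  # False True
```

The fixed bins call the 4-degree miss good and the 1-degree miss bad; the floating bin gets both cases right, which is the motivation for centering the bin on the verifying value.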

Page 19: Temperature (°F) Forecast Example

[Figure: example forecast, RTMA, and RTMA uncertainty values (°F) along a profile running from a populated valley up into the mountains. Green = Forecasts are “good enough”; Red = abs(RTMA – Forecast) > Uncertainty]

Page 20: Summary

• Challenge: How do we define a “good enough” forecast?

• Proposal:
– Verify against RTMA ± Uncertainty (uncertainty based on observational, representativeness, & analysis errors)
– Give the forecaster credit for forecast areas that are within the uncertainty

• Goal: Provide better feedback as to which forecast areas are “good enough” and which areas need more attention

Page 21: Special Thanks!

• Tim Barker (BOI WFO)
• Brad Colman (SEW WFO)
• Kirby Cook (SEW WFO)
• Andy Edman (WR/SSD)
• John Horel (Univ. Utah)
• Chad Kahler (WR/SSD)
• Mark Mollner (WR/SSD)
• Aaron Sutula (WR/SSD)
• Ken Pomeroy (WR/SSD)
• Manuel Pondeca (NCEP/EMC)
• Kevin Werner (WR/SSD)

Page 22: References

Brooks H. E., and C. A. Doswell, 1996: A comparison of measures-oriented and distributions-oriented approaches to forecast verification. Wea. Forecasting, 11, 288–303.

Murphy A. H., and R. L. Winkler, 1987: A general framework for forecast verification. Mon. Wea. Rev., 115, 1330–1338.

Myrick, D. T., and J. D. Horel, 2006: Verification of surface temperature forecasts from the National Digital Forecast Database over the Western United States. Wea. Forecasting, 21, 869–892.

Representativeness Errors – Western Region Training Module: http://ww2.wrh.noaa.gov/ssd/digital_services/training/Rep_Error_basics_final

Western Region Service Evolution Project Internal Page: http://ww2.wrh.noaa.gov/ssd/digital_services/