DTC/HWT COLLABORATION: DEMONSTRATION OF MET AT HWT SPRING EXPERIMENTS
Tara Jensen for DTC Staff 1, Steve Weiss 2, Jack Kain 3, Mike Coniglio 3
1 NCAR/RAL and NOAA/GSD, Boulder, Colorado, USA
2 NOAA/NWS/Storm Prediction Center, Norman, Oklahoma, USA
3 NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma, USA
What is HWT?
NOAA Hazardous Weather Testbed (HWT)
NOAA National Severe Storms Laboratory (NSSL)
NOAA Storm Prediction Center (SPC)
Cooperative Institute for Mesoscale Meteorological Studies (CIMMS)
BRINGING RESEARCH to FORECAST OPERATIONS
The mutual interests of forecasters from the SPC, researchers from NSSL, and collocated joint research partners from CIMMS inspired the testbed's formation.
What is the Spring Experiment?
Goal: Give forecasters a first-hand look at the latest research concepts and products; immerse researchers in the challenges, needs, and constraints of front-line forecasters.
Approach: Forecast teams gather in Norman each week from late April to early June. Each day consists of:
- Daily briefing
- Review of previous day's forecast
- Selection of current day's forecast area
- Forecasters split into 2 teams to predict the chance of severe weather between 20 UTC and 04 UTC (two periods: 20-00 UTC and 00-04 UTC)
Years: 2000, 2001, 2002, 2003, 2004, 2005, 2007, 2008, 2009
Spring Experiment – BRINGING RESEARCH to FORECAST OPERATIONS
2008: Demonstration and first on-line system
Goal: Demonstrate use of objective metrics in Spring Experiment format
2009: Expanded evaluation with results in real-time
Goal: Assess impact of radar assimilation on forecasts
DTC Collaboration with HWT
MET Components
Grid-Stat – Traditional Verification
Statistics for dichotomous variables, including:
- Frequency Bias
- Gilbert Skill Score
- Critical Success Index
- PODy
- FAR
[Figure: overlapping Forecast and Observation areas, showing Hits (H), Misses (M), and False alarms (F)]
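The traditional scores listed above all derive from 2x2 contingency-table counts of hits, misses, false alarms, and correct negatives. A minimal sketch (illustrative only, not MET's implementation) of how each score follows from those counts:

```python
# Sketch: traditional dichotomous verification scores from 2x2
# contingency-table counts, using the standard textbook definitions.
def traditional_scores(hits, misses, false_alarms, correct_negatives):
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                    # probability of detection (PODy)
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    # Gilbert Skill Score (ETS): hits adjusted for those expected by chance
    hits_random = (hits + misses) * (hits + false_alarms) / total
    gss = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "CSI": csi, "BIAS": bias, "GSS": gss}
```

The counts here are hypothetical inputs; in Grid-Stat they come from thresholding the forecast and observation fields at each grid point.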
MODE – Spatial Verification
Once objects are identified:
- Traditional stats may be calculated
- Properties of the objects may also be calculated, including: intersection area, area ratio, centroid distance, angle difference, percent coverage, median of maximum interest, intensity quartiles
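To make the object attributes concrete, here is an illustrative sketch (not MODE itself, and with simplified definitions) of a few pairwise attributes computed from binary masks of one forecast object and one observed object:

```python
import numpy as np

# Sketch: simple object-pair attributes from boolean masks on a common grid.
def object_pair_attributes(fcst_mask, obs_mask):
    fy, fx = np.nonzero(fcst_mask)
    oy, ox = np.nonzero(obs_mask)
    fcst_area, obs_area = fy.size, oy.size
    # centroid distance, in grid units
    centroid_dist = np.hypot(fy.mean() - oy.mean(), fx.mean() - ox.mean())
    # area ratio: smaller area over larger, so it lies in (0, 1]
    area_ratio = min(fcst_area, obs_area) / max(fcst_area, obs_area)
    # intersection area: grid squares covered by both objects
    intersection = int(np.logical_and(fcst_mask, obs_mask).sum())
    return {"centroid_dist": float(centroid_dist),
            "area_ratio": area_ratio,
            "intersection_area": intersection}
```

In MODE these attributes feed fuzzy-logic interest functions that decide which forecast and observed objects are matched.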
Results
2008 PRELIMINARY RESULTS
Fcst vars: 1-hr accumulated precipitation forecasts
Models: 2 high-resolution models
- EMC-WRF 4 km (NMM)
- NSSL-WRF 4 km (ARW)
Obs: NEXRAD Stage II QPE
User interface: available toward end of Experiment
- Traditional stats aggregated by day, threshold, and lead time
- Spatial stats (MODE output) available for each day
DTC participation: 2 people attended the Experiment for a week
Traditional – Gilbert Skill Score
Results were aggregated over the Spring Experiment time period and the median value was calculated.
0-12 hours: NSSL shows slightly higher skill for lead times of 0-12 hours.
12-36 hours: For light precip, EMC exhibits slightly greater skill; for heavier precip, the NSSL model has greater skill.
Maximum skill: Skill appears to peak between 8-12 hours for lighter precip and 5-6 hours for heavier precip.
Gilbert Skill Score (Equitable Threat Score) Measures the fraction of forecast events that were correctly predicted, adjusted for hits associated with random chance
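The chance adjustment described above can be written explicitly. With hits $H$, misses $M$, false alarms $F$, and total grid points $T$, a standard formulation is:

```latex
\mathrm{GSS} = \frac{H - H_{\mathrm{random}}}{H + M + F - H_{\mathrm{random}}},
\qquad
H_{\mathrm{random}} = \frac{(H + M)(H + F)}{T}
```

$H_{\mathrm{random}}$ is the number of hits expected if forecast and observed events were placed independently at random.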
[Figure: MODE object identification — forecast (left) and observed (right) fields, each with objects labeled 1-3]
Case Study: 11 June 2008
Fcst: NSSL-ARW f025 1-hr accumulated precipitation
Obs: NEXRAD Stage 2 1-hr precipitation estimate
A large proportion of small simple objects were misses or false alarms (i.e., not matched).
OBJECT SUMMARY
- Forecast objects: 17
- Observed objects: 18
- False alarms: 11
- Misses: 11
- Composites: 3
COMPOSITE OBJECT 1
Attribute              Forecast    Observed    Difference    % Difference
Area (grid squares)      11,866       9,683         2,183             23%
Intensity50 (mm)           4.34        4.20         -0.14             -3%
Intensity90 (mm)          19.09       18.70          0.39              2%
Intensity Sum            87,084      78,095         8,989             12%
TRADITIONAL SCORES
- POD: 0.22
- FAR: 0.86
- CSI: 0.09
- Gilbert (ETS): 0.08
- Bias: 1.6
MODE Spatial Scores
2009 PRELIMINARY RESULTS
Fcst vars: composite reflectivity; 1-hr accumulated precipitation forecasts
Models: 3 high-resolution models
- CAPS CN (SSEF 4 km ensemble member, ARW core, radar assimilation)
- CAPS C0 (SSEF 4 km ensemble member, ARW core, no radar assimilation)
- HRRR 3 km (ARW core, radar assimilation)
Obs: NSSL-NMQ Q2 QPE and composite reflectivity products
User interface:
- Tailored around HWT specifications and displays
- Traditional and spatial statistics available for individual forecast runs
- MODE graphical output placed into a multi-panel looped display
DTC participation: 1 person on-site each week; provided a short tutorial on MET and how to interpret results
Prototype Database and Display System
System developed for the HWT collaboration:
1. Pulls in forecast and observation files
2. Runs MET (Grid-Stat and MODE) using pre-defined configurations
3. Loads a database with MET output
4. Generates static graphics for the website
5. Prototype interactive evaluation tool in development
Flow: Forecast and Obs → Run MET (Grid-Stat, MODE) → Database of MET output → Static graphics display / Prototype interactive display
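Steps 1-2 of the system amount to invoking the MET tools for each forecast/observation pair. A minimal orchestration sketch is below; the tool names (grid_stat, mode) are MET's, but all file paths, config names, and directory layout are assumptions for illustration:

```python
import subprocess
from pathlib import Path

def build_met_commands(fcst, obs, config_dir, out_dir):
    """Build the command lines for one forecast/obs pair (paths hypothetical)."""
    return [
        ["grid_stat", fcst, obs, f"{config_dir}/GridStatConfig", "-outdir", out_dir],
        ["mode", fcst, obs, f"{config_dir}/MODEConfig", "-outdir", out_dir],
    ]

def run_pipeline(fcst, obs, config_dir, out_dir):
    # Step 2: run MET with pre-defined configurations
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for cmd in build_met_commands(fcst, obs, config_dir, out_dir):
        subprocess.run(cmd, check=True)
    # Step 3 would then parse the output files in out_dir into a database
```

Keeping the configurations fixed and pre-defined, as the system does, ensures every model run is scored identically.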
[Figure: 14 May 2009, init 00 UTC — MODE output with convolution radius 5 grid squares (20 km), threshold 30 dBZ]
2009 Preliminary Results from Grid-Stat: Gilbert Skill Score
F00-F03: Radar assimilation yields clearly improved skill during f00-f03, even though skill decreases over this period.
F04 and beyond: Skill trends for both models are similar regardless of initialization, suggesting model physics dominates. This is consistent with the idea that it takes 5-6 hours to spin up a model from a cold start.
Summary
Overall: The objective verification provided by the HWT/DTC collaboration has been a very positive addition to the Spring Experiment process.
2008 preliminary results: Over 36 hours, there is no "clear winner" between EMC-4km and NSSL-4km; each model appears to excel during different parts of the forecast cycle.
2009 preliminary results: Radar assimilation appears to improve skill scores in the first few hours, but provides diminishing returns after that. The no-radar-assimilation forecast closes the skill gap between hours 4-6, supporting the subjective evaluation that it takes 4-6 hours for a model to spin up from a cold start.