Information on Speaker Notes
There are speaker notes on some of the slides. The notes provide more information about the content of the slide. An icon indicating there are notes for a slide is found in the upper left-hand corner of the slide. You can activate the notes window in two ways:
• Hold the cursor over the icon and the text will appear.
• Click on the icon and a text box will open. Close it with a click outside the text box. The text box can be moved if needed by clicking the top of the box and dragging.
Levee Safety Program
• Update Standard Project Storm Criteria
• Precipitation-Frequency (NOAA-14, TP40, NOAA II)
Extreme Storm Database • Extreme Storm Data Archiving/Retrieval • Analysis of Recent Extreme Storm Events • Linked with HEC-HMS and HEC-MetVue
Computation of Areal Reduction Factors
Monte Carlo Analysis for Frequency Curve Extension
Atmospheric Modeling of Extreme Precipitation
Presenter
Presentation Notes
The USACE Extreme Storm team supports the Dam and Levee Safety program by performing hypothetical storm studies and updating guidance. A major undertaking is the development of the Extreme Storm Database. This database will contain information about historic storms, such as the raw data, depth-area-duration curves, and contour maps (images and GIS format). Another area the team is exploring is a methodology for developing areal reduction factors for large spatial storm events. With the addition of Monte Carlo computation capabilities to HEC software, guidance and example applications are necessary. Finally, the use of atmospheric models for estimation of the PMP is being explored.
BUILDING STRONG®
NOAA HMR Regional Coverage
Location of USACE Dams
Presenter
Presentation Notes
USACE owns and operates a large portfolio of more than 650 dams. The median age of the portfolio is about 50 years with many dams needing rehabilitation.
Site Specific Studies
Site Specific PMP Studies
• Completed: Moose Creek, Bluestone, Whittier Narrows
• In progress: Martis Creek, Ft Peck, Garrison, Kajaki
Antecedent Storm Studies
Atmospheric Modeling
Presenter
Presentation Notes
USACE has completed three site specific PMP studies using the same methodology as the National Weather Service. Site specific studies are being requested for dams that are hydrologically deficient (overtopping due to the PMF is an issue). Antecedent storm studies are used to define the antecedent conditions prior to the PMF simulation. An important decision is the amount of water in the reservoir at the beginning of the PMF simulation: is it reasonable to assume all the flood storage is filled prior to the PMF, or is there a better assumption? Also, the Corps would like to use regional atmospheric models to help better define the PMP for a region or watershed. This is especially true in the West, where the dynamics between the terrain and the atmosphere make it challenging to extrapolate historic storms to a PMP estimate.
Status of NOAA HMRs
HMR   Publication Date   Latest Storm Used
49    1977               Sept. 3-7, 1970
51    June 1978          June 19-23, 1972
55A   June 1988          Aug. 1-4, 1978
57    October 1994       Dec. 24-26, 1980
59    February 1999      Feb. 14-19, 1986
Rank   Rainfall (inches)   Date            Location         Storm
1      43                  7/24-25 1979    Alvin            TS Claudette
2      42                  8/1-2 1978      Kerr County      TS Amelia
3      38.2                9/9-10 1921     Thrall           Unnamed
4      36                  7/1-2 1932      Ingram           None
5      35                  6/27-28 1954    Pandale          Alice
6      34                  9/20 1967       Nueces River     Beulah
7      33                  8/3 1978        Shackleford Co   TS Amelia
8      30                  10/17-18 1998   San Marcos       None
9      28.8                6/9 2001        Houston          TS Allison
10     25                  10/19 1984      Oden             Unnamed
6 of the Top 10 Texas Storms Have Occurred Since HMR 55A Publication
Presenter
Presentation Notes
This table shows the publication date and the most recent storm used in current guidance. It is important to update the extreme storm catalog and PMP index maps using the most recent storm information. Notice in the table of major Texas storms that 6 of them were not used when creating the PMP index maps in HMR 51 and 55A.
Extreme Storm Catalog
Presenter
Presentation Notes
Storm studies published in the late 1940s and 1970s by the War Department documented DAD and mass rainfall curves for storms across the eastern U.S.; events date back to the late 1800s.
Extreme Storm Database
• Common database shared with all federal and state agencies, academia, and private consultants
• Historic and synthetic extreme storms
• Depth-area-duration curves, isohyetal maps, mass rainfall curves, meteorological characteristics
• Interfaced with hydrologic models, tools, etc.
[Workflow schematic: radar images (.xmrg format, loaded with gridLoadXMRG) and NCDC gage data (gridded with HEC-GageInterp) are stored in an HEC-DSS file in gridded format; dss2ascGrid converts the grids for ArcGIS (shapefiles, HEC-GeoRAS, zonal histogram, raster calculator); digitized isohyetal maps feed the same path; products include depth-area-duration curves, storm animation (HEC-DSSVue), HEC-HMS input, maximum precipitation, and regional PMF probability.]
Presenter
Presentation Notes
This schematic shows the steps for developing storm isohyetal maps and depth-area-duration curves. The software listed is available from HEC: HEC-GageInterp, gridLoadXMRG, and dss2ascGrid. Once the gridded precipitation data is in HEC-DSS format, HEC-HMS can use the data as a boundary condition for rainfall-runoff simulations.
HEC-GageInterp Approach
Presenter
Presentation Notes
HEC-GageInterp is a tool for creating a gridded surface by interpolating point rainfall data. 15-minute, 1-hour, and other time steps can be used to create the gridded datasets. The figure on this slide shows the interpolated surface for the 1969 total storm event for the Whittier Narrows watershed. HEC-GageInterp contains a “bias” option for precipitation interpolation. This option can be used to scale precipitation based on mean annual precipitation or other data sources. For the Whittier Narrows watershed, the bias option ensures more precipitation along orographically enhanced regions of the watershed.
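The gage-gridding-with-bias idea described above can be sketched in Python. This is not HEC-GageInterp's algorithm; the gage locations, values, and mean-annual-precipitation (MAP) surface below are hypothetical, and simple inverse-distance weighting stands in for whatever interpolation scheme the tool actually uses.

```python
import numpy as np

def idw_grid(gage_xy, gage_precip, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of point gage data to a grid."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    num = np.zeros_like(xx, dtype=float)
    den = np.zeros_like(num)
    for (gx, gy), p in zip(gage_xy, gage_precip):
        d = np.hypot(xx - gx, yy - gy)
        w = 1.0 / np.maximum(d, 1e-6) ** power  # clamp to avoid divide-by-zero
        num += w * p
        den += w
    return num / den

# Hypothetical gages (x, y in km) with storm totals (inches)
gages = [(10.0, 10.0), (30.0, 25.0), (45.0, 40.0)]
totals = np.array([4.2, 6.8, 9.1])
gx = np.linspace(0, 50, 51)
gy = np.linspace(0, 50, 51)
raw = idw_grid(gages, totals, gx, gy)

# "Bias" adjustment: scale the interpolated surface by the ratio of a MAP
# grid to its areal mean, so orographically enhanced cells receive
# proportionally more storm precipitation (MAP surface is made up here).
map_grid = 20.0 + 0.3 * np.add.outer(gy, gx)  # hypothetical MAP rising to the NE
biased = raw * (map_grid / map_grid.mean())
```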
Extreme Storm Database
Select Storm by Map or by Searching Catalog
Presenter
Presentation Notes
Currently, the USACE Extreme Storm Database has over 800 storms from across the U.S. cataloged with depth-area-duration curves, storm center locations, mass rainfall curves, and any available documentation. The catalog is searchable by map interface or by catalog.
Extreme Storm Database
Obtain Storm Documents, Storm Summary, and Data
Presenter
Presentation Notes
By clicking on a storm’s location on the map or on its entry in the catalog, a storm summary opens that provides the known features of the storm, including the storm dates, storm center location, storm area and depth, maximum dew point, and elevation, among other things. Additionally, any ArcGIS layers and documentation can be viewed and downloaded.
Planned Extreme Storm Database Enhancements
• Lat/Long Search Box or by Location/Radius
• Field for Storm Type (i.e., synoptic, convective)
• Access by non-Corps Users (read-only, edit, add rights)
• Field to ID Who Entered/Edited Data
• DAD Tables: Search & Interpolate any Area or Duration; Plot DAD Curves
• Show Reference Location on Map
• Extract Dew Points from Map
• Extract PMP from Map; Compute % PMP
Presenter
Presentation Notes
Work on the USACE Extreme Storm Database continues, including population with more storms. Planned enhancements include: searching and interpolating a depth-area-duration table to any area or duration; plotting depth-area-duration curves directly from the interface; extracting the percent of the latest published PMP represented by a storm; adding the storm type for each storm; and searching by storm type and by means other than name, such as latitude and longitude. Additionally, a field will identify who entered the storm information, so that they may be contacted for more information if needed.
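The planned "interpolate a DAD table to any area or duration" enhancement can be illustrated with a short sketch. The table values below are invented, and the bilinear interpolation in log-area space is an assumption about a reasonable approach, not the database's specified method.

```python
import numpy as np

# Hypothetical depth-area-duration (DAD) table: depths (inches) indexed by
# storm area (sq mi, rows) and duration (hours, columns).
areas = np.array([10.0, 100.0, 1000.0, 10000.0])
durations = np.array([6.0, 12.0, 24.0, 72.0])
depths = np.array([
    [12.0, 16.0, 20.0, 24.0],
    [ 9.0, 12.5, 16.0, 19.5],
    [ 6.0,  8.5, 11.0, 14.0],
    [ 3.0,  4.5,  6.5,  8.5],
])

def dad_interpolate(area, duration):
    """Bilinear interpolation in log10(area) x duration space, reflecting
    the roughly log-linear shape of DAD relationships."""
    la = np.log10(areas)
    x = np.log10(area)
    i = int(np.clip(np.searchsorted(la, x) - 1, 0, len(la) - 2))
    j = int(np.clip(np.searchsorted(durations, duration) - 1, 0, len(durations) - 2))
    tx = (x - la[i]) / (la[i + 1] - la[i])
    ty = (duration - durations[j]) / (durations[j + 1] - durations[j])
    return ((1 - tx) * (1 - ty) * depths[i, j]
            + tx * (1 - ty) * depths[i + 1, j]
            + (1 - tx) * ty * depths[i, j + 1]
            + tx * ty * depths[i + 1, j + 1])
```

At tabulated points the function returns the table value; between rows it blends depths in log-area, which is closer to how DAD curves are usually plotted than linear-area interpolation.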
Outline
USACE Extreme Storm Team
• Catalog Historic Storms
• Perform Site Specific Studies, PMP and Antecedent Storm
• Develop Guidance for Hypothetical Storm Analysis
HEC-MetVue and its Application to Cataloging Historic Storms and Applying Synthetic Storms
Stochastic Hydrologic Simulations using HEC Software
• Monte Carlo Analyses will be Available in HEC Software
• Knowledge Uncertainty and Natural Variability are Modeled in a Nested Monte Carlo Sampling Loop within HEC-WAT
• Application of HEC-WAT, HMS, and ResSim to Define Flow and Reservoir Stage Frequency Curves (Beyond Observed Records) while Including Uncertainty
Presenter
Presentation Notes
Exchange of information between the USACE Extreme Storms Database and MetVue both enriches the information available in the database and provides a more efficient means of data collection for MetVue analyses.
HEC-MetVue
GIS-Based Meteorologic Model / Visualization Tool
• Accepts shapefiles in any coordinate system
Analyze Historic Storms
• Compute DAD
• Translate, Rotate, Maximize
• Calibrate QPE to gage data or PRISM data
• Aggregate/Segregate Storms
Develop Design Storms
• Hypothetical Frequency Based
• PMP (HMR51/52, HMR55A, HMR58/59)
Linkage with HEC-HMS/Extreme Storm Database
Future work: Sample storm characteristics such as movement, centroid location, and orientation
Presenter
Presentation Notes
Much of the storm data processing that is possible using MetVue would also be helpful information in the USACE Extreme Storms Database. For example, if a storm is transposed and reoriented to generate a maximum rainfall for a watershed, the associated precipitation and resulting shapefiles would be useful to have available in the database for future studies. Output, such as hyetographs and precipitation grids, developed for synthetic storms in MetVue would likewise be helpful data for the database. Conversely, precipitation grids for an event in an area, as well as the historical storm data made available by the database, would aid the data collection effort for application in MetVue.
HEC-MetVue
Dynamically updates basin average precipitation with transposed, reoriented storm:
• Original storm: basin precipitation = 4.49 inches
• Transposed storm shifted downward and rotated 20 degrees: basin precipitation = 15.94 inches
Presenter
Presentation Notes
HEC-MetVue contains tools for visualizing and modifying storm events. The figures on this slide show a grid of accumulated 72-hour precipitation for 30 Dec 1996 – 02 Jan 1997. If the storm were rotated counterclockwise by 20 degrees and shifted south (so that the maximum precipitation was centered over the American River Basin), the 72-hour precipitation for the watershed would increase from 4.49 inches to 15.94 inches. Linking HEC-MetVue to HEC-HMS will facilitate quick simulation of alternative meteorologic scenarios.
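The rotate-and-shift transposition described above can be sketched with array operations. This is only an illustration of the concept with a synthetic Gaussian storm and a rectangular stand-in basin; the grid, the storm, the basin mask, and the shift distances are all invented, and HEC-MetVue's actual resampling is not shown.

```python
import numpy as np
from scipy.ndimage import rotate, shift

# Hypothetical 72-hour storm-total grid (inches): a single Gaussian cell
# centered away from the basin.
ny = nx = 101
y, x = np.mgrid[0:ny, 0:nx]
storm = 16.0 * np.exp(-((x - 70) ** 2 + (y - 30) ** 2) / (2 * 12.0 ** 2))

# Hypothetical basin mask occupying one corner of the grid.
basin = (x < 40) & (y > 60)

def basin_average(grid, mask):
    return grid[mask].mean()

before = basin_average(storm, basin)

# Transpose the storm: rotate 20 degrees about the grid center, then shift
# the storm center over the basin (shift in (rows, cols)).
moved = rotate(storm, angle=20.0, reshape=False, order=1, mode="constant", cval=0.0)
moved = shift(moved, (40.0, -45.0), order=1, mode="constant", cval=0.0)
after = basin_average(moved, basin)
```

Recomputing the basin average after the transform reproduces the slide's behavior qualitatively: a storm that barely clipped the basin now sits on top of it, so the basin-average depth jumps sharply.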
Outline
USACE Extreme Storm Team
• Catalog Historic Storms
• Perform Site Specific Studies, PMP and Antecedent Storm
• Develop Guidance for Hypothetical Storm Analysis
HEC-MetVue and its Application to Cataloging Historic Storms and Applying Synthetic Storms
Stochastic Hydrologic Simulations using HEC Software
• Monte Carlo Analyses will be Available in HEC Software
• Knowledge Uncertainty and Natural Variability are Modeled in a Nested Monte Carlo Sampling Loop within HEC-WAT
• Application of HEC-WAT, HMS, and ResSim to Define Flow and Reservoir Stage Frequency Curves (Beyond Observed Records) while Including Uncertainty
Monte Carlo Capabilities in HEC Software
Monte Carlo sampling capabilities are being added to HEC software:
• HEC-HMS – loss rate, transform, baseflow, and routing
• HEC-ResSim – initial conditions, rules (like maximum releases), and input time series
• HEC-RAS – Manning’s n-values and dam breach parameters
• HEC-FIA – stage-damage information and structure elevations
The user defines the parameters for uncertainty. When used within an HEC-WAT Monte Carlo simulation, HEC-WAT helps manage the sampling of model data/parameters.
Presenter
Presentation Notes
The simulation control framework was rebuilt to support tens of thousands of simulations. Capability: select model parameters that will be "uncertain" and characterize each one; select output results for key locations; use robust sampling techniques to perform many realizations and produce probabilistic results.
• Independent parameter sampling: choose one of the seven included analytical distributions, and enter its properties.
• Dependent parameter sampling: choose an independent parameter that will be sampled first; select a linear or log-linear dependency; enter the slope and intercept for the dependency relationship; specify an epsilon error term for the dependency by choosing one of the seven analytical distributions and entering its properties.
• Independent parameter sampling by month: choose one of the seven analytical distributions, and enter separate distribution properties for each month of the year.
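The three sampling modes in the notes can be sketched in a few lines. The distributions, slope/intercept, and monthly properties below are hypothetical choices for illustration, not values from any HEC model.

```python
import numpy as np

rng = np.random.default_rng(350)  # fixed seed for repeatable sampling

# Independent parameter: draw a Manning's n multiplier from one analytical
# distribution (a normal is used here purely as an example).
n_mult = rng.normal(loc=1.0, scale=0.1, size=10000)

# Dependent parameter: linear dependency on the independent parameter plus
# an epsilon error term drawn from its own analytical distribution.
slope, intercept = 0.5, 0.2
eps = rng.normal(loc=0.0, scale=0.05, size=n_mult.size)
dependent = slope * n_mult + intercept + eps

# Independent parameter by month: separate distribution properties for each
# month (hypothetical monthly means for an initial soil-moisture deficit).
monthly_mean = np.linspace(0.5, 2.5, 12)
month = rng.integers(0, 12, size=10000)
soil_deficit = rng.normal(loc=monthly_mean[month], scale=0.2)
```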
Extension of Frequency Curve
Upper End of the Reservoir Stage Frequency Curve Based Upon Synthetic Events Determined on a Discrete Basis or by Monte Carlo Analysis of:
• Initial reservoir storage
• Soil loss
• Total inflow volume
• Hydrograph shape
• Snowpack depth / SWE
Top of Dam
Presenter
Presentation Notes
A necessary analysis for Dam Safety studies is the reservoir stage frequency curve. Often, there is very limited observed data to define the frequency curve. Current modeling procedures include discrete events, where assumptions are made about the state of the watershed (how wet or dry) and the initial condition of the reservoir. Either discrete flood events can be developed and routed through a reservoir operation model, or a stochastic simulation can be performed in which boundary conditions and hydrologic modeling parameters are sampled for thousands of events.
HEC-WAT
An overarching interface that allows project teams to perform water resources studies in a comprehensive, systems-based approach by building, editing, and running models commonly applied by multi-disciplinary teams, and saving and displaying data and results in a coordinated fashion.
Presenter
Presentation Notes
HEC-WAT is a software program that integrates different model applications through a user interface. Individual models (HEC-HMS, HEC-ResSim, and HEC-RAS) can be developed through the HEC-WAT interface. HEC-WAT contains tools for defining a simulation sequence (the order in which the models are computed) and manages the passing of data from one model to the next. Results can be accessed through the HEC-WAT schematic.
HEC-WAT for the Columbia River Watershed
■ 258,000 sq. miles
■ 2 countries
■ 7 states
■ 1,214 miles
■ 125 tributaries
■ Approximately 176,000 structures
■ 65 projects
■ 100 fragility curve locations
■ 43 consequence areas
■ 128 levee systems; 449 miles
Presenter
Presentation Notes
HEC-WAT was used for assessing flood risk in the Columbia River watershed for different reservoir operation scenarios. Multiple reservoir operation, river hydraulics, and consequence models were developed for the system. One of the study goals was to perform a Monte Carlo simulation that incorporated both natural variability and knowledge uncertainty when computing risk-based metrics. The HEC-WAT Flood Risk Analysis (FRA) compute was configured to sample hydrology, forecasted inflows, and levee failure elevations while running 50,000 events through the entire model compute sequence. HEC-WAT is also being considered as a tool on the Missouri River, where there are multiple reservoir, hydraulic, consequence, and ecosystem models. HEC-WAT does not have to be used only in large regional studies; work is underway with the Dam and Levee Safety program to add functionality so that HEC-WAT can be applied across the country, from small to large dam safety studies.
HEC-WAT Framework
HEC-WAT – Linking Models
The initial set of models and tools to be used during the analytical process in HEC-WAT are:
• Hydrology – HEC-HMS
• Reservoir Operations – HEC-ResSim
• Hydraulics – HEC-RAS
• Consequences – HEC-FIA
Communication between software is provided through a "plug-in."
Data is shared through a common DSS file.
[Diagram: HEC-WAT simulation with default program order; the HEC-HMS, HEC-ResSim, HEC-RAS, and HEC-FIA plug-ins write model results to simulation.dss]
Presenter
Presentation Notes
This figure shows how HEC-WAT facilitates the passing of data between models in a simulation sequence. Any data generated by HEC-WAT, like precipitation or flow, is saved to an HEC-DSS file where other models in the simulation sequence can access it.
Nested Sampling Approach for Monte Carlo Simulations
Nested Monte Carlo:
A. Sample instances of natural variability as flood events, with enough events to capture the distribution of damage
B. Sample instances of knowledge uncertainties in model parameters to get their impact on the damage distribution
• Inner loop A varies natural variability and computes EAD
• Outer loop B varies knowledge uncertainty and computes the EAD distribution
• 1 outer loop B = a realization
Presenter
Presentation Notes
The nested Monte Carlo sampling procedure separates natural variability and knowledge uncertainty. Natural Variability = some variables are random and unpredictable by nature, and their values change with time (event to event) or in space. Knowledge Uncertainty = some variables do not change with time or space, but we do not know their values accurately. More investigation could reduce knowledge uncertainty (more analysis could go into understanding the flow frequency relationship and decrease the uncertainty). HEC-WAT manages the nested Monte Carlo compute by generating random number seeds that are passed to all the models in the simulation sequence. For example, HEC-WAT will pass an HEC-HMS model a random number seed for every event in the simulation. HEC-HMS might use the seed to sample initial soil moisture for each event. The Monte Carlo compute was designed to generate repeatable sampled parameters across multiple alternatives (event 350 in alternative A should have the same sampled parameters as event 350 in alternative B).
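The nested loop and seed-per-event idea described in the notes can be sketched in Python. This is not HEC-WAT's implementation: the damage function, distributions, and loop sizes are illustrative stand-ins, and only the structure (outer knowledge-uncertainty loop, inner natural-variability loop, one seed per event for repeatability across alternatives) mirrors the notes.

```python
import numpy as np

N_REALIZATIONS = 50   # outer loop B: knowledge uncertainty
N_EVENTS = 1000       # inner loop A: natural variability (events/years)

master = np.random.default_rng(2013)
# One seed per (realization, event), handed to each "model," so that event
# 350 resamples identically across alternatives A and B.
outer_seeds = master.integers(0, 2**31, size=N_REALIZATIONS)
event_seeds = master.integers(0, 2**31, size=(N_REALIZATIONS, N_EVENTS))

eads = np.empty(N_REALIZATIONS)
for r in range(N_REALIZATIONS):
    # Outer loop: sample a knowledge-uncertain damage coefficient.
    k = np.random.default_rng(outer_seeds[r]).lognormal(mean=0.0, sigma=0.2)
    damages = np.empty(N_EVENTS)
    for e in range(N_EVENTS):
        ev = np.random.default_rng(event_seeds[r, e])
        flow = ev.lognormal(mean=8.0, sigma=1.0)   # natural variability
        damages[e] = k * max(flow - 5000.0, 0.0)   # toy damage function
    # EAD for this realization = mean annual damage over the inner loop.
    eads[r] = damages.mean()

# eads is the EAD distribution across realizations (knowledge uncertainty).
```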
Nested Sampling Approach for Monte Carlo Simulations
[Figure: peak flows from 1 realization of 1,000 events/years, plotted as flow vs. exceedance probability]
Presenter
Presentation Notes
The figures on this slide illustrate how an annual maximum flood dataset can be created by 1) sampling a new flow frequency curve (1 realization within the outer Monte Carlo loop), 2) sampling 1000 flows from the flow frequency curve created in step 1, and 3) sampling a hydrograph shape to apply to the sampled volume. For a preliminary risk assessment at USACE, we sample the volume from the frequency curve, the hydrograph shape, the date of the event, and the starting reservoir pool elevation. For an in-depth hydrologic analysis, we sample meteorology, hydrology, and reservoir simulation variables.
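The three steps above can be sketched directly. All distribution parameters and hydrograph shapes below are hypothetical; a log-normal frequency curve stands in for whatever distribution a real study would fit.

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: sample one realization of the flow-frequency curve. Knowledge
# uncertainty perturbs the (hypothetical) log-normal curve parameters.
mu = rng.normal(loc=9.0, scale=0.1)           # log-space mean, this realization
sigma = abs(rng.normal(loc=0.8, scale=0.05))  # log-space std dev

# Step 2: sample 1,000 annual-maximum peak flows from that realization.
peaks = rng.lognormal(mean=mu, sigma=sigma, size=1000)

# Step 3: apply a sampled hydrograph shape to each peak. Each dimensionless
# shape has a maximum ordinate of 1.0, so scaling preserves the peak.
shapes = np.array([
    [0.10, 0.40, 1.00, 0.60, 0.20],   # moderately flashy
    [0.20, 0.70, 1.00, 0.80, 0.40],   # broad
    [0.05, 0.30, 1.00, 0.40, 0.10],   # very flashy
])
which = rng.integers(0, len(shapes), size=peaks.size)
hydrographs = peaks[:, None] * shapes[which]
```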
Peak Reservoir Stage from 1 Realization
Top of Dam
Presenter
Presentation Notes
Red squares represent observed annual maximum pool stages. The blue diamonds were computed from a nested Monte Carlo compute within an HEC-WAT model. The two largest peak stage values were the result of two very different scenarios: the largest maximum reservoir stage was generated by the largest peak inflow flood event, 24,459.9 cfs (3-day average flow), and a starting pool elevation of 5,555.3 ft; the second largest peak reservoir stage was generated by a much smaller flood event, 1,353.3 cfs, but a much higher starting pool elevation, 5,585.1 ft. The reservoir rose only 0.9 feet during the simulation of the second scenario.
Peak Reservoir Stage from 10 Realizations
Top of Dam
10 realizations x 1,000 events each
Presenter
Presentation Notes
The figure on this slide shows results from 10 realizations (of 1000 events each – only results from the largest events are included). With more realizations, the best estimate and 90 percent confidence limits could be determined.
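With a stack of realizations like this, the best estimate and 90 percent confidence limits mentioned above fall out of simple percentiles across realizations. The stage data below are synthetic placeholders; a real study would use the simulated annual-maximum stages from each realization.

```python
import numpy as np

rng = np.random.default_rng(7)
N_REAL, N_EVENTS = 10, 1000

# Hypothetical peak-stage results: each realization holds 1,000 simulated
# annual-maximum reservoir stages (ft), sorted largest-first.
stages = np.sort(rng.normal(loc=5550.0, scale=12.0,
                            size=(N_REAL, N_EVENTS)), axis=1)[:, ::-1]

# Weibull plotting positions, shared by every realization (ascending
# exceedance probability matches the descending stage ordering).
p_exceed = np.arange(1, N_EVENTS + 1) / (N_EVENTS + 1)

# At each exceedance probability, the best estimate is the median across
# realizations; the 90% confidence limits are the 5th/95th percentiles.
best = np.median(stages, axis=0)
lo, hi = np.percentile(stages, [5, 95], axis=0)
```

With only 10 realizations the percentile estimates are rough; the notes' point stands that more realizations tighten the best estimate and confidence band.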
■ Generate the boundary conditions for the first model in the compute sequence (initially, this has been flow)
■ Two methods for generating flow data:
• Sampling from a frequency curve (and applying a shape)
[Figure: HEC-RAS stage hydrograph – Plan: BaseCondit, Event1948, RAS-Reach1; River: Columbia; Reach: RM 143-Bonn; RS 145.86; stage (ft) vs. time]
Distributed Computing
Presenter
Presentation Notes
We have demonstrated that the distributed compute works on a server with multiple VMs, HEC classroom computers, HEC staff computers on the local area network, and VMs on the Amazon cloud network. At the beginning of FY 2013, HEC began making alternative runs (Monte Carlo computes) for the CRT study. Multiple simulations of 50,000 events through all models (average time 10 days; 100 computers) were completed using the Amazon cloud computing network. On average, it took about 25 minutes per event for the CRT watershed (all models); this average is over 50 events and does include some models being skipped. At the beginning of FY 2013 it was taking about 90 minutes per event, before the Skip Model Tool was working, the scripting capability was added to the Time-Window Modification Tool, and improvements were made to HEC-FIA and HEC-RAS.
Time Window Modifications
Order of Time Window Modifications:
1) Simulation Window (defined by the hydrograph shapes)
2) Time Window Modification (within the simulation)
3) Time Window Modification Alternative (simple model)
4) Scripting Option with Time Window Modification Alternative
Presenter
Presentation Notes
There are two methods available for modifying the time window in the FRA compute: edit the time window defined in the simulation editor, or create a Time Window Modification Alternative and insert this “model” into the compute sequence. The Time Window Modification Alternative is loosely treated as a model because it performs an analysis (evaluates time-series data) and generates output (returns a modified time window). The figure shows the FRA simulation editor with one of the CRT simulation alternatives selected. Due to how the datasets were created for the simulation run, the base simulation time window was a water year, 01 October to 30 September. Notice that the time window was modified for select models by defining “(+183D), (-78D)”. In this example, we are shortening the water year: the start date moves later by 183 days (from 01 Oct to approximately 01 Apr) and the end date moves earlier by 78 days (from 30 September to approximately 15 July). This shortened time window was used to define the snowmelt season. The Time Window Modification Alternative was used to shorten the time window further by ending it four days after the peak flow and starting it when certain criteria are met.
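The "(+183D), (-78D)" offset style and the end-after-peak rule can both be sketched with standard-library dates. The start-criterion logic is omitted here (the notes only say "certain criteria"), and the exact resulting calendar dates depend on the year chosen.

```python
from datetime import date, timedelta

def modify_window(start, end, start_offset_days, end_offset_days):
    """Apply a "(+183D), (-78D)"-style modification to a simulation window."""
    return (start + timedelta(days=start_offset_days),
            end + timedelta(days=end_offset_days))

# Water-year base window, shortened to the snowmelt season as on the slide.
wy_start, wy_end = date(1996, 10, 1), date(1997, 9, 30)
snow_start, snow_end = modify_window(wy_start, wy_end, +183, -78)

def end_after_peak(times, flows, days_after_peak=4):
    """Time Window Modification Alternative logic: end the window a fixed
    number of days after the peak flow."""
    peak_t = times[max(range(len(flows)), key=flows.__getitem__)]
    return peak_t + timedelta(days=days_after_peak)
```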
Skip Models in the Compute Sequence
Model simulations can be skipped (it might not be necessary to simulate smaller magnitude events).
• Multiple locations can be selected
• Interface assists in defining skip logic
• Select the models to be skipped in the compute sequence
Presenter
Presentation Notes
This figure shows the Model Skip Rules editor with flow thresholds defined at multiple locations in Reach 1. The first step in setting up model skip flags is to associate the flags, or thresholds, with a model alternative in the compute sequence. There are four different models computed for Reach 1: the time window modification, the fragility curve sampler, the HEC-RAS model, and the HEC-FIA model. In the example shown, the skip flags are associated with the Time Window Modifier Alternative for Reach 1 since this is the first model computed for Reach 1. Within the If block, statements are used where thresholds are defined at multiple locations (those with possible damages) in Reach 1. For example, the first statement evaluates flow at the Cowlitz location: if the flow is greater than 400,000 cfs, then the event will not be skipped. Notice there are multiple locations where the flow is evaluated and that “AND” logic is included; the flows at all these locations must be less than the defined thresholds in order for the event to be skipped. The time savings from the model skip flags were considerable for the CRT model. Depending on the shortened time window computed by the Time Window Modifier Alternative for Reach 1, the compute time for the Reach 1 HEC-RAS model was between 5 and 25 minutes. Based on the conservative skip thresholds defined for Reach 1, approximately 35 percent of the events were skipped for just this one reach segment. A higher percentage of events were skipped for other reach segments where damages were not realized until large, rare flood events were simulated.
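The AND logic described above is compact enough to sketch directly. The Cowlitz 400,000 cfs threshold comes from the slide's example; the other locations and all flow values are hypothetical.

```python
# Skip-rule evaluation mirroring the Model Skip Rules editor: an event is
# skipped only when flow at EVERY threshold location stays below its
# threshold ("AND" logic); exceeding any one threshold forces a compute.
thresholds = {
    "Cowlitz": 400_000.0,   # cfs, from the slide's example
    "Loc_B": 250_000.0,     # hypothetical additional locations
    "Loc_C": 150_000.0,
}

def skip_event(peak_flows, thresholds):
    """Return True if the event can be skipped for this reach."""
    return all(peak_flows[loc] < limit for loc, limit in thresholds.items())

small_event = {"Cowlitz": 120_000.0, "Loc_B": 90_000.0, "Loc_C": 60_000.0}
big_event = {"Cowlitz": 120_000.0, "Loc_B": 90_000.0, "Loc_C": 200_000.0}
```

Note that `big_event` is below the Cowlitz threshold but exceeds the Loc_C threshold, so the AND condition fails and the models still compute.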
HEC-WAT Output
Presenter
Presentation Notes
Different simulation alternatives can be compared through the HEC-WAT map interface. For example, hydraulic and consequence results can be compared between two alternatives with different reservoir operation scenarios.

There are multiple options for saving output during a nested Monte Carlo WAT simulation. One option is to save all output during an FRA simulation. This option was initially beneficial to verify model linking (ensure the correct data is being passed from one model to the next) and to better understand how HEC-WAT manages the Monte Carlo compute. An added benefit of keeping model files from individual events is the ability to load these files and evaluate the model and output. For example, one event might generate damage values that seem unrealistic. The model files used to generate the event could be opened in the standalone application, like HEC-RAS, and the model parameters or boundary conditions that generated the unrealistic results could be easily evaluated.

Another Monte Carlo simulation option is to remove all files and time-series data at the end of a number of events. For example, if the user defines the lifecycle to be 50 years long, then HEC-WAT will delete files and time-series data once the last year in the lifecycle is computed. Not all data is lost when this option is selected. At the end of a lifecycle, HEC-WAT will mine all time-series data generated during the 50-year lifecycle, extract scalar values, like maximum flow, stage, and total damages, and copy these values to one file for storage. After the scalar values are saved, the time-series data is deleted. At the end of a Monte Carlo simulation, all scalar data from multiple locations are organized in one file. This option for managing files and output would likely be chosen for production simulations in most studies; the total output generated from this option would be minimal.
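The "mine scalars, then delete the time series" option can be sketched with plain data structures. The keys, values, and summary statistics below are invented; HEC-WAT's actual storage (HEC-DSS records) is not represented.

```python
# Hypothetical end-of-lifecycle store: (location, year) -> stage time series.
lifecycle = {
    ("Reach1", 1): [5.2, 9.8, 7.1],
    ("Reach1", 2): [4.0, 6.5, 6.1],
    ("Reach2", 1): [2.2, 3.9, 3.5],
}

def mine_scalars(lifecycle_ts):
    """Reduce each time series to scalar summaries (e.g., maximum and
    total), then drop the bulky time-series records, keeping only the
    scalars, as in the production output option described above."""
    scalars = {key: {"max": max(ts), "total": sum(ts)}
               for key, ts in lifecycle_ts.items()}
    lifecycle_ts.clear()  # time-series data deleted after mining
    return scalars

scalars = mine_scalars(lifecycle)
```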