Generating quantitative precipitation forecasts using a knowledge based system

Harvey Stern

17th BMRC Modelling Workshop, Bureau of Meteorology, Melbourne, 6 October 2005

  • Background

    Over recent years, the present author has been involved in the development of a knowledge based weather forecasting system.

    Various components of the system may be used to automatically generate worded weather forecasts for the general public, terminal aerodrome forecasts (TAFs) for aviation interests, and marine forecasts for the boating fraternity.

    The knowledge based system generates these products by using a range of forecasting aids to interpret NWP model output in terms of a range of weather parameters.

  • February to May 2005 Trial

    The author conducted a 100-day trial (Feb 14, 2005 to May 24, 2005) of the performance of the knowledge based system, with twice-daily forecasts being generated out to seven days in advance.

  • Graphical Forecasts Generated

  • Performance

    During the trial, the overall percentage variance of observed weather explained by the forecasts so generated (the system's forecasts) was 43.24% compared with 42.31% for the official forecasts.
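    Here, 'percentage variance explained' can be read as the squared correlation between the forecast and observed values, expressed as a percentage. The following is a minimal Python sketch of that score, assuming the verification reduces to paired forecast/observation series; the example values are illustrative only, not trial data.

    import numpy as np

    def percent_variance_explained(forecasts, observations):
        # Squared Pearson correlation between forecasts and observations,
        # expressed as a percentage ('percentage variance explained').
        f = np.asarray(forecasts, dtype=float)
        o = np.asarray(observations, dtype=float)
        r = np.corrcoef(f, o)[0, 1]
        return 100.0 * r ** 2

    # Illustrative forecast/observation pairs (e.g. daily rainfall in mm).
    obs = [4.4, 2.6, 0.4, 0.0, 1.2, 3.1]
    fcst = [3.2, 1.3, 0.5, 0.8, 1.0, 2.5]
    print(f"{percent_variance_explained(fcst, obs):.2f}% of variance explained")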

  • Some Definitions

    Domain knowledge: Knowledge practitioners gain through experience as part of their jobs.

    Contextual knowledge: Knowledge one develops by working in a particular environment.

    'The quality of domain knowledge is affected by the forecaster's ability to derive the appropriate meaning from the contextual (or environmental) information' (1).

    Webby et al (2001)

  • Correlation Between System Forecasts and Official Forecasts

    During the trial, the overall percentage variance of official forecasts explained by the system's forecasts was only 45.91% (that is, the system's forecasts were not highly correlated with the official forecasts).

    This indicates that there are significant aspects of the processes employed in deriving the official forecasts that are not taken into account by the system's forecasts (in all likelihood 'domain knowledge').

  • Combining Forecasts

    Regarding the two sets of forecasts as partially independent and utilising linear regression to optimally combine them, it was demonstrated that a substantial lift in the overall percentage variance of observed weather explained was possible.

    Indeed, the overall percentage variance of observed weather explained was lifted (by the combined forecasts) to 50.21% from 42.31% (official), a lift of 7.90%.
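    A minimal sketch of this combining step, assuming it amounts to regressing the observations on the official and system forecasts and taking the fitted values as the combined forecast (the series below are synthetic, not the trial data):

    import numpy as np

    def combine_by_regression(official, system, observed):
        # Fit observed ~ a + b*official + c*system by least squares and
        # return the fitted (combined) forecasts and the coefficients.
        X = np.column_stack([np.ones(len(official)), official, system])
        coeffs, *_ = np.linalg.lstsq(X, observed, rcond=None)
        return X @ coeffs, coeffs

    def pve(f, o):
        # Percentage variance explained (squared correlation x 100).
        return 100.0 * np.corrcoef(f, o)[0, 1] ** 2

    # Synthetic series with partially independent errors.
    rng = np.random.default_rng(0)
    obs = rng.normal(size=200)
    official = obs + rng.normal(scale=0.8, size=200)
    system = obs + rng.normal(scale=0.9, size=200)

    combined, _ = combine_by_regression(official, system, obs)
    print("official :", round(pve(official, obs), 2))
    print("system   :", round(pve(system, obs), 2))
    print("combined :", round(pve(combined, obs), 2))

    Because the two forecast sets carry partially independent errors, the regression-weighted combination typically explains more variance than either input on its own.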

  • Why the Accuracy Increases

    The accuracy increases because combining is most effective when the forecasts combined are not correlated and bring different kinds of information to the forecasting process (1); and because,

    although 'both (human) intuitive and (computer) analytic processes can be unreliable, different kinds of errors will produce that unreliability' (2).

    (1) Sanders and Ritzman (2001); (2) Stewart (2001)

  • Potential

    What these data suggested was that adopting a strategy of combining predictions has the potential to deliver a set of forecasts that explains as much as 7.90% more variance than that explained by the forecasts currently issued officially.

  • Mechanically Integrating Judgmental and Statistical Forecasts

    'Consider mechanically integrating judgmental and statistical forecasts instead of making judgmental adjustments to statistical forecasts.

    Judgmental adjustment (by humans) of (automatically generated statistical forecasts) is actually the least effective way to combine statistical and judgmental forecasts (because) judgmental adjustment can introduce bias' (1).

    Mathews and Diamantopoulos (1986)

  • Human Judgement as an Input

    'The most effective way to use (human) judgment is as an input to the statistical process.

    Clemen (1989) reviewed over 200 empirical studies on combining and found that mechanical combining helps eliminate biases and enables full disclosure of the forecasting process. The resulting record keeping, feedback, and enhanced learning can improve forecast quality' (1).

    Sanders and Ritzman (2001)

  • The Strategy

    The strategy is:

    To take judgmental (human) forecasts (derived with the benefit of knowledge of all available computer generated forecast guidance); and,

    To input these forecasts into a system that incorporates a statistical process to mechanically combine the judgmental (human) forecasts and the computer generated forecast guidance;

    Thereby immediately yielding a new set of forecasts.
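    A minimal sketch of this strategy, assuming the statistical process is a least-squares regression in which the judgmental (human) forecast enters as one predictor alongside the computer generated guidance (all names and series below are hypothetical):

    import numpy as np

    def mechanically_combine(judgmental, guidance, observed):
        # Regress the observations on the human forecast and on each
        # guidance predictor; the fitted values are the new forecasts.
        X = np.column_stack([np.ones(len(judgmental)), judgmental, guidance])
        weights, *_ = np.linalg.lstsq(X, observed, rcond=None)
        return X @ weights, weights

    # Hypothetical training data: human forecast plus two guidance fields.
    rng = np.random.default_rng(1)
    obs = rng.normal(size=150)
    human = obs + rng.normal(scale=0.7, size=150)
    guidance = np.column_stack([obs + rng.normal(scale=0.9, size=150),
                                obs + rng.normal(scale=1.1, size=150)])

    new_forecasts, weights = mechanically_combine(human, guidance, obs)
    print("weights (intercept, human, guidance 1, guidance 2):",
          np.round(weights, 2))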

  • Modification of System

    The knowledge based system was modified so that it now automatically integrates judgmental (human) forecasts and the computer generated guidance, thereby incorporating the forecasters' valuable contextual knowledge into the process.

    It has been demonstrated (1) that judgmental forecasts based on contextual knowledge were significantly more accurate than those based on technical knowledge (and) were even superior to (a) statistical model.

    Sanders and Ritzman (1992)

  • A New Trial

    The modified system is undergoing a new 'real-time' trial.

    This 100-day trial of the performance of the modified system, being conducted with a fresh set of data (Aug 20, 2005 to November 28, 2005), involves daily forecasts being generated out to seven days in advance.

  • The Purpose

    In this context, the purpose of the present work is:

    1. To evaluate the new set of forecasts; and,

    2. To document the increase in accuracy achieved by that new set of forecasts over that of the judgmental (human) forecasts.

  • Percentage Variance of Official Forecasts Explained

    Preliminary evaluation of the forecasts prepared during the new trial (up to September 25) shows that the overall percentage variance of official forecasts explained by the system's forecasts is now lifted to 80.95% (from 45.91% previously).

    This demonstrates that, in most circumstances, the combining strategy leaves the system's forecasts almost identical to the official forecasts.

  • Percentage Variance of Observed Weather Explained

    Furthermore, the overall percentage variance of observed weather explained is now lifted by the system to 52.40% from 45.51% (official), a rise of 6.89%, which is nearly as great as the 7.90% lift suggested previously by the present author's combined forecasts.

    This demonstrates that, in those few circumstances in which the combining strategy substantially changes the official forecasts, the system's forecasts usually represent an improvement on the official forecasts.

  • Domain Knowledge Now Taken into Account

    These results indicate that, on a day-to-day basis, 'domain knowledge' is now taken into account by the system.

  • Specifically for Precipitation

    Specifically for precipitation, the percentage variance explained is lifted by the system to 45.83% from 34.66% (official).

    On a rain/no-rain basis, the percentage of correct forecasts is lifted by the system to 82.49% from 74.19% (official).

    The root mean square error (RMSE) of the forecast precipitation amount is reduced by the system to 0.73 mm from 0.87 mm.
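    A minimal sketch of these two precipitation scores, assuming a fixed threshold distinguishes 'rain' from 'no rain' days (the 0.2 mm threshold and the example amounts are assumptions, not taken from the trial):

    import numpy as np

    def rain_no_rain_percent_correct(fcst_mm, obs_mm, threshold=0.2):
        # Percentage of days on which forecast and observation agree
        # about whether rain at or above the threshold occurred.
        f = np.asarray(fcst_mm) >= threshold
        o = np.asarray(obs_mm) >= threshold
        return 100.0 * np.mean(f == o)

    def rmse(fcst_mm, obs_mm):
        # Root mean square error of the forecast precipitation amounts.
        diff = np.asarray(fcst_mm, dtype=float) - np.asarray(obs_mm, dtype=float)
        return float(np.sqrt(np.mean(diff ** 2)))

    # Illustrative daily precipitation amounts (mm).
    obs = [4.4, 2.6, 0.4, 0.0, 0.0, 0.0, 0.0]
    fcst = [3.2, 1.3, 0.5, 0.8, 0.0, 0.0, 0.0]
    print(f"rain/no-rain percent correct: {rain_no_rain_percent_correct(fcst, obs):.1f}%")
    print(f"rmse of amounts: {rmse(fcst, obs):.2f} mm")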

  • Thunderstorms

    The system also develops predictions of other weather elements, such as thunderstorms.

    The Critical Success Index (1) of the system's thunderstorm forecasts was 0.04.

    The Critical Success Index was 0.07 for the official forecasts of thunderstorms.

    Wilks (1995)
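    The Critical Success Index of Wilks (1995) is hits / (hits + misses + false alarms) for yes/no events; correct 'no' forecasts do not contribute. A short Python sketch (the ten-day record below is made up):

    import numpy as np

    def critical_success_index(forecast_yes, observed_yes):
        # CSI = hits / (hits + misses + false alarms).
        f = np.asarray(forecast_yes, dtype=bool)
        o = np.asarray(observed_yes, dtype=bool)
        hits = np.sum(f & o)
        misses = np.sum(~f & o)
        false_alarms = np.sum(f & ~o)
        denom = hits + misses + false_alarms
        return hits / denom if denom else float("nan")

    # Made-up thunderstorm forecasts (1 = thunder forecast) and observations.
    fcst = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
    obs = [0, 0, 1, 1, 0, 0, 0, 0, 0, 1]
    print(f"CSI = {critical_success_index(fcst, obs):.2f}")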

  • Potential for Improved Thunderstorm Forecasts

    There is considerable potential for an increase in the accuracy of the thunderstorm forecasts.

    Reducing the probability criterion at which the system makes a categorical reference to thunderstorms from 25% to 5% would lift the Critical Success Index of the system's forecasts to 0.21 (see the chart and the sketch below).

    [Chart1: Critical Success Index of the system's thunderstorm forecasts as a function of the probability criterion (%).]
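    A minimal sketch of such a criterion sweep, assuming the system issues a thunderstorm probability and makes a categorical reference whenever that probability meets the criterion (the probabilities and outcomes below are synthetic):

    import numpy as np

    def critical_success_index(forecast_yes, observed_yes):
        # CSI = hits / (hits + misses + false alarms).
        f = np.asarray(forecast_yes, dtype=bool)
        o = np.asarray(observed_yes, dtype=bool)
        hits = np.sum(f & o)
        denom = hits + np.sum(~f & o) + np.sum(f & ~o)
        return hits / denom if denom else float("nan")

    # Synthetic thunderstorm probabilities (%) and observed outcomes.
    rng = np.random.default_rng(2)
    probs = rng.uniform(0, 40, size=100)
    observed = rng.random(100) < probs / 100.0

    for criterion in (5, 10, 15, 20, 25):
        categorical = probs >= criterion   # categorical thunderstorm mention
        csi = critical_success_index(categorical, observed)
        print(f"criterion {criterion:>2}%  CSI = {csi:.2f}")

    Lowering the criterion issues more categorical thunderstorm forecasts, trading extra false alarms for extra hits; for a rare event this can raise the CSI, as the chart above indicates.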
