
Forecasting Accuracy: Expectations and Performance
October 9, 2004

Heinz J. Dommert
Thomas E. Getzen

Forecasts, whether formal statistical extrapolations or simple “last year plus 5%” estimates, are essential for planning. They communicate future expectations to clinicians, management and staff. Good forecasts build good budgets. They are a primary tool for evaluating performance relative to expectations. Rarely, however, do CFOs ask, “How good are the forecasts? Have we really evaluated our predictions?” When variances occur, is it because the method of setting the targets was flawed, or because management was not effective in its job? In the summer of 2004, we conducted a set of interviews with CFOs, healthcare experts and consultants about how confident they were in the accuracy of their forecasting--or whether they even knew how close last year’s forecast was to the mark.

The Difference Between “Forecasting” and “Setting Goals”

The difference between a “forecast” and a “goal” or a “target” is not always clearly understood, so it is worthwhile to make that distinction at the start. A forecast is a technical extrapolation from data. Whether made with a spreadsheet, a sophisticated software package, or a simple assumption that growth will be 2% next year; whether assembled from carefully compiled records, incomplete files, or a set of impressionistic guesses, the forecast is a quantitative extrapolation based on what has happened in the past. It is not what management wants to happen, or plans to make happen, but just a projection of what would happen if no major changes occurred. Conversely, a goal or target modifies the trend to incorporate organizational aspirations, management plans, and a tactical assessment of opportunities and threats. Judgment is required to blend the record of the past with an understanding of the strategic environment. In the end, management may confirm that a simple trend based on the last 3 years’ performance is an appropriate goal, but that is a judgment call. The forecast is the technical part, without any assessment of what should or might happen differently.

Types of Forecasts

Trend. The most common forecast is a trend extrapolation. Thus, for example, if the average rate of growth in outpatient surgery over the last 4 years has been 11.6%, it could be estimated that next year’s growth will also be 11.6%. Technically, this is known as a moving average forecast -- “moving” because the forecast will make use of a slightly more recent 4-year span next year. If one were to use only last year’s growth rate, that would be called a naïve forecast -- naïve because one is assuming that next year will show the same trend as last year. A somewhat more sophisticated method is to give more weight to the most recent years, and less to prior years, but to use all available information with an exponential smooth, which updates the forecast each period by some fraction of the error in the prior period (the fraction or “exponent” is usually about .30 for annual observations, and .05 for monthly observations). If the data has no trend, it is said to be stationary. However, it is still useful to calculate the percentage change from one year to the next as a means of gauging the likely range of variance.
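To make these three methods concrete, here is a minimal Python sketch of all of them. The growth figures are illustrative (chosen so the four-year average matches the 11.6% in the example above), not drawn from any actual institution.

```python
# Illustrative growth rates (%) for the last four years; the average is 11.6.
history = [9.8, 12.1, 10.9, 13.6]

# Naive forecast: next year repeats last year's growth.
naive = history[-1]

# Moving average forecast: the mean of the last four years.
moving_average = sum(history) / len(history)

# Exponential smooth: update the estimate each period by a fraction
# (about .30 for annual data) of the prior period's error.
alpha = 0.30
smoothed = history[0]                      # start from the first observation
for actual in history[1:]:
    smoothed += alpha * (actual - smoothed)

print(f"Naive {naive:.1f}%  Moving average {moving_average:.1f}%  Smoothed {smoothed:.1f}%")
```

All three fit comfortably in a spreadsheet; the point of the sketch is only to show how little machinery is involved.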

Subjective. As stated above, if a projection is purely subjective, it is technically not a forecast, since it is not based on data. However, most forecasters, after the technical phase of forecasting is complete, make subjective adjustments based upon assessments of the strategic environment, allowances for important shifts in that environment, the need for consistency, and the impressions of knowledgeable leaders.

Multivariate. A forecast using only the prior data on the number being forecast is called univariate. It takes no other factors into account, except insofar as they are already reflected in the prior trend. Multivariate forecasts may add other factors such as aging and other demographics, illness prevalence, advances in medical technology and practice organization, wage growth, or interest rates.

Advanced Statistical Methods. There are a number of advanced statistical methods for making forecasts, such as ARIMA models, Box-Jenkins transformations, and co-integration. While powerful in some applications, they are usually overpowered for management forecasting in healthcare organizations, even for complex integrated systems with multiple facilities desiring detailed projections by revenue and expenditure line item, CPT code or diagnosis. Virtually all of the forecasting normally required can be performed on a spreadsheet without special software.

The first key step in making a good forecast is to start with a simple method that is readily understood by all users (see “Seven Keys to Good Forecasting” at the end of the article). Another key step is to regularly evaluate the forecast for accuracy. How well did it work? Only by evaluation can one tell if a more complex (or simpler) method is justified.

Key 1: Start with a simple objective method to make the baseline forecast.

Measures of Forecast Accuracy

Forecast accuracy is measured by recording the deviations between the forecast and the actual result. Statisticians call this the “residual” or “forecast error.” They commonly use a statistic known as the Root Mean Squared Error (RMSE), calculated by squaring each forecast error, summing all these squares, dividing by the number of observations, and then taking the square root. It is analogous to the standard deviation of a column of numbers, and the formula is included in most spreadsheets.

Managers usually prefer to use the average percentage error, called the Mean Absolute Percentage Error (MAPE). An “average percentage error” is readily understood and facilitates comparisons between forecasts -- important strengths when considering how best to communicate expectations and establish benchmarks for performance evaluation. The only tricky part of the calculation is that all errors, whether positive or negative, are counted as positive; that is, the absolute value is used. This point leads naturally to the other common measure of accuracy, bias, which indicates whether the forecast is usually above or below the actual.


Root Mean Squared Error (statistical measure):
RMSE = √[ Σ(Actual − Forecast)² / number of observations ]

Mean Absolute Percentage Error (managerial measure):
MAPE = average of |Forecast − Actual| / Actual (nb: plus or minus counts the same)

Bias:
BIAS = average of (Forecast − Actual) (nb: pluses and minuses offset)
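A short Python sketch of all three measures, using made-up forecast and actual values:

```python
import math

# Made-up forecasts and actuals for four periods.
forecasts = [100, 105, 110, 118]
actuals = [98, 109, 111, 115]

errors = [f - a for f, a in zip(forecasts, actuals)]   # +2, -4, -1, +3

# RMSE: square each error, average, then take the square root.
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))

# MAPE: plus or minus counts the same (absolute values), expressed in percent.
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / len(errors)

# Bias: pluses and minuses offset, exposing systematic over- or under-forecasting.
bias = sum(errors) / len(errors)

print(f"RMSE = {rmse:.2f}  MAPE = {mape:.1f}%  Bias = {bias:+.2f}")
```

Note that in this toy example the bias is exactly zero even though every individual forecast missed: an unbiased forecast can still be inaccurate, which is why both measures are worth tracking.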

Rigorous studies by professional forecasters have shown that the simplest methods often work best in practice, that judgmental adjustments to mechanical forecasts usually cause bigger errors, and that adding multiple variables rarely improves real-world performance. Simple averages and the quality of the data used to make the forecast are more important than sophisticated statistics.

Actual Forecast Practices: Study Results

A twenty-two question open-ended interview survey was designed to allow respondents to describe how their organizations make forecasts and measure accuracy. Interviews were conducted in person, where appropriate, or by phone. The respondents were encouraged to express their views from current or previous experience in measuring accuracy as it related to healthcare finance. The respondents ranged from financial officers in healthcare systems, averaging 150 beds, to consulting firms specializing in strategic healthcare forecasting. In all, these organizations and institutions generate over fifty forecasts per year. Both non-profit and for-profit medical and healthcare facilities were interviewed.

The first finding of the survey was that forecasts are primarily created for budgeting. The budget usually consists of the current and next year’s financial plan. Other forecasts are created for utilization or facilities planning and cover 5 to 10 years. The projections for utilization or facilities planning are usually generated annually, and are rarely projected by month. A monthly forecast does not add much value to utilization projections because most resources (equipment, nursing, beds) involve long-term commitments.


Most respondents agreed that short-term forecasts follow predictable historical patterns in disease states, birth rates and revenue cycles. Expenditure predictions were considered much more accurate than revenue predictions. Respondents thought this was because expenditures reflect management decisions, made in advance and hard to change, while revenues fluctuate with random variations in third-party reimbursement policies.

For strategic long-term planning, at least three years of actual historical data are required, while 12 to 18 months are used for budgeting. Changes in healthcare costs, technology and demographics are for the most part already embedded in the historical data for short-term predictions. Adjustments to the baseline forecasts, based on events or assumptions, add little in the way of forecast accuracy when forecasting for the current and next year’s budgets. In fact, as the number of events included increases, the accuracy of short-term forecasts may actually diminish.

Since many of the forecasts are annualized, outliers in the data sets are not removed. Depending on their magnitude, outliers can distort the historical data and therefore the accuracy of projections. Outliers can be removed from the data and redistributed to preserve the level and trend of the historical pattern, as sketched below.
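Here is one way such a redistribution might look in Python. The monthly volumes and the two-standard-deviation threshold are assumptions for illustration, not a prescription: each outlying value is capped, and the excess is spread evenly so the annual total is unchanged.

```python
import statistics

# Assumed monthly volumes with one obvious spike in month five.
volumes = [410, 395, 402, 388, 620, 405, 398, 412, 391, 407, 399, 404]

mean = statistics.mean(volumes)
sd = statistics.stdev(volumes)

# Cap anything beyond two standard deviations of the mean...
adjusted = [min(max(v, mean - 2 * sd), mean + 2 * sd) for v in volumes]

# ...and spread the removed volume evenly across all months,
# preserving the level (the annual total) of the series.
excess = sum(volumes) - sum(adjusted)
adjusted = [v + excess / len(adjusted) for v in adjusted]

assert abs(sum(adjusted) - sum(volumes)) < 1e-9   # annual total preserved
```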


The final finding of the survey was that most forecasts are generated bottom-up. Data is collected at the cost or procedure level, and then rolled up to aggregate facility-wide levels. These forecasts, created from the bottom up, are then adjusted for resource constraints and prorated back down to the lower categories.

While the study was designed to find out how organizations measured the accuracy of their forecasts, it turned out that in practice almost no formal evaluations or audits of forecasts were done. It was often stated that accuracy was of paramount importance; in practice, forecasts were made every year, but performance was almost never measured. None of the organizations we surveyed had instituted a formal method for tracking the accuracy of their forecasts.

Discussion of Survey Results: “Meeting the Numbers”

It is evident from the survey results that forecast accuracy, although frequently talked about and considered important in principle, is not directly measured by most healthcare organizations. Several factors contribute to this lack of monitoring and control: the type of method used to generate the budget or forecast, the reporting periods employed, and the lack of accountability for variances.

If a “forecast” has been generated using the judgment of a manager, then that person may be over-invested in being proven “right.” Measurements of deviations from the forecast may be taken as signs of failure, and so are avoided. Often the first line of defense in responding to budget overruns is to cut costs to meet year-end targets and “make the numbers.” The focus is on minimizing the gap between actual and planned results, rather than recognizing problems and building support for organizational change. Such short-term cuts may help the organization to meet a target while missing the larger objective -- evaluating performance accurately against reasonable standards and providing firm grounds for continuous improvement.

In order to avoid such problems, it is better to create forecasts and budgets at the lowest divisional or line levels and aggregate them to total revenue and expenses. The production of the forecast and the evaluation of the forecast are separated to ensure objectivity. A greater degree of detail can be realized at lower divisional levels, where specific variances can be addressed. Separate, mutually exclusive forecasts created at the divisional level can be aggregated to institution-wide totals. Then the total can be evaluated and, if out of line, modified, with the changes apportioned back down to the division level. Adjustments can thus still be made at the top levels after the plan is built from the bottom up. Variances due to the adjustment process and the judgments of senior management can thus be distinguished from pure trend variances. The bottom-up/top-down approach will usually be more accurate than a purely top-down approach, as the sketch below illustrates.
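The arithmetic of the bottom-up/top-down approach is simple. In the Python sketch below, the division names and dollar figures are hypothetical: divisional forecasts are summed, senior management adjusts the total, and the adjustment is prorated back down in proportion to each division's share.

```python
# Hypothetical divisional revenue forecasts, in $ millions.
divisional = {"Surgery": 42.0, "Imaging": 18.5, "Outpatient": 27.5}

bottom_up_total = sum(divisional.values())             # 88.0

# Senior management trims the total for resource constraints...
adjusted_total = 85.0

# ...and the adjustment is prorated back down by each division's share.
scale = adjusted_total / bottom_up_total
budget = {div: amount * scale for div, amount in divisional.items()}

# The gap between the raw forecast and the budget is a management
# adjustment, distinguishable later from a pure trend variance.
for div, amount in divisional.items():
    print(f"{div}: forecast {amount:.1f} -> budget {budget[div]:.2f}")
```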

Why Is Forecast Accuracy Not Being Tracked?

The process used by many healthcare organizations to make forecasts violates a basic principle of accounting: separation of reporting and rewards. If the same person is calculating the numbers and also being evaluated on the result of those calculations, pressures for distortion are inevitable. The problem becomes even more acute when a top executive makes a forecast. Since errors are inevitable, any measurement of forecast error becomes subject to interpretation as a failure of management or technique. By not checking the actual against the forecast, or by not keeping a record of such deviations, it is easier to maintain an aura of infallibility. Healthcare organizations that would never think of going without an audit or analysis of inventory, investment results or nursing hours will make forecasts year after year without ever recording the results.

Many managers have a good idea of how far off forecasts usually are, and whether they are consistently high or low. Such reservations are rarely made explicit during budgeting, and awareness of “fudge factors” is no substitute for careful records of forecast accuracy. If the projections are unrealistic, or get moved around to meet targets, the line staff becomes cynical. Eventually, some may claim, “those reports don’t mean anything--they are just for the higher-ups.” Commingling the judgment of senior management with the technical process of forecasting creates confusion and blurs lines of responsibility. It is better to eliminate such distortions from the start. Use a simple method, independent of top management, to calculate forecasts. Then measure and record the deviations every year or month. It is more cost-effective and more accurate. Without ego involvement, audit and performance evaluation become routine.

Keys to Good Forecasting

The keys to good forecasting are fairly straightforward. The first is to make an objective projection with a simple set of calculations and no external judgments. It can be a naïve forecast (same as last year), a basic trend (percentage growth) or an exponential smooth (which gives more weight to the most recent data). The important thing is that the method is clear to all users and that it is done mechanically, or at least does not require management intervention. Judgmental adjustments can always be added later (and should be explicitly footnoted).

In order to make a good forecast, the organization must have a record of prior months and years. The usual rule of thumb is that the historical data should cover three times the forecast period; i.e., if you want to make a 2-year projection, you need 6 years of data. If a service is totally new, take your best guess--and be honest: label it as a guesstimate and explain what it is based on. Calling such a number pulled from the air a “forecast” can be misleading.

A graph of the trend over the prior periods makes forecasting relatively easy--it’s even OK just to take a pencil and extend the line out. Anomalies, outliers, and changes in the base (e.g., we dropped two clinics) stand out quickly in such a graph. A graph of the percentage change in prior years, and of the errors that would have been made using your preferred forecast method, allows you to quickly determine the range of estimates that should be allowed for; a sketch of such a two-panel graph follows.
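For those who prefer code to pencil, here is a minimal matplotlib sketch of the two graphs just described; the annual volumes are made up for illustration.

```python
import matplotlib.pyplot as plt

# Made-up annual volumes for six prior years.
years = list(range(1998, 2004))
volumes = [8200, 8900, 9500, 10400, 11200, 12300]

# Year-over-year percentage change gauges the plausible range of estimates.
pct_change = [100 * (b - a) / a for a, b in zip(volumes, volumes[1:])]

fig, (top, bottom) = plt.subplots(2, 1)
top.plot(years, volumes, marker="o")
top.set_title("Annual volume: extend the line by eye to forecast")
bottom.bar(years[1:], pct_change)
bottom.set_title("Year-over-year % change: the likely range of variance")
plt.tight_layout()
plt.show()
```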

Forecasts are worthwhile only to the extent that they are used. A few good projections on which plans can be built with confidence are much better than a mass of numbers that are largely ignored because they are inappropriate or unreliable. Forecasts must be reviewed regularly for accuracy if they are to be relied on. No politician would depend upon polls without knowing how they had performed in the past, and no sportscaster would report the current odds on a team without examining the record of the source. Can you afford to risk committing funds for a new hire, or a new building, without having measured the actual performance of the forecasts used? Check for consistency in assumed growth rates: major discrepancies between departments relative to trend indicate points where further analysis is usually warranted.


Keys to Good Forecasting
1. Choose a simple, objective method as a starting point.
2. Maintain good records.
3. Graph past experience to project the future.
4. Use percentage rates of growth.
5. Review forecasts regularly for accuracy and usefulness.
6. Check for consistency among departments.
7. Get professional advice when necessary.


Forecasting is easy. A naïve forecast takes one minute, if the data is available. A moving average is not much more difficult, and even an exponentially weighted trend with seasonal, monthly and weekly variation can be completed by a staff person in less than an hour (see the sketch below). Rarely have we seen the need for more sophisticated statistical techniques. What we have often seen is management that is not sure of what it needs. Assessing informational requirements can be done internally, but when necessary, call in a professional. The inevitable forecast errors may well be smaller, the assessment process may be cheaper, and outside technical support becomes invaluable when you have to defend your department against bad luck.
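As one illustration of how little machinery such a seasonal forecast takes, the sketch below fits an exponentially weighted trend with monthly seasonality using the Holt-Winters routine in the statsmodels library; the visit data are synthetic, generated for the example.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly visits: gradual growth plus an annual cycle plus noise.
rng = np.random.default_rng(0)
months = 48
trend = np.linspace(800, 1000, months)
season = 50 * np.sin(np.arange(months) * 2 * np.pi / 12)
visits = trend + season + rng.normal(0, 15, months)

# Additive trend and seasonality, 12-month cycle.
model = ExponentialSmoothing(
    visits, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(12))   # projected visits for the next twelve months
```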

Heinz Dommert, MBA, M.S., is a graduate of the program in healthcare finance at Temple University and a consultant to healthcare organizations in Delaware, New Jersey and Pennsylvania.

Thomas E. Getzen, Ph.D., is a Professor of Risk, Insurance and Health Management at Temple University, and executive director of the International Health Economics Association. Both are members of HFMA’s Metropolitan Philadelphia Chapter.

Questions and comments about this article may be sent to [email protected] or [email protected].
