Actuarial Mathematics

Other Titles in This Series

51 Louis H. Kauffman, editor, The interface of knots and physics (San Francisco, California, January 1995)

50 Robert Calderbank, editor, Different aspects of coding theory (San Francisco, California, January 1995)

49 Robert L. Devaney, editor, Complex dynamical systems: The mathematics behind the Mandelbrot and Julia sets (Cincinnati, Ohio, January 1994)

48 Walter Gautschi, editor, Mathematics of Computation 1943-1993: A half century of computational mathematics (Vancouver, British Columbia, August 1993)

47 Ingrid Daubechies, editor, Different perspectives on wavelets (San Antonio, Texas, January 1993)

46 Stefan A. Burr, editor, The unreasonable effectiveness of number theory (Orono, Maine, August 1991)

45 DeWitt L. Sumners, editor, New scientific applications of geometry and topology (Baltimore, Maryland, January 1992)

44 Bela Bollobas, editor, Probabilistic combinatorics and its applications (San Francisco, California, January 1991)

43 Richard K. Guy, editor, Combinatorial games (Columbus, Ohio, August 1990)

42 C. Pomerance, editor, Cryptology and computational number theory (Boulder, Colorado, August 1989)

41 R. W. Brockett, editor, Robotics (Louisville, Kentucky, January 1990)

40 Charles R. Johnson, editor, Matrix theory and applications (Phoenix, Arizona, January 1989)

39 Robert L. Devaney and Linda Keen, editors, Chaos and fractals: The mathematics behind the computer graphics (Providence, Rhode Island, August 1988)

38 Juris Hartmanis, editor, Computational complexity theory (Atlanta, Georgia, January 1988)

37 Henry J. Landau, editor, Moments in mathematics (San Antonio, Texas, January 1987)

36 Carl de Boor, editor, Approximation theory (New Orleans, Louisiana, January 1986)

35 Harry H. Panjer, editor, Actuarial mathematics (Laramie, Wyoming, August 1985)

34 Michael Anshel and William Gewirtz, editors, Mathematics of information processing (Louisville, Kentucky, January 1984)

33 H. Peyton Young, editor, Fair allocation (Anaheim, California, January 1985)

32 R. W. McKelvey, editor, Environmental and natural resource mathematics (Eugene, Oregon, August 1984)

31 B. Gopinath, editor, Computer communications (Denver, Colorado, January 1983)

30 Simon A. Levin, editor, Population biology (Albany, New York, August 1983)

29 R. A. DeMillo, G. I. Davida, D. P. Dobkin, M. A. Harrison, and R. J. Lipton, Applied cryptology, cryptographic protocols, and computer security models (San Francisco, California, January 1981)

28 R. Gnanadesikan, editor, Statistical data analysis (Toronto, Ontario, August 1982)

27 L. A. Shepp, editor, Computed tomography (Cincinnati, Ohio, January 1982)

26 S. A. Burr, editor, The mathematics of networks (Pittsburgh, Pennsylvania, August 1981)

25 S. I. Gass, editor, Operations research: mathematics and models (Duluth, Minnesota, August 1979)

24 W. F. Lucas, editor, Game theory and its applications (Biloxi, Mississippi, January 1979)

(Continued in the back of this publication)

http://dx.doi.org/10.1090/psapm/035


AMS SHORT COURSE LECTURE NOTES

Introductory Survey Lectures

published as a subseries of Proceedings of Symposia in Applied Mathematics


CONTRIBUTORS

JOHN A. BEEKMAN, Department of Mathematical Sciences, Ball State University, Muncie, Indiana

J. C. HICKMAN, School of Business, Department of Statistics, University of Wisconsin, Madison, Wisconsin

P. M. KAHN, P. M. Kahn & Associates, 2430 Pacific Avenue, San Francisco, California

STUART A. KLUGMAN, Department of Statistics and Actuarial Science, University of Iowa, Iowa City, Iowa

CECIL J. NESBITT, Department of Mathematics, University of Michigan, Ann Arbor, Michigan

HARRY H. PANJER, Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario

ELIAS S. W. SHIU, Department of Actuarial and Management Sciences, University of Manitoba, Winnipeg, Manitoba


Proceedings of Symposia in

APPLIED MATHEMATICS

Volume 35

Actuarial Mathematics

Harry H. Panjer, Editor

American Mathematical Society, Providence, Rhode Island


LECTURE NOTES PREPARED FOR THE

AMERICAN MATHEMATICAL SOCIETY SHORT COURSE

ACTUARIAL MATHEMATICS

HELD IN LARAMIE, WYOMING

AUGUST 10-11, 1985

The AMS Short Course Series is sponsored by the Society's Committee on Employment and Educational Policy (CEEP). The series is under the direction of the Short Course Advisory Subcommittee of CEEP.

2000 Mathematics Subject Classification, Primary 62P05.

Library of Congress Cataloging-in-Publication Data

Actuarial mathematics.
(Proceedings of symposia in applied mathematics, ISSN 0160-7634; v. 35)
Book is the result of the Society's short course given at the University of Wyoming, Laramie, Aug. 10-11, 1985.
Bibliography: p.
1. Actuaries. 2. Insurance—Mathematics. I. Panjer, Harry H. II. American Mathematical Society. III. Series.
HG8781.A26 1986    368.3'2'00151    86-3306
ISBN 0-8218-0096-5

Copying and reprinting. Material in this book may be reproduced by any means for educational and scientific purposes without fee or permission with the exception of reproduction by services that collect fees for delivery of documents and provided that the customary acknowledgment of the source is given. This consent does not extend to other kinds of copying for general distribution, for advertising or promotional purposes, or for resale. Requests for permission for commercial use of material should be addressed to the Assistant to the Publisher, American Mathematical Society, P. O. Box 6248, Providence, Rhode Island 02940-6248. Requests can also be made by e-mail to reprint-permission@ams.org.

Excluded from these provisions is material in articles for which the author holds copyright. In such cases, requests for permission to use or reprint should be addressed directly to the author(s). (Copyright ownership is indicated in the notice in the lower right-hand corner of the first page of each article.)

© Copyright 1986 by the American Mathematical Society. All rights reserved. The American Mathematical Society retains all rights

except those granted to the United States Government. Printed in the United States of America.

The paper used in this book is acid-free and falls within the guidelines established to ensure permanence and durability.

Visit the AMS home page at URL: http://www.ams.org/

10 9 8 7 6 5 4 04 03 02 01 00


Table of Contents

Preface

Introduction to Actuarial Mathematics
J.C. HICKMAN

Updating Life Contingencies
J.C. HICKMAN

Models in Risk Theory
H.H. PANJER

Loss Distributions
S.A. KLUGMAN

Overview of Credibility Theory
P.M. KAHN

A Survey of Graduation Theory
E.S.W. SHIU

Actuarial Assumptions and Models for Social Security Projections
J.A. BEEKMAN

On the Performance of Pension Plans
C.J. NESBITT


Preface

This book is the result of the 1985 American Mathematical Society Short Course entitled Actuarial Mathematics given at The University of Wyoming, Laramie, on August 10-11, 1985. It was organized by Cecil J. Nesbitt, James C. Hickman and Elias S.W. Shiu.

Actuarial mathematics is loosely defined as the mathematical concepts underlying the professional practice of actuaries as they relate to the various forms of insurance and pensions.

A number of forces have been shaping actuarial mathematics in the latter part of the twentieth century. Clearly, the ongoing development of computers has had a major influence. At the same time, the clarification of concepts of probability and statistical theory has provided a much richer foundation for actuarial theory. These concepts apply in related fields such as biostatistics, demography, economics and reliability engineering. Life tables are no longer simply actuarial tools but have been explored and utilized in those other fields. The mathematical theory of risk has flourished with the concurrent development of stochastic processes. Still more recently, developments in the theory of finance may eventually be assimilated into actuarial mathematics.

Because of the wide range of mathematical methods used to solve actuarial problems, a short course and the resulting book cannot provide an exhaustive review of actuarial mathematics. Instead, they can only highlight a few areas of actuarial mathematics.

The variety of interests of the authors is demonstrated by the breadth of topics covered in the papers in this book. It is hoped that the reader will be stimulated by some of the ideas in this book and proceed to the literature and perhaps contribute to it.

I wish to thank the authors, especially the organizers, without whom this volume would not exist, for their contributions to this book.

The authors are extremely grateful to Lynda Hohner, University of Waterloo, for the expert and timely computer typesetting of this volume. Any remaining errors are the sole responsibility of the editor.

H.H. Panjer, Editor


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

Introduction to Actuarial Mathematics

J.C. HICKMAN

Actuarial mathematics is a collection of mathematical ideas that has been found useful in designing and managing financial security systems. The criterion for whether a mathematical topic belongs in actuarial science is not whether the topic is derived from some fundamental mathematical notion. Instead, the criterion is pragmatic: does the topic contribute to the construction of useful intellectual models for insurance systems? The consequences of this definition of actuarial mathematics will be apparent in this short course. Basic ideas from calculus, linear algebra, probability, statistics, demography, mathematical programming and economics will appear as building blocks in models for insurance systems.

In order to provide a structure for this short course, it is necessary to create an outline for insurance systems. With such an outline in place, the various topics to be reviewed in the short course can be placed in perspective. Clearly there are alternative principles by which insurance systems can be classified. For the current purpose, our basic division will be between those systems that involve long-term commitments, with recognition of the long-term cost implications, and short-term commitments. Economic considerations, such as interest rates, rather than the probability distribution of claims payments, often play a dominant role in long-term models.

A. Long-term Insurance

1. Individual life insurance. For this important type of insurance the fundamental model requires that a balance be struck between the expected present value of the income and expenditures of the insurance system for each insured life. The lecture "Updating Life Contingencies" will develop the basic model. A key component of the model will be an appropriately selected life table which will be used to describe the distribution of the random variable time until death. Actuarial science shares with biostatistics, demography and

© 1986 American Mathematical Society


http://dx.doi.org/10.1090/psapm/035/849137


2 J.C. HICKMAN

reliability engineering an interest in constructing life tables from data. One aspect of the construction process is the systematic revision of observations to conform with prior notions of smoothness. Smoothing, or graduation, is a process which is used in many fields in science. E.S. Shiu will review actuarial contributions to this field in his lecture "A Survey of Graduation Theory". To complete the mathematical model for individual life insurance, components for capital growth and expenses will be required. In this short course only basic compound interest models will be used. However, one of the most active areas of actuarial research intersects with financial theory. The issue is to match expected insurance cash flows with expected cash flows generated by invested assets, with the goal of minimizing the inconvenient consequences of changes in interest rates.

2. Private pensions. Models to guide private pensions contain all of the elements of individual life insurance models. However, in many cases pension models may be more complex. The complexity arises because benefit amounts may depend on the cause of termination from active employment (retirement, death, disability, withdrawal, lay-off) and the performance of an economic index such as salaries or prices. Some elements of these elaborations will appear in the lecture "Updating Life Contingencies". In his lecture, "The Performance of Pension Plans", C.J. Nesbitt will review the history of benefits paid by Teachers Insurance and Annuity Association and College Retirement Equities Fund. These two related systems are of great interest to college and university teachers, and benefit payments are related to the economic performance of the supporting pools of assets.

3. Social insurance. Almost exactly fifty years ago, on August 14, 1935, President Roosevelt signed the Social Security Act. This act established in the United States a broad social insurance system in which long-term costs are recognized but in which the balance between income and expenditures is sought not on the individual level but in a global sense. The issue is whether the provisions for aggregate income are in balance with aggregate benefits defined in the law. In order to examine this balance, demographic and economic projections, supported by an examination of experience, are required. J.A. Beekman's lecture, "Actuarial Assumptions and Models for Social Security Projections", will survey these topics.


INTRODUCTION 3

B. Short-term Insurance

1. Risk models. Most property, liability, health and group insurance systems can be classified in this category because the insurance contracts are typically for a short term. In constructing mathematical models for such systems, distributions for two basic random variables are needed. First, it is necessary to specify the distribution of the number of insurance claims in one period. In his lecture, "Models in Risk Theory", H.H. Panjer will review models for short-term insurance with emphasis on considerations in selecting the frequency of claims distribution.

A second random variable that enters risk models is associated with the loss amount, given that a loss event has occurred. The estimation of the loss amount distribution from data is a special case of statistical estimation theory. The unique aspects arise because of special features of insurance data and the highly skewed nature of some of these distributions. In his lecture, "Loss Distributions", S.A. Klugman will survey this field.

2. Credibility. In order to provide for stability in premium rates, it is necessary to update these rates by combining prior and current insurance experience data and information from collateral sources. Credibility theory is a collection of ideas developed by actuaries to perform this process of summarizing information from multiple sources in a coherent fashion. P.M. Kahn's lecture, "Overview of Credibility Theory", summarizes this topic. Credibility theory can be built on several somewhat different foundations. Consequently, credibility theory has fascinating connections with issues in the foundations of statistics.

Clearly, within the limits of this short course, only a few of the many topics that comprise actuarial mathematics can be reviewed. As financial security systems evolve, so will actuarial mathematics. Likewise, new mathematical tools can promote the development of new types of financial security systems.


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

Updating Life Contingencies

JAMES C. HICKMAN

ABSTRACT. Actuarial mathematics is a branch of applied mathematics devoted to building models of insurance systems. It is an old and successful application of mathematics in business and the social sciences. To an increasing extent, actuarial models incorporate stochastic submodels for components. This is illustrated for the mathematics of life contingencies, which is the study of models used in life insurance and annuities. The random nature of time until death and, to some extent, the random nature of future interest rates are incorporated into the basic model. The basic model requires the formulation of a loss variable, summarizing the present values of future payments, which may depend on random time and cause of decrement variables. Expense and compensating premium loadings can be added to the loss variable. The equivalence principle, that the expected present value of future losses defines current liability, is a decision rule that yields premiums and reserves.

1. Introduction. In this paper we will cover the rudiments of models built to assist in the design and management of insurance systems where the time and amount of benefit payment depend on contingencies to which humans are subject. A key building block in constructing such models will be an appropriately selected life table. It is timely that this review is conducted in 1985 because of the return of Halley's comet. Some scholars fix the date of the beginning of actuarial science as 1693. In that year, Edmund Halley published "An Estimate of the Degrees of the Mortality of Mankind, Drawn from Various Tables of Births and Funerals in the City of Breslau." This paper contained a tabular display which became known as the Breslau Table, and in later years its business applications were developed. In the sections that follow, Halley's invention will play a prominent role.

2. Life Contingencies. This term has a special meaning within actuarial mathematics as the study of models of life insurance and annuity operations, as developed in Europe and systematized by King (1887). These models are built on

1980 Mathematics Subject Classification. 62P05.
© 1986 American Mathematical Society


http://dx.doi.org/10.1090/psapm/035/849138


two assumptions: (1) time until death is a continuous type random variable, and (2) capital grows.

In order to establish notation, we let X be the continuous type random variable time until death of a newborn, usually measured in years. We have as a consequence two functions:

survival function, s(x) = Pr[X > x], and

probability density function, −s′(x).

The future lifetime of a life aged x, X − x, will be denoted by T(x). Then

Pr[T(x) > t] = s(x+t)/s(x)    (2.1)

and the probability density function is

−s′(x+t)/s(x) = f(t|x).    (2.2)

The distribution of X, from which distributions for T(x) may be derived, is frequently constructed from data and displayed as a life table. The collection of mortality data and its summarization in the form of a life table is a topic common to actuarial science, demography, reliability engineering, and biostatistics.
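As a numerical illustration of (2.1) and (2.2) (not part of the original text), the following sketch assumes a hypothetical Weibull-type survival function; the form of s(x) and all numbers are assumptions chosen only for illustration.

```python
import math

# Hypothetical survival function, for illustration only: a Weibull form
# s(x) = exp(-(x/90)^5), under which most deaths occur around ages 70-100.
def s(x):
    return math.exp(-((x / 90.0) ** 5))

def surv_T(x, t):
    """Pr[T(x) > t] = s(x+t)/s(x), equation (2.1)."""
    return s(x + t) / s(x)

def f_T(x, t, h=1e-6):
    """Density f(t|x) = -s'(x+t)/s(x), using a central difference for s'."""
    ds = (s(x + t + h) - s(x + t - h)) / (2.0 * h)
    return -ds / s(x)

p = surv_T(40, 25)     # probability a life aged 40 survives 25 more years
assert 0.0 < p < 1.0
# The conditional density should integrate to ~1 over the remaining lifetime.
total = sum(f_T(40, 0.05 + 0.1 * i) * 0.1 for i in range(1000))
assert abs(total - 1.0) < 1e-2
```

In practice s(x) would be read off a fitted life table rather than given in closed form; the structure of the two functions is the same.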

The assumption that capital grows can be reduced to the simple growth differential equation

A′(t) = δ A(t),    t > 0, δ > 0,    (2.3)

where A(t) = e^(δt) A(0) is the size of the invested fund at time t. Time is usually measured in years. The value at t = 0, the present value, of one paid at time t is exp(−δt). The present value of payments made continuously at the annual rate of one for t years, denoted by ā_t̄|, is


LIFE CONTINGENCIES 7

ā_t̄| = ∫₀ᵗ e^(−δs) ds = (1 − e^(−δt))/δ.    (2.4)

Clearly these are simple assumptions. For example, the distribution of T(x) depends on more than the time already lived, and capital does not grow at a constant rate.
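To make (2.3)-(2.4) concrete, here is a short check (an illustration, with an assumed force of interest δ = 0.05) that the closed form for ā_t̄| agrees with direct numerical integration:

```python
import math

delta = 0.05   # assumed constant force of interest, for illustration

def pv_annuity_certain(t, delta):
    """Closed form (2.4): PV of 1 per year paid continuously for t years."""
    return (1.0 - math.exp(-delta * t)) / delta

# Check the closed form against midpoint-rule integration of e^(-delta*s).
t, n = 10.0, 100_000
step = t / n
riemann = sum(math.exp(-delta * (i + 0.5) * step) * step for i in range(n))
assert abs(riemann - pv_annuity_certain(t, delta)) < 1e-6
```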

3. Premiums and Reserves. To complete a basic model for individual life insurance and annuities, we need a decision principle to use with our basic assumptions. The principle adopted in early actuarial science is the equivalence principle. That is, the expected present value of future losses shall be zero. Within insurance systems based on individual policies, this principle is applied to each policy for the purpose of determining a major element of the price (premium) for the financial protection. We will illustrate the operation of the principle with a life insurance policy paying one on the death of a life now age x and a life annuity paying continuously at an annual rate of one until the death of a life now age x. To simplify notation, the random variable T(x) will be denoted by T. The loss variable, denoted by L, is the present value of payments to the insured less the present value of payments to the insurance system. The formulae are given in Table 1 in terms of traditional actuarial notation. In this table ²Ā_x = E[exp(−2δT)]. The variance of the loss variable is useful in measuring the risk attributed to the random nature of time until death.

If a random sample of n loss variables is available, the central limit theorem can be used to make approximate probability statements about Σᵢ₌₁ⁿ Lᵢ, the aggregate present value of future losses. That is,

Pr[ Σᵢ₌₁ⁿ Lᵢ / √(n Var(Lᵢ)) > c ] ≈ 1 − Φ(c).
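This normal approximation can be checked by simulation; the zero-mean loss distribution below (a centered exponential with unit variance) is an assumption chosen only for illustration:

```python
import math, random

# Simulation check of the normal approximation. The losses L_i are assumed
# here to be exponential(1) - 1, which has mean 0 and Var(L_i) = 1.
random.seed(3)
n, reps, c = 200, 10_000, 1.0

def standardized_aggregate(n):
    """(L_1 + ... + L_n) / sqrt(n Var(L_i)) for the assumed losses."""
    total = sum(random.expovariate(1.0) - 1.0 for _ in range(n))
    return total / math.sqrt(n * 1.0)

exceed = sum(standardized_aggregate(n) > c for _ in range(reps)) / reps
tail = 1.0 - 0.5 * (1.0 + math.erf(c / math.sqrt(2.0)))   # 1 - Phi(1)
assert abs(exceed - tail) < 0.02
```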

A second example of the use of the equivalence principle, and the two basic assumptions, is provided by a whole life policy paying a unit death benefit, paid for by a level stream of premium income which is received continuously at rate P(Ā_x) by the insurance system until the death of a life age x when the policy is issued. The loss variable is


TABLE 1

                                           Insurance                        Annuity

Present value of
future payments to insured                 e^(−δT)                          ā_T̄|

Expected value of future payments
(denoted by special actuarial symbols)     Ā_x = ∫₀^∞ e^(−δt) f(t|x) dt     ā_x = ∫₀^∞ ā_t̄| f(t|x) dt

Loss variable, L,
with expected value zero                   e^(−δT) − Ā_x                    ā_T̄| − ā_x

Variance(L)                                ²Ā_x − Ā_x²                      (²Ā_x − Ā_x²)/δ²

L = e^(−δT) − P(Ā_x) ā_T̄|.    (3.1)

The principle of equivalence requires that

E[L] = 0,    (3.2)

which implies that the premium rate is given by

P(Ā_x) = Ā_x/ā_x.    (3.3)
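As a worked example of (3.1)-(3.3), suppose a constant force of mortality μ and force of interest δ (both are illustrative assumptions, not choices made in the paper); then Ā_x = μ/(μ+δ), ā_x = 1/(μ+δ), and the premium reduces to P(Ā_x) = μ:

```python
import math, random

# Illustration under an assumed constant force of mortality mu = 0.04 and
# force of interest delta = 0.06 (neither figure comes from the paper).
mu, delta = 0.04, 0.06

A_x = mu / (mu + delta)      # expected PV of a unit death benefit
a_x = 1.0 / (mu + delta)     # expected PV of a continuous life annuity of 1
P = A_x / a_x                # equivalence-principle premium rate, (3.3)
assert abs(P - mu) < 1e-12

# Monte Carlo check that the loss variable (3.1) has mean near zero at P.
random.seed(1)
n = 100_000
total = 0.0
for _ in range(n):
    T = random.expovariate(mu)                       # time until death
    annuity = (1.0 - math.exp(-delta * T)) / delta   # a-bar for T years
    total += math.exp(-delta * T) - P * annuity      # one loss draw
assert abs(total / n) < 0.01
```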

The mortality risk can be measured by the variance of the loss variable, which for this whole life policy is Var(L) = (²Ā_x − Ā_x²)/(δ ā_x)².

The basic elements of the model presented in this section are at the heart of classical actuarial science. The calculation of the variance of loss variables has received more stress in recent years because of the increased emphasis on measuring risk more precisely. In turn, this is associated with ideas in the economics of uncertainty which relate to developing appropriate prices for transferring risk.

Although the notation becomes more complex, we see that using the methods of this section a very general variable premium benefit policy can be designed. Let b(T) be the benefit paid for death at random time T and let π a(s) be the rate of premium payment at time s, where a(s) = 0 if T < s. We have the general loss variable

L = e^(−δT) b(T) − π ∫₀^∞ a(s) e^(−δs) ds.    (3.7)

Application of the equivalence principle leads to E[L] = 0, or

π = ∫₀^∞ e^(−δt) b(t) {−s′(x+t)/s(x)} dt / ∫₀^∞ { ∫₀ᵗ a(s) e^(−δs) ds } {−s′(x+t)/s(x)} dt.    (3.8)
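A quick numerical sanity check of (3.8) (illustrative only: constant-force mortality, flat benefit b(t) = 1, and level premium pattern a(s) = 1 are all assumed): with these choices the general premium must reduce to (3.3).

```python
import math

# Assumed inputs: constant force of mortality mu, so -s'(x+t)/s(x) equals
# mu e^(-mu t); b(t) = 1 and a(s) = 1. Then (3.8) should give pi = mu.
mu, delta = 0.04, 0.06

def f(t):
    return mu * math.exp(-mu * t)    # density of T under constant force

h, t_max = 0.01, 400.0
ts = [h * (i + 0.5) for i in range(int(t_max / h))]
numerator = sum(math.exp(-delta * t) * f(t) * h for t in ts)
denominator = sum(((1.0 - math.exp(-delta * t)) / delta) * f(t) * h for t in ts)
pi = numerator / denominator
assert abs(pi - mu) < 1e-3           # agrees with P = A_x / a_x = mu
```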

4. Expenses. Clearly an insurance enterprise must match its administrative expenses as well as its benefit payments with future revenue. If we choose to balance these expense payments, in an expected present value sense, with revenue from insureds, the balancing may be done by the equivalence principle and expense-augmented loss variables. Let

L = (Present value of benefits and expenses) − (Present value of future expense-loaded premiums)    (4.1)

and require that E[L] = 0.


We build on the whole life example of (3.1). Let G denote the expense loaded premium rate. Let K denote the expenses incurred at time zero on each policy, c the constant annual rate of expense for each policy and e the fraction of the premium income needed for expenses related to premium. For this case, the loss variable, as given in (4.1), becomes

L = e^(−δT) + K − [G(1−e) − c] ā_T̄|.

Applying the principle of equivalence, we have

G = (Ā_x + K + c ā_x) / [(1−e) ā_x].
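A small worked example of the expense-loaded premium (the constant-force model and the expense figures K, c, e below are all assumptions made for illustration):

```python
# Assumed constant forces and expense parameters, for illustration only.
mu, delta = 0.04, 0.06
A_x = mu / (mu + delta)        # expected PV of a unit death benefit
a_x = 1.0 / (mu + delta)       # expected PV of a life annuity of 1 per year

K, c, e = 0.05, 0.002, 0.10    # initial expense, annual expense, premium fraction

# Setting E[L] = A_x + K + c*a_x - G*(1-e)*a_x = 0 yields the loaded premium G.
G = (A_x + K + c * a_x) / ((1.0 - e) * a_x)
P = A_x / a_x                  # net premium, for comparison
assert G > P                   # expense loading raises the premium
assert abs(A_x + K + c * a_x - G * (1.0 - e) * a_x) < 1e-12
```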

5. Multiple Decrement. Life insurance policies can be terminated by events other than death. We will call the non-fatal type of decrement withdrawal and reinterpret the random variable T as time until decrement. The discrete random variable I is associated with cause of decrement and will take on the value I = 1 if the cause is death and I = 2 if the cause is withdrawal. The joint probability density function of T and I will be denoted by f(t,j|x). In biostatistics such models are called competing risk models.

Consider the somewhat more realistic version of the whole life policy introduced in (3.1). Let premium payments be at rate P(Ā_x)⁽²⁾ (the 2 is to denote that two causes of decrement are recognized) and end at decrement. If decrement is by cause 1 (death), a unit benefit is paid. If decrement is by cause 2 (withdrawal), the benefit b(T)⁽²⁾ is paid. The loss variable, recognizing both causes of decrement, is

L = e^(−δT) − P(Ā_x)⁽²⁾ ā_T̄|,    I = 1,

  = b(T)⁽²⁾ e^(−δT) − P(Ā_x)⁽²⁾ ā_T̄|,    I = 2.

There are no obstacles to following the path of (3.3), (3.4), (3.5) and (3.6) in using the principle of equivalence to complete a model for this double benefit policy. For example, the annual premium rate will become


P(Ā_x)⁽²⁾ = [ ∫₀^∞ e^(−δt) f(t,1|x) dt + ∫₀^∞ e^(−δt) b(t)⁽²⁾ f(t,2|x) dt ] / ∫₀^∞ ā_t̄| { f(t,1|x) + f(t,2|x) } dt.

The benefit b(t)⁽²⁾, the cash value, has been the subject of regulation in the United States. This regulation has been guided by the fact that if b(t)⁽²⁾ is equal to the reserve, as given by (3.5) in the single cause of decrement model, and there is no interaction between the mortality and withdrawal processes, then P(Ā_x)⁽²⁾ = P(Ā_x). That is, with this special withdrawal benefit, the premium rate is not changed by the payment of a withdrawal benefit.

6. Multiple Lives. The present value of a set of payments may depend on the times until death of two or more lives. For example, a life annuity that pays until the survivor of two lives dies, or a life insurance policy that pays upon the first of two deaths. In many practical cases the model of Section 3 can be used without modification except for the distribution. We consider two new random variables related to the order statistics:

T(xy) = min[T(x), T(y)]    (joint life)

T(x̄ȳ) = max[T(x), T(y)]    (survivor)

Clearly

T(x) + T(y) = T(xy) + T(x̄ȳ)

and

T(x) T(y) = T(xy) T(x̄ȳ).

If we assume that T(x) and T(y) are independent, then

Pr[T(xy) > t] = Pr[T(x) > t] Pr[T(y) > t]    (6.1)

and


Pr[T(x̄ȳ) > t] = Pr[T(x) > t] + Pr[T(y) > t] − Pr[T(x) > t] Pr[T(y) > t].    (6.2)

Formulas (6.1) and (6.2) are survival functions for the joint life status and the survivor status. These survival functions provide a description of the distributions of T(xy) and T(x̄ȳ) in the form of distributions that are already available. The program of Section 3 can be followed to produce premiums and reserves.
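Formulas (6.1) and (6.2) can be illustrated numerically; the independent exponential lifetimes below are an assumption made only for the sketch:

```python
import math

# Illustration with assumed independent exponential lifetimes, so that
# Pr[T(x) > t] = exp(-mu_x t) and Pr[T(y) > t] = exp(-mu_y t).
mu_x, mu_y, t = 0.03, 0.05, 10.0

p_x = math.exp(-mu_x * t)
p_y = math.exp(-mu_y * t)

joint = p_x * p_y                      # (6.1): both lives survive to t
survivor = p_x + p_y - p_x * p_y       # (6.2): at least one life survives to t

assert joint <= min(p_x, p_y) <= max(p_x, p_y) <= survivor
# Reflecting T(x) + T(y) = T(xy) + T(xy-bar): the two status probabilities
# sum to the two single-life probabilities.
assert abs(joint + survivor - (p_x + p_y)) < 1e-12
```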

7. Indexed Benefits. The general insurance described in connection with (3.7) and (3.8) raises the possibility of benefit amounts and premium rates that are not constant. The benefit amount function b(t) and the premium payment rate function a(t) may be determined in advance. However, there are several examples where these functions are related to an index, J. These examples include the following:

(a) Variable life insurance, where b(t,J), the death benefit at time t, may depend on the value of an investment index J.

(b) Variable annuities where a(s,J), the annuity payment rate at time s may depend on the value of an investment, or price, index.

(c) Pension systems where the initial retirement benefit rate may depend, in whole or in part, on salary history.

Unless the benefit stream and the premium income stream are based on the same index, an additional source of possible divergence between the values of the two streams is created.

8. Variable Interest. The experience of the past two decades in the United States is that the second basic assumption listed in Section 2 is not true. Capital does not grow at a constant annual rate. There are three ways to modify our basic model to recognize this fact. First, if the changes in δ are deterministic, the constant δ in (2.4) can be replaced by the function δ(t) and result (2.4) for the present value of payments made continuously at an annual rate of one for t years will become


∫₀ᵗ exp(−∫₀ˢ δ(y) dy) ds

and

A(t) = A(0) exp(∫₀ᵗ δ(y) dy).

Second, a time series model for δ(t), the future spot continuous rate of interest, could be constructed. In this case, the expected present value of a unit paid at the death of a life now age x would become

Ā_x = E_T E_δ [ exp(−∫₀ᵀ δ(θ) dθ) ].

Clearly, general results are difficult for this alternative, because they depend on the nature of the stochastic model for δ(t).

A third alternative is to define m interest rate scenarios, {δⱼ(t); t ≥ 0, j = 1, 2, ..., m}, such that the probability of scenario j is pⱼ > 0, and Σⱼ₌₁ᵐ pⱼ = 1. This approach permits judgmental determination of the m scenarios and their probabilities. Under this approach

Ā_x = E_J E_T [ exp(−∫₀ᵀ δ_J(θ) dθ) ]

    = Σⱼ₌₁ᵐ pⱼ ∫₀^∞ exp(−∫₀ᵗ δⱼ(θ) dθ) f(t|x) dt.
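A sketch of the scenario approach with assumed inputs (two flat-rate scenarios and constant-force mortality, none of which come from the paper):

```python
# Assumed for illustration: two flat interest scenarios delta = 4% and 8%,
# each with probability 0.5, and constant force of mortality mu = 0.05,
# so f(t|x) = mu e^(-mu t) and each scenario gives A_x = mu/(mu+delta).
mu = 0.05
scenarios = [(0.5, 0.04), (0.5, 0.08)]

A_x = sum(p * mu / (mu + delta) for p, delta in scenarios)

# Scenario averaging differs from simply using the average rate of 6%:
A_flat = mu / (mu + 0.06)
assert A_x > A_flat   # PV is convex in delta, so mixing raises the expected PV
```

The final assertion is an instance of Jensen's inequality: averaging present values over rate scenarios gives a larger value than discounting at the average rate.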

Another possibility is that the risk of variability in interest rates can be shifted to policyholders by an artful choice of economic indices as illustrated in Section 7.

The ideas of this section are covered in papers by Boyle (1976), Hickman (1985), and Panjer and Bellhouse (1980, 1981).


9. Summary. Actuarial mathematics is an old and successful application of ideas from mathematics in business and the social sciences. In this paper the basic model of life contingencies was reviewed. As used for many years, the model was deterministic. The current approach, as shown in Bowers et al. (1984), places emphasis on the random nature of time until death. The next step will be to incorporate stochastic interest models in order to increase the realism of the model and to price risk more precisely.

BIBLIOGRAPHY

1. Bowers, N.L., H.U. Gerber, J.C. Hickman, D.A. Jones, and C.J. Nesbitt, Actuarial Mathematics, Society of Actuaries, Itasca, Illinois, 1984.

2. Boyle, P.P., "Rates of Return as Random Variables", Journal of Risk and Insurance, 43 (1976), 693-713.

3. Hickman, J.C., "Why Not Random Interest", The Actuary, 19, No. 2 (1985).

4. King, G., Institute of Actuaries' Textbook, Part II, Charles and Edwin Layton, London, 1st ed. 1887, 2nd ed. 1902.

5. Panjer, H.H. and D.R. Bellhouse, "Stochastic Modelling of Interest Rates with Applications to Life Contingencies", Journal of Risk and Insurance, 47 (1980), 91-110.

6. Panjer, H.H. and D.R. Bellhouse, "Stochastic Modelling of Interest Rates with Applications to Life Contingencies-Part II", Journal of Risk and Insurance, 48 (1981), 628-637.

SCHOOL OF BUSINESS, DEPARTMENT OF STATISTICS, UNIVERSITY OF WISCONSIN, MADISON, WISCONSIN 53706


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

Models in Risk Theory

HARRY H. PANJER

ABSTRACT. In many insurance contexts, the insurer is interested in modelling the effect of total claims on the company or on some subset of the company. The modelling of claims for an aggregation of risks is reviewed. Theoretical justifications of the choices of models, using concepts of mixing and compounding, are given. Numerical procedures for the evaluation of total claims and related quantities, such as stop-loss premiums, are emphasized.

1. Introduction. The subject of risk theory is concerned with the random fluctuations in the financial results of insurers or other risk enterprises. The random fluctuations arise from several sources; namely, fluctuations in the numbers of insurance claims made, fluctuations in the sizes of the claims that are made, fluctuations in the costs of running the insurance company, fluctuations in the premium income and fluctuations in the investment income. Risk theory has traditionally been concerned only with modelling total claim costs, with the frequency and sizes of claims considered separately. Early contributors were Filip Lundberg (1909) and Harald Cramér (1930), who examined the progress of an insurer over time by considering the claims as arising from a Poisson or related stochastic process. Very little attention was paid to the modelling of company expenses and investment income until recently. Principal books in the area of risk theory began appearing in the late 1960's. They include Beard, Pentikainen and Pesonen (1969, 2nd ed. 1977, 3rd ed. 1984), Seal (1969, 1978), Buhlmann (1970), Beekman (1974), and Gerber (1979). The third edition of Beard, Pentikainen and Pesonen (1984) provides the broadest definition of risk theory and considers all aspects of insurance company operation. These books provide extensive bibliographies and should be consulted by the interested reader.

Let $\{N(t);\ t > 0\}$ denote a stochastic process representing the number of claims arising from a portfolio of risks in the time interval $(0, t]$. If the sizes of the successive claims are labelled $X_1, X_2, X_3, \ldots$, then the total (or aggregate) claims in the time interval $(0, t]$ is

1980 Mathematics Subject Classification. Primary 62P05, 62E15, 62E30.

Supported by the Natural Sciences and Engineering Research Council of Canada and The Actuarial Education Research Fund.

© 1986 American Mathematical Society

http://dx.doi.org/10.1090/psapm/035/849139


$$S(t) = X_1 + X_2 + \cdots + X_{N(t)} \qquad (1)$$

where it is understood that S(t) = 0 if N(t) = 0.

Let U(t) denote the amount of funds available to the insurer at time t to protect against excessive claims. This quantity is called the "risk reserve" and can be thought of as the surplus (or some allocation of surplus) appearing in the insurer's balance sheet. Then U(t) can be written as

$$U(t) = U(0) + P(t) - S(t) \qquad (2)$$

where $P(t)$ is the total premium income to date. Ruin theory deals with evaluating the probability that $U(t)$ becomes negative, so that the company is "ruined". Typically, the rate of premium income is assumed to be constant, so that $P(t) = Pt$. When this assumption is made, the only stochastic term on the right-hand side of (2) is $S(t)$. Note that in (2) various aspects of company operations, such as investment income and company expenses, are ignored. When these elements are considered, the results often lead to mathematical intractability, and computer simulation is used to obtain numerical results.

In this paper, we focus on models for the distribution of S(t) for some fixed time period such as one year. Without loss of generality, for some fixed value of t, we write

$$S = X_1 + X_2 + \cdots + X_N \qquad (3)$$

where $S = 0$ if $N = 0$. It is assumed that $N, X_1, X_2, X_3, \ldots$ are mutually independent random variables and that the individual claim sizes $X_1, X_2, X_3, \ldots$ are identically distributed with common distribution function $F_X(x)$. Thus claim sizes and claim frequency are assumed to be independent. Then the distribution function of the total claims is

$$F_S(x) = \sum_{k=0}^{\infty} p_k\, F_X^{*k}(x) \qquad (4)$$

where $F_X^{*k}$ is the $k$-fold convolution of $F_X$ and $p_k = \Pr\{N = k\}$. Letting


$$F_X(x) = \sum_{i=1}^{m} \frac{\lambda_i}{\lambda}\, F_i(x). \qquad (9)$$

This is a very desirable property, since it is clear that an insurer is able to combine policies and treat the aggregation as it did the individual policies. Similarly, it can be shown that if claim sizes are partitioned into disjoint classes $\{A_i,\ i = 1, 2, \ldots, m\}$, with

$$a_i = \Pr\{X \in A_i\}, \qquad i = 1, 2, \ldots, m \qquad (10)$$

then the numbers of claims $N_1, N_2, \ldots, N_m$ corresponding to size classes $A_1, A_2, \ldots, A_m$ respectively are independent Poisson random variables with means $\lambda a_1, \lambda a_2, \ldots, \lambda a_m$. This is a particularly useful property for claim size distributions with a maximum or minimum benefit, or with deductibles, since the maximum or minimum benefits can then be treated separately if desired. The Poisson assumption has been tested empirically by several authors on portfolios consisting of a large number of life insurance risks.

The negative binomial distribution arises naturally in claim frequency models as a mixed Poisson distribution with probability generating function

$$P_N(z) = \int P_N(z \mid \theta)\, dU(\theta) = \{1 - \beta(z-1)\}^{-r}, \qquad (11)$$

where $\beta > 0$, $r > 0$, $U(\theta)$ is a gamma distribution function and $P_N(z \mid \theta)$ is a Poisson pgf. Here it is assumed that, for a known risk parameter $\theta$, the number of claims has a Poisson distribution. The risk parameter is itself assumed to be distributed over the population of risks (or "collective") in accordance with a distribution referred to as the "structure function" or the mixing distribution; in this case, a gamma.

In an insurance context, this model is used to describe the heterogeneity of risks that are in a single risk classification of the insurer. For example, in auto insurance an insurer may charge a different premium for those who drive more than 10 miles to work and those who drive less than 10 miles to work. Within each of these categories there will be heterogeneity with respect to the risk parameter under consideration.


The " sp read" of the mixed Poisson frequency will be greater than the Poisson

distribution in the sense that the variance exceeds the mean. An example, taken

from Buhlmann (1970, p . 107) is given in Table 1. It is based on accident claims in

1961 for a class of automobile insurance in Switzerland.

TABLE 1

    Number     Observed      Fitted     Fitted Negative
    of Claims  Frequencies   Poisson    Binomial
    0          103,704       102,630    103,724
    1           14,075        15,922     13,990
    2            1,766         1,235      1,857
    3              255            64        245
    4               45             2         32
    5                6             0          4
    6                2             0          1

The Poisson and negative binomial distributions are fitted by the method of maximum likelihood. It can be seen that the fitted Poisson distribution is very light in the tail and fits poorly, as demonstrated by the chi-squared test of fit, which has a significance level of less than .01% for these data. The negative binomial appears to fit much better. Unfortunately, when the chi-square test is used to examine the fit of the negative binomial distribution, the significance level of the corresponding test statistic is less than .2%, suggesting that further models should be tried.

The negative binomial distribution can also arise as the distribution of the number of claims when the number of accidents has a Poisson distribution and the number of claims per accident has a logarithmic distribution with probability generating function

$$Q(z) = \frac{\log(1 - qz)}{\log(1 - q)}, \qquad 0 < q < 1. \qquad (12)$$

Under the usual independence assumptions, the number of claims is compound


Poisson with pgf

$$P_N(z) = \exp\{\lambda[Q(z) - 1]\} = \left[\frac{1 - qz}{1 - q}\right]^{\lambda/\log(1-q)} = \{1 - \beta(z-1)\}^{-r} \qquad (13)$$

where $\beta = q/(1-q) > 0$ and $r = -\lambda/\log(1-q)$. This example demonstrates how one can model multiple claims from accidents.

The negative binomial can arise in a third plausible context; namely, a "positive contagion" model. Suppose that the number of claims $N(t)$ is a non-homogeneous birth process with the intensity function of the $(n+1)$-st claim given by

$$\lambda_n(t) = \lambda(t)\{\alpha + \beta n\}, \qquad \alpha, \beta > 0. \qquad (14)$$

Hence, the rate at which claims occur increases with the number of claims. This model can be used to describe the total number of claims in contexts of accident-proneness or the spread of disease.

The special case of the negative binomial with $r = 1$ is called the geometric distribution. It may be viewed as a mixed Poisson with an exponential mixing function.

The binomial distribution with probability generating function

$$P(z) = \{1 + q(z-1)\}^{n} \qquad (15)$$

arises naturally as the distribution of the number of claims for a closed group of identical independent risks from which at most one claim can arise per risk. This situation occurs in medical and dental expense insurance for which an annual deductible may apply, so that all "claims" in a year are consequently grouped together.

The binomial also arises as a "negative contagion" non-homogeneous birth process with claim intensity

$$\lambda_n(t) = \lambda(t)\{\alpha + \beta n\}, \qquad n = 0, 1, \ldots, r \qquad (16)$$

where $\alpha$ is positive and $\beta$ negative and where


$$\alpha + \beta r = 0 \qquad (17)$$

and where $\lambda_n(t) = 0$ for $n > r$. The intensity function (16) indicates a decreasing propensity to claim as the number of claims increases. For a portfolio of $r$ identical life insurance policies issued at age $x$ at time 0, the intensity function (16) is

$$\lambda_n(t) = \mu_{x+t}\{r - n\} \qquad (18)$$

where $\mu_{x+t}$ is the "force of mortality" of a life aged $x+t$. Equation (18) simply states that the total intensity for the portfolio is the sum of the intensities for the remaining lives.

The class of distributions consisting of the Poisson, binomial and negative binomial (including the geometric) forms a natural class from the point of view of computing the distribution function of total claims. Each satisfies the relation

$$\frac{p_n}{p_{n-1}} = a + \frac{b}{n}, \qquad n = 1, 2, 3, \ldots \qquad (19)$$

It is easy to show that these are the only distributions satisfying (19) (Panjer, 1981, and Sundt and Jewell, 1981). This relation yields the following relationship between the Laplace transforms of the individual claim size (severity) distribution and the total claims distribution:

$$L_S'(z) = a\, L_X(z)\, L_S'(z) + (a+b)\, L_X'(z)\, L_S(z). \qquad (20)$$

When claim amounts are defined only on the positive integers, inversion of (20) yields

$$f_S(x) = \sum_{y=1}^{x} \left(a + \frac{b\,y}{x}\right) f_X(y)\, f_S(x-y), \qquad x = 1, 2, \ldots \qquad (21)$$

This recursion is especially useful in evaluating the distribution of total claims. Calculating the distribution up to a maximum level $m$ requires a number of computations of order $m^2$ using (21) but of order $m^3$ using (4). Clearly, for large values of $m$, such as $m = 1000$, (21) is much faster than (4).


When the severity distribution is non-arithmetic, it can be approximated in several ways (cf. Gerber, 1982, and Panjer and Lutek, 1983). Alternatively, if the severity distribution is of the continuous type, then the analogue of (20) is

$$f_S(x) = p_1 f_X(x) + \int_0^{x} \left(a + \frac{b\,y}{x}\right) f_X(y)\, f_S(x-y)\, dy \qquad (22)$$

which is a Volterra integral equation of the second kind. Baker (1977) and others provide procedures for solving such equations numerically.

We now proceed to examine a larger class of frequency models that lend themselves naturally to the recursive techniques described in this section.

3. Compound Claim Frequency Distributions. We generalize the class of distributions considered in Section 2 to distributions with probability generating functions of the form

$$P_N(z) = P_1(P_2(z)) \qquad (23)$$

where $P_1(\cdot)$ and $P_2(\cdot)$ satisfy (19), except that for $P_2(\cdot)$ the initial value may be zero, so that (19) need hold only from $n = 2$. It can be seen that a large number of 2-, 3- or 4-parameter distributions are generated when all possible choices of $P_1(\cdot)$ and $P_2(\cdot)$ are considered. Consequently (and for other reasons to be discussed in Section 4), we focus on a Poisson distribution for $P_1(\cdot)$ and allow a free choice of $P_2(\cdot)$. Distributions of the form (23) are called contagious, stopped or generalized distributions in various applications. Johnson and Kotz (1969) and Douglas (1980) provide extensive bibliographies for these distributions but do not consider the computational aspects that we are interested in.

If equation (23) is used for the claims frequency, then the LT of total claims is, from (6),

$$L_S(z) = P_1(P_2(L_X(z))). \qquad (24)$$

In order to evaluate the distribution of $S$, one first evaluates the distribution with LT given by $L(z) = P_2(L_X(z))$ and then repeats the procedure for


$$L_S(z) = P_1(L(z))$$

using the result of the first part. Consequently, the computing techniques and procedures require about twice the amount of time as those in Section 2.

For the secondary distribution, we may choose any of the binomial, Poisson, geometric or negative binomial distributions; or, if we remove the probability mass at zero, we may choose the truncated versions of the above distributions, the logarithmic distribution or the extended truncated negative binomial distribution with generating function of the form

$$P(z) = \frac{\{1 - \beta(z-1)\}^{-r} - (1+\beta)^{-r}}{1 - (1+\beta)^{-r}}. \qquad (25)$$

It is "extended" in the sense that $r$, which is normally positive, can be extended to negative values. It is easy to show that if $P_1(\cdot)$ is Poisson, binomial, negative binomial or geometric, then the use of the truncated form of $P_2(\cdot)$ does not generate any new distributions, since the resulting distributions are of the same form (but with different parameters) as those obtained by using the untruncated versions. Consequently, the only new distributions obtained are those using the logarithmic and extended truncated negative binomial. As has been seen in Section 2, the logarithmic distribution for $P_2(\cdot)$ combined with the Poisson distribution for $P_1(\cdot)$ yields the negative binomial distribution. The best known distributions, with $P_1(\cdot)$ being Poisson, are the Neyman Type A distribution (with $P_2(\cdot)$ Poisson), the

Polya-Aeppli distribution (with $P_2(\cdot)$ geometric), the Poisson-binomial and the Poisson-Pascal (with $P_2(\cdot)$ negative binomial). The Poisson-Inverse Gaussian distribution is the case where $P_2(\cdot)$ is extended truncated negative binomial with $r = -.5$, although its name arises as a result of other considerations. The generalized Poisson-Pascal has for $P_2(\cdot)$ any extended truncated negative binomial. It is the only one for which the skewness can be made arbitrarily large for fixed mean and variance. This is a useful property in fitting heavily skewed distributions. The Poisson-Inverse Gaussian was fitted by the method of maximum likelihood to the data in Table 1. The results are given in Table 2. The Poisson-Inverse Gaussian distribution appears to give a better fit than the negative binomial. In fact, the significance level of the test statistic is 85%, indicating that the fit is virtually perfect. Note that this is achieved with only two parameters.

TABLE 2

    Number     Observed      Fitted Poisson-
    of Claims  Frequencies   Inverse Gaussian
    0          103,704       103,710
    1           14,075        14,055
    2            1,766         1,785
    3              255           254
    4               45            40
    5                6             7
    6                2             1

4. Mixed Claim Frequency Models. In Section 2, it was shown that the negative binomial distribution could arise as a mixed Poisson distribution with a gamma mixing function. We now consider mixed Poisson distributions more generally. The probability generating function of a mixed Poisson distribution can be written as

$$P_N(z) = \int P_N(z \mid \theta)\, dU(\theta) \qquad (26)$$

where

$$P_N(z \mid \theta) = e^{\theta(z-1)}. \qquad (27)$$

Hence,

$$P_N(z) = L_U(1 - z), \qquad (28)$$

where $L_U$ is the Laplace transform of the mixing distribution $U$.

The simplest mixture is a mixture of two Poisson distributions (which requires 3 parameters). Other mixtures can be obtained by using parametric forms of mixing distributions. If the mixing distribution is infinitely divisible, then the resulting mixed Poisson distribution can be shown to be compound Poisson. This result motivates us to find links between the class of distributions considered in the previous section and those in this section. It is easy to show that the Neyman Type A distribution is a mixed Poisson distribution with a Poisson mixing function. Similarly, if the mixing function is exponential, the mixed Poisson distribution is a


geometric distribution. Table 3 gives some numerical results for some data on accident frequencies given in Seal (1969, p. 27) and Trobliger (1961). Trobliger originally assumed that drivers were of two types in unknown proportions. He estimated that 94.03% of risks were normal or "good" with an expected number of claims of .1089, while 5.97% were "bad" with an expected number of claims of .7. He found that the mixture of two Poisson distributions provided an adequate fit with a significance level of 44%. If we fit the geometric distribution to the same data, the fit is only slightly poorer but still adequate, with a significance level of 29%.

TABLE 3

    Number     Observed      Fitted     Fitted
    of Claims  Frequencies   Mixture    Geometric
    0          20,592        20,589     20,620
    1           2,651         2,656      2,598
    2             297           289        327
    3              41            44         41
    4               7             7          5
    5               0             1          1
    6               1             0          0

In addition to the geometric distribution having only one parameter, it has the advantage that, as a mixed Poisson distribution, it has a more plausible interpretation; namely, that there is a continuous exponential distribution of "accident-proneness" among drivers, not just two classes as in Trobliger's model.
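Both fitted columns of Table 3 can be reproduced: the mixture from Trobliger's reported parameters, and the geometric from its maximum likelihood fit, which simply matches the sample mean. A sketch:

```python
import math

def pois(mu, k):
    return math.exp(-mu) * mu**k / math.factorial(k)

observed = [20_592, 2_651, 297, 41, 7, 0, 1]
n = sum(observed)

# Trobliger's two-point mixture: 94.03% "good" drivers (mean .1089),
# 5.97% "bad" drivers (mean .7)
mix = [n * (0.9403 * pois(0.1089, k) + 0.0597 * pois(0.7, k)) for k in range(7)]

# geometric p_k = beta^k/(1+beta)^{k+1}: the MLE of beta is the sample mean
beta = sum(k * c for k, c in enumerate(observed)) / n
geo = [n * beta**k / (1.0 + beta) ** (k + 1) for k in range(7)]

print([round(x) for x in mix])   # close to the fitted-mixture column of Table 3
print([round(x) for x in geo])   # close to the fitted-geometric column
```

Small differences from the published columns reflect rounding of the reported parameter estimates.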

If the mixing distribution is the inverse Gaussian, the mixed Poisson distribution is a Poisson-Inverse Gaussian. This is infinitely divisible and can be written as a compound Poisson distribution with $P_2(\cdot)$ given in Section 3. Similarly, the generalized Poisson-Pascal can be written as a mixed Poisson distribution with a stable mixing function of the form


$$u(x) = c\, e^{-\theta x} f_0(x), \qquad x > 0 \qquad (29)$$

where $c$ is a normalizing constant and $f_0$ is a stable density with series representation

$$f_0(x) = -\frac{1}{\pi x} \sum_{k=1}^{\infty} \frac{\Gamma(k\alpha + 1)}{k!}\, \left(-x^{-\alpha}\right)^{k} \sin(k\alpha\pi). \qquad (30)$$

Many other mixtures could be considered (cf. Panjer and Willmot, 1983, 1985 and Willmot 1985).

5. Applications. Many applications in insurance require the calculation of the distribution of total claims for a portfolio of risks. The distribution describes the random nature of the risks accepted by the insurer. Various strategies can be used to change the distribution of an insurer's total claims. They include changes to deductibles, changes to maximum benefits, or various forms of reinsurance.

Stop-loss reinsurance is an insurance on the total claims with a very large deductible, typically 125% of the mean. Its expected claim cost is

$$\int_d^{\infty} (x - d)\, dF_S(x) = E[S] - \int_0^{d} \{1 - F_S(s)\}\, ds. \qquad (31)$$

From (31) it can be seen that if the distribution of total claims can be calculated, then so can the net stop-loss premium. Stop-loss reinsurance is frequently used in conjunction with experience rating of group insurance plans and with reinsurance treaties between insurers.

Many other applications can arise. The reader is challenged to find them in the literature in connection with his or her own work.

BIBLIOGRAPHY

1. C.T.H. Baker, The Numerical Treatment of Integral Equations, Clarendon Press, Oxford, 1977.


2. R.E. Beard, T. Pentikainen and E. Pesonen, Risk Theory, Chapman and Hall, 1st ed. 1969, 2nd ed. 1977, 3rd ed. 1984.

3. J.A. Beekman, Two Stochastic Processes, Almquist, Stockholm, 1974.

4. H. Buhlmann, Mathematical Methods in Risk Theory, Springer-Verlag, New York, 1970.

5. H. Cramér, On the Mathematical Theory of Risk, Skandia Jubilee Volume, Stockholm, 1930.

6. J.B. Douglas, Analysis with Contagious Distributions, International Co-operative Publishing House, Fairland, Maryland, 1980.

7. H.U. Gerber, An Introduction to Mathematical Risk Theory, University of Pennsylvania, Philadelphia, 1979.

8. H.U. Gerber, On the Numerical Evaluation of the Distribution of Aggregate Claims and its Stop-Loss Premiums, Ins.: Math. & Econ. 1 (1982), 13-18.

9. N.L. Johnson and S. Kotz, Discrete Distributions, Wiley and Sons, New York, 1969.

10. F. Lundberg, Über die Theorie der Rückversicherung, Trans. VI Int. Congr. Act. 1 (1909), 877-948.

11. H.H. Panjer, Recursive Evaluation of a Family of Compound Distributions, ASTIN Bull. 12 (1981), 22-26.

12. H.H. Panjer and B.W. Lutek, Practical Aspects of Stop-Loss Calculations, Ins.: Math. & Econ. 2 (1983), 159-177.

13. H.H. Panjer and G.E. Willmot, Compound Poisson Models in Actuarial Risk Theory, J. Econometrics 23 (1983), 63-76.

14. H.H. Panjer and G.E. Willmot, Difference Equation Approaches in Evaluation of Compound Distributions (to appear).

15. H.L. Seal, Stochastic Theory of a Risk Business, Wiley and Sons, New York, 1969.

16. H.L. Seal, Survival Probabilities, Wiley and Sons, New York, 1978.

17. B. Sundt and W.S. Jewell, Further Results on Recursive Evaluation of Compound Distributions, ASTIN Bull. 12 (1981), 27-39.

18. A. Trobliger, Mathematische Untersuchungen zur Beitragsrückgewähr in der Kraftfahrversicherung, Bl. Deuts. Gesell. Versich. Math. 5 (1961), 327-348.


19. G.E. Willmot, The Tails of Some Compound Distributions (to appear).

DEPARTMENT OF STATISTICS AND ACTUARIAL SCIENCE, UNIVERSITY OF WATERLOO, WATERLOO, ONTARIO N2L 3G1


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

Loss Distributions

STUART A. KLUGMAN

ABSTRACT. Insurance programs are designed to transfer the risk of random losses. Of primary importance in analyzing risks is an estimate of the probability distribution of the loss, given that one has occurred. We demonstrate that knowing the mean of the distribution is not sufficient and that parametric models are superior to empirical ones. We then proceed to develop a number of appropriate parametric distributional models. Five methods of parameter estimation are then presented, with emphasis on maximum likelihood. This is followed by an illustration based on basic dental coverage that includes estimating the effects of inflation and the imposition of deductibles and limits. We close with the development of confidence intervals for the estimates.

1. Introduction. The risks assumed under an insurance contract usually involve three aspects of the covered situation. They are (1) the occurrence of the covered event, and if it occurs, (2) the time at which the insurance settlement is paid, and (3) the amount paid. While all three elements are part of any insurance, they need not all be random quantities. For example, in whole life insurance death is certain and the amount paid usually is fixed; the time of payment is the relevant quantity. In most property, liability, and health insurances it is the other two quantities that are most important, as the time to settlement is usually short.

A simple probability model for the cost of the insurance is $P = W e^{-\delta T} X$, where $W$ is zero or one depending on whether or not the covered event occurs, $T$ is the time from policy issue to the payment of the claim (this time includes both the time until the incident and the additional time until settlement; the latter could be considerable in a liability case), $\delta$ is the force of interest (assumed known and constant) and $X$ is the amount paid. The random variables $T$ and $X$ are not defined when $W = 0$. The net premium is the expected value of $P$; the actual premium would include an additional amount for expenses and profit. If we assume that $T$ and $X$ are independent (this is not always true), then $E(P) = wE(e^{-\delta T})E(X)$, where $w = E(W) = \Pr\{W = 1\}$ is called the frequency and

1980 Mathematics Subject Classification. Primary 62P05, 62F10. © 1986 American Mathematical Society

31

http://dx.doi.org/10.1090/psapm/035/9873


$E(X)$ is called the severity.

Of course, in many insurances the covered event can happen several times during the contract period. Two ways to incorporate this are to (1) let $W$ still be 0 or 1 and let $X$ be the total paid on all of the claims during the period, or (2) let $W$ be the number of events and $X$ be the average paid per claim. In the second case the expected value of $X$ is unchanged from the simple model. In the first case the appropriate model for $X$ is the risk model discussed in the article by Panjer in this volume. That article also discusses distributional models for $W$ as given in the second case. In this article we will be concerned only with $X$ as the loss on a single claim.

The discussion will be presented as follows. In Section 2 an argument is provided that indicates it is not sufficient to estimate just the expected value of $X$. Instead, knowledge of the entire probability distribution of $X$ is required. That section also contains a discussion of the advantages of using parametric continuous distributions as models. In Section 3 a number of candidates for parametric distributional models are given. Estimation techniques for fitting these models are given in Section 4 and some applications are provided in Section 5. Most of the material is taken from Hogg and Klugman (1984). A concise description of most of the procedures appears in Hogg and Klugman (1983). These two references will not be mentioned again. Other sources will be referenced as they arise.

2. Parametric Distributional Models — A Justification. If all that is needed is an estimate of $E(X)$, the sample mean from a survey of past losses would almost always be sufficient. The following example illustrates why it is important to have an estimate of the probability distribution of $X$.

Suppose an automobile physical damage policy has a deductible of 50. Let $Y$ be the amount of the loss, with $X$ the amount paid by the insurance. The relationship is

$$X = \begin{cases} 0, & Y \le 50 \\ Y - 50, & Y > 50. \end{cases}$$

This is called truncation, since we are not able to observe losses less than 50. By the earlier definition $W = 1$ when $Y > 0$, so we cannot directly estimate $w$. This allows for consistency: if the coverage changes, as indicated below, the frequency will not change. However, the nature of insurance company records is such that it is not possible to distinguish between the case where no event occurs and the one


where the event occurs but nothing is paid.

Now suppose that on 1000 policies we observe 100 payments averaging 200. It is important to note that $100/1000 = .1$ is not an estimate of $w$ and 200 is not an estimate of the severity. If $F(y)$ is the distribution function (d.f.) and $f(y)$ is the probability density function (p.d.f.) of $Y$, then .1 is an estimate of

$$w\Pr(Y > 50) = w[1 - F(50)]$$

and 200 is an estimate of

$$E(X \mid Y > 50) = \int_{50}^{\infty} (y - 50)\, f(y)\, dy \Big/ [1 - F(50)].$$

All is not lost, since the product $.1(200) = 20$ is an estimate of

$$w\Pr(Y > 50)\, E(X \mid Y > 50) = w\int_{50}^{\infty} (y - 50)\, f(y)\, dy = wE(X),$$

precisely the net premium (we ignore $E(e^{-\delta T})$ for the rest of this presentation). We see there is no problem in using sample averages to estimate the pure premium when the policy is unchanged.

Now suppose we want to issue the same policy but with a deductible of 25. Since the sample does not include observations on losses between 25 and 50, the sample mean is of no value. Suppose instead we had fitted a probability distribution to the 100 observed losses. As will be demonstrated later, it is not difficult to fit a distribution for $Y$ when the observations are from $X$, a function of $Y$. For example, let the fitted distribution have d.f. and p.d.f.

$$F(y) = 1 - (.0085y + 1)e^{-.0085y}, \qquad y > 0$$

$$f(y) = .00007225\, y\, e^{-.0085y}, \qquad y > 0.$$

From here on, when defining functions, take them to be zero at all points not indicated in the description. We then estimate the frequency as

$$w = .1/[1 - F(50)] = .1/.93162 = .10734$$

and the severity as

$$E(X) = \int_{50}^{\infty} (y - 50)\, f(y)\, dy = 186.51667.$$

The net premium is $(.10734)(186.51667) = 20.02$. Since this was based on a fitted model, we do not expect the net premium estimate to agree exactly with the one taken directly from the sample mean.

We now have enough information to estimate the net premium for a deductible of 25. It is

$$.10734\int_{25}^{\infty} (y - 25)\, f(y)\, dy = 22.59.$$

A second problem that can be solved by using the probability distribution is measuring the effect of inflation. Suppose the data on which the estimates are based were collected on losses from one year ago and our goal is to set premiums for next year. Further suppose that over the two years the cost of repairing damaged automobiles has increased uniformly by 15%. That is, the distribution of future losses $Y'$ is equal to that of $1.15Y$. Retaining the deductible of 50, we see that multiplying the old net premium by 1.15 is not correct, since some events that were not reported last year will lead to payments next year, as they will now cost more than 50 to repair. The net premium will have to increase by more than 15%. The correct calculation uses

$$X' = \begin{cases} 0, & 1.15Y \le 50 \\ 1.15Y - 50, & 1.15Y > 50. \end{cases}$$

Assuming that $w$ is unchanged, the new net premium is


$$.10734\int_{50/1.15}^{\infty} (1.15y - 50)\, f(y)\, dy = .10734(221.53760) = 23.78,$$

an increase of 18.8%.

One final point in favor of probability models needs to be made. Had the issue been one of raising the deductible, we could have made direct use of the 100 observations merely by adjusting each loss to what would be paid under the higher deductible. However, when working with raw data there is the possibility that a few unusual values in the sample will distort the outcome. We would expect, for example, that increasing the deductible from 50 to 100 would produce less than twice the savings of increasing it from 50 to 75. An unusual sample might indicate just the opposite. The smooth p.d.f. from a fitted model will not produce such inconsistencies.

There are additional reasons for using continuous models other than those presented here. One is that they provide for interpolation as well as extrapolation. Data on individual losses are often not available. Instead, data are collected in frequency table form, as in the example in Section 4. Inflation or a change in the deductible may require probabilities for intervals other than those given in the grouping. Interpolation will then be needed, and a continuous p.d.f. provides a smooth means of doing it.

Most continuous distributions are found as members of a parametric family. For example, the family of normal probability distributions is indexed by the parameters $\mu$ and $\sigma$, the mean and standard deviation. Three advantages of parametric families are:

(1) Comparisons (either with other insured populations or within the same group over time) are easier to make, as they involve the comparison of vectors of low dimension rather than the comparison of functions.

(2) Tests of homogeneity can be conducted. That is, two different risk groups (e.g., males and females) can be tested to determine if their experience is sufficiently similar so that they can be combined into one large class.

(3) Questions of parsimony can be addressed. The simplest model (fewest parameters) that adequately describes the data can be identified.

A final comment is in order. The techniques presented here are useful for describing the situation that produced the data. Great care should be taken in using these models as descriptions of future experience. Both the continuation of trends (e.g., inflation) and dramatic shifts (e.g., a large increase in the variability of interest rates) can cause the future to be fundamentally different from the past. At best we are "predicting the past" and from there we can try to analyze the future.

3. Heavy Tailed Probability Models. Most loss distributions will have a p.d.f. of one of two general forms. In both cases the key element is the rate at which the p.d.f. approaches the horizontal axis. Typically, the function will be asymptotically either exponential or polynomial. The basic forms of these two cases are:

g a m m a - f(x)^e"^x€"l/r(a)f x>0

Pareto-- f{x) = a(l+x)~ x>0.

The gamma distribution places less probability in the tail, as indicated by the fact that all moments E(X^k) = α(α+1) ⋯ (α+k-1) exist, while for the Pareto E(X^k) = k!/[(α-1)(α-2) ⋯ (α-k)] exists only for k < α. In each case it is possible to relocate the mean by adding a scale parameter. The transformation Y = λX will accomplish this, producing the customary forms of these distributions:

gamma:  f(x) = e^(-x/λ) (x/λ)^(α-1)/[λΓ(α)],  x > 0

Pareto:  f(x) = α(1 + x/λ)^(-α-1)/λ,  x > 0.

There are two methods of creating new distributions. The first is mixing. Suppose the distribution of X depends on parameters θ_1, ..., θ_m. Consider the parameter θ_1 to be a random variable with its distribution depending on parameters β_1, ..., β_n. The marginal p.d.f. of X is then

f(x | θ_2, ..., θ_m, β_1, ..., β_n) = ∫ f(x | θ_1, ..., θ_m) f(θ_1 | β_1, ..., β_n) dθ_1

where f(·|·) is the appropriate p.d.f. in each case. In general the marginal random variable will have a heavier tail than the original one with θ_1 fixed at any value. An example is:

X | a , \ ~ gamma(a,l/X)

X \k,/3 ~ gamma(A-,l/0)

OO

f{x \a,k,/3) = Je-XxxQ"l\0€-^\k^/3kd\/Via)rik) o

oo

= x^p'fe-t'+W^^dX/TiaFik) o

= x^/S'ria+kMx+fl^'riapik)

= r(a+k)(x/pr> x>0 Tia)Tik)Pil+x/p)°+k '

This has been called the generalized Pareto distribution. A special case of this can be obtained by letting a = rx/2, k = ro/2, and fi = r2/rx. Then X has an F-distribution with rx and r2 degrees of freedom. If k = 1, the result is a Pareto distribution, so we could have started with just the gamma distribution.
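The mixture identity above can be checked numerically. The following sketch (illustrative parameter values, a simple Riemann-sum integration over the mixing variable; both assumptions, not from the source) compares the integral against the closed-form generalized Pareto density.

```python
import math

def gen_pareto_pdf(x, a, k, b):
    """Closed-form generalized Pareto density derived in the text."""
    return (math.gamma(a + k) * (x / b) ** (a - 1)
            / (math.gamma(a) * math.gamma(k) * b * (1 + x / b) ** (a + k)))

def mixed_pdf(x, a, k, b, upper=60.0, steps=120_000):
    """Integrate the gamma(a, 1/lam) density of x over a gamma(k, 1/b) rate lam."""
    h = upper / steps
    total = 0.0
    for i in range(1, steps + 1):      # integrand vanishes at both endpoints
        lam = i * h
        f_x_given_lam = math.exp(-lam * x) * x ** (a - 1) * lam ** a / math.gamma(a)
        f_lam = b ** k * lam ** (k - 1) * math.exp(-b * lam) / math.gamma(k)
        total += f_x_given_lam * f_lam * h
    return total
```

With shape values such as a = 2, k = 3, b = 1.5 the two agree to several decimal places, which is a convenient sanity check when re-deriving mixed densities by hand.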

The following justification for mixing is taken from Venter (1983). Suppose you have a reasonable amount of faith that a particular distribution adequately explains the process of interest. However, you have been forced to estimate the parameters based on old data, while your objective is to describe the probability distribution of next year's losses. Next year's losses will likely be some unknown constant times the past losses. If a distribution can be assigned to that constant, the effect is exactly the same as mixing on the scale parameter of the original distribution.

A second justification for mixing is that the population being sampled may be heterogeneous. Suppose that all of the insured lives being studied produce losses according to the same distributional model but that the scale parameter varies from individual to individual. The net effect is that the entire sample may be viewed as having been drawn from the mixed distribution.

The second method of obtaining a new distribution is by transformation. Two such transformations are common. The first is Y = e^X. The major drawback is that the support of Y is now (1, ∞). The transformation is easy to do: just replace x by log(x) in the p.d.f. and divide by x. So, for example, the loggamma random variable has p.d.f.

f(x) = e^(-(log x)/λ) (log x)^(α-1) / [x λ^α Γ(α)]

= (log x)^(α-1) / [x^(1+1/λ) λ^α Γ(α)],  x > 1.

One case that does not follow directly from the gamma or Pareto is the lognormal distribution, where X is taken to have the normal distribution. The lognormal is a useful model of this type since it has support over all positive real numbers.

Another transformation is Y = X^(1/τ). In order to increase the tail weight we need τ < 1, but other positive values of τ also produce useful distributions. The p.d.f. is obtained by replacing x by x^τ and multiplying by τx^(τ-1). The support remains (0, ∞). If there is a scale parameter its role is preserved by using the transformation Y = λ(X/λ)^(1/τ). In the p.d.f. replace x/λ with (x/λ)^τ and multiply by τ(x/λ)^(τ-1). For example, the Pareto distribution becomes a three parameter distribution with p.d.f.

f(x) = ατ(x/λ)^(τ-1) / {λ[1 + (x/λ)^τ]^(α+1)},  x > 0.

This has been called the Burr distribution and has proved to be very good for fitting a variety of losses. The transformation also works for negative values of τ. The only change in obtaining the p.d.f. is that the old p.d.f. is multiplied by -τ(x/λ)^(τ-1). Of particular interest is the case τ = -1. This gives the inverse distributions. For example, the inverse gamma distribution has p.d.f.


f(x) = e^(-λ/x) λ^α / [x^(α+1) Γ(α)],  x > 0.

This has a heavier tail than the gamma distribution. The moments are E(X^k) = λ^k/[(α-1)(α-2) ⋯ (α-k)] for k < α. Unlike the gamma random variable, this one does not possess all moments.

The following might be considered as the ultimate extension of these ideas. The power transformation of the generalized Pareto has p.d.f.

f(x) = τΓ(α+k)(x/β)^(τα-1) / {βΓ(α)Γ(k)[1 + (x/β)^τ]^(α+k)},  x > 0.

This has been termed the generalized beta of the second kind by McDonald and Richards (1985). They find sixteen other distributions that are either special or limiting cases of this distribution. Venter (1983) refers to it as the transformed beta distribution. He shows that it can also be derived by letting X have the transformed gamma distribution with parameters α, 1/λ, and τ and then mixing by letting λ be transformed gamma with parameters k, (1/β)^(1/τ), and τ. It is then used as a model for uncertainty about the scale parameter.

4. Fitting Parametric Distributions. With an inventory of heavy-tailed distributions in hand we turn to the problem of estimating the parameters of a specific distribution. For the general case let F(x;θ) and f(x;θ) be the d.f. and p.d.f. of a distribution with parameters θ' = (θ_1, ..., θ_p). Let μ_i(θ) = E(X^i) be the i-th moment. Assume a random sample of n observations has been obtained, with x_1, ..., x_n being the observations and F_n(x) the empirical d.f. That is, F_n(x) = (number of observations ≤ x)/n. We identify two sampling situations.

Individual data means that the values x_1, ..., x_n were recorded and are available for use. Grouped data means that only the numbers of observations in prespecified intervals are recorded. The latter is equivalent to observing F_n(c_0), ..., F_n(c_k) where 0 ≤ c_0 < c_1 < ⋯ < c_k ≤ ∞ and c_0 ≤ x_i ≤ c_k, i = 1, ..., n. The frequency for class i is then f_i = n[F_n(c_i) - F_n(c_{i-1})], i = 1, ..., k.


The methods to be presented will be illustrated with the following example. Losses were observed on 392 claims under a group dental basic coverage (this policy covered check-ups, fillings, and other minor procedures). The data were collected in grouped form; the available values of the empirical d.f. are given in Table 1.

TABLE 1

Losses on Basic Dental Coverage

  i    c_i    f_i    F_n(c_i)
  0      0     --     .00000
  1     25      6     .01531
  2     50     24     .07653
  3     75     30     .15306
  4    100     31     .23214
  5    150     57     .37755
  6    200     42     .48469
  7    250     38     .58163
  8    300     27     .65051
  9    400     30     .72704
 10    500     28     .79847
 11    600     16     .83929
 12    700     15     .87755
 13    800     13     .91071
 14    900      8     .93112
 15   1000      2     .93622
 16   1250      5     .94898
 17   1500      5     .96173
 18   2000      5     .97449
 19   2500      7     .99235
 20   3000      2     .99745
 21   4000      1    1.00000
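The empirical d.f. column of Table 1 is just the scaled cumulative sum of the class frequencies; a minimal sketch (frequencies as printed, n = 392) that reproduces it:

```python
# Class boundaries c_i and frequencies f_i for class (c_{i-1}, c_i] from Table 1.
bounds = [0, 25, 50, 75, 100, 150, 200, 250, 300, 400, 500, 600, 700,
          800, 900, 1000, 1250, 1500, 2000, 2500, 3000, 4000]
freqs = [6, 24, 30, 31, 57, 42, 38, 27, 30, 28, 16, 15, 13, 8, 2,
         5, 5, 5, 7, 2, 1]

n = sum(freqs)
Fn = {bounds[0]: 0.0}            # empirical d.f. at each class boundary
cum = 0
for c, f in zip(bounds[1:], freqs):
    cum += f
    Fn[c] = cum / n
```

For example, Fn[500] = 313/392 = .79847, matching the table.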


The first step is to get a rough idea of the nature of the distribution. The histogram (see Figure 1) is the p.d.f. related to the empirical d.f. That is, fill in the empirical d.f. by connecting the F_n(c_i) with straight lines; the derivative is the histogram. It is apparent that these losses follow the heavy-tailed skewed models of Section 3. In particular we would want to use a model that has a non-zero mode.

The following are five methods that can be used to estimate the parameters:

(1) Percentile matching -- Select p points (z_1, ..., z_p) from the empirical d.f. and equate them to the d.f. of the model. That is, solve the equations

F(z_i;θ) = F_n(z_i),  i = 1, ..., p

for θ_1, ..., θ_p. For grouped data select p values of c_i.

(2) Method of moments -- Let m_j be the j-th sample moment:

m_j = Σ_{i=1}^n (x_i)^j / n  for individual data,  m_j = Σ_{i=1}^k [(c_i + c_{i-1})/2]^j f_i / n  for grouped data.

Solve the p equations

μ_j(θ) = m_j,  j = 1, ..., p.

Usually at least one of these methods will be easy to compute. Neither will pro­duce accurate estimates. The remaining methods are more complex, but also more accurate.

(3) Minimum distance -- Let K be a distance measure. For example

K = Σ_i w(x_i)[F(x_i;θ) - F_n(x_i)]²  (individual)

= Σ_{i=1}^k w(c_i)[F(c_i;θ) - F_n(c_i)]²  (grouped).

The estimator is the value of θ that minimizes K. A reasonable choice for the weight function is


w(x) = {F(x;θ)[1 - F(x;θ)]}^(-1),

which is proportional to the inverse of the variance of the quantity being squared. This gives each term in the sum an approximately equal contribution.

(4) Minimum chi-square -- Minimize

Q = Σ_{i=1}^k (O_i - E_i)² / E_i  (grouped data only)

where

O_i = n[F_n(c_i) - F_n(c_{i-1})] = f_i

E_i = n[F(c_i;θ) - F(c_{i-1};θ)].

It is often easier to solve this problem if the E_i in the denominator is replaced by O_i.

(5) Maximum likelihood -- Minimize the negative of the logarithm of the likelihood function:

L = -Σ_{i=1}^n log f(x_i;θ)  (individual)

= -Σ_{i=1}^k f_i log[F(c_i;θ) - F(c_{i-1};θ)]  (grouped).

In all of these cases numerical methods will be needed to perform the minimization. The easiest way to do it is to note that each problem can be written as: minimize

Σ_i [h(y_i;θ)]².

Numerous programs are available for minimizing sums of squares (e.g., IMSL subroutine ZXSSQ). In addition, a sum of squares can be considered as a nonlinear least squares regression problem where the model is Z = h(Y;θ) with z_i = 0 for each y_i. The examples in this paper were obtained using the SAS routine NLIN. All of these methods require initial values that should be reasonably close to the minimum. For these purposes methods (1) and (2) often provide a good start.

While the final three methods all produce good estimates, maximum likelihood estimators have two advantages, both arising from the well developed asymptotic theory of such estimators. The advantages are the ability to construct hypothesis tests as outlined at the end of Section 2 and the ability to estimate the standard error of the estimator. Both of these are illustrated in Section 5.

For the dental data we want to fit Pareto, Burr, and lognormal models. Even though the Pareto is unlikely to fit well (its mode is zero), it will help in the fitting of the Burr distribution. The Pareto starting values are found by percentile matching:

F_n(200) = 190/392 = .485 = 1 - (1 + 200/λ)^(-α)

F_n(500) = 313/392 = .798 = 1 - (1 + 500/λ)^(-α).

Eliminating α yields

(1 + 500/λ) = (1 + 200/λ)^(2.408)

and Newton-Raphson iteration produces λ = 3712 and then α = 12.65. Eight iterations using NLIN then produced the maximum likelihood estimates α = 5.394 and λ = 1575. The value of L at the minimum is 1091.83.
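The percentile-matching step can be sketched in a few lines. The text uses Newton-Raphson; bisection is substituted here for brevity, and the rounded exponent 2.408 from the text is taken as given (using more digits of log(.202)/log(.515) moves the root slightly).

```python
import math

def percentile_match_pareto():
    """Match F_n(200)=.485 and F_n(500)=.798 to the Pareto d.f. 1-(1+z/lam)^(-alpha).
    Eliminating alpha leaves one equation in lam, solved by bisection (a stand-in
    for the Newton-Raphson iteration in the text)."""
    ratio = 2.408                      # log(1-.798)/log(1-.485), rounded as in the text
    g = lambda lam: ratio * math.log1p(200.0 / lam) - math.log1p(500.0 / lam)
    lo, hi = 1000.0, 10000.0           # g(lo) > 0 > g(hi); one sign change in between
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    alpha = -math.log(1.0 - 0.485) / math.log1p(200.0 / lam)
    return lam, alpha
```

The returned values are close to the λ = 3712 and α = 12.65 quoted above.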

The method of moments was used for the lognormal starting values. The sample moments were available from the entire set of 392 observations. The equations are

361 = exp(μ + σ²/2)

347,585 = exp(2μ + 2σ²).

They are easily solved for μ = 5.399 and σ = 0.990. Twenty-eight iterations of NLIN produced μ = 5.354 and σ = 1.024 as the maximum likelihood estimates. The minimum L is 1068.79.
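The moment equations have the closed-form solution σ² = log m_2 - 2 log m_1 and μ = log m_1 - σ²/2, and the grouped objective being minimized can be written out directly. A sketch (Table 1 frequencies assumed; how the source's software treated the top class is not stated, so only loose checks on L are appropriate):

```python
import math

def phi(z):                       # standard normal d.f. via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Method-of-moments starting values from m1 = 361 and m2 = 347,585.
sigma2 = math.log(347585.0) - 2.0 * math.log(361.0)
sigma0 = math.sqrt(sigma2)
mu0 = math.log(361.0) - sigma2 / 2.0

# Grouped negative log-likelihood for the lognormal (Table 1 classes).
bounds = [25, 50, 75, 100, 150, 200, 250, 300, 400, 500, 600, 700,
          800, 900, 1000, 1250, 1500, 2000, 2500, 3000, 4000]
freqs = [6, 24, 30, 31, 57, 42, 38, 27, 30, 28, 16, 15, 13, 8, 2,
         5, 5, 5, 7, 2, 1]

def nll(mu, sigma):
    total, prev = 0.0, 0.0        # F(0) = 0 for the first class
    for c, f in zip(bounds, freqs):
        cur = phi((math.log(c) - mu) / sigma)
        total -= f * math.log(cur - prev)
        prev = cur
    return total
```

The moment solution reproduces μ = 5.399, σ = 0.990, and the objective is lower at the quoted maximum likelihood estimates (5.354, 1.024) than at the starting values, with a value near the reported 1068.79.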


Finally, the Burr parameters are found by using the Pareto maximum likelihood estimates plus τ = 1 as the starting values. That is, start with the Pareto and see what improvement can be made by adding a third parameter. Seventeen iterations produced α = 0.9682, λ = 203.1, and τ = 1.721. The value of L is 1071.96.

These three estimated models are displayed in Figures 1, 3, and 5 where the model p.d.f. is plotted against the histogram. Besides the visual comparison of the estimates, the value of L also provides a measure of distance, here indicating that the lognormal may be a slightly better model than the Burr. More will be said on this in the next section.

In Section 1 it was noted that when observations are truncated it is the p.d.f. of the untruncated distribution that is desired. As in that section let Y be the random variable of interest and X the observed random variable. To cover all the usual possibilities at once, consider an insurance with a deductible of d, coinsurance of a, and a limit of u. This means that the insurance pays nothing when the loss is below d, then pays 100(1-a)% of the excess of the loss over d, but never pays more than u. This is a common practice in health insurance. The relationship between Y and X is

X = 0,  Y ≤ d
  = (1-a)(Y-d),  d < Y < d + u/(1-a)
  = u,  Y ≥ d + u/(1-a).

We observe X and want to make inferences about Y. When X = 0 we do not know if it is because there was no loss or because the loss did not reach the deductible. Therefore we must deal with the conditional distribution, given Y > d. For this case

F_X(x) = Pr(X ≤ x | Y > d)

= Pr(X ≤ x and Y > d)/Pr(Y > d)

= {F_Y[d + x/(1-a)] - F_Y(d)}/[1 - F_Y(d)]  for x < u,  and  F_X(x) = 1  for x ≥ u,

and

f_X(x) = f_Y[d + x/(1-a)] / {(1-a)[1 - F_Y(d)]},  0 ≤ x < u.

Then when trying to fit a model use, for example, in the maximum likelihood case with grouped data,

L = -Σ_i f_i log[F_X(c_i;θ) - F_X(c_{i-1};θ)]

= -Σ_{i=1}^k f_i log{F_Y[d + c_i/(1-a)] - F_Y[d + c_{i-1}/(1-a)]} + n log[1 - F_Y(d)].

When a limit is present, those values with X = u can be considered as falling in the class [d + u/(1-a), ∞) with respect to Y, and the limit has no effect on the analysis. This is because, unlike truncation, the observations are not lost; they are just not known exactly. In the individual case the likelihood will contain elements of both types. That is (where f_u is the number of observations at X = u),

L = -Σ_{x_i<u} log f_Y[d + x_i/(1-a)] + (n - f_u){log(1-a) + log[1 - F_Y(d)]}

- f_u log{1 - F_Y[d + u/(1-a)]} + f_u log[1 - F_Y(d)].
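The grouped form of the truncated likelihood is easy to code. A minimal sketch, assuming a lognormal loss Y (the distribution and the absence of a policy-limit class are assumptions for illustration); with d = 0 and a = 0 it reduces to the ordinary grouped negative log-likelihood:

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_cdf(y, mu, sigma):
    return 0.0 if y <= 0 else phi((math.log(y) - mu) / sigma)

def grouped_nll_truncated(bounds, freqs, mu, sigma, d=0.0, a=0.0):
    """Negative log-likelihood for grouped payments X when the loss Y is
    lognormal, observed with deductible d and coinsurance a."""
    F = lambda y: lognormal_cdf(y, mu, sigma)
    n = sum(freqs)
    total, prev = 0.0, d               # payment 0 corresponds to loss d
    for c, f in zip(bounds, freqs):
        cur = d + c / (1.0 - a)        # translate a payment boundary back to a loss
        total -= f * math.log(F(cur) - F(prev))
        prev = cur
    return total + n * math.log(1.0 - F(d))
```

Each term is the log of a conditional class probability, so the result is positive and, at d = a = 0, identical to the untruncated grouped objective.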

5. Applications of Fitted Models. There are two tasks remaining. One is to select the best model from the alternatives and the other is to formalize the applications. The first requires some idea of the second, in that the best model is the one that is most appropriate for the intended application. It will be shown later in this section that many applications use values of the limited expected value function


E(X;x) = ∫_0^x t f(t) dt + x[1 - F(x)].

It is the expected value of X when all values above x are replaced by (censored at) x. Its empirical version can also be calculated from the data:

E_n(X;x) = [Σ_{x_i ≤ x} x_i + x · (number of x_i > x)]/n.

For grouped data this may be approximated by

E_n(X;c_j) = {Σ_{i=1}^j f_i (c_i + c_{i-1})/2 + c_j Σ_{i=j+1}^k f_i}/n.
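The grouped approximation can be sketched directly from Table 1. Note that the midpoint approximation differs slightly from the empirical column of Table 2, which was computed from the raw (ungrouped) data; the checks below therefore use loose tolerances.

```python
# Empirical limited expected value from the grouped dental data (Table 1),
# using the class-midpoint approximation above.
bounds = [0, 25, 50, 75, 100, 150, 200, 250, 300, 400, 500, 600, 700,
          800, 900, 1000, 1250, 1500, 2000, 2500, 3000, 4000]
freqs = [6, 24, 30, 31, 57, 42, 38, 27, 30, 28, 16, 15, 13, 8, 2,
         5, 5, 5, 7, 2, 1]
n = sum(freqs)

def emp_lev(cj):
    """E_n(X; c_j): censor every observation at the class boundary c_j."""
    j = bounds.index(cj)
    below = sum(f * (bounds[i] + bounds[i + 1]) / 2.0
                for i, f in enumerate(freqs) if i + 1 <= j)
    above = sum(f for i, f in enumerate(freqs) if i + 1 > j)
    return (below + cj * above) / n
```

For example, emp_lev(100) is about 91, in agreement with the empirical column of Table 2.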

It is a desirable goal to have E(X;x) and E_n(X;x) be as close as possible for all x. Since the limits as x goes to infinity of these functions are μ_1 and m_1 respectively, this is more restrictive than asking that the first moments match. Figures 2, 4, and 6 compare the empirical and fitted limited expected value functions for each of the three models. For the empirical function a spline is used to smoothly connect the values at the class boundaries. In Table 2 the values are available for comparison. We see that the Pareto model is too high for middle values of x (500 to 1800) and the Burr model is too low for middle values of x and then too high in the limit. The lognormal appears to provide the superior fit, although it too is a bit off around x = 2500.

An additional test is available for comparing models obtained from maximum likelihood estimates when one model is a special case of the other. Twice the difference of the values of L will have a chi-square distribution with as many degrees of freedom as there are restricted parameters in the special case. For Pareto vs. Burr the test statistic is 2(1091.83 - 1071.96) = 39.74, a clearly significant value with one degree of freedom. There is a clear advantage in adding the third parameter.

We will use the lognormal model in the applications that follow.


TABLE 2

Limited Expected Values

    x       Empirical   Pareto   Burr   Lognormal

    25          25         24      25       25
    50          49         46      48       49
    75          71         66      71       71
   100          91         85      91       91
   150         126        118     126      126
   200         153        146     155      154
   250         177        171     179      178
   300         196        192     198      198
   400         226        226     227      230
   500         250        252     249      253
   600         267        272     265      270
   700         281        287     278      284
   800         292        299     288      295
   900         300        309     297      304
  1000         306        317     304      311
  1250         321        331     318      324
  1500         332        339     328      332
  2000         349        349     341      342
  2500         357        353     350      347
  3000         360        355     357      351
  4000         361        357     366      354
 infinity      361        358     407      357

The limited expected value function for the lognormal model is

E(X;x) = e^(μ+σ²/2) Φ[(log x - μ - σ²)/σ] + x{1 - Φ[(log x - μ)/σ]}

= 357.2 Φ[(log x - 6.403)/1.024] + x{1 - Φ[(log x - 5.354)/1.024]},

where Φ(x) is the d.f. of a standard normal random variable.
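This function is simple to code with the standard normal d.f. expressed through the error function; a sketch at the maximum likelihood estimates, which also reproduces the E(X;25) = 24.87 and 332.33 figures used in the deductible example below:

```python
import math

def phi(z):                       # standard normal d.f. via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

MU, SIGMA = 5.354, 1.024          # maximum likelihood estimates from Section 4
MEAN = math.exp(MU + SIGMA * SIGMA / 2.0)     # E(X), about 357.2

def lev(x, mu=MU, sigma=SIGMA):
    """Limited expected value E(X; x) for the lognormal model."""
    mean = math.exp(mu + sigma * sigma / 2.0)
    z = (math.log(x) - mu) / sigma
    return mean * phi(z - sigma) + x * (1.0 - phi(z))   # note (log x - mu - s^2)/s = z - s
```

E(X) - E(X;25) gives the net premium (per unit of w) under a deductible of 25.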

Consider now the problem of imposing a deductible. The net premium without the deductible is wE(X), while with a deductible d it is

w ∫_d^∞ (x - d) f(x) dx = w{∫_0^∞ x f(x) dx - ∫_0^d x f(x) dx - d[1 - F(d)]}

= w[E(X) - E(X;d)].

If a deductible of 25 is added to the dental coverage, the new pure premium is w(357.2 - 24.87) = 332.33w, a reduction of 7.0%.

Now suppose a limit of u and coinsurance of a are added to the policy. The net premium is w times

∫_d^(d+u/(1-a)) (1-a)(x - d) f(x) dx + ∫_(d+u/(1-a))^∞ u f(x) dx

= (1-a){∫_0^(d+u/(1-a)) x f(x) dx - ∫_0^d x f(x) dx - dF[d + u/(1-a)] + dF(d)} + u{1 - F[d + u/(1-a)]}

= (1-a){E[X; d + u/(1-a)] - E(X;d)}.

If a limit of 500 and 10% coinsurance are added to the deductible of 25, the net premium becomes w(.9)[E(X;580.56) - E(X;25)] = w(.9)(267.22 - 24.87) = 218.12w, a reduction of 38.9% from the original net premium of 357.2w.


We next investigate the effect of inflation on the premium. Suppose inflation acts uniformly, so that next year's losses will have the same distribution as Y = (1+r)X, where r is the inflation rate. The key relationship is

E(Y;y) = ∫_0^y t f_Y(t) dt + y[1 - F_Y(y)]

= ∫_0^y t f_X[t/(1+r)] dt/(1+r) + y{1 - F_X[y/(1+r)]}

= (1+r) ∫_0^(y/(1+r)) s f_X(s) ds + y{1 - F_X[y/(1+r)]}

= (1+r) E[X; y/(1+r)].

Then after inflation we have a net premium of

w(1-a)(1+r){E[X; d/(1+r) + u/((1-a)(1+r))] - E[X; d/(1+r)]}.

Since r > 0 implies E[X; d/(1+r)] < E(X;d), when there is no limit the relative increase in the net premium must be greater than r. With a limit and no deductible the opposite effect occurs; with both it could go either way. In the dental example, 15% inflation yields a net premium of

w(.9)(1.15)[E(X;504.83) - E(X;21.74)] = w(1.035)(253.66 - 21.66) = 240.12w,

an increase of 10.1% over the net premium before inflation.
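The whole premium chain (limit, coinsurance, then uniform inflation) can be reproduced with the same lognormal limited expected value function; small rounding differences from the figures in the text are expected.

```python
import math

def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lev(x, mu=5.354, sigma=1.024):
    """Limited expected value E(X; x) for the fitted lognormal model."""
    mean = math.exp(mu + sigma * sigma / 2.0)
    z = (math.log(x) - mu) / sigma
    return mean * phi(z - sigma) + x * (1.0 - phi(z))

d, u, a, r = 25.0, 500.0, 0.10, 0.15

# Net premium (per unit of w) with deductible, limit, and coinsurance.
base = (1 - a) * (lev(d + u / (1 - a)) - lev(d))

# The same policy after uniform inflation at rate r.
infl = (1 - a) * (1 + r) * (lev(d / (1 + r) + u / ((1 - a) * (1 + r)))
                            - lev(d / (1 + r)))
```

The results are close to 218.12 and 240.12, an increase of about 10.1%, illustrating that with both a limit and a deductible the relative increase need not equal r.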

The last application involves confidence statements about these estimates. One major advantage of using parametric models and maximum likelihood estimates is that the asymptotic variance of the estimates or of functions of the estimates can be obtained. This reduces the usefulness of estimators (3) and (4) introduced in Section 4 but it should be noted that they are often easier to obtain and may provide better starting values for finding the maximum likelihood estimator than estimators (1) and (2).

The asymptotic theory states that under mild regularity conditions the maximum likelihood estimator θ̂ will converge in distribution to a multivariate normal random variable with mean vector θ and covariance matrix A, where A^(-1) has (i,j)-th element

E[∂²L(θ)/∂θ_i ∂θ_j],

where L(θ) is the negative of the logarithm of the likelihood function (Theorem 5e.2(iv), Rao, 1965). To estimate A^(-1) when the expectation can be obtained, replace θ by its maximum likelihood estimate. If not, estimate the expectation by the observed value; that is, take the derivatives and insert the estimate of θ. For the lognormal distribution and grouped data

L(μ,σ) = -Σ_{i=1}^k f_i log[F(c_i) - F(c_{i-1})]

= -Σ_{i=1}^k f_i log[Φ(z_i) - Φ(z_{i-1})].

The derivatives are

∂²L/∂μ² = -Σ f_i {(D_i - D_{i-1})(z_{i-1}d_{i-1} - z_i d_i) - (d_i - d_{i-1})²} / [(D_i - D_{i-1})² σ²]

∂²L/∂μ∂σ = -Σ f_i {(D_i - D_{i-1})(z_{i-1}²d_{i-1} - z_i²d_i - d_{i-1} + d_i) - (d_i - d_{i-1})(z_i d_i - z_{i-1}d_{i-1})} / [(D_i - D_{i-1})² σ²]

∂²L/∂σ² = -Σ f_i {(D_i - D_{i-1})(z_{i-1}³d_{i-1} - z_i³d_i + 2z_i d_i - 2z_{i-1}d_{i-1}) - (z_i d_i - z_{i-1}d_{i-1})²} / [(D_i - D_{i-1})² σ²]

where z_i = (log c_i - μ)/σ, D_i = Φ(z_i), d_i = φ(z_i), and φ(z) = e^(-z²/2)/√(2π) is the standard normal p.d.f. Inserting the estimates of μ and σ produces


A^(-1) = [ 370.032    71.0002
            71.0002   669.167 ]

and

A = [  .0027541   -.00029221
      -.00029221    .0015254 ].

Finally, suppose we want to estimate a quantity that is a function of the parameters, say h(θ). Further asymptotic theory (Theorem 4.2.5, Anderson, 1958) states that if h(θ) is sufficiently smooth then h(θ̂) has an asymptotic normal distribution with mean h(θ) and variance h′(θ)′ A h′(θ), where h′(θ)_i = ∂h(θ)/∂θ_i. This vector is estimated by h′(θ̂). Our estimate of the mean with a deductible of 25 used

h(θ) = E(X) - E(X;25)

= e^(μ+σ²/2){1 - Φ[(log 25 - μ - σ²)/σ]} - 25{1 - Φ[(log 25 - μ)/σ]}.

Then

h′(θ)_1 = e^(μ+σ²/2){1 - Φ[(log 25 - μ - σ²)/σ]} + e^(μ+σ²/2) φ[(log 25 - μ - σ²)/σ]/σ - 25 φ[(log 25 - μ)/σ]/σ

h′(θ)_2 = σ e^(μ+σ²/2){1 - Φ[(log 25 - μ - σ²)/σ]} + e^(μ+σ²/2) φ[(log 25 - μ - σ²)/σ](log 25 - μ + σ²)/σ² - 25 φ[(log 25 - μ)/σ](log 25 - μ)/σ².

At μ̂ = 5.354 and σ̂ = 1.024 we have h′(θ̂)′ = (356.92, 364.03). Then Var[h(θ̂)] = 477.057, and a 90% confidence interval for the net premium (ignoring w) is

332.33 ± 1.645 √477.057,

or

(296.40, 368.26).
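The final variance and interval follow by direct arithmetic from the printed gradient and covariance matrix; a sketch:

```python
import math

# Estimated gradient of h and covariance matrix A as printed above.
g = (356.92, 364.03)
A = ((0.0027541, -0.00029221),
     (-0.00029221, 0.0015254))

# Delta-method variance: g' A g.
var = sum(g[i] * A[i][j] * g[j] for i in range(2) for j in range(2))
se = math.sqrt(var)

# 90% confidence interval about the point estimate 332.33.
low, high = 332.33 - 1.645 * se, 332.33 + 1.645 * se
```

This reproduces the variance 477.057 and the interval (296.40, 368.26).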

FIGURE 1  Histogram and Fitted P.D.F. (Pareto distribution)

FIGURE 2  Empirical and Fitted Limited Expected Value Functions (Pareto distribution)

FIGURE 3  Histogram and Fitted P.D.F. (lognormal distribution)

FIGURE 4  Empirical and Fitted Limited Expected Value Functions (lognormal distribution)

FIGURE 5  Histogram and Fitted P.D.F. (Burr distribution)

FIGURE 6  Empirical and Fitted Limited Expected Value Functions (Burr distribution)

[Plots not reproduced; each shows loss on the horizontal axis.]


BIBLIOGRAPHY

1. Anderson, T.W. (1958), An introduction to multivariate statistical analysis, Wiley: New York.

2. Hogg, R.V. and S.A. Klugman (1983), On the estimation of long tailed skewed distributions with actuarial applications, J. Econometrics, 28, 91-102.

3. Hogg, R.V. and S.A. Klugman (1984), Loss distributions, Wiley: New York.

4. McDonald, J.B. and D.O. Richards (1985), A general methodology determining distributional forms with applications, unpublished manuscript (Dept. of Economics, Brigham Young University).

5. Rao, C.R. (1965), Linear statistical inference and its applications, Wiley: New York.

6. Venter, G.G. (1983). Transformed beta and gamma distributions and aggregate losses, Proc. Casualty Actuarial Soc. 70, 156-193.

DEPARTMENT OF STATISTICS AND ACTUARIAL SCIENCE UNIVERSITY OF IOWA IOWA CITY, IA 52242


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

Overview of Credibil i ty Theory

P.M. KAHN

ABSTRACT. Credibility theory studies the systematic revision of premium rates in light of current claim experience. Historically it has studied linear estimates of true expectations derived as a blend of hypothesis and observation. Classical credibility assumes models with claim process parameters as fixed constants, while the later Bayesian theory examines models with such parameters considered as random variables. These approaches are discussed, and some recent directions in research are reviewed.

Summary. An important problem in actuarial science treats the systematic revision of premium rates in light of current claim experience, both as to frequency and to size. In life insurance, for example, one important problem is how to recognize a change in underlying mortality rates operating in a population under study. When is a fluctuation from past experience, as evidenced by recent data, purely a random effect and when is it a change in the basic risk process? An attempt to solve this problem has led actuaries to the concept of credibility; this term describes the actuary's confidence in a set of experience data and his reliance on these data as a guide to designing the price structure of his product. One helpful definition of credibility is "a linear estimate of the true (inherent) expectation derived as a result of a compromise between hypothesis and observation" (Hewitt, 1970). Mowbray expressed this idea somewhat earlier (1914, quoted in Hickman, 1975) in a form more suitable for calculations:

"A dependable pure premium is one for which the probability is high, that it does not differ from the true pure premium by more than an arbitrary limit."

Mowbray wrote at a time when casualty actuaries were developing the foundations of premium rating for workmen's compensation insurance and when both equity and competition produced a premium structure incorporating experience rating principles. At this period, about 1914-1920, American actuaries were developing the basic notions of credibility theory, a development they continued for 30 years (Whitney, 1918; Perryman, 1932, 1937; Bailey, 1945, 1950). These investigations proceeded along these classical lines until Mayerson (1964a, 1964b) and de Finetti (1964) began

© 1986 American Mathematical Society


http://dx.doi.org/10.1090/psapm/035/849141


to place credibility ideas in the context of Bayesian statistics. About this time European actuaries began an interest in credibility which, beginning with Buhlmann and his students in the mid-1960's, has led to "a virtual explosion of research into various extensions and mathematical properties of linearized Bayesian credibility models" (Jewell, 1980). In 1974 the first research conference in credibility was held at the University of California (Kahn, 1975), with a follow-up conference in 1984.

In this introduction, we describe first some models from the classical, or traditional, viewpoint and then consider the basic estimation problem from the Bayesian viewpoint. We then touch briefly on recent developments and provide a list of references, with those containing extensive bibliographies so indicated. It may be noted that Hewitt (1975) has written a fine non-technical discussion of credibility theory.

Classical Credibility. In a fine introduction, Hickman (1975) classified contributions to credibility theory as to whether the models view the parameters of the claims process as fixed constants or as random variables. In the classical or sampling theory point of view, the parameters are considered fixed and must be estimated using actual insurance data. In the classical approach, the actuary must first determine the size of the experience which warrants full credibility, i.e., a credibility factor Z(t) of 1, where t measures the size of the exposure or insurance experience which generated the level of claims x. The next step is to determine partial weights, or partial credibility factors, for smaller groups. Then the adjusted estimate of claims may be expressed as

Z(t)x + [1 - Z(t)]m′,

where m′ is the prior estimate of expected claims, say, a manual premium.

In an introductory monograph on credibility theory with a discussion of its early development in the United States, Longley-Cook (1962) discusses several models of the classical type. The first he considers deals only with the frequency of claims and not their severity. Here the probability of a claim in any time interval is assumed to be the same for all risks, and this probability is proportional to the length of the time interval for small intervals. The distribution of the number of claims is therefore the Poisson with mean equal to the expected number of claims in a unit interval of time. The normal distribution is used as an approximation to find the number of claims which will induce an actuary to assign 100% credibility to new experience.


If we set at 0.9 the probability that claims X fall within 5% of the expected value, then

Pr{.95 E[X] ≤ X ≤ 1.05 E[X]} ≥ 0.9.

With the number of claims Poisson with mean λt, this becomes

Pr{-.05√(λt) ≤ (X - λt)/√(λt) ≤ .05√(λt)} ≥ 0.9.

By applying the normal approximation, we have that full credibility may be assumed if the expected number of claims (λt) is at least 1084. In practice, full credibility is used if the actual number of claims is at least 1084. To determine partial credibility, one approach is to consider it a function of the volume V of claims. If V_F is the volume required for full credibility, then the credibility Z(V) may be found from one of two functions:

Z(V) = V/(V + k),

or

Z(V) = √(V/V_F)  for V < V_F,  Z(V) = 1  for V ≥ V_F,

where k is a conveniently chosen constant.
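The two rules above are one-liners; a sketch, assuming the two-sided 90% normal point z = 1.645 (the text's 1084 reflects the rounding convention of the original source, which gives roughly the same count):

```python
# Full-credibility standard: expected claim count lambda*t large enough that a
# Poisson total stays within 5% of its mean with probability 0.9.
z90 = 1.645                      # two-sided 90% standard normal point (assumed)
full_std = (z90 / 0.05) ** 2     # about 1082; the text quotes 1084

# Partial credibility by the square-root rule, with V_full the volume
# required for full credibility.
def partial_credibility(V, V_full):
    return min(1.0, (V / V_full) ** 0.5)
```

For instance, a group with one quarter of the full-credibility volume receives credibility 0.5.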

Longley-Cook then proceeds to treat several other models. One uses the lognormal as the distribution of individual claim amounts. This is not tractable, so Mayerson (1964a) suggested the use of the Gamma distribution as a reasonable approximation, because of both its tractability and the richness of this family of distributions. Another model allows for variation in the inherent loss frequency within the population exposed. Here, Longley-Cook proposes the Gamma distribution as the distribution of claim frequencies within the population. This assumption implies that the distribution of the number of claims in a given time interval is the negative binomial, rather than the Poisson.


Bayesian Credibility. The classical approach to credibility theory uses a credibility factor Z to weight actual experience and its complement, 1 - Z, to weight prior knowledge, say a manual premium derived from historical experience. The Bayesian approach essentially serves to quantify this prior knowledge. This may be expressed precisely as follows: if we denote the available data by D and the hypothesis to be investigated by H, then Bayes Theorem may be expressed as

P(H | D) = P(D | H) P(H) / P(D),

where P(H) is the prior probability of H (Mayerson, 1964a; Kahn, 1967, 1968). The Bayesian approach assumes that our prior knowledge about the state of nature can be adequately described by a parameter, or set of parameters, with density p′(θ). In Buhlmann's terms (1967) this density is a structure function. It represents the distillation of our past experience and of our subjective interpretation of this experience. Let us assume also that we have some new data, the claim experience of a recent year, say. We may represent the likelihood of these data as l(x | θ). We wish somehow to blend our intuition and this latest information. Bayes Theorem allows us to do this by combining l(x | θ) and p′(θ) to produce what is called the posterior density p″(θ | x) of θ, i.e., the distribution of θ conditioned on the occurrence of x, the claim experience:

p″(θ | x) = l(x | θ) p′(θ) N(x),

where N(x) is a normalizing constant.

By combining in this way our prior knowledge with the new experience, we are able to modify the distribution of the parameter θ. What this means is that we had some knowledge, albeit possibly subjective, about the state of nature, as, for example, the value of the pure risk premium. After analyzing some new data, or in statistical decision-theoretic terms, conducting an experiment (Raiffa and Schlaifer, 1961), we are willing to modify this opinion about the value of the pure premium. For reasonably chosen functions p'(θ) and l(x | θ), we hope that the amount of experience in the new data will be reflected in some proportional way in the posterior distribution. If, for example, θ is the pure risk premium, i.e., the mean of X, then the best estimate of the risk premium is the mean of the posterior distribution. The mechanics of such an analysis is formidable, but the problem is made easier by the use of conjugate distributions. For certain forms of the density p'(θ),


CREDIBILITY THEORY 61

said to be conjugate to certain forms of the likelihood l(x | θ), we have the result that p''(θ | x) and p'(θ) are of the same family, although with different parameters. For example, if p'(θ) is the density of a Beta distribution and l(x | θ) is the density of a binomial distribution, then p''(θ | x) is also a Beta density. If p'(θ) is a Gamma density and l(x | θ) is Poisson, then they combine to give a Gamma density for p''(θ | x).

Arthur Bailey (1945, 1950) showed that in each of these two special cases, a credibility factor Z can be found of the form

Z = n/(n + k),

so that

E''(θ | x) = Zx + (1 - Z)E'(θ),

where E'' is the expectation with respect to p''(θ | x) and E' the expectation with respect to p'(θ) (Mayerson, 1964a).
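Bailey's result is easy to check numerically. The sketch below is illustrative code (not from the paper); it uses exact rational arithmetic, computes k from the prior moments in the binomial-Beta case, and verifies that the credibility-weighted estimate reproduces the exact posterior mean.

```python
from fractions import Fraction

def beta_binomial_credibility(alpha, beta, x, n):
    """Beta(alpha, beta) prior on theta; x successes observed in n binomial trials.

    Returns (exact posterior mean, credibility estimate, Z, k)."""
    prior_mean = Fraction(alpha, alpha + beta)                      # E'(theta)
    prior_var = Fraction(alpha * beta,
                         (alpha + beta) ** 2 * (alpha + beta + 1))  # V(theta)
    # k = (E'(theta)[1 - E'(theta)] - V(theta)) / V(theta); reduces to alpha + beta.
    k = (prior_mean * (1 - prior_mean) - prior_var) / prior_var
    Z = Fraction(n) / (n + k)                                       # Z = n/(n + k)
    credibility = Z * Fraction(x, n) + (1 - Z) * prior_mean
    exact = Fraction(alpha + x, alpha + beta + n)  # mean of the Beta(alpha+x, beta+n-x) posterior
    return exact, credibility, Z, k
```

With alpha = 2, beta = 6 and 4 successes in 10 trials, this gives k = 8, Z = 5/9, and both estimates equal 1/3.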

In choosing the prior distribution p'(θ), we should look for a family of tractable distributions; that is, it should be relatively simple to find the posterior distribution from the prior distribution and the likelihood of the new data. It should have the property that the posterior is of the same family as the prior distribution. The family should be sufficiently rich that the actuary's prior experience and judgment can be expressed. Also, the parameters of the family of distributions should be readily enough interpreted to verify that the prior distribution which is in fact chosen is an adequate expression of prior belief and experience. In fact, however, the problem of choosing a prior distribution can best be put into proper perspective by quoting the following observation from Raiffa and Schlaifer (1961), which is derived from their own experience:

"In the great majority of applications the method of fitting will have abso­lutely no material effect on the results of the final analysis of the decision problem at hand."

If the end product of the credibility procedure is, for example, a new rate based upon our old rate and some new experience, the problem is then to estimate E''(θ | x) as defined above. Both Bühlmann (1967, 1970) and Mayerson (1964a) provide the best linear approximation to E''(θ | x), which in the special cases of the


binomial-Beta and the Poisson-Gamma conjugates becomes

E''(θ | x) = ρ^2 x + (1 - ρ^2) E'(θ),

where the credibility factor is ρ^2, the square of the correlation coefficient between x and θ.

In the binomial case,

Z = n/(n + k),

where n is the number of trials or number exposed in the binomial model and

k = (E'(θ)[1 - E'(θ)] - V(θ)) / V(θ).

In the Poisson-Gamma case,

Z = n/{n + k),

where n is the number exposed and

k = E'(θ)/V(θ).

These formulas represent finding a linear approximation to the posterior premium in terms of x or Σ_i x_i. Bühlmann suggests that this is satisfactory where normal, Poisson, or binomial distributions are involved, but for the Gamma (with one parameter) and the lognormal, we should seek such a function in terms of Σ_i log x_i. If

we are dealing with the special cases illustrated above, one wonders whether in theory we need to have a limit for full credibility. In practice, of course, the limit can be determined as that claim or exposure size having credibility sufficiently close to one. But partial credibility appears to fall out of the expressions without any necessary reference to the full credibility limit.


The next step in the development of credibility theory was taken by Bühlmann (1967) in applying least-squares theory to approximating a posterior mean by a linear function of the prior mean and the mean of the claims data (Hickman, 1974; Jewell, 1976). The problem is to find a and b so as to minimize

[E''(θ | x) - a - b x̄]^2.

Bühlmann showed that

a = (1 - b)E'(θ),

b = Var'(θ) / (Var'(θ) + (1/n)E'[σ^2(θ)]),

and the optimal linear estimator for the experience-rated fair premium is exactly the credibility form

b x̄ + (1 - b)E'(θ),

where

b = n/(n + k)

and

k = E'[σ^2(θ)]/Var'(θ).
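As a numerical check (an illustrative sketch, not from the paper), the Poisson-Gamma case can be worked in exact arithmetic: with a Gamma(alpha, beta) prior, k = E'[σ^2(θ)]/Var'(θ) reduces to beta, and the linear credibility estimator b x̄ + (1 - b)E'(θ) reproduces the exact posterior mean.

```python
from fractions import Fraction

def poisson_gamma_credibility(alpha, beta, claims):
    """Gamma(alpha, beta) prior (rate parametrization) on the Poisson mean theta.

    Returns (b, credibility estimate, exact posterior mean)."""
    n = len(claims)
    total = sum(claims)
    prior_mean = Fraction(alpha, beta)         # E'(theta)
    prior_var = Fraction(alpha, beta ** 2)     # Var'(theta)
    e_process_var = prior_mean                 # E'[sigma^2(theta)] = E'(theta) for Poisson
    k = e_process_var / prior_var              # Buhlmann's k; reduces to beta
    b = Fraction(n) / (n + k)                  # credibility factor n/(n + k)
    estimate = b * Fraction(total, n) + (1 - b) * prior_mean
    exact = Fraction(alpha + total, beta + n)  # mean of the Gamma(alpha+total, beta+n) posterior
    return b, estimate, exact
```

For example, with a Gamma(3, 2) prior and claim counts (2, 1, 0, 3, 1), b = 5/7 and both estimates equal 10/7.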

Recent Developments. With the growing interest in credibility theory on the part of European and Australian actuaries, the actuarial literature is now full of new extensions and modifications of credibility models.

One result by Jewell (1976) is the extension of the class of conjugate families for which the credibility approximation is exactly the Bayesian conditional mean. If the likelihood function is of the Koopman-Pitman-Darmois family,


l(x | θ) = a(x) exp(-θx) / c(θ),

where c(θ) is a normalizing factor, then the natural conjugate prior density is

p'(θ) = [c(θ)]^{-n_0} exp(-θ x_0) / d(n_0, x_0),

and the family is closed under sampling, i.e., the posterior and prior densities have the same form, though different parameters. Jewell (1976, 1980) has also developed extensions of credibility theory to multivariate situations.

Some other models recently investigated are, among others (Jewell, 1980), those related to the problem of claim frequency and severity, the effect of premium volumes, the estimation of extreme values and probabilities, minimax estimators, hierarchical models, seasonal and other non-stationary models, and general non-parametric models. The term credibility model no longer necessarily implies linearity, and we have even seen some papers on an "unbayesed" approach to credibility (Gerber, 1982; Norberg, 1985).

BIBLIOGRAPHY

In the following list those references with extensive bibliographies are indicated by an asterisk *.

1. Bailey, A.L. (1945). A generalized theory of credibility, Proceedings of the Casualty Actuarial Society, 32.

2. Bailey, A.L. (1950). Credibility procedures, Proceedings of the Casualty Actuarial Society, 37.

3. Buhlmann, Hans (1967). Experience rating and credibility, ASTIN Bulletin, 3.

4. Buhlmann, Hans (1970). Mathematical methods in risk theory. New York: Springer Verlag.

5. de Finetti, Bruno (1964). Sulla teoria della credibilità, Giornale dell'Istituto Italiano degli Attuari, 27.


*6. D'Hooge, L. and M.J. Goovaerts (1976). A bibliography on credibility theory and its applications, Journal of Computational and Applied Mathematics, 2.

7. Gerber, H.U. (1982). An unbayesed approach to credibility, Insurance: Mathematics & Economics, 1.

8. Hewitt, Charles C. (1970). Credibility for severity, Proceedings of the Casualty Actuarial Society, 57.

9. Hewitt, Charles C. (1975). Credibility for the layman, in Kahn (1975).

10. Hickman, James C. (1975). Introduction and historical overview of credibility, in Kahn (1975).

11. Jewell, W.S. (1976). A survey of credibility theory, Operations Research Center Report No. 76-31. Berkeley: Operations Research Center, University of California.

12. Jewell, W.S. (1980). Models in insurance: paradigms, puzzles, communications and revolutions, Transactions of the International Congress of Actuaries, 21.

13. Kahn, P.M. (1967). An overview of credibility, a paper read before the Actuarial Research Conference at Yale University.

14. Kahn, P.M. (1968). A survey of some recent developments in credibility theory and experience rating from the Bayesian viewpoint, Transactions of the International Congress of Actuaries, 18.

*15. Kahn, P.M., ed. (1975). Credibility: theory and applications. Proceedings of the Berkeley Actuarial Research Conference on Credibility. New York: Academic Press.

*16. Longley-Cook, L.H. (1962). An introduction to credibility theory, Proceedings of the Casualty Actuarial Society, 49.

*17. Mayerson, A.L. (1964a). A Bayesian view of credibility, Proceedings of the Casualty Actuarial Society, 51.

18. Mayerson, A.L. (1964b). The uses of credibility in property insurance ratemaking, Giornale dell'Istituto Italiano degli Attuari, 27.

19. Mowbray, A.H. (1914). How extensive a payroll exposure is necessary to give a dependable pure premium, Proceedings of the Casualty Actuarial Society, 1.

20. Norberg, R. (1985). Unbayesed credibility revisited, ASTIN Bulletin, 15.

*21. Norberg, R. (1979). The credibility approach to experience rating, Scandinavian Actuarial Journal, 1979.


22. Perryman, F.S. (1932). Some notes on credibility theory, Proceedings of the Casualty Actuarial Society, 19.

23. Perryman, F.S. (1937). Experience rating plan credibilities, Proceedings of the Casualty Actuarial Society, 24.

24. Raiffa, H. and R. Schlaifer (1961). Applied statistical decision theory. Boston: Harvard University.

25. Whitney, A.W. (1918). The theory of experience rating, Proceedings of the Casualty Actuarial Society, 4.

P.M. KAHN & ASSOCIATES 2430 PACIFIC AVENUE SAN FRANCISCO, CA 94115


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

A Survey of Graduation Theory

ELIAS S.W. SHIU1

ABSTRACT. Graduation is the process of obtaining, from an irregular set of observed values, a corresponding smooth set of values consistent in a general way with the observed values. This is a survey of various methods of graduation used by actuaries.

1. Introduction. The actuary is concerned with the contingencies of events such as death, disability, retirement, sickness or marriage. He must know the probabilities of such events in order to predict their future occurrence and to calculate premiums, reserves, annuities, etc., for insurance operations. Tables of these probabilities have to be constructed. Graduation is a key step in constructing such tables.

A set of observed mortality or morbidity probabilities usually contains irregularities which we do not believe to be a feature of the true, underlying mortality probabilities. Graduation may be defined as the process of securing, from an irregular set of observed values of a dependent variable, a corresponding smooth set of values consistent in a general way with the observed values.

Graduation is characterized by two essential qualities: (i) smoothness and (ii) fit, or consistency with the observed data. The graduated values should be smooth as well as consistent with the indications of the observed values.

Smoothness and fit are not independent of each other. In smoothing a set of observed values, some, if not all, of its values must be changed, and the new values will depart from the observed values. Generally, an increase in smoothing will result in a reduction of fit. Conversely, when the graduated values are drawn closer to the observed values, thus improving the fit, smoothness usually suffers. The ultimate of fit is the exact reproduction of the observed values without any smoothing, i.e., no graduation at all.

These two criteria are, therefore, basically contradictory in the sense that smoothness may not be improved beyond a certain point without some sacrifice of fidelity to the observed values, and vice versa. Thus a good graduation must, of necessity, follow a middle course between optimum fit and optimum smoothness; it is

1980 Mathematics Subject Classification. 62P05, 65D05, 65D10.
Partially supported by the Great-West Life Assurance Company.
© 1986 American Mathematical Society


http://dx.doi.org/10.1090/psapm/035/849142


68 E.S.W. SHIU

the result of a compromise between these two criteria. To be of general use, a graduation method must allow the actuary some latitude in choosing the relative emphasis to be placed on smoothness and fit. For a more detailed discussion of the concepts of smoothness and fit, see [13, Chapter 1].

A "quick and dirty" method of graduation is the graphic method. The observed data are plotted on graph paper and a smooth curve is drawn among them, perhaps with the help of a mechanical aid such as a French curve or a spline with weights. Since the discussion here will be limited to those methods in which the graduated values are obtained by some computational procedure, the reader is referred to [1, Chapter 12] and [13, Chapter 2] for details on the graphic method.

Graduation may be classified in several ways. Following Greville [5] we classify the graduation methods into four categories. There are discrete methods, in which each observed value is replaced by a corresponding adjusted value, and continuous methods, in which a smooth curve is fitted to the observed data, yielding a graduated value for each argument within a prescribed range. From another point of view, there are local methods, in which the graduated value for a given argument depends only on the observed values for arguments within a stipulated distance from the given argument, and global methods, in which each graduated value depends on all observed data. These two dichotomies - discrete versus continuous and local versus global - give rise to the four categories of graduation methods.

This paper will discuss the discrete-global, discrete-local and continuous-local graduation methods in some detail. Continuous-global graduation methods essentially consist of the fitting of an appropriate mathematical function to the data. Since there is much mathematical literature on curve fitting, spline approximation, etc., continuous-global graduation methods will not be treated here.

In preparing this survey, I have made liberal use of the article [5] by T.N.E. Greville. I wish to take this opportunity to thank Dr. Greville for the help he has given me on this subject over the past ten years.

2. Notation and Preliminaries. The sequence of observed data will be denoted by {u_i}. The letter v will be reserved to denote graduated values. Thus, in a discrete method, we seek a sequence of numbers {v_i} and, in a continuous method, a function v(x).

Let E denote the translation operator,


GRADUATION THEORY 69

E^s f(a) = f(a + s).

Let Δ and δ denote the forward-difference operator and the central-difference operator, respectively:

Δ = E - I

and

δ = E^{1/2} - E^{-1/2}.

Notice that

Δ = E^{1/2} δ = δ E^{1/2}.

A sequence of graduated values {v_1, v_2, . . . , v_n} is considered to be smooth if, for a given positive integer z, the values of

|Δ^z v_i|, i = 1, 2, . . . , n - z,

are small. The value z = 3 is a popular choice. Consider the (n - z)-row by n-column matrix K_z with entries

(K_z)_{ij} = (-1)^{z+i-j} C(z, j - i),

where C(z, k) denotes the binomial coefficient, taken to be 0 for k < 0 and for k > z. For example, if n = 7,


K_3 =

[ -1   3  -3   1   0   0   0 ]
[  0  -1   3  -3   1   0   0 ]
[  0   0  -1   3  -3   1   0 ]
[  0   0   0  -1   3  -3   1 ].

Thus, if v is the vector

(v_1, v_2, . . . , v_n)^T,

K_z v is the vector

(Δ^z v_1, Δ^z v_2, . . . , Δ^z v_{n-z})^T.
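The matrix K_z and the identity K_z v = (Δ^z v_1, . . . , Δ^z v_{n-z})^T are easy to check in a few lines of code. This is an illustrative sketch (not part of the original text), using only the standard library.

```python
from math import comb

def K(z, n):
    """(n - z) x n matrix with (K_z)_{ij} = (-1)^(z+i-j) * C(z, j-i)."""
    return [[(-1) ** (z + i - j) * comb(z, j - i) if 0 <= j - i <= z else 0
             for j in range(n)]
            for i in range(n - z)]

def zdiff(v, z):
    """z-th forward differences of a sequence."""
    for _ in range(z):
        v = [v[i + 1] - v[i] for i in range(len(v) - 1)]
    return v

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]
```

Applying K(3, 7) to the cubes 1, 8, . . . , 343 gives the constant third differences (6, 6, 6, 6), agreeing with zdiff.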

Let d/dx denote the gradient operator

d/dx = (∂/∂x_1, ∂/∂x_2, . . . , ∂/∂x_n)^T.

For a ∈ R^n, consider the function

f(x) = a^T x = x^T a, x ∈ R^n.

It is easy to check that the gradient of f is a, i.e.,

(d/dx) f(x) = a. (2.1)

Given an n×n matrix A, let

g(x) = x^T A x, x ∈ R^n;

then it follows from (2.1) that


(d/dx) g(x) = Ax + A^T x. (2.2)

Formula (2.2) can be easily generalized as follows.

Lemma 1. Let h(x) = (Bx + b)^T (Cx + c), x ∈ R^n, where b, c ∈ R^n and B and C are n×n matrices. Then

(d/dx) h(x) = B^T (Cx + c) + C^T (Bx + b).

3. Discrete-Global Methods. The Whittaker-Henderson method of graduation is the principal discrete-global method. It is based on the minimization of the quantity

F + kS, (3.1)

where F is a measure of the departure of the graduated values from the observed data, S is a measure of roughness of the graduated values, and k is a positive constant chosen by the graduator to reflect the relative importance attached to fit and smoothness. For mathematical convenience, one usually chooses

S = Σ_i (Δ^z v_i)^2

and

F = Σ_{i=1}^{n} w_i (v_i - u_i)^2,

where each w_i is a positive weight assigned to the corresponding observation u_i.

Let W be the diagonal matrix with diagonal entries w_1, w_2, . . . , w_n. Denote expression (3.1) by q(v). In matrix notation we then have:


q(v) = (v - u)^T W (v - u) + k (K_z v)^T K_z v.

To minimize q, we solve the equation

0 = (d/dv) q(v). (3.2)

By Lemma 1,

(d/dv) q(v) = 2(W(v - u) + k K_z^T K_z v).

Hence, (3.2) becomes

(W + k K_z^T K_z) v = Wu, (3.3)

or

v = (W + k K_z^T K_z)^{-1} Wu. (3.4)

Since the second derivative (Hessian) of q, 2(W + k K_z^T K_z), is positive definite, the vector v given by (3.4) is, in fact, the global minimum of q. An efficient method for solving equation (3.3) is the Cholesky square-root method.
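Equations (3.3)-(3.4) can be sketched numerically as follows. This is illustrative code (not from the paper), using NumPy and the Cholesky factorization mentioned above.

```python
import numpy as np
from math import comb

def diff_matrix(z, n):
    """(n - z) x n z-th differencing matrix K_z."""
    M = np.zeros((n - z, n))
    row = [(-1) ** (z - s) * comb(z, s) for s in range(z + 1)]
    for i in range(n - z):
        M[i, i:i + z + 1] = row
    return M

def whittaker_henderson(u, w, k, z=3):
    """Graduated values v = (W + k K^T K)^{-1} W u of equation (3.4)."""
    u = np.asarray(u, float)
    W = np.diag(np.asarray(w, float))
    Kz = diff_matrix(z, len(u))
    A = W + k * Kz.T @ Kz              # symmetric positive definite
    L = np.linalg.cholesky(A)          # A = L L^T
    y = np.linalg.solve(L, W @ u)      # forward substitution
    return np.linalg.solve(L.T, y)     # back substitution
```

Because K_z annihilates polynomials of degree below z, data lying exactly on such a polynomial are reproduced unchanged for any k, while for noisy data the roughness measure S decreases.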

There are many choices for the functions F and S in expression (3.1). To diminish the effects of outliers, one may define F as

Σ_i w_i |v_i - u_i|.

To ensure that the graduated values are "uniformly" smooth, one may define S as

max_i |Δ^z v_i|.

For elaboration on the choice of F and S, see [14].


Another discrete-global method is the method of Bayesian graduation. Let p(t) denote the prior density function of the random vector of true values. Let the conditional density function of u, given t, be f(u | t). By Bayes' Theorem, the posterior density function of t, given the observed data u, is proportional to

p(t) f(u | t).

The vector of graduated values, v, is taken as the mode (or mean) of the posterior density function.

For mathematical simplicity, some authors assume that the distributions above are multivariate normal, since such distributions are closed under sampling. However, it seems very desirable that when the prior distribution and sample distribution are in conflict, with neither information source dominating the other, the posterior distribution should be fairly diffuse or, perhaps, bimodal. In the case of multivariate normal distributions, the posterior distribution never distinguishes prior information from sample information, no matter how severe their apparent conflict, since prior information is treated as if it were a previous sample. As the variance of the posterior distribution is always smaller than each of the variances of the prior distribution and the sample distribution, the posterior distribution is always more concentrated than the prior and sample distributions. "We may view this as a shortcoming of the normality assumption" [11, p. 56].
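The criticism of the normal assumption is easy to see componentwise. The sketch below (illustrative code with made-up numbers, diagonal covariances, precision-weighted form) shows the posterior variance falling below both the prior and the sample variances even when the two sources conflict sharply.

```python
import numpy as np

def normal_posterior(prior_mean, prior_var, u, sample_var):
    """Componentwise posterior mean and variance for a normal prior and a
    normal likelihood (diagonal covariances); the graduated values would be
    taken as the posterior mean."""
    prior_prec = 1.0 / np.asarray(prior_var, float)
    samp_prec = 1.0 / np.asarray(sample_var, float)
    post_var = 1.0 / (prior_prec + samp_prec)          # always below both variances
    post_mean = post_var * (prior_prec * np.asarray(prior_mean, float)
                            + samp_prec * np.asarray(u, float))
    return post_mean, post_var
```

With equal prior and sample variances, the posterior mean is the midpoint of the two estimates and the posterior variance is halved, regardless of how far apart the two estimates lie.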

4. Moving Weighted Averages. The most obvious discrete-local method of graduation is the use of moving weighted averages. It employs an adjustment formula of the form

v_i = Σ_{j=-k}^{m} c_j u_{i+j}, (4.1)

where k and m are nonnegative integers. The purpose of this section is the determination of the coefficients {c_j}.

It is customary to ensure a reasonable fit to the observed data by requiring that (4.1) be exact for polynomials up to a certain degree. We say that (4.1) is exact for degree r if the coefficients {c_j} are such that, for each polynomial p(x) of degree r or less,


p(x) = Σ_{j=-k}^{m} c_j p(x + j) (4.2)

for all x, but (4.2) is not true in general for polynomials of degree greater than r. Usually, r = 3.

Let {p_0, p_1, . . . , p_r} be a basis for the polynomials of degree r or less. Let

P =
[ p_0(-k)  p_1(-k)  · · ·  p_r(-k) ]
[   ...      ...             ...   ]
[ p_0(m)   p_1(m)   · · ·  p_r(m)  ]

and

c = (c_{-k}, . . . , c_0, . . . , c_m)^T.

Let j denote the vector, of the same length as c, whose only nonzero entry is a 1 in the position corresponding to c_0. Then condition (4.2) can be expressed as

j^T P = c^T P. (4.3)

A sequence of graduated values {v_i} is considered to be smooth if its z-th differences are numerically small. Observe that

Δ^z v_i = Σ_{j=-k}^{m} c_j Δ^z u_{i+j} = c^T K_z u_i,


where u_i here denotes the vector of observed values (u_{i-k}, . . . , u_{i+m+z})^T and K_z is a rectangular differencing matrix of the form introduced in Section 2. Since

c^T K_z u_i = (K_z^T c)^T u_i,

we have, by the Cauchy-Schwarz inequality,

|Δ^z v_i| ≤ ((K_z^T c)^T K_z^T c)^{1/2} (u_i^T u_i)^{1/2}.

As we wish |Δ^z v_i| to be small, it seems reasonable that we would determine the vector of coefficients, c, by minimizing (K_z^T c)^T K_z^T c subject to condition (4.3). This problem can be solved by the method of Lagrange multipliers.

Consider

f(c) = (K_z^T c)^T K_z^T c - c^T P λ,

where λ is the vector of multipliers. By Lemma 1,

0 = (d/dc) f(c)

= 2 K_z K_z^T c - P λ,

or

c = (1/2)(K_z K_z^T)^{-1} P λ. (4.4)

Thus


P^T c = (1/2) P^T (K_z K_z^T)^{-1} P λ. (4.5)

By (4.3), the left side of (4.5) can be replaced by P^T j. Hence

λ = 2 (P^T (K_z K_z^T)^{-1} P)^{-1} P^T j. (4.6)

The substitution of (4.6) into equation (4.4) yields

c = (K_z K_z^T)^{-1} P (P^T (K_z K_z^T)^{-1} P)^{-1} P^T j.

Assume that the number of terms in (4.1) is N, i.e.,

m + 1 + k = N.

Hence the matrix

A = (K_z K_z^T)^{-1} P (P^T (K_z K_z^T)^{-1} P)^{-1} P^T

is an N×N matrix. Since A is idempotent, I - A is also idempotent. It can be shown [3, p. 148] that I - A can itself be written explicitly in terms of differencing matrices and their transposes. The inverse of the matrix K_z K_z^T has been derived in [10]. Note that the dimensions of the differencing matrices K_z, which are rectangular, are uniquely determined by N, r and z.

Since c = Aj and j is a vector with only one nonzero entry, each column of A represents the coefficients of a different moving-weighted-average formula. From the p-th column of A, we get the moving-weighted-average formula

Page 90: Actuarial Mathematics Output

GRADUATION THEORY 77

v_i = Σ_{j=1-p}^{N-p} A_{p+j,p} u_{i+j}.

These formulas are asymmetrical unless N - p = p - 1.

For r = 3, z = 3 and m = k = (N-1)/2, it has been shown ([8, p. 60], [4, p. 18]) that

c_j = c_{-j} = {315 [(m+1)^2 - j^2] [(m+2)^2 - j^2] [(m+3)^2 - j^2] (3m^2 + 12m - 4 - 11j^2)}
÷ {8 (m+1)(m+2)(m+3)(2m-1)(2m+1)(2m+3)(2m+5)(2m+7)(2m+9)}.

For further details on graduation by moving weighted averages, we refer the reader to [1, Chapter 13], [2], [6, Section II], [9] and [16].
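The matrix expression for c can be evaluated directly. The sketch below (illustrative code, not from the paper) builds a rectangular K_z with the dimensions noted above, computes c for a symmetric 5-term formula with r = z = 3, and checks symmetry, cubic reproduction, and agreement with the central weight 80/143 given by the closed-form coefficient formula at m = 2.

```python
import numpy as np
from math import comb

def diff_matrix(z, n):
    """(n - z) x n z-th differencing matrix."""
    M = np.zeros((n - z, n))
    row = [(-1) ** (z - s) * comb(z, s) for s in range(z + 1)]
    for i in range(n - z):
        M[i, i:i + z + 1] = row
    return M

def mwa_coefficients(m, r=3, z=3):
    """c = (K K^T)^{-1} P (P^T (K K^T)^{-1} P)^{-1} P^T j for a symmetric
    (2m+1)-term minimum-R_z formula exact for degree r."""
    N = 2 * m + 1
    Kz = diff_matrix(z, N + z)                    # N x (N + z), per the dimension remark
    Q = np.linalg.inv(Kz @ Kz.T)                  # (K K^T)^{-1}
    P = np.array([[float(t ** p) for p in range(r + 1)]
                  for t in range(-m, m + 1)])     # basis 1, x, ..., x^r sampled at -m..m
    j = np.zeros(N)
    j[m] = 1.0                                    # indicator of the c_0 position
    return Q @ P @ np.linalg.inv(P.T @ Q @ P) @ (P.T @ j)
```

By construction P^T c = P^T j, so the resulting formula reproduces every cubic exactly.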

5. Smooth-Junction Interpolation. This section discusses a continuous-local method of graduation which is used by actuaries when the observed data are grouped, say, in quinquennial age groups. Without loss of generality, assume that the arguments for which the observations are available are consecutive integers.

The Everett interpolation formula is

E^x = F(1-x) + F(x) E, (5.1)

where

F(x) = Σ_{m=0}^{∞} C(x+m, 2m+1) δ^{2m}, (5.2)

C(a, b) denoting the binomial coefficient.

Formula (5.1) can be generalized as follows. Let G(x) be an operator of the form


G(x) = a(x) I + b(x) δ^2 + c(x) δ^4 + · · · ,

where a(x), b(x), c(x) are functions to be specified. We call the operator

G(1-x) + G(x)E

a generalized Everett interpolation operator. Thus

v(i + x) = G(1-x) u_i + G(x) u_{i+1}. (5.3)

The values of x in formula (5.3) are restricted between 0 and 1. As one applies formula (5.3) from one unit interval to the next, there is a strong analogy to the moving-weighted-average graduation. Indeed, as in the derivation of moving-weighted-average formulas, we impose the condition that formula (5.3) be exact for polynomials up to a certain degree.

It follows from (5.1), (5.2) and E = I + Δ that

E^x = Σ_{m=0}^{∞} C(1-x+m, 2m+1) δ^{2m} + Σ_{m=0}^{∞} C(x+m, 2m+1) δ^{2m} (I + Δ)

= Σ_{m=0}^{∞} {[C(1-x+m, 2m+1) + C(x+m, 2m+1)] δ^{2m} + C(x+m, 2m+1) δ^{2m} Δ}

= Σ_{m=0}^{∞} [C(x+m-1, 2m) + C(x+m, 2m+1) Δ] δ^{2m}, (5.4)

which is the Gauss forward formula. Similarly,

G(1-x) + G(x)E = (a(1-x) + a(x)) I + a(x) Δ

+ (b(1-x) + b(x)) δ^2 + b(x) δ^2 Δ (5.5)

+ (c(1-x) + c(x)) δ^4 + c(x) δ^4 Δ + · · · .

On comparing (5.5) with (5.4), we see that the generalized Everett interpolation formula is exact for degree r if and only if the conditions in the following table, up to and including the one corresponding to the degree r, are satisfied:

Degree   Condition
0        a(1-x) + a(x) = 1
1        a(x) = x
2        b(1-x) + b(x) = x(x-1)/2
3        b(x) = (x+1)x(x-1)/6
4        c(1-x) + c(x) = (x+1)x(x-1)(x-2)/24

For smoothness we impose the requirement that the curve of interpolated (graduated) values not only be continuous but also be at least once differentiable. Thus it is required that, for all i and for m = 0, 1, . . . , M,

v^{(m)}(i-1+x) |_{x=1} = v^{(m)}(i+x) |_{x=0}. (5.6)


(d/dx)^m [G(1-x) E^{-1} + G(x)] |_{x=1} = (d/dx)^m [G(1-x) + G(x) E] |_{x=0},

or

(-1)^m G^{(m)}(0) E^{-1} + G^{(m)}(1) = (-1)^m G^{(m)}(1) + G^{(m)}(0) E. (5.7)

An interpolation formula is called tangential if (5.7) holds for m = 0 and 1; it is called osculatory if (5.7) holds for m = 0, 1 and 2.

For m even, (5.7) becomes

G^{(m)}(0)(E - E^{-1}) = 0,

i.e.,

G^{(m)}(0) = 0,

or

a^{(m)}(0) = b^{(m)}(0) = c^{(m)}(0) = · · · = 0.

For m odd, (5.7) becomes

2 G^{(m)}(1) = G^{(m)}(0)(E + E^{-1})

= G^{(m)}(0)(2I + δ^2),

or

a^{(m)}(1) = a^{(m)}(0),

2 b^{(m)}(1) = 2 b^{(m)}(0) + a^{(m)}(0),

2 c^{(m)}(1) = 2 c^{(m)}(0) + b^{(m)}(0), etc.

An interpolation formula is called reproducing if

v(i) = u_i,

which implies that

G(1) = I,

i.e.,

a(1) = 1

and

b(1) = c(1) = · · · = 0.

Here are some examples of smooth-junction interpolation formulas.

(i) G(x) = x I + b(x) δ^2

(a) b(x) = x^2 (x-1)/2.

This is known as the Karup-King formula. It is reproducing, tangential and exact for quadratics.

(b) b(x) = -x^2/4 for 0 ≤ x ≤ 1/2, and b(x) = ((2x-1)^2 - x^2)/4 for 1/2 ≤ x ≤ 1.

This is a formula due to H. Vaughan. It is also reproducing, tangential and exact for quadratics.


(ii) G(x) = x I + [(x+1)x(x-1)/6] δ^2 + c(x) δ^4

(a) c(x) = x^2 (x-1)(x-5)/48.

This is Shovelton's formula. It is reproducing, tangential and exact for quartics.

(b) c(x) = x^3 (x-1)(5x-7)/24.

This is called Sprague's formula. It is reproducing, osculatory and exact for quartics.

(c) c(x) = -x^3/36.

This formula is due to W.A. Jenkins. It is osculatory and exact for cubics, but not reproducing. Since the curve of interpolated values is a piecewise cubic with second-order continuity, it is a cubic spline.
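As an illustration (a sketch, not from the paper), the Karup-King formula of example (i)(a) can be applied directly from (5.3); the code below checks that it is reproducing and exact for quadratics.

```python
def karup_king(u, i, x):
    """v(i + x) = G(1-x) u_i + G(x) u_{i+1} with G(x) = x I + b(x) delta^2,
    b(x) = x^2 (x - 1)/2.  Requires u[i-1] .. u[i+2] and 0 <= x <= 1."""
    b = lambda t: t * t * (t - 1) / 2
    d2 = lambda j: u[j + 1] - 2 * u[j] + u[j - 1]   # central second difference
    return ((1 - x) * u[i] + b(1 - x) * d2(i)
            + x * u[i + 1] + b(x) * d2(i + 1))
```

Because b(0) = b(1) = 0, the formula returns the tabulated values at integer arguments, and since b(1-x) + b(x) = x(x-1)/2, quadratics are interpolated exactly.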

It is interesting to note that Greville and Vaughan [7] have used L. Schwartz's theory of distributions to develop a general theory of smooth-junction interpolation formulas. The paper is reprinted in [6] which is an important book for any person who wishes to study graduation theory.

We now conclude this discussion with two remarks. For the reader interested in the history of graduation theory, we recommend the article [15] by the late H.L. Seal. To the reader who is statistically inclined, we wish to point out that G.C. Taylor [17] argues that it is not correct to test the goodness of fit of a linear-compound graduation by the chi-square test. The class of linear-compound graduations includes Whittaker-Henderson graduations and graduations by moving-weighted-average and smooth-junction formulas. A recent paper which discusses this point is [9].


BIBLIOGRAPHY

1. Benjamin, B. and J.H. Pollard, The Analysis of Mortality and Other Actuarial Statistics, Heinemann, London, 1980.

2. Borgan, Ø., "On the theory of moving average graduation", Scand. Actuarial J., 1979, 83-105.

3. Greville, T.N.E., "On smoothing a finite table: a matrix approach", J. Soc. Indust. Appl. Math., 5 (1957), 137-154. Reprinted in [6].

4. Greville, T.N.E., Graduation, Part 5 Study Note 53.1.73, Society of Actuaries, Chicago, 1974.

5. Greville, T.N.E., "Graduation", in Encyclopedia of Statistical Sciences, edited by S. Kotz and N.L. Johnson, Wiley, New York, 3 (1983), 463-469.

6. Greville, T.N.E., Selected Papers of T.N.E. Greville, Charles Babbage Research Centre, Box 370, St. Pierre, Manitoba, 1984.

7. Greville, T.N.E. and H. Vaughan, "Polynomial interpolation in terms of symbolic operators", Trans. Soc. Actuar., 6 (1954), 413-476. Reprinted in [6].

8. Henderson, R., Mathematical Theory of Graduation, Actuarial Society of America, New York, 1938.

9. Hoem, J.M., "A contribution to the statistical theory of linear graduation", Insurance: Math. and Econ., 3 (1984), 1-17.

10. Hoskins, W.D. and P.J. Ponzo, "Some properties of a class of band matrices", Math. Computation, 26 (1972), 393-400.

11. Leamer, E.E., Specification Searches, Wiley, New York, 1978.

12. London, R.L., Graduation: The Revision of Estimates, Part 5 Study Note 54-01-84, Society of Actuaries, Itasca, 1984.

13. Miller, M.D., Elements of Graduation, Actuarial Society of America, New York, 1946.

14. Schuette, D.R., "A linear programming approach to graduation", Trans. Soc. Actuar., 30 (1978), 407-431; Discussion 433-445.

15. Seal, H.L., "Graduation by piecewise cubic polynomials: a historical review", Blätter der Deutschen Gesellschaft für Versicherungsmathematik, 15 (1981), 89-114.

16. Shiu, E.S.W., "Minimum-R_z moving-weighted-average formulas", Trans. Soc. Actuar., 36 (1984), 489-500.


17. Taylor, G.C., "The chi-square test of a graduation by linear-compound formula", Bull. Ass. Roy. Actu. Belges, 71 (1976), 26-42.

DEPARTMENT OF ACTUARIAL AND MANAGEMENT SCIENCES UNIVERSITY OF MANITOBA WINNIPEG, MANITOBA R3T 2N2


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

Actuarial Assumptions and Models for Social Security Projections

JOHN A. BEEKMAN

ABSTRACT. This paper describes the economic, programmatic, and demographic assumptions, and mathematical models used to project the financial status of the Old Age, Survivors, and Disability Insurance (OASDI) System. A Social Security Area population projection starts with an estimate of the current population by age, sex, and marital status, and carries the estimate forward as far as 2080. Comparable projections are made for real Gross National Product, average wages in covered employ­ment, consumer price index, average annual interest rate earned on the OASDI trust funds, and average annual unemployment rate. The actuarial balance of the OASDI program is the difference, for a time period, between the projected average income rate and the projected average cost rate, both expressed as percentages of effective taxable payroll. Such balances are projected 75 years. The Office of the Actuary, Social Security Administration, performs sensitivity analyses to examine actuarial balances as demographic, economic, and programmatic assumptions are varied.

1. Introduction. Over 91% of all employed people in the United States are covered under the Old Age, Survivors, and Disability Insurance (OASDI) System. With the Social Security Amendments of 1983 this percentage will increase. The Office of the Actuary of the Social Security Administration must make economic, programmatic, and demographic assumptions, and use them to project the financial status of the OASDI program. These estimates are made over three time periods: short range (5-10 years), medium range (25 years), and long range (75 years). Short-range estimates can reveal a need for legislative action to maintain the payment of benefits. Long-range and medium-range estimates can evaluate the size of

1980 Mathematics Subject Classification. 90A30.

© 1986 American Mathematical Society


http://dx.doi.org/10.1090/psapm/035/849143


86 J.A. BEEKMAN

the financial obligation that the OASDI program will place on future generations.

Sections 2, 3, and 4 of this paper are concerned with demographic assumptions and techniques which the Office of the Actuary utilizes. Cost estimates for OASDI must rely on population projections. A Social Security Area population projection starts with an estimate of the current population by age, sex, and marital status, and carries the estimate forward in time. A primary requisite for such projections is that of survival probabilities. These involve the construction of life tables, an analysis of mortality experience by cause of death as well as by age and sex, postulated annual improvement rates in mortality, and the estimation of probabilities of death. Dependency ratios are described in the fourth section. These are ratios of retired versus working populations, youth versus working populations, and youth plus retired versus working populations.

Sections 5 through 8 discuss the economic assumptions and mathematical models which the Office of the Actuary uses in its cost estimates. These include real Gross National Product (GNP), average wages in covered employment, consumer price index, average annual interest rate earned on the OASDI trust funds, and average annual unemployment rate. Projections of these quantities are prepared on the basis of four sets of assumptions about the state of the economy in future years. Mathematical models are used to provide linkages between productivity and average wages, increases in average wages as related to increases in productivity and other factors, labor force participation rates, and projections of potential GNP. Further actuarial assumptions and models include regression equations for covered worker rates, projected covered populations, relations between Effective Taxable Payroll (ETP) and GNP, projections of linkages between ETP and GNP, disability incidence and termination rates, and actuarial present values of monthly annuities payable to disabled workers. The actuarial balance of the OASDI program is the difference, for a time period, between the projected average income rate and the projected average cost rate, both expressed as percentages of ETP. Such balances are projected 75 years. The Office of the Actuary performs sensitivity analyses to examine actuarial balances as demographic, economic, and programmatic assumptions are varied.

2. Population Projections - Preliminary Models. Various mathematical models can be used to project population changes (see Keyfitz and Beekman, 1984). Let a population count at time $t$ be denoted by $P_t$ when discrete time values are used, or $P(t)$ over continuous time. The model


$P_t = P_0 (1+x)^t$

reflects geometric growth. The unit of time may be a month, a year, or 5 years, provided $x$ is the fraction of increase per unit of time.

The differential equation

$r(t) = \frac{1}{P(t)} \frac{dP(t)}{dt}$

describes a population with a variable increase r(t). This leads to the solution

$P(t) = P(0) \exp\left[\int_0^t r(s)\,ds\right].$
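Both growth models can be checked numerically. The sketch below (all numbers and function names are ours, invented for illustration) implements geometric growth and the continuous model with a trapezoid-rule integral of $r(s)$:

```python
import math

def geometric(p0, x, t):
    # P_t = P_0 (1 + x)^t: geometric growth at fraction x per unit of time
    return p0 * (1.0 + x) ** t

def continuous(p0, r, t, steps=1000):
    # P(t) = P(0) exp( integral_0^t r(s) ds ), trapezoid-rule integral
    h = t / steps
    integral = sum(0.5 * (r(i * h) + r((i + 1) * h)) * h for i in range(steps))
    return p0 * math.exp(integral)
```

With a constant rate $r$, the continuous model reduces to exponential growth $P(0)e^{rt}$.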

It is possible to use stochastic models for population projections. For example, the following appears in (Beekman, 1984). Consider a Markov chain with states of nature u (urban dweller) and r (rural dweller). Let the matrix of transition probabilities be

$M = \begin{pmatrix} p_{uu} & p_{ur} \\ p_{ru} & p_{rr} \end{pmatrix}.$

Assume that the initial distribution of urban and rural dwellers is

$P^{(0)} = \begin{pmatrix} p_u^{(0)} \\ p_r^{(0)} \end{pmatrix}.$

The superscript (0) of $P$ denotes the initial time. The distribution $t$ time units later is given by

$P^{(t)} = (M')^t P^{(0)},$

where $M'$ is the transpose of $M$. The Markovian model can be improved to provide for growth. Let the crude rate of natural increase be given by


$\Delta = \frac{B - D}{P},$

where $B$ and $D$ refer to births and deaths in a year, and $P$ is the population at midyear. The word "crude" reflects the disregard of age. The population projections then become

$P^{(2)} = M' e^{\Delta} P^{(1)} = (M')^2 e^{2\Delta} P^{(0)}.$

Such models are valuable for instructional purposes. Students can project a country's population forward 25, 50, 75, even 100 years, and see how small yearly growth rates can lead to enormous populations for future years. The Markov chain model can show a future reduction in births (due to rural to urban moves) yet enlargement of social security needs.
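A minimal run of the urban/rural chain with growth makes the point concrete. The transition probabilities, initial counts, and crude increase rate below are invented; the distribution drifts toward the urban state while the total grows by a factor $e^{\Delta}$ each year:

```python
import math

# Transition matrix: rows are from-states (u, r), columns are to-states (u, r).
M = [[0.98, 0.02],
     [0.10, 0.90]]

def step(p, delta):
    # Apply the transpose M' to the column vector p, then the growth e^delta.
    pu = (M[0][0] * p[0] + M[1][0] * p[1]) * math.exp(delta)
    pr = (M[0][1] * p[0] + M[1][1] * p[1]) * math.exp(delta)
    return [pu, pr]

p = [60.0, 40.0]   # hypothetical initial urban/rural counts (millions)
delta = 0.01       # invented crude rate of natural increase
for _ in range(25):
    p = step(p, delta)
total = p[0] + p[1]
urban_share = p[0] / total
```

Because each row of the transition matrix sums to 1, the chain itself preserves total population; all growth comes from the $e^{\Delta}$ factor.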

Two complete issues (Volume 3, Number 4, 1984 and Volume 4, Number 1, 1985) of the journal Insurance: Mathematics and Economics are devoted to the techniques and implications of population projections. These issues contain thirteen papers, most of which pertain to the theme of this paper. In particular, attention is called to the papers (Keyfitz, 1984), (Myers, 1985, with discussions), (Wilkin, 1985, with discussion), and (Boyle and Freedman, 1985).

3. Social Security Area Population Projections by Marital Status. The OASDI program has so many components that models of the type discussed in Section 2 do not provide enough accuracy for the projections of the subpopulations. Actuarial Study No. 85 (Faber and Wilkin, 1981) used an estimate of the July 1, 1979 population within the Social Security Area as the starting population. Since OASI and DI benefits are linked to age, sex, and marital status, it is those categories into which the projections are distributed. The projections are for 25 and 75 years into the future. The vital events which must be estimated are:


(1) the number, and distribution by sex, of births in each of the years;

(2) the number of deaths (by age, sex, and marital status);

(3) the number of marriages and divorces (by ages of the couple);

(4) the number of widowings (by age and sex);

(5) the number of net immigrations (by age, sex, and marital status).

The starting population for Actuarial Study No. 85 was an estimate of the population in the Social Security Area as of July 1, 1979 by age, sex, and marital status, and the number of existing marriages by age of husband crossed with age of wife. The starting population for Actuarial Study No. 88 (Wilkin (1983)) was an estimate of the population in the Social Security Area as of July 1, 1981 by single years of age, sex, and marital status. The starting population for Actuarial Study No. 92 (Wade, 1984) was the estimated population in the Social Security Area as of July 1, 1982 by the same components.

To further describe such projections, some notation will be adopted, which is from a forthcoming monograph on Social Security (Andrews and Beekman, 1986). Let

P = the size of a population,
B = the number of births,
D = the number of deaths,
C = the number of marriages,
W = the number of widowings,
S = the number of divorces, and
I = the number (net) of immigrants.

Appropriate subscripts and superscripts indicate dates, age and sex designations. As three examples,

${}_1P_x^z$ = the number, at mid-year $z$, of single males then age $x$ last birthday; ${}_2P_y^z$ = the number, at mid-year $z$, of married females then age $y$ last birthday; $P_x^z$ = the number, at mid-year $z$, of males then age $x$ last birthday.

The year-by-year projections for two components of the population can be made (for $z \ge 0$) using the formulas


${}_2P_{x+1}^{z+1} = {}_2P_x^z - {}_2D_x^z + {}_1C_x^z + {}_3C_x^z + {}_4C_x^z - W_x^z - S_x^z + {}_2I_x^z.$

The symbol ${}_3C_x^z$ = the number of marriages from 7/1/z to 6/30/z+1 involving a widowed male age $x$ last birthday at the time of marriage. The symbol ${}_4C_x^z$ = the number of marriages from 7/1/z to 6/30/z+1 involving a divorced male age $x$ last birthday at the time of marriage.

There are comparable formulas involving ${}_3P_x^z$ and ${}_4P_x^z$, as well as a similar list for females.

For each age pair $(x, y)$ and each year $z$, the year-by-year projections of the number $F_{xy}^z$ of marriages can be made using the formula

$F_{x+1,y+1}^{z+1} = F_{xy}^z - D_{xy}^z + C_{xy}^z - S_{xy}^z + I_{xy}^z.$

Two charts which show existing distributions of the population by marital status and their long-range projections are given in the Appendix as Charts 1 and 2.
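The bookkeeping identity for married males described above can be exercised on toy numbers. Every count below is invented and the helper name is ours; the point is only that each vital event enters with the sign the identity dictates:

```python
# One-year update for married males aged x: deaths, widowings, and divorces
# leave the married state; marriages of single/widowed/divorced men enter it;
# net immigration adjusts the total. All counts are hypothetical.
def project_married_males(married, deaths, marriages_single,
                          marriages_widowed, marriages_divorced,
                          widowings, divorces, net_immigration):
    return (married - deaths
            + marriages_single + marriages_widowed + marriages_divorced
            - widowings - divorces + net_immigration)

next_year = project_married_males(50000, 300, 1200, 80, 400, 250, 900, 150)
```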

To calculate probabilities of survival by age last birthday, life tables for each sex were constructed for each year from 1982 to 2080. Let $l_x$, $0 \le x \le \omega$, be a function which models the diminution of a cohort as it grows older. (Here $\omega$ is the assumed terminal age.) A life table shows values of $l_x$ for non-negative integral values of $x$ (with $l_0$ equalling a convenient number such as 100,000), as well as the numbers-of-dying function $d_x = l_x - l_{x+1}$. A life table portrays the probabilities of dying at the several ages, $q_x = d_x / l_x$, $x = 0, 1, 2, \ldots$, and models a stationary population. Let $L_x$ denote the stationary population function $\int_0^1 l_{x+t}\,dt$. Let $L_x^z$ be from a life table for the year $z$. The survival factor $L_{x+1}^z / L_x^z$ was used to project (in the absence of migration) the population $P_x^z$ to $P_{x+1}^{z+1}$. The factor $L_0 / l_0$ was used as the probability of survival for newborns. In the preparation of projected life tables, the mortality experience was analyzed by cause of death as well as by age and sex. The ten categories into which the causes were classified are: (1) heart diseases, (2) malignancies, (3) vascular diseases, (4) accidents, suicide, homicide, (5) respiratory diseases, (6) congenital malformations, diseases of early infancy, (7) digestive system diseases, (8) diabetes, (9) cirrhosis, and (10) other. Postulated annual improvement rates from those mortality causes were developed as follows. Let $(D_x^{(k)})^z$ = the number of deaths due to cause $k$ during calendar year $z$


among those people age $x$ last birthday at the time of death. The population central death rate between ages $x$ and $x+n$ due to cause $k$ was computed as

$({}_nM_x^{(k)})^z = \frac{(D_x^{(k)})^z + (D_{x+1}^{(k)})^z + \cdots + (D_{x+n-1}^{(k)})^z}{P_x^z + P_{x+1}^z + \cdots + P_{x+n-1}^z}.$

More than one age was used to improve the accuracy of the rates. If needed, one could then use graduation of data techniques to obtain single age rates.
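The life-table quantities defined above can be computed directly from an $l_x$ column. The sketch below uses an invented column and approximates $L_x$ by the trapezoid value $(l_x + l_{x+1})/2$:

```python
# Life-table arithmetic: deaths d_x, mortality q_x, the stationary population
# L_x (trapezoid approximation of the integral of l between x and x+1), and
# the survival factors L_{x+1}/L_x used to age a population one year.
# The l_x column is invented for illustration.
l = [100000.0, 99000.0, 98900.0, 98820.0, 98750.0, 98680.0]

d = [l[x] - l[x + 1] for x in range(len(l) - 1)]
q = [d[x] / l[x] for x in range(len(d))]
L = [(l[x] + l[x + 1]) / 2.0 for x in range(len(l) - 1)]
surv = [L[x + 1] / L[x] for x in range(len(L) - 1)]
```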

In Actuarial Study No. 85, an average annual rate of improvement in mortality was developed (by sex and age group) for each cause of death by fitting a regression line of the form

$\log ({}_nM_x^{(k)})^z = a_k z + \log b_k$

for the years $z = 1968$ to 1978. Assuming the fit is good, then

$({}_nM_x^{(k)})^z = b_k e^{a_k z},$

in which case

$\left[({}_nM_x^{(k)})^z - ({}_nM_x^{(k)})^{z+1}\right] \Big/ ({}_nM_x^{(k)})^z = 1 - e^{a_k}.$

Hence $1 - e^{a_k}$ can be characterized as an average annual rate of improvement in the mortality due to cause $k$.
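The regression and the resulting improvement rate can be reproduced on synthetic data. Below, central death rates are generated from an assumed slope $a_k$ (the values are ours, not the study's), so an ordinary least-squares fit recovers the slope exactly and $1 - e^{a_k}$ is the annual improvement:

```python
import math

# Synthetic central death rates with a known log-linear trend.
true_a = -0.015            # assumed slope a_k (about 1.5%/yr improvement)
true_b = 0.004             # assumed level
years = list(range(1968, 1979))
rates = [true_b * math.exp(true_a * (z - 1968)) for z in years]

# Ordinary least squares for log(rate) = a*z + const; the slope is a_k.
ys = [math.log(r) for r in rates]
n = len(years)
zbar = sum(years) / n
ybar = sum(ys) / n
a_hat = (sum((z - zbar) * (y - ybar) for z, y in zip(years, ys))
         / sum((z - zbar) ** 2 for z in years))
improvement = 1.0 - math.exp(a_hat)
```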

The estimation of the probabilities of death is fundamental to the construction of life tables. The estimates $\hat q_x^z$ for the theoretical life table values $q_x^z$ are based on the projected central death rates ${}_nM_x^z$, except for ages 95 and higher. A relationship between the $q$'s and the $M$'s must be identified using the available experience. For the infant ages 0 through 4, it was

$\hat q_0^z = \hat q_0^{z_0} \left({}_1M_0^z \big/ {}_1M_0^{z_0}\right)$


$\hat q_x^z = \hat q_x^{z_0} \left({}_4M_1^z \big/ {}_4M_1^{z_0}\right), \quad x = 1, 2, 3, 4$

(with $z_0$ a base year), for future years $z$. The relationship for ages 5 through 94 is much more complicated and will not be included here. The values of ${}_nM_x^z$ were either computed from available population data or by estimation using some projection assumption.

The studies prepared by the Office of the Actuary include tables which reflect the different fertility and mortality assumptions under the named Alternatives I, II, and III sets of assumptions. Under the intermediate assumption (Alternative II), the ultimate total fertility rate was assumed to be 2.1 children per woman in Actuarial Study No. 85. A total fertility rate of about 2.1 is sufficient to replenish the population, in the absence of net migration. Alternative I (optimistic) and Alternative III (pessimistic) assumptions used total fertility rates of 2.4 and 1.7 children per woman, respectively. The optimistic, intermediate, and pessimistic sets of assumptions go from a low-cost estimate to a high-cost estimate. The actual total fertility rates for the years 1920 through 1980 with projections (by alternative) to 2080 are portrayed in the Appendix as Chart 3. Past improvement in mortality has varied greatly by cause of death. Since it was expected that future improvement in mortality would also vary greatly by cause of death, Alternative II death rates by age and sex were projected separately for the ten groups of causes of death. Central death rates were projected to 2080 by age group, sex, and cause of death from their assumed 1980 levels by applying assumed annual percentage improvements. Under Alternatives I and III, the central death rates for 1979 and 1980 were assumed to be the same as for Alternative II. For 1980-2080, Alternatives I and III annual improvements in central death rates were assumed to average half and twice the Alternative II improvements, respectively. Actual life expectancies (for males and females) for the years 1900 through 1980 with projections (by alternatives) to 2080 are given in the Appendix as Charts 4 and 5.

Projections of the Alternatives I, II, and III Populations in the Social Security Area by Age, Sex, and Marital Status through the year 2080 appear in Actuarial Studies 85, 88, and 92.

4. Dependency Ratios. Dependency ratios give global measures of ratios of subpopulations receiving benefits, to the working population. In particular, they are ratios of retired versus working populations, youth versus working populations, and youth plus retired versus working populations. For a stationary population, represented by a life table with cohort survival function $l(y)$, with a retirement age of 65, and first working age of 20,


$R(65) = \int_{65}^{\omega} l(y)\,dy \Big/ \int_{20}^{65} l(y)\,dy$

expresses broadly the financial burden of a social security pension scheme. For a stationary population, one assumes a constant number $B$ of births each year. In a stable population, one assumes that births follow an exponential function $Be^{rt}$, $t \ge 0$, where $r$ is the growth rate. This will affect the age distribution in future years. Thus, the function $l(y)$ should be replaced by $e^{-ry} l(y)$, $0 \le y \le \omega$, to reflect the smaller number of births $y$ years ago:

$R_r(65) = \int_{65}^{\omega} e^{-ry} l(y)\,dy \Big/ \int_{20}^{65} e^{-ry} l(y)\,dy.$

The financial burden of the social security scheme in a country with a stationary population can be contrasted with that of a country with a stable population. The solution follows that of Problem 38, Ch. III of Keyfitz and Beekman (1984),

$\ln R_r = \ln\left(\int_{65}^{\omega} e^{-ry} l(y)\,dy\right) - \ln\left(\int_{20}^{65} e^{-ry} l(y)\,dy\right),$

$\frac{d \ln R_r}{dr} = -\frac{\int_{65}^{\omega} y\,e^{-ry} l(y)\,dy}{\int_{65}^{\omega} e^{-ry} l(y)\,dy} + \frac{\int_{20}^{65} y\,e^{-ry} l(y)\,dy}{\int_{20}^{65} e^{-ry} l(y)\,dy} = m_1 - m_2,$

where $m_1$ = mean of the population 20 to 65, and $m_2$ = mean of the population over 65. So

$\frac{1}{R_r}\frac{dR_r}{dr} = \frac{d \ln R_r}{dr} = m_1 - m_2, \qquad \frac{\Delta R_r}{R_r} \approx (m_1 - m_2)\,\Delta r.$

Suppose that $m_1 = 35$ and $m_2 = 75$. Then $\Delta R_r / R_r = -40\,\Delta r$. Thus, for each percentage point of difference in $r$ there will be a difference of 40% of $R_r$ in the opposite direction. For example, if one country is increasing at 2% per annum, and another country is stationary, the first country's burden is only 20% of the second country's burden.
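The sensitivity can be verified numerically. The sketch below assumes a de Moivre-style survival function $l(y) = 1 - y/\omega$ (our choice, not the paper's) and compares $R_{0.02}(65)$ with $R_0(65)$ against the rule of thumb $\Delta R_r / R_r \approx -40\,\Delta r$:

```python
import math

OMEGA = 100.0   # assumed terminal age

def l(y):
    # de Moivre-style survival function (an assumption for this sketch)
    return max(0.0, 1.0 - y / OMEGA)

def integral(f, a, b, steps=20000):
    # trapezoid rule
    h = (b - a) / steps
    return sum(0.5 * (f(a + i * h) + f(a + (i + 1) * h)) * h
               for i in range(steps))

def R(r):
    # R_r(65): retired (65 to omega) over working (20 to 65) population
    num = integral(lambda y: math.exp(-r * y) * l(y), 65.0, OMEGA)
    den = integral(lambda y: math.exp(-r * y) * l(y), 20.0, 65.0)
    return num / den

ratio = R(0.02) / R(0.0)
```

The growing population carries roughly half the pension burden of the stationary one, in rough agreement with the linear approximation $e^{-40 \times 0.02} \approx 0.45$.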

Tables 22A, B, C of Actuarial Study No. 85 present aged and total dependency ratios for the years 1960, 1965, 1970, 1975, 1980 with quinquennial year projections to 2080. The aged dependency ratio is the population 65 and over divided by the population 20 to 64. The total dependency ratio is the population under 20 plus the population 65 and over divided by the population 20 to 64. Under Alternative II (intermediate assumptions), the aged dependency ratio is projected to increase to .386 in the year 2035 and then to remain level. The total dependency ratio is projected to fall from a level of .753 in 1980 to .683 in 2005 and then to rise to .859 in 2030 and stay about level thereafter. Under Alternative I (optimistic assumptions), the aged dependency ratio is projected to increase from .195 in 1980 to .330 in 2030 and then to decrease to .286 in 2080. The total dependency ratio is projected to fall from a value of .753 in 1980 to .712 in 2005, rise to .890 in 2030, fall to .851 in 2050 and remain at about that level to 2080. Under Alternative III (pessimistic assumptions), the aged dependency ratio is projected to steadily rise from a value of .195 in 1980 to .662 in 2080. The total dependency ratio is projected to decrease from a value of .753 in 1980 to .646 in 2010, then rise to a value of 1.033 in 2080.

Youth dependency ratios, aged dependency ratios, and total dependency ratios for Canada, 1901-71 with projections for 1976-2071 are presented in Table 2 of Brown (1982). Table 3 of the same reference presents aged dependency ratios for ten industrial countries for 1950-2000. Table 4 projects dependency ratios for Canada through 2031, with recognition of the fact that per capita costs for the aged are 2.5 times those of youth.

5. Economic Assumptions for Social Security Projections. The Office of the Actuary, Social Security Administration, must make basic economic assumptions in order to examine the financial status of the OASDI system for various periods of time. These include real Gross National Product (GNP), average wages in covered employment, consumer price index, average annual interest rate earned on the trust funds, and average annual unemployment rate. Projections of these quantities are prepared on the basis of four sets of assumptions about the state of the economy in future years. One analysis of the changes in average real wages is based on the linkages between productivity and such average wages. Let us define the quantities:

P = units of production per hour paid,
C = compensation per unit of production,
W = ratio of wages to compensation,
H = average hours paid per week per worker,
R = average weeks worked per year, plus residual,
A = average real wages per year per worker.

Then $A = P \times C \times W \times H \times R$. The linkages allow the estimation of $A$ from the more elementary variables. The following additive relation is more helpful in showing how increases in productivity and other factors contribute to increases in average real wages. Let $\Delta_i$ = average annual rate of increase in $i$, for $i = P, C, W, H, R$, and $A$, expressed as a percent. Then

$(1 + .01\Delta_A) = (1 + .01\Delta_P)(1 + .01\Delta_C)(1 + .01\Delta_W)(1 + .01\Delta_H)(1 + .01\Delta_R)$

and, to first order, $\Delta_A = \Delta_P + \Delta_C + \Delta_W + \Delta_H + \Delta_R$. Table 7 of Actuarial Study 90 contains values for the $\Delta_i$'s for the periods 1952-62, 1962-72, 1972-82, and 1952-82. The ultimate assumptions about $\Delta_A$ varied from 2.50 for the optimistic set of assumptions to 1.00 for the pessimistic set of assumptions.
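The multiplicative linkage and its additive first-order approximation can be compared directly; the annual percent increases below are invented for illustration:

```python
# Exact Delta_A from the product of (1 + .01*Delta_i) factors, versus the
# first-order sum Delta_P + Delta_C + Delta_W + Delta_H + Delta_R.
deltas = {"P": 1.8, "C": 0.3, "W": -0.2, "H": -0.3, "R": 0.1}  # percent, invented

product = 1.0
for d in deltas.values():
    product *= 1.0 + 0.01 * d
delta_A_exact = 100.0 * (product - 1.0)
delta_A_approx = sum(deltas.values())
```

At annual rates of a few percent, the two agree to within a few thousandths of a percentage point, which is why the additive form is useful for attribution.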

The total labor force participation rate is calculated as the total labor force (i.e., the number of people working or seeking work) divided by the noninstitutionalized U.S. population. Actuarial Study 90 has documented the variation of such rates through time from age group to age group within either sex. The variation has been significant, especially for females. These rates were projected to the year 2060, under the four alternative sets of assumptions.

The concept of potential GNP is an important instrument in short-range economic forecasts. It "... is an estimate of what the economy could produce at high rates of utilization of the factors of production - labor, capital, and natural resources" - (page 83, Economic Report of the President (January 1978)). A comparison of actual GNP with potential GNP provides insight into the likely unemployment rate, and also provides a smooth baseline series that can be projected more easily than actual GNP, which fluctuates widely. Many economic parameters can be analyzed against the potential GNP. Let PRGNP(t) be the potential real GNP for year t. One short range model says that


$PRGNP(t) = F_1(t) \times F_2(t) \times F_3(t) \times F_4(t)$

where

$F_1(t)$ = average full-employment employed labor force for year $t$,
$F_2(t)$ = average level of real output per hour paid (productivity) for year $t$,
$F_3(t)$ = average number of hours paid per week for year $t$,
$F_4(t)$ = residual for year $t$ (mainly weeks per year).

Furthermore,

$F_1(t) = H_1(t) \times H_2(t) \times H_3(t)$

where

$H_1(t)$ = mid-year United States population at time $t$,
$H_2(t)$ = average full-employment labor force participation rate for year $t$,
$H_3(t)$ = $1 - r(t)$, where $r(t)$ = average full-employment unemployment rate for year $t$. (Even in a year of "full employment", there are some unemployed people.)

A recursive model for potential GNP can be described as follows. Let

$\Delta_1(t-1)$ = projected percent increase in the full-employment employed labor force from year $t-1$ to year $t$,
$\Delta_2(t-1)$ = assumed percent increase in productivity from year $t-1$ to year $t$,
$\Delta_3(t-1)$ = assumed percent decline in average hours paid per week,
$\Delta_4(t-1)$ = assumed change in residual factor.

Then

$PRGNP(t) = [1 + .01\Delta_1(t-1)][1 + .01\Delta_2(t-1)][1 + .01\Delta_3(t-1)][1 + .01\Delta_4(t-1)] \times PRGNP(t-1).$
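The recursion is straightforward to implement; the percent changes and starting level below are invented and held constant for ten years:

```python
# Recursive potential-GNP projection: each year's level is last year's level
# times the four (1 + .01*Delta_i) factors. All inputs are hypothetical.
def project_prgnp(prgnp0, yearly_deltas):
    path = [prgnp0]
    for d1, d2, d3, d4 in yearly_deltas:
        factor = ((1 + 0.01 * d1) * (1 + 0.01 * d2)
                  * (1 + 0.01 * d3) * (1 + 0.01 * d4))
        path.append(path[-1] * factor)
    return path

# labor force +1.5%, productivity +1.2%, hours -0.3%, residual 0.0%
path = project_prgnp(3000.0, [(1.5, 1.2, -0.3, 0.0)] * 10)
```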

To project actual real GNP, percentage rates of change were postulated for each alternative for the first few years until the ratio of actual to potential GNP reached its ultimate level. These ultimate levels were determined by regression equations of the form $y_t = \alpha x_t + \beta$, where

$y_t$ = age/sex-specific unemployment rate in year $t$,
$x_t$ = ratio of actual GNP to potential GNP in year $t$.

A boundary condition is that when $y_t$ is equal to the full-employment unemployment rate, then $x_t = 1.00$, indicating complete agreement between actual GNP and potential GNP. After the ultimate ratio is achieved, actual GNP is assumed to increase at the same rate as potential GNP.

6. Further Actuarial Assumptions Specific to OASDI (Programmatic Assumptions). The Office of the Actuary must project numbers of covered workers. The number of covered workers for a calendar year represents the number of workers who pay Social Security taxes at any time during that year. A projection of future covered workers requires an appropriate projected population, projected labor force participation rates, and unemployment rates.

Effective taxable payroll (ETP) consists of the total earnings subject to Social Security taxation, including deemed wages based on military service. Some adjustments are required because of lower tax rates which apply to tips, multiple-employer "excess wages", and self-employment income through 1983. After the adjustments have been performed, the ETP can be multiplied by the combined employer-employee tax rate to produce the accrued payroll tax amount for that year. Expressing the costs of the OASDI program as a percentage of ETP permits a comparison of the program costs with the combined employer-employee tax rate to see if the program is operating with a surplus or deficit. A disadvantage of this comparison is that it views the program in isolation from the national economy. By expressing OASDI costs as a percentage of GNP, the national commitment to the program is demonstrated.

The relation between ETP and GNP is developed as follows:


$\frac{ETP}{GNP} = \frac{.5(\text{Employee Taxable Wages})}{GNP} + \frac{.5(\text{Employer Taxable Wages})}{GNP} + \frac{\text{Effective Taxable Self-Employment Income}}{GNP},$

$\frac{\text{Employee(er) Taxable Wages}}{GNP} = \frac{\text{Employee(er) Taxable Wages}}{\text{Covered Wages}} \times \frac{\text{Covered Wages}}{\text{Wages}} \times \frac{\text{Wages}}{\text{Employee(er) Compensation}} \times \frac{\text{Employee(er) Compensation}}{GNP},$

$\frac{\text{Eff. Taxable SEI}}{GNP} = \frac{\text{Eff. Taxable SEI}}{\text{Covered SEI}} \times \frac{\text{Covered SEI}}{SEI} \times \frac{SEI}{GNP},$

where SEI = Self-Employed Income. Table 11A of Actuarial Study 90 gives values of the linkages between GNP and ETP for the years 1951-1982. The values for ETP/GNP in the years 1951, 1956, 1961, 1966, 1971, 1976, and 1981 were .3544, .3900, .3865, .4011, .3853, .4186, and .4346, respectively. For each of the alternative sets of assumptions, values of the linkages were projected for 1983-2060. The ETP projections for 1983 through 1992 were obtained from the short range economic model. Beyond 1992 the taxable payrolls for employees, employers, and the self-employed were determined separately by multiplying the previous year's payroll by the percentage increase in the number of covered workers, and by the percentage change in average covered wages. Each of the projected payrolls was multiplied by the appropriate tax rate, and the results were added to yield total taxes. The ETP was obtained by dividing the total taxes by the combined employer-employee tax rate. GNP is derived by dividing ETP by the ratio ETP/GNP. Tables 18B-E of Actuarial Study 90 summarize GNP, ETP, and connecting values for 1983-2060 for the four sets of hypotheses.

The most highly involved economic parameter is ETP. Projection of ETP requires careful consideration of all the other economic parameters. The most basic economic parameter is labor productivity. Assumptions about future changes in productivity constitute the starting point for developing the projections for the other parameters.


In order to estimate the financial status of the OASDI program, the economic parameters must be projected. One track of development begins with the productivity assumption and yields projections of aggregate values for potential and actual GNP. Actuarial Study No. 94 (Goss, Glanz, An, 1985) refers to this as the "aggregate method". This method produces projections of ETP through the application of a series of linkages between GNP and payroll. The second track of development starts with the productivity assumption, and then yields projections for the average levels of earnings for both workers in the total U.S. economy and for those in OASDI covered employment. This method is referred to as the "average method", and also produces projections of ETP. Two tracks of development are necessary because of the need for a broad spectrum of economic projections. Pages 8 and 9 of Actuarial Study 94 are devoted to charts which portray the progression of development of parameters in each track. Annual percent changes in productivity, U.S. population, average working hours per week, unemployment rates, and other basic parameters are traced along with the derived parameters such as full-employment labor force, real potential GNP, actual real GNP, and effective taxable payroll.

The advantages of the two methods are described on pages 24-25 of Actuarial Study 94. "The primary advantage of the aggregate method is that the development of actual GNP from the relatively smoothly growing potential GNP and productivity index permits careful analysis of historical and possible future economic cycles. For more distant future periods when actual GNP is assumed to grow in tandem with potential GNP and productivity, consideration of cycles is unnecessary, and the average method becomes a more efficient means to the ultimate projection of effective taxable payroll." In summary, it is felt that the aggregate method is more useful in preparing short-run projections, while the average method is better in the preparation of long-range projections.

7. Disability Insurance. The DI portion of the OASDI system is large. For example, disability benefits were paid to more than 2.8 million disabled workers in April, 1980. The granting of disability benefits starts a series of monthly payments which continues until one of three events occurs: (1) death, (2) recovery, (3) attainment of age 65. The following notation summarizes functions which are used in calculating annuities payable until one of events (1), (2), or (3) occurs.


Symbol Meaning

$[x]$ = calendar age at entitlement.

$l_{[x]+n}$ = expected number of lives surviving in disability status, who became entitled to disability benefits at age $x$, $n$ years ago.

$q^{(d)}_{[x]+n}$ = probability that a person who became disabled $n$ years ago at age $x$ will die between ages $x+n$ and $x+n+1$.*

$q^{(r)}_{[x]+n}$ = probability that a person who became disabled $n$ years ago at age $x$ will recover between ages $x+n$ and $x+n+1$.*

$q^{(t)}_{[x]+n}$ = probability that a person who became disabled $n$ years ago at age $x$ will terminate between ages $x+n$ and $x+n+1$ because of death or recovery.

The last probability is related to the first two probabilities through the formula

$q^{(t)}_{[x]+n} = 1 - \left(1 - q^{(d)}_{[x]+n}\right)\left(1 - q^{(r)}_{[x]+n}\right).$

Select rates $q^{(d)}_{[x]+n}$, $n = 1, 2, 3, 4$, were graduated by a two-dimensional Whittaker-Henderson Type B formula developed by Steven F. McKay and John C. Wilkin in an appendix (pages 34-44) to Actuarial Study 74. Ultimate rates $q^{(d)}_x$ for $x = 20, 21, \ldots, 64$ and similar select and ultimate recovery rates were also derived.

Termination probabilities (and hence continuance probabilities) are used to calculate the actuarial present values of monthly annuities payable to disabled workers. Tables 17 and 18 of Actuarial Study 81 show such values of annuity payments (by age at entitlement, for males and females) that cease at attainment of age 65.
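A hedged sketch of this calculation: termination probabilities are combined from the absolute rates, and the actuarial present value of an annuity of 1 per year (annual rather than monthly, for brevity) is accumulated while the disabled life remains in force. All rates and the 6% interest assumption are invented:

```python
def termination(qd, qr):
    # q^t = 1 - (1 - q^d)(1 - q^r): combine absolute death and recovery rates
    return 1.0 - (1.0 - qd) * (1.0 - qr)

def annuity_apv(qd_list, qr_list, i=0.06):
    # APV of 1 per year paid at the start of each year while still disabled;
    # payments stop at death, recovery, or the end of the lists (age 65).
    v = 1.0 / (1.0 + i)
    apv, in_force, disc = 0.0, 1.0, 1.0
    for qd, qr in zip(qd_list, qr_list):
        apv += in_force * disc
        in_force *= 1.0 - termination(qd, qr)
        disc *= v
    return apv

qd = [0.05, 0.045, 0.04, 0.035, 0.03]   # invented absolute death rates
qr = [0.10, 0.06, 0.04, 0.03, 0.02]     # invented absolute recovery rates
apv = annuity_apv(qd, qr)
```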

*In actuarial terminology, these are absolute rates, that is, probabilities related to associated single decrement tables.


8. Long-Range Cost Estimates for the OASDI System. Long-range cost estimates express the size of the financial obligation that the OASDI program places on future generations. In this final section, we will outline the steps in their preparation.

a. Introduction. Principal components for such estimates are projected populations by age-sex and marital status groups, under alternative assumptions; ETP as a base for expressing benefit percentage costs; disability incidence rates and disability termination rates. Actuarial Study 91 (Goss (1984)) describes the projection of fully insured rates, and disability insured rates.

b. Beneficiaries Related to Retired Workers. Projected numbers of retired workers and their spouse and child beneficiaries were developed through 2060.

c. Beneficiaries Related to Disabled Workers. By applying disability incidence rates to the exposed population, the number of newly entitled disabled workers and their beneficiaries is obtained. Termination rates are used to project the number of current disabled workers who will not continue in that status.

d. Survivors of Deceased Workers. It is necessary to project the numbers of widow beneficiaries, orphans (paternal, maternal, or full orphans), mother beneficiaries, and parent beneficiaries.

e. Dual Eligibility. The Office of the Actuary must also estimate the numbers of people who are eligible for both a primary benefit (as either a retired or disabled worker) and an auxiliary or survivor benefit.

f. Average Benefit Levels. To obtain the average benefit levels for monthly benefits, two steps are needed. In the first stage, the level of the average primary insurance amount (PIA) based on the earnings record is projected. Secondly, the percentage of PIA payable is projected.

g. Benefit Payments. The products of the number of beneficiaries and their corresponding average benefit amounts yield the monthly benefit payments. Adjustments for retroactive payments must be included. The number of lump-sum death payments is also projected.

h. Miscellaneous Elements. As a result of the 1983 Social Security amendments, some OASDI monthly benefits are now subject to federal income taxation. The income from such taxes is transferred to OASDI trust funds. Projections of these amounts are needed in the cost estimates. Administrative expenses for the OASDI program are also projected, and included in the cost estimates.


i. Actuarial Balance. The actuarial balance is the difference between the projected average income rate and the projected average cost rate, expressed as a percentage of taxable payroll. The long-range actuarial balance is that difference over a 75-year period, and the medium-range actuarial balance is the difference over a 25-year period. The actuarial balances are projected to be positive (surpluses) for 38 years, starting in 1984. Those surpluses, with the resulting interest income, are sufficient to more than offset projected deficits during the next 37 years. The surpluses help the trust funds accumulate to 548 percent of the annual expenditure level in 2017. After that, the trust fund ratios decline steadily, but still are at the level of 83 percent of the annual expenditure level at the end of the 75 years, on the basis of the intermediate assumptions. Table 29 of Actuarial Study 91 provides the projected cost rates, income rates, and actuarial balances based on the four sets of assumptions for the years 1983-2057. Projections of cost rates, income rates (and hence actuarial balances) to the year 2060 are given in the Appendix as Chart 7. Projections of the trust fund assets as percentages of the yearly expenditures to the year 2060 are given in the Appendix as Chart 8.
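The actuarial balance reduces to a difference of averages over the projection period. The sketch below computes a 75-year balance from invented yearly income and cost rates (not the study's figures):

```python
# Actuarial balance: average income rate minus average cost rate over a
# period, both expressed as percentages of effective taxable payroll.
def actuarial_balance(income_rates, cost_rates):
    n = len(income_rates)
    return sum(income_rates) / n - sum(cost_rates) / n

income = [12.9] * 75                          # assumed flat income rate (%)
cost = [11.0 + 0.06 * t for t in range(75)]   # invented rising cost rate (%)
balance = actuarial_balance(income, cost)
```

A negative balance signals a long-range deficit: the rising cost rates eventually outweigh the roughly level income rates.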

Sensitivity analyses were made on the cost rates, income rates, and actuarial balances as the assumptions were varied, one at a time. The assumptions which were varied were: real-wage differential (ultimate percentage increase in wages minus CPI), CPI, total fertility rates, mortality improvement, net immigration, and disability incidence rates.

Four alternative sets of economic and demographic assumptions were used in projecting OASDI costs. The average cost rates over the 75 years range from a low of 9.81 percent of taxable payroll on the basis of the optimistic assumptions to 16.56 percent of taxable payroll on the basis of the pessimistic assumptions. The 75-year cost rate is 12.84 percent of taxable payroll on the basis of the intermediate assumptions, and 11.99 percent of taxable payroll on the basis of the intermediate assumptions with greater economic growth. The long-range income rates vary very little, since their main component, the tax rates, is fixed by law. The variation was from 12.73 to 13.04 percent of taxable payroll.
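As a check on these figures, the actuarial balances implied by the quoted 75-year rates can be computed directly. The single income rate used below is an assumed value inside the quoted 12.73-13.04 range, since the text does not pair a specific income rate with each scenario.

```python
# Long-range actuarial balance = average income rate - average cost rate,
# both expressed in percent of taxable payroll. Cost rates are from the text;
# the income rate is an illustrative value within the quoted 12.73-13.04 range.

cost_rate = {
    "optimistic": 9.81,
    "intermediate": 12.84,
    "intermediate, greater growth": 11.99,
    "pessimistic": 16.56,
}
income_rate = 12.90  # assumed, for illustration only

# Positive balance = projected long-range surplus; negative = deficit.
balance = {k: round(income_rate - v, 2) for k, v in cost_rate.items()}
print(balance)
```

Under this illustrative pairing, only the pessimistic scenario shows a long-range deficit, which matches the qualitative discussion above.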


SOCIAL SECURITY 103

BIBLIOGRAPHY

1. Andrews, George H. and John A. Beekman, Actuarial Projections for Social Security in the United States, to appear in 1986.

2. Bayo, Francisco and John C. Wilkin (1977), "Experience of Disabled-Worker Benefits under OASDI, 1965-74", Actuarial Study No. 74, Social Security Administration, Baltimore, Md.

3. Beekman, John A. (1984), "Demography for actuarial students", Insurance: Mathematics and Economics 3, 271-277.

4. Boyle, Phelim P. and Ruth Freedman (1985), "Population waves and fertility fluctuations: Social security implications", Insurance: Mathematics and Economics 4, 65-74.

5. Brown, Robert L. (1982), "Actuarial aspects of the changing Canadian demographic profile", Transactions of the Society of Actuaries 34, 13-30.

6. Economic Report of the President, United States Government, Washington, D.C., 1978.

7. Faber, J.F. and J.C. Wilkin (1981), "Social Security Area Population Projections, 1981", Actuarial Study No. 85, Social Security Administration, Baltimore, Md.

8. Goss, Stephen C. (1984), "Long-Range Estimates of the Financial Status of the Old-Age, Survivors, and Disability Insurance Program, 1983", Actuarial Study No. 91, Social Security Administration, Baltimore, Md.

9. Goss, Stephen C., Milton P. Glanz, and Seung H. An (1985), "Economic Projections for OASDI Cost and Income Estimates, 1984", Actuarial Study No. 94, Social Security Administration, Baltimore, Md.

10. Keyfitz, Nathan (1984), "Technology, employment, and the succession of generations", Insurance: Mathematics and Economics 3, 219-230.

11. Keyfitz, N. and J.A. Beekman (1981), Demography Through Problems, Springer-Verlag, New York, Berlin.

12. Myers, Robert J. (1985), "Implications of population change on social insurance systems providing old-age benefits", Insurance: Mathematics and Economics 4, 3-8.

13. Schobel, Bruce D. (1980), "Experience of Disabled-Worker Benefits under OASDI, 1974-78", Actuarial Study No. 81, Social Security Administration, Baltimore, Md.

14. Wade, Alice H. (1984), "Social Security Area Population Projections, 1984", Actuarial Study No. 92, Social Security Administration, Baltimore, Md.

15. Wilkin, John C. (1985), "Population projections for social security cost estimates", Insurance: Mathematics and Economics 4, 23-27.

16. Wilkin, John C. (1983), "Social Security Area Population Projections, 1983", Actuarial Study No. 88, Social Security Administration, Baltimore, Md.

17. Wilkin, John C., Milton P. Glanz, Ronald V. Gresch, and Seung H. An (1984), "Economic Projections for OASDI Cost Estimates, 1983", Actuarial Study No. 90, Social Security Administration, Baltimore, Md.

DEPARTMENT OF MATHEMATICAL SCIENCES BALL STATE UNIVERSITY MUNCIE, IN 47306



APPENDIX

Chart 1* Distribution of the Total Population by Marital Status, Ages 0-100

July 1, 1979 and July 1, 2080 (Alternative I)

[Chart not reproduced here.]

*Chart from Actuarial Study No. 85, by Joseph F. Faber and John C. Wilkin, 1981.



Chart 2* Distribution of the Population by Marital Status, Ages 0-100

July 1, 1982 and July 1, 2080 (Alternative II)

[Chart not reproduced here.]

*Chart 6 from Actuarial Study No. 92, by Alice H. Wade, 1984.



Chart 3* Total Fertility Rate (In Children Per Woman), 1920-2080, Actual and as Projected Under Alternatives I, II, and III

[Chart not reproduced here.]

*Chart 1 from Actuarial Study No. 85, by Joseph F. Faber and John C. Wilkin, 1981.



Chart 4* Male Life Expectancy (In Years), 1900-2080, Actual and as Projected Under Alternatives I, II and III

[Chart not reproduced here.]

*Chart 2A from Actuarial Study No. 85, by Joseph F. Faber and John C. Wilkin, 1981.



Chart 5* Female Life Expectancy (In Years), 1900-2080, Actual and as Projected Under Alternatives I, II and III

[Chart not reproduced here.]

*Chart 2B from Actuarial Study No. 85, by Joseph F. Faber and John C. Wilkin, 1981.



Chart 6* Ratio of Population Aged 85+ to Population Aged 20-64, 1980-2080, Actual and as Projected Under Alternatives I, II and III

[Chart not reproduced here.]

*Chart 5 from Actuarial Study No. 85, by Joseph F. Faber and John C. Wilkin, 1981.



Chart 7* Comparison of the Projected Cost Rates and Income Rates of the OASDI Program Based on Optimistic, Intermediate, Intermediate but with Greater Economic Growth, and Pessimistic Assumptions (as percentage of taxable payroll)

[Chart not reproduced here. The plotted series are the cost rates under the pessimistic, intermediate, intermediate-but-with-greater-economic-growth, and optimistic assumptions, together with the income rate under the intermediate assumptions.]

*Chart 1 from Actuarial Study No. 91, by Stephen C. Goss, 1984.



Chart 8* Projected OASDI Trust Fund Assets at the Beginning of the Year as a Percentage of Expenditures During the Year, Based on Optimistic, Intermediate but with Greater Economic Growth, Intermediate, and Pessimistic Assumptions

[Chart not reproduced here.]

*Chart 2 from Actuarial Study No. 91, by Stephen C. Goss, 1984.


Proceedings of Symposia in Applied Mathematics Volume 35, 1986

On the Performance of Pension Plans

CECIL J. NESBITT

ABSTRACT. Assets of major retirement programs now exceed 1.3 trillion dollars (Table 2, Pension Facts, 1984/85). This growth, during past years of economic stress, provides opportunity for review and reconsideration of what is being accomplished. This paper will be devoted mainly to examining the annuity operations of the companion organizations, Teachers Insurance and Annuity Association (TIAA) and College Retirement Equities Fund (CREF). In particular, annuity incomes that have been provided by these organizations to retirees of 1960, 1965, 1970, 1975 and 1980 will be reviewed. In 1981, TIAA developed a graded benefit annuity, and now CREF is developing a money-market annuity. Other developments may emerge when the Commission on College Retirement completes its work. Under recent economic conditions, vested benefits provided by defined benefit plans are being questioned, and the paper sets forth some of the problems involved.

1. Introduction. This paper will consist mainly of a review of the annuity incomes received by retirees of past years who had accumulations in the companion organizations, Teachers Insurance and Annuity Association (TIAA) and College Retirement Equities Fund (CREF). These organizations have a major role in providing retirement benefits for college and university staffs. TIAA-CREF covers about 750,000 active members and about 135,000 retirees and survivors. The assets of the two organizations amounted to 35 billion dollars at the end of 1984, with 55% in TIAA and 45% in CREF. TIAA invests mainly in bonds and mortgages, while CREF invests almost exclusively in common stocks. Both TIAA and CREF are non-profit organizations, and inflation-fueled high rates of investment return have been passed through to participants.

In a later section, we shall consider some related matters concerning defined benefit plans. These are plans with benefits fixed by formulas and the contributions required to support those benefits determined as a consequence. If a plan has employee contributions, it may have both defined benefit and defined contribution characteristics, and this can lead to problems.

© 1986 American Mathematical Society


http://dx.doi.org/10.1090/psapm/035/849144


114 C.J. NESBITT

We shall first give in broad outline the mathematical bases of TIAA and CREF annuity incomes.

2. TIAA Annuity Formulas. Both TIAA and CREF operate on a defined contribution basis; that is, a level of contributions is fixed, and the benefits are those that can be provided therefrom. Correspondence with TIAA has indicated that the mathematical basis for TIAA annuities is as follows:

G = A / f_C    (2.1)

G + D = A / f_D    (2.2)

where

A = accumulation at retirement, after allowance for future operating expenses

G = guaranteed payment

f_C = annuity factor based on conservative assumptions

(The annuity factor is the cost to provide 1 dollar of income under the annuity option chosen.)

G + D = total payment = guaranteed plus dividend payments

f_D = annuity factor based on experience mortality and interest.

Any change in these benefits is determined according to the formula

[G+D]^2 = [G+D]^1 (f_D^1 / f_D^2),

where

[G+D]^1 = total payment being received based on the existing dividend formula

f_D^1 = annuity factor used to determine the existing dividend formula

f_D^2 = annuity factor used to determine the new dividend formula

[G+D]^2 = total payment to be received based on the new dividend formula.
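The structure of (2.1), (2.2), and the adjustment rule can be sketched numerically. The annuity factors below are hypothetical round values chosen for illustration, not TIAA's actual bases.

```python
# Sketch of the TIAA defined-contribution annuity formulas (2.1)-(2.2)
# and the dividend-adjustment rule. All factors here are hypothetical.

def tiaa_incomes(A, f_C, f_D):
    """Return (guaranteed payment, total payment) from accumulation A."""
    G = A / f_C        # (2.1): guaranteed income on the conservative factor
    total = A / f_D    # (2.2): guaranteed plus dividend, on the experience factor
    return G, total

def adjusted_total(total_1, f_D1, f_D2):
    """New total payment when the dividend basis moves from factor f_D1 to f_D2."""
    return total_1 * (f_D1 / f_D2)   # [G+D]^2 = [G+D]^1 (f_D^1 / f_D^2)

A = 100_000
G, total = tiaa_incomes(A, f_C=11.0, f_D=8.0)   # hypothetical annuity factors
print(round(G), round(total))                   # guaranteed vs. total income
print(round(adjusted_total(total, 8.0, 7.5)))   # a richer experience basis raises income
```

Note that a smaller experience factor f_D (cheaper to provide 1 dollar of income) means a larger total payment, which is why favorable mortality and interest experience flows through to annuitants.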


PENSION PLANS 115

Current dividends represent more than half of total payments and are fully taxable.

3. CREF Annuity Formulas. CREF has a fiscal year running from April 1 of calendar year y-1 to March 31 of calendar year y. Such a fiscal year is referred to as fiscal year y. We shall follow the notations and concepts in Duncan's paper (1952), but note that procedures in TIAA and CREF are changing as they move from monthly to daily accounting. From a net accumulation, A, in CREF after allowance for future operating expenses, a participant retiring at the end of fiscal year y receives an annuity of a fixed number, N, of annuity units. The annuity unit is revalued once each year in April to reflect CREF's total experience in regard to both investment returns and annuitant survivorship; the latter requires actuarial valuation, in terms of annuity units, of all annuities then in force. If

V_y = annuity unit value at end of fiscal year y

V_{y+k} = annuity unit value at end of fiscal year y+k

f^CREF = CREF's annuity factor for 1 dollar of income under the annuity option chosen

I_y = annuity income for fiscal year y+1

I_{y+k} = annuity income for fiscal year y+k+1

then

A = I_y f^CREF = (N V_y) f^CREF    (3.1)

N = I_y / V_y    (3.2)

I_{y+k} = N V_{y+k} = I_y (V_{y+k} / V_y).    (3.3)

Duncan indicated the adjustments to be made when, as usual, retirement does not occur at the fiscal year end.

It is important to note that the annuity factors, f^CREF, are based on a 4% rate of investment return, but that V_y varies with the total experience of the annuity fund, and is particularly influenced by the market value of the fund's common stocks. If the total return of the annuity fund is more (less) than 4%, the annuity unit value, and hence the annuity income, increases (decreases).
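The unit-annuity mechanics of (3.1)-(3.3) can be sketched in the same way; the annuity factor and the unit-value history below are invented for illustration.

```python
# Sketch of the CREF unit-annuity mechanics, eqs. (3.1)-(3.3).
# The factor f_CREF and the unit values V are hypothetical.

f_CREF = 10.0                  # cost of 1 dollar of income on the 4% basis
V = [4.00, 4.40, 3.70, 5.10]   # annuity unit values at ends of fiscal years y, y+1, ...

A = 100_000                    # net accumulation at retirement
I_y = A / f_CREF               # (3.1): first year's annuity income
N = I_y / V[0]                 # (3.2): fixed number of annuity units purchased

# (3.3): later incomes simply track the revalued annuity unit.
incomes = [N * v for v in V]
print([round(i) for i in incomes])
```

The fixed unit count N never changes; all variation in income comes from the April revaluation of the unit, which is why CREF percentage changes are the same for every annuitant regardless of age, sex, or option.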



Corresponding to the tables for retirements at age 70, Tables 2A,...,2E have been calculated for retirements at age 65. To save space, only Table 2B has been included here. For TIAA, the resulting ratios of 1984 income to initial income are moderately higher than those for retirees at age 70, but for CREF the ratios are the same. This illustrates that CREF percentage changes in retirement income are independent of the age at retirement, sex of the retiree, or of the annuity option elected. These factors (including sex prior to May 1, 1980) determine the number of annuity units acquired at retirement, but the year-by-year values of the units depend on the combined experience of the annuity pool. For TIAA, percentage adjustments depend on ratios of annuity factors which will vary by age and annuity option but in the future may not vary by sex.

5. TIAA Graded Benefit Annuities. We have seen that TIAA provides non-guaranteed dividends on a levelized annuity basis. In the past, as interest rates went up, this produced moderately increasing TIAA incomes. For current annuities on a 12% interest basis, there may be little increase possible in the future, and decreased dividends may even result. In 1981, TIAA introduced an alternative dividend principle in connection with its newly offered graded benefit annuities. These are based on a 4% rate of interest, and each year use investment gains to purchase, on a 4% basis, an incremental annuity payable during subsequent years. The result should be an annuity with annual increments. Under current high interest rates, the initial income for such a graded benefit annuity may be as much as 45% lower than for a level TIAA annuity. A graded benefit annuity will not have such dramatic annual changes as for CREF annuities (-17% to +39%), but should have annual percentage increases approximating the excess of the current rate of investment return over the 4% assumed. Only if such current rate falls below 4% would the graded benefit annuity income decrease.
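Under the stated mechanics, a graded benefit annuity's income grows each year by roughly the ratio of one plus the earned rate to 1.04. A minimal sketch, with an assumed 12% return and illustrative starting incomes:

```python
# Graded benefit annuity sketch: each year's investment gain above the 4%
# valuation rate buys an incremental annuity, so income grows by roughly
# (1 + i) / 1.04 per year while the earned rate i stays above 4%.

def graded_path(initial_income, i, years):
    incomes, inc = [], initial_income
    for _ in range(years):
        incomes.append(round(inc))
        inc *= (1 + i) / 1.04      # approximate annual increment
    return incomes

level = [12_500] * 5                           # level annuity income (illustrative)
graded = graded_path(9_000, i=0.12, years=5)   # lower start, rising thereafter
print(level)
print(graded)
```

With these assumed figures the graded income starts well below the level income and rises about 7.7% a year, crossing over only after several years, which is consistent with the low early acceptance noted below.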

As yet there has not been much acceptance of the graded benefit annuity because its initial income is so low in comparison to a level TIAA annuity income. Individuals with good longevity prospects, and institutions concerned about the income protection of their retirees who live to advanced ages, might seriously con­sider utilizing this form. The concept is not new and has been used by some institu­tions other than TIAA.

6. Recent Developments. The magnitude of TIAA-CREF funds and operations, and their importance for a large portion of the academic community, have drawn criticisms regarding investment management, cash-out provisions, communications, governance, and other matters. These criticisms have been considered internally by TIAA and externally by a Commission on College Retirement [4], which is due to report in about a year's time. One response by TIAA has been the development of a money-market annuity form which would be administered under CREF. This may lead to daily valuation of CREF accumulation units, and to registration of CREF with the Securities and Exchange Commission. It may also lead to many changes from the processes set out by Duncan (1952). It remains to be seen whether the new developments in preparation, or being proposed, will increase administrative expense.

You may also be aware that annuities commenced on or after May 1, 1980 have been adjusted to a unisex or merged-gender life table basis. For annuities that continue for a term certain, or during the survivorship of a pair of male and female lives, the effect is rather minimal.

7. Vesting Under Defined Benefit Plans. For simple illustration purposes, let us consider a pension plan which provides, for retirement at age 65, a pension of 2% of final salary for each year of service as a member of the plan. If the employer provides all of the funding, the plan is said to be non-contributory. If the employees contribute a percent of their pay (typically 5%), the plan is said to be contributory. In either case, the financial risks of the plan are mainly borne by the employer, unlike the case of a defined contribution plan, where the financial risks are largely on the employees.

Let us also consider an employee who enters the plan at age 20 and begins to accrue service credit and benefits based thereon. At age 40, the employee may have accrued a pension benefit of 40% of current salary. If termination from the plan occurs, the employee may have a vested benefit equal to a deferred pension payable from age 65 with annual income equal to 40% of the salary at age 40. If the employee continues with the same employer, there will result a pension equal to 40% of salary at age 65 for the first 20 years of service, and an additional 50% of salary at age 65 for the remaining 25 years of service. Unlike the employee who terminates at age 40, and receives for service between ages 20 and 40 a pension at age 65 based on salary earned at age 40, the career employee, for the same 20 years of service, receives a pension which has in effect been indexed for 25 years from the age 40 salary level to the age 65 level. The indexing may thereby provide some offset to cost-of-living increases, and some recognition of merit or seniority increases. The fact that the career employee gets an indexed benefit for service between ages 20 and 40, and the terminating employee does not, reveals what many throughout the world consider to be a flaw in this particular defined benefit design. In fact, in some countries it may now be required that the employer

Page 132: Actuarial Mathematics Output


provide at least a partially indexed benefit for the terminating employee. Otherwise, the terminating employee at age 40 may receive at age 65 a pension which has depreciated substantially in real value over the 25 years.
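The indexing gap in this illustration is easy to quantify. The age-40 salary and the 5% annual salary growth rate below are assumptions introduced for the sketch, not figures from the text.

```python
# Deferred vested pension vs. the career employee's indexed accrual for the
# same 20 years of service, under the 2%-of-final-salary plan in the text.
# The age-40 salary level and the salary growth rate are assumed.

entry_age, term_age, ret_age = 20, 40, 65
accrual = 0.02                 # 2% of final salary per year of service
salary_40 = 30_000             # hypothetical salary at age 40
g = 0.05                       # assumed annual salary growth to age 65
salary_65 = salary_40 * (1 + g) ** (ret_age - term_age)

service = term_age - entry_age             # 20 years of service
vested = accrual * service * salary_40     # terminating employee: 40% of age-40 salary
indexed = accrual * service * salary_65    # career employee: 40% of age-65 salary
print(round(vested), round(indexed))
```

With these assumptions the career employee's benefit for the identical 20 years of service is more than three times the terminating employee's, which is the "flaw" at issue: the deferred vested pension is frozen at the age-40 salary level for 25 years.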

Such partial indexing would increase the employer's cost for the plan unless the benefit formula were modified. But is it good pension design for the employer's contributions to be channelled to protect the pensions of the persisting career employees while providing no depreciation protection for the pensions of those who serve only during their early careers?

The question takes on added urgency if the plan is contributory. In that case the accumulated contributions of the terminating employee may, under present economic conditions, be worth more than the 40% pension, deferred to age 65 and based on age 40 salary. Then it might be to the advantage of the terminating employee to withdraw the accumulated contributions, in which case all the pension plan has provided the employee for 20 years of service is a tax-deferred savings fund for the employee's own contributions.
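The withdrawal decision can be sketched as a present-value comparison at the termination date. The interest rate, annuity factor, and accumulated-contribution figure below are all assumed values for illustration.

```python
# Contributory-plan comparison at age 40: accumulated employee contributions
# vs. the present value of the deferred vested pension. All rates are assumed.

def pv_deferred_pension(income, defer_years, life_annuity_factor, i):
    """Present value at termination of a life pension starting after defer_years."""
    return income * life_annuity_factor * (1 + i) ** (-defer_years)

pension = 12_000          # 40% of an assumed 30,000 age-40 salary
i_high = 0.10             # high-interest environment (assumed)
factor_65 = 9.0           # hypothetical life-annuity factor at age 65

pv = pv_deferred_pension(pension, 25, factor_65, i_high)
contributions = 35_000    # hypothetical accumulation of 5% employee contributions
print(round(pv), contributions)
```

At a high discount rate the 25-year deferral shrinks the pension's present value well below the contribution accumulation, illustrating why withdrawal can leave the plan having provided only a tax-deferred savings fund.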

A final comment is that a contributory defined benefit plan may be considered to engender a conflict of defined contribution and defined benefit principles. Nevertheless, many such plans exist, especially for public employees. In such a plan, the employee may regard (and in fact, it may be spelled out in the plan) that the employer provides a pension equal to the benefit defined by formula less the annuity which can be provided from the accumulation of the employee's contributions. In other words, the employer-provided pension is a defined benefit reduced by the operation of defined employee contributions. You can imagine this has some interesting consequences.

BIBLIOGRAPHY

1. American Council of Life Insurance, Pension Facts, 1984/85; 1850 K Street, NW, Washington, D.C. 20006-2284.

2. Duncan, Robert M. (1952), "A Retirement System Granting Unit Annuities and Investing in Equities", Transactions of the Society of Actuaries IV, 317-344.

3. Goss, Stephen C., Glanz, Milton P., and An, Seung H. (1985), "Economic Projections for OASDI Cost and Income Estimates, 1984", Actuarial Study No. 94, Social Security Administration, Baltimore, Maryland, 21235.

4. Commission on College Retirement, 875 Third Avenue, New York, New York, 10022.

DEPARTMENT OF MATHEMATICS UNIVERSITY OF MICHIGAN ANN ARBOR, MI 48109-1003

Page 134: Actuarial Mathematics Output


TABLE 1A

TIAA AND CREF ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Year of Retirement, 1960; Age at Retirement, 70; Single Life Annuity

                 MALE RETIREE           FEMALE RETIREE
Year           TIAA      CREF          TIAA      CREF
1960          9,127    10,521         7,604     9,247
1961         10,651    12,452         8,980    10,944
1962         10,651    12,395         8,980    10,894
1963         10,651    10,753         8,980     9,455
1964         10,799    12,561         9,207    11,040
1965         10,799    13,382         9,207    11,761
1966         10,799    14,435         9,207    12,689
1967         10,927    15,141         9,334    13,307
1968         11,049    14,183         9,456    12,466
1969         11,348    15,416         9,557    13,550
1970         11,457    13,713         9,668    12,053
1971         11,769    14,534         9,985    12,774
1972         11,769    16,954         9,985    14,901
1973         11,955    14,980        10,176    13,166
1974         12,130    12,433        10,356    10,927
1975         12,293    10,360        10,526     9,106
1976         12,293    12,446        10,526    10,939
1977         12,437    11,764        10,676    10,339
1978         12,437    11,043        10,676     9,706
1979         12,437    12,940        10,676    11,373
1980         12,437    12,461        10,676    10,952
1981         12,797    17,010        11,073    14,951
1982         13,276    14,496        11,548    12,741
1983         13,276    20,169        11,548    17,726
1984         13,276    21,887        11,548    19,237
1985             --    23,585            --    20,729

I_84/I_60 *    145%      208%          152%      208%
I_84^def/I_60 ** 42%      60%           44%       60%

* I_84 for CREF is taken as the mean of the 1983, 1984, and 1985 incomes.
** I_84^def is I_84 deflated by CPI-W.



TABLE 1B

TIAA AND CREF ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Year of Retirement, 1965; Age at Retirement, 70; Single Life Annuity

                 MALE RETIREE           FEMALE RETIREE
Year           TIAA      CREF          TIAA      CREF
1965         10,885    10,385         9,308     9,144
1966         10,885    11,202         9,308     9,864
1967         11,044    11,751         9,465    10,346
1968         11,196    11,007         9,617     9,692
1969         11,478    11,965         9,766    10,535
1970         11,619    10,643         9,906     9,371
1971         12,022    11,279        10,314     9,931
1972         12,022    13,157        10,314    11,585
1973         12,267    11,626        10,564    10,237
1974         12,500     9,649        10,804     8,496
1975         12,722     8,040        11,033     7,079
1976         12,722     9,660        11,033     8,506
1977         12,922     9,129        11,242     8,038
1978         12,922     8,570        11,242     7,546
1979         12,922    10,042        11,242     8,842
1980         12,922     9,671        11,242     8,515
1981         13,444    13,201        11,782    11,624
1982         14,045    11,250        12,364     9,906
1983         14,045    15,653        12,364    13,783
1984         14,045    16,986        12,364    14,956
1985             --    18,304            --    16,116

I_84/I_65 *    129%      164%          133%      164%
I_84^def/I_65 ** 40%      50%           41%       50%

* I_84 for CREF is taken as the mean of the 1983, 1984, and 1985 incomes.
** I_84^def is I_84 deflated by CPI-W.



TABLE 1C

TIAA AND CREF ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Year of Retirement, 1970; Age at Retirement, 70; Single Life Annuity

                 MALE RETIREE           FEMALE RETIREE
Year           TIAA      CREF          TIAA      CREF
1970         11,739    10,269        10,051     8,587
1971         12,238    10,883        10,549     9,101
1972         12,238    12,696        10,549    10,616
1973         12,548    11,218        10,860     9,380
1974         12,845     9,310        11,162     7,785
1975         13,130     7,757        11,453     6,487
1976         13,130     9,320        11,453     7,794
1977         13,393     8,809        11,723     7,366
1978         13,393     8,270        11,723     6,915
1979         13,393     9,690        11,723     8,103
1980         13,393     9,331        11,723     7,803
1981         14,093    12,738        12,435    10,651
1982         14,827    10,855        13,145     9,077
1983         14,827    15,104        13,145    12,630
1984         14,827    16,389        13,145    13,705
1985             --    17,661            --    14,768

I_84/I_70 *    126%      160%          131%      160%
I_84^def/I_70 ** 48%      60%           49%       60%

* I_84 for CREF is taken as the mean of the 1983, 1984, and 1985 incomes.
** I_84^def is I_84 deflated by CPI-W.



TABLE 1D

TIAA AND CREF ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Year of Retirement, 1975; Age at Retirement, 70; Single Life Annuity

                 MALE RETIREE           FEMALE RETIREE
Year           TIAA      CREF          TIAA      CREF
1975         13,393    10,136        11,690     8,502
1976         13,393    12,178        11,690    10,215
1977         13,718    11,509        12,020     9,654
1978         13,718    10,804        12,020     9,062
1979         13,718    12,661        12,020    10,620
1980         13,718    12,192        12,020    10,226
1981         14,599    16,642        12,906    13,959
1982         15,465    14,122        13,744    11,846
1983         15,465    19,734        13,744    16,553
1984         15,465    21,413        13,744    17,961
1985             --    23,076            --    19,356

I_84/I_75 *    115%      211%          118%      211%
I_84^def/I_75 ** 60%     111%           62%      111%

* I_84 for CREF is taken as the mean of the 1983, 1984, and 1985 incomes.
** I_84^def is I_84 deflated by CPI-W.



TABLE 1E

TIAA AND CREF ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Year of Retirement, 1980; Age at Retirement, 70; Single Life Annuity

                 MALE RETIREE           FEMALE RETIREE
Year           TIAA      CREF          TIAA      CREF
1980         13,676     9,812        12,071     8,231
1981         14,710    13,394        13,113    11,236
1982         15,685    11,414        14,063     9,575
1983         15,685    15,882        14,063    13,323
1984         15,685    17,234        14,063    14,457
1985             --    18,571            --    15,579

I_84/I_80 *    115%      176%          117%      176%
I_84^def/I_80 ** 92%     141%           93%      141%

* I_84 for CREF is taken as the mean of the 1983, 1984, and 1985 incomes.
** I_84^def is I_84 deflated by CPI-W.



TABLE 2B

TIAA AND CREF ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Year of Retirement, 1965; Age at Retirement, 65; Single Life Annuity

                 MALE RETIREE           FEMALE RETIREE
Year           TIAA      CREF          TIAA      CREF
1965          9,314     8,688         8,095     7,719
1966          9,314     9,372         8,095     8,326
1967          9,476     9,830         8,256     8,734
1968          9,633     9,208         8,413     8,181
1969          9,883    10,009         8,596     8,893
1970         10,030     8,903         8,745     7,910
1971         10,456     9,436         9,178     8,384
1972         10,456    11,007         9,178     9,779
1973         10,721     9,726         9,449     8,641
1974         10,975     8,072         9,711     7,172
1975         11,219     6,726         9,965     5,976
1976         11,219     8,082         9,965     7,180
1977         11,443     7,638        10,200     6,786
1978         11,443     7,169        10,200     6,370
1979         11,443     8,401        10,200     7,464
1980         11,443     8,090        10,200     7,188
1981         12,042    11,044        10,819     9,812
1982         12,669     9,412        11,437     8,362
1983         12,669    13,095        11,437    11,635
1984         12,669    14,210        11,437    12,625
1985             --    15,313            --    13,605

I_84/I_65 *    136%      164%          141%      164%
I_84^def/I_65 ** 42%      50%           43%       50%

* I_84 for CREF is taken as the mean of the 1983, 1984, and 1985 incomes.
** I_84^def is I_84 deflated by CPI-W.



TABLE 3

TIAA AND CREF CUMULATIVE ANNUITY INCOMES PER 100,000 DOLLARS OF ACCUMULATION

Male Retiree; Age at Retirement, 70; Single Life Annuity

                          Through Calendar Year
                   1964      1969      1974      1979      1984
1960 Retiree
  TIAA           51,879   106,801   165,881   227,778   292,840
  CREF           58,687   131,244   203,858   262,411   348,434
1965 Retiree
  TIAA                     55,488   115,918   180,128   248,629
  CREF                     56,310   112,664   158,105   224,866
1970 Retiree
  TIAA                               61,608   128,047   200,014
  CREF                               54,376    98,222   162,639
1975 Retiree
  TIAA                                         67,940   142,652
  CREF                                         57,288   141,391
1980 Retiree
  TIAA                                                   75,441
  CREF                                                   67,736


Other Titles in This Series (Continued from the front of this publication)

23 R. V. Hogg, editor, Modern statistics: Methods and applications (San Antonio, Texas, January 1980)

22 G. H. Golub and J. Oliger, editors, Numerical analysis (Atlanta, Georgia, January 1978)

21 P. D. Lax, editor, Mathematical aspects of production and distribution of energy (San Antonio, Texas, January 1976)

20 J. P. LaSalle, editor, The influence of computing on mathematical research and education (University of Montana, August 1973)

19 J. T. Schwartz, editor, Mathematical aspects of computer science (New York City, April 1966)

18 H. Grad, editor, Magneto-fluid and plasma dynamics (New York City, April 1965)

17 R. Finn, editor, Applications of nonlinear partial differential equations in mathematical physics (New York City, April 1964)

16 R. Bellman, editor, Stochastic processes in mathematical physics and engineering (New York City, April 1963)

15 N. C. Metropolis, A. H. Taub, J. Todd, and C. B. Tompkins, editors, Experimental arithmetic, high speed computing, and mathematics (Atlantic City and Chicago, April 1962)

14 R. Bellman, editor, Mathematical problems in the biological sciences (New York City, April 1961)

13 R. Bellman, G. Birkhoff, and C. C. Lin, editors, Hydrodynamic instability (New York City, April 1960)

12 R. Jakobson, editor, Structure of language and its mathematical aspects (New York City, April 1960)

11 G. Birkhoff and E. P. Wigner, editors, Nuclear reactor theory (New York City, April 1959)

10 R. Bellman and M. Hall, Jr., editors, Combinatorial analysis (New York University, April 1957)

9 G. Birkhoff and R. E. Langer, editors, Orbit theory (Columbia University, April 1958)

8 L. M. Graves, editor, Calculus of variations and its applications (University of Chicago, April 1956)

7 L. A. MacColl, editor, Applied probability (Polytechnic Institute of Brooklyn, April 1955)

6 J. H. Curtiss, editor, Numerical analysis (Santa Monica City College, August 1953)

5 A. E. Heins, editor, Wave motion and vibration theory (Carnegie Institute of Technology, June 1952)

4 M. H. Martin, editor, Fluid dynamics (University of Maryland, June 1951)

3 R. V. Churchill, editor, Elasticity (University of Michigan, June 1949)

2 A. H. Taub, editor, Electromagnetic theory (Massachusetts Institute of Technology, July 1948)

1 E. Reissner, editor, Non-linear problems in mechanics of continua (Brown University, August 1947)

(See the AMS catalog for earlier titles)
