Looking beyond the catastrophe model

THESE days, Karen Clark frequently finds herself having to explain to journalists that her criticism of certain aspects of the catastrophe-modelling sector is not directed at either the catastrophe models or at the risk modellers themselves. The situation largely reflects the rise in the profiles of both Clark and her Boston-based catastrophe risk consultancy, Karen Clark & Company (KC&C), over the past four years. It does, however, matter a great deal to her that she is not perceived to be disparaging an industry she virtually created from scratch in 1987, when she devised the world’s first hurricane cat model and founded the first cat-modelling company, Applied Insurance Research (AIR), now AIR Worldwide.

She served as chief executive of AIR until 2007, when she left to set up KC&C. From the very beginning, the company adopted a distinctly independent stance in relation to the risk-modelling industry; it is, notably, not slow to challenge some of the industry’s most deeply held assumptions. It is an attitude that very much informs KC&C’s annual reports on the performance of the near-term (covering the five-year period 2006 through 2010) hurricane models produced by the three major cat modellers: AIR, Eqecat and Risk Management Solutions (RMS).

Near-term models, Clark says, were introduced in 2006, following the destructive and costly 2004 and 2005 hurricane seasons. In the wake of hurricane Katrina there was a lot of “model bashing”, she notes: “It was felt there were a lot of factors the models were underestimating. At the same time, a lot was being written on global warming and its relationship to hurricanes.
So the dynamics of the environment were such that the modellers really felt they had to look inside their models and review the different components, and that contributed to the introduction of these near-term models in 2006 by all of the modelling companies.”

It is a development that reflects the importance of catastrophe risk management for the US insurance industry. Catastrophe claims make up the largest component of property losses today. “In the US homeowners’ insurance market, 30% of the premium dollar is taken up by actual and expected catastrophe losses,” Clark explains.

The third and final KC&C report, which was published in January of this year, found the near-term models, designed to project insured losses in the US from Atlantic hurricanes, significantly overestimated losses for the period. The two earlier reports came to the same conclusion for the 2006 through 2008 and the 2006 through 2009 seasons. Interestingly, all the modelling companies had more or less the same projections for the first year of the 2006 to 2010 near-term period. “All the models pretty much said hurricane activity and losses were going to be 40% above average, which is a huge amount,” she says.

Little hard data

As it turned out, hurricane activity for the projection period was well below average. For example, hurricane landfalls in the US between 2006 and 2010 were 53% below average. The losses, Clark notes, were 70% below average. “We had minimal losses in every year except 2008. So how well did the models do? Our conclusion is they have not demonstrated any skill. They have not shown any ability to accurately project near-term hurricane losses. And that is not surprising because there is very little hard scientific data to support these near-term projections.”

She has no doubt the modellers are using the best science available. “But we also know science can lead to numbers that go awry and that is because there is not a lot of data underlying that science.
Scientists have the same problem actuaries do – if you don’t have a lot of data, you have high uncertainty. So our approach at KC&C is to look at the numbers and at the facts. There are so few facts we should at least be looking at those facts.”

One of Clark’s central arguments is that the problem is not with the models. “The models are great and we are not criticising the modellers for developing the near-term models, which have the purpose of providing their clients with different views of hurricane risk. We totally think that is the right thing to do. However, models are just models. They do the best they can. What is inappropriate is the extent to which the marketing hype oversold the near-term models and the science underlying them.”

Global warming

According to Clark, an important factor not being taken into account is that the more sophisticated climate models are projecting a decrease in hurricanes as a result of global warming. “And that is a statement that comes from the Intergovernmental Panel on Climate Change [IPCC] in its most recent report. Now, there is some evidence storms may become more intense over time because of rising sea-surface temperatures. And if that does occur, it is going to be a gradual increase over time. It is not going to be a 40% increase over the course of a year. So we can all agree this is a development we should be monitoring, but we have to be careful how we implement it in our models.”

Clark says all the marketing hype around the near-term models gave the false impression there was a general scientific consensus that hurricane losses were increasing and were going to be significantly above average for the period 2006 to 2010. “RMS most aggressively marketed this new model as a replacement for the standard model. Rather than an alternative view, it said all companies should use this model.”

For its part, the industry is more or less compelled to take note of Clark’s interventions.
She is by far the most high-profile and decorated figure in the sector. In addition to a number of industry awards, in 2009 the National Association of Insurance Commissioners appointed KC&C as the lead consultant in developing a recommendation on the scope, timeline and potential costs of building a national multi-peril catastrophe model for personal lines risks in the US. On the international front, she was presented with an award certificate for the 2007 Nobel Peace Prize bestowed on the IPCC, an organisation with which she has worked since 1995. The IPCC chairman, RK Pachauri, said the IPCC had provided certificates of the award only to those who had contributed substantially to the work of the organisation over the years since its inception.

Novelty

This is a far cry from the 1980s, when catastrophe risk modelling was such a novelty that one of Clark’s key roles was to explain its objectives to the wider world, particularly to the insurance industry.

Clark recalls that when she was at university in the early to mid-1980s, computers had just started to be used for financial and economic modelling. “I loved using the computer to build models to generate financial information that could be used to make decisions,” she says.

After graduation, she worked with a small group of about six or seven people in the research and development department of an insurance company in Boston. “We were internal consultants. Our role was to come up with ways to help this company deal with problems that were not being addressed by traditional actuarial and underwriting techniques. Catastrophe risk, of course, came under that category. This company had significant coastal exposures and I was given the project of finding a better way of calculating the losses it could sustain from a hurricane. That was how it all started. I just fell in love with catastrophe modelling.
One thing led to another and I ended up devoting my whole career to it.”

Not surprisingly, it was tough going for the first few years after she set up AIR in 1987. One of the first places she went to was Lloyd’s. At that time, there were no prominent Bermuda companies and, although there were some very big US companies writing property cat business, it was mainly written from London by the Lloyd’s syndicates. One of Clark’s first contracts was with the reinsurance broker EW Blanch (subsequently absorbed into Aon Benfield) to develop a catastrophe risk model the company could use for its clients.

Way off

Clark says before she developed the first hurricane model in the 1980s, insurers and reinsurers were grossly underestimating their potential losses, by about a factor of 10. “They were way off,” she notes.

There were a number of reasons for this. First, there had not been a major storm in a highly populated area for decades. Second, in the mid-1980s, the largest loss the insurance industry had experienced was slightly more than $1bn, from hurricane Alicia in 1983. Third, in 1986, there was a highly influential study by the US All Industry Research Advisory Council (AIRAC) which focused on the potential for a $7bn insured hurricane loss. “So that number became the industry benchmark for a worst-case scenario,” Clark says. “At the same time, our hurricane model said the insured losses could reach $60bn. This was very different from what the rest of the industry was thinking.”

Tracking exposures

But, according to Clark, the major reason the industry was underestimating its potential losses by a factor of 10 was that companies had stopped tracking their exposures in hazardous areas, particularly along the coastline.
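The compounding effect behind that untracked exposure growth is easy to illustrate. In the following Python sketch the 7% annual growth rate is a hypothetical figure chosen only for illustration; the point is how quickly exposure can outgrow a loss benchmark that was set decades earlier:

```python
# Sketch: compounding growth in insured coastal exposure.
# The 7% annual growth rate is hypothetical; at rates like this,
# exposure roughly doubles every decade, so a worst-case benchmark
# fixed decades earlier drifts toward an order of magnitude too low.
growth_rate = 0.07

def exposure_multiple(years, rate=growth_rate):
    """Factor by which exposure grows after `years` of compounding."""
    return (1 + rate) ** years

for years in (10, 20, 30):
    print(f"after {years} years: {exposure_multiple(years):.1f}x the original exposure")
```

At 30 years the growth multiple alone approaches the factor-of-10 underestimation Clark describes, before any change in hazard is considered.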
“There had been decades when there were no catastrophe events and the property values had grown exponentially, so by the time hurricane Andrew came in 1992, there were literally trillions of dollars of exposure along the Gulf of Mexico and the US East Coast and insurance companies were simply not aware of the magnitude of their exposures.”

Uniquely for the time, Clark’s catastrophe model could simulate events and estimate what the damages could be at the present time, based on contemporaneous property values. “That was a really important component of the model. As you know, even today that is an issue in terms of the under-evaluation of potential losses,” she says.

Hurricane Andrew

The insurance market finally embraced catastrophe modelling after hurricane Andrew hit. “I remember it like it was yesterday. It made landfall at about 5 am on August 24, 1992, a Monday morning. We ran scenarios with our hurricane model to try to give our clients some estimate of what the losses could be and by 9 am that morning we issued a statement that insured losses could exceed $13bn. Our clients simply did not believe it. Of course, the storm’s total losses ended up coming in at more than that, between $15bn and $16bn.”

Clark was besieged by phone calls, especially from underwriters in the London market who were convinced the maximum loss figure would not be more than $6bn, particularly as Andrew had made landfall south of Miami. “The response was: ‘A few mobile homes and an Air Force base, how much can it be?’” According to Clark, it would take nearly a year after the hurricane made landfall for the industry to fully appreciate the potential of catastrophe modelling. “But it eventually clicked. The industry realised these models were telling us something very valuable and we needed to wake up and figure out how we can best make use of them.”

[Standfirst: The Japanese earthquake exposed the limitations of cat models. Here, the industry’s founding figure talks to RASAAD JAMIE about how catastrophe-modelling companies can improve the accuracy of their loss projections.]

Rating agencies

The over-reliance on cat models by the rating agencies, particularly their reliance on point estimates, is another important theme for Clark. “The industry has become wedded to these one-in-100 and one-in-250-year numbers. So many decisions are hanging on these point estimates and then, of course, when the point estimates change by 100%, by 200%, everybody is at sea. The rating agencies think they are being consistent because they are using a modelling approach. But different models, different model versions and different levels of data quality all lead to very different numbers. The rating agencies claim they make adjustments for these differences, but they can’t really be sure they are making the right adjustments to be able to compare like with like. So one of the messages of KC&C is the rating agencies certainly need a different approach to be able to effectively compare the financial strength of different insurers.”

The tendency of the rating agencies, she explains, is to be conservative. “Their responsibility is to give ratings on the financial health of companies, so they are going to be much more focused on the downside, on the question of how badly can this go wrong.”

Clark would strongly advise rating agencies to adopt a robust set of transparent scenarios around characteristic catastrophe events (such as the Great New England Hurricane of 1938 in the north-east of the US) to represent catastrophe risk in each peril region. These benchmark scenarios could be applied to every company’s portfolio, so the rating agencies are truly comparing like with like.
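The benchmarking approach Clark recommends can be sketched in a few lines of Python. Everything here is hypothetical: the zone damage ratios and the two company portfolios are invented for the example, and this is only an illustration of the idea, not KC&C’s actual methodology. The point is that one fixed scenario applied to every portfolio yields directly comparable numbers:

```python
# A minimal sketch of characteristic-event benchmarking. The damage
# ratios and portfolios below are hypothetical; the key property is
# that the SAME fixed scenario is applied to every company.

def scenario_loss(portfolio, damage_ratios):
    """Loss from one fixed event: exposure times damage ratio, by zone."""
    return sum(exposure * damage_ratios.get(zone, 0.0)
               for zone, exposure in portfolio.items())

# Hypothetical footprint for a 1938-style New England hurricane.
new_england_1938 = {"coastal": 0.08, "inland": 0.02}

company_a = {"coastal": 5_000_000_000, "inland": 2_000_000_000}  # USD
company_b = {"coastal": 1_000_000_000, "inland": 6_000_000_000}

loss_a = scenario_loss(company_a, new_england_1938)  # 440,000,000
loss_b = scenario_loss(company_b, new_england_1938)  # 200,000,000
print(loss_a, loss_b)
```

Because the scenario is identical for both companies, the two losses can be compared directly, which is exactly what differing vendor models, versions and data quality prevent.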
“Another characteristic event for the New England region could be created by increasing the intensity of the 1938 New England Hurricane by 10%, or by the amount scientists think is credible.” This is an approach Clark had recommended to the rating agencies previously, but she now thinks, given the turmoil caused by recent model updates, they might be a little more open to the idea. “What we have is a set of scientifically defensible scenarios. So while nobody knows what the right answer is in terms of the numbers, we have a set of characteristic events for each peril region that credibly represents the risk,” she says.

Clark’s view is these characteristic events are robust and are not going to change frequently the way the models do. “And you can monitor the model changes relative to the characteristic event sets. KC&C believes this is the future and is better than what the rating agencies are doing now. At present, a rating agency could be getting information that tells it one company is 100% higher risk than another, when in reality that could be reversed,” she claims.

Over-specification

For Clark, the cat-modelling industry is going down the road of over-specification. When modellers try to capture things that cannot even be measured, the loss estimates end up being highly volatile. “The cat modellers talk about scientific knowledge – about what we know. At KC&C, we talk about what scientists don’t know. The cat modellers need to do the same so those who use the models can get a sense of the uncertainty in the data on which the models are based. What scientists know is minuscule relative to what they don’t know, but it is not necessarily in the modellers’ interests to talk about what is not known.”

But Clark believes that to manage catastrophe risk effectively, you need to have at least an idea of what the range of uncertainty is in different peril regions.
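Clark’s suggestion that users “monitor the model changes relative to the characteristic event sets” can also be sketched simply. All figures below are hypothetical; the benchmark loss is computed once from a fixed scenario, so only the vendor model’s output moves between versions:

```python
# Sketch: tracking successive model versions against a fixed
# characteristic-event benchmark. All numbers are hypothetical.
BENCHMARK_LOSS = 2_500_000_000  # USD, from a fixed characteristic event

model_version_estimates = {     # hypothetical 1-in-100 loss outputs
    "v9":  1_800_000_000,
    "v10": 3_900_000_000,       # a large jump after an update
    "v11": 2_100_000_000,
}

def benchmark_ratio(estimate, benchmark=BENCHMARK_LOSS):
    """Model estimate expressed as a multiple of the stable benchmark."""
    return estimate / benchmark

for version, estimate in model_version_estimates.items():
    print(f"{version}: model / benchmark = {benchmark_ratio(estimate):.2f}")
```

Because the benchmark does not move between releases, a swing such as the one from v9 to v10 above is immediately visible as a model change rather than a change in the underlying risk.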
She sees the Japan earthquake as a good example of why the industry should look at other information beyond the models. She points out the modelling companies did not have a magnitude 9 event in their Japan earthquake models in the seismic region where the main event occurred. “Nor did they have a large-magnitude earthquake combined with a major tsunami. And they certainly did not have a nuclear disaster,” she adds.

“But if you had taken a few smart underwriters before the event and put them in a room for a few days to come up with the most extreme and worst-case scenarios for Japan and given them the historical data relevant to Japan, they probably would have come up with a magnitude 9 or higher-magnitude event with an associated tsunami. They would have known that approximately every 15 years there is a magnitude 8 or greater earthquake in or around Japan. And while something like this had not happened in Japan, there have been four earthquakes of magnitude 9 or greater since 1950 along the so-called Ring of Fire – the most seismically active region in the world, of which Japan forms a part – the greatest one being a magnitude 9.5 quake that happened in Chile in 1960.”

Clark’s underwriters would also have been told that large-magnitude earthquakes in Japan have caused tsunami waves 30 metres high and greater in at least three historical events. “If we gave our group of non-scientists those facts, I am pretty sure they would have come up with at least a magnitude 9 earthquake, combined with a large tsunami wave, as a worst-case scenario. They may even have thought of the possibility of that [subsequent] nuclear disaster,” she says.

Updates

But many model users, she says, are not doing this type of thinking because they have been lulled into a false sense of security, believing the models have figured it all out for them. “The irony of the whole thing is, now we know there could be a magnitude 9 earthquake, what do we do?
Are we going to wait for two or three years for the new earthquake models to come out while the modellers update their models? Why can’t we have a more open approach, so now we know we can have a magnitude 9 earthquake, we can immediately adjust our risk-management decisions to accommodate that fact? Why do we have to wait several years for the new models to come out?”

One reason it takes the modelling companies so long to update their models is that the models are overly complex. “There are only four major model components but there are a great many variables, so a number of experts and scientists are going to have to do more research. Then they need to get the results of their research incorporated into these complex models and they need to test it. For example, one of the things the scientists are supposed to tell us is what the probability of this magnitude 9 earthquake is. Which, of course, is something they don’t know. It’s going to be a scientific guesstimate. They could call it a one-in-100-year, one-in-500-year or one-in-1,000-year event. So anything the scientists give us will just be a best guess because they don’t know.” When you think about it, it is a bit crazy.

The models, she explains, are in one sense backward-looking tools because they are constantly being calibrated to the last major event, which is usually a couple of years old by the time the new models are released.

Mesmerising

For Clark, the main issue for the industry is that the science underlying these models sounds so impressive. “You can go to these presentations and get mesmerised by all the scientific jargon. Companies just get lulled into this false sense of security that the modellers have it all figured out, to the extent that even when companies look at numbers coming out of the models that are obviously way off, they still feel compelled to use them in many cases.
So at KC&C, we inform companies about what really underlies the models in terms of the hard data versus research.”

She says the modelling companies create so much marketing hype around every model update that they make each update sound like a major scientific breakthrough. “So, there are all these things about what the scientists know, but many updates are not based on new factual knowledge. The updates incorporate new research but typically it is research that has about the same level of uncertainty as the previous research. So there is the overselling of the science and the overselling of what scientists know.”

Clark says KC&C is helping companies to understand two things: the true nature of catastrophe risk and the limitations of the catastrophe models (ie, what models can be used for and what they cannot be used for). The firm is also focused on helping companies to access information generated outside the cat models so they can be more informed about the scope and potential of their catastrophe losses. The idea is for companies to be less dependent on the seemingly endless cycle of model updates and loss estimates that swing wildly up and down.

To illustrate this, she refers to the latest RMS model update. “One of the biggest changes is the US inland hurricane losses are much higher – of the order of 200% to 300% in some cases. So this is an issue for a lot of companies. But there are a lot of companies saying they had known for a long time the RMS inland hurricane loss estimates were too low and they had been adjusting the numbers themselves. So, you have to ask yourself, if so many people in the market knew the RMS inland numbers were too low, did RMS know that? And if RMS knew that too, then why did it take it so many years to fix it?”

She cites another example in Massachusetts, where a previous hurricane model update dramatically changed the wind footprints for hurricanes in north-east storms.
This model change significantly raised the cost of insurance in coastal areas, such as Cape Cod. Reinsurance costs soared. Most of the companies pulled out of Cape Cod, so today most homeowners are in the Fair Access to Insurance Requirements (Fair) Plan and not being written by the private market. “There are cases of homeowners on Cape Cod who used to pay $800 for their insurance and are now paying more than $2,000 for the same property. Now the new model says it is not really as bad as we thought and the coastal numbers are decreasing. What does that say to the homeowners in coastal areas?”

Cat models, she says, ultimately mean real money to real people and that is what people in the industry are missing if they regard a model update as merely another change in the numbers. “It may be fine for reinsurers, but if you are a primary insurer, it is no way to run a business.”

Outside the black box

Clark very much sees her present role as thinking outside the “black box” of cat modelling. As far as she is concerned, catastrophe models have been taken about as far as they can go and the industry is at a point where fresh insight is needed. “While cat models are great, they have some significant limitations. We need to develop other approaches and methodologies.” This use of credible information and tools other than the cat models is another critically important theme for Clark. “There is nothing that says a cat model is always the best tool so we have to use it. We don’t need to limit ourselves to one tool and we can have other approaches.”

Cat models, she says, are very good for reinsurers, which tend to write a global book of business. “Reinsurers are typically not into the minutiae of every company’s portfolio, so they can use these models to obtain a general assessment of their risks. The cat models are very good for that because they are comprehensive.
But the problem is they are a one-size-fits-all approach, which gives a very general indication of risk and may not provide the most credible view for a regional or specialised book of business. But with the cat model everybody is stuck with the same solution. So if you are a primary insurer and you have a very localised or a specific book of business, the model-generated loss estimates can be way off. And, at present, there is no way to fix the numbers.”

Here she cites the real-life example of FM Global, which writes a portfolio of industrial and commercial facilities. “The catastrophe models just don’t work for its business, so it commissioned a modeller to create a special model for it. Now, most companies cannot afford to pay the modellers millions of dollars to create their own model. So why don’t we just have an approach that makes it easier for companies to use their own data and then tailor their risk assessment according to their books of business? That is one area where modelling companies can do better.”

The model estimates can also be off in certain peril regions. Since the models are based on historical data, Clark believes insurance companies should have access to the fundamental information about the historical events in the peril regions in which they write property business. They should have a very visual and very scientific representation of how many significant events have actually happened in those regions.

“Everyone should know those statistics. It is not hard to know them. And these statistics should be generated separately from the models so they can be used to benchmark the output from the models. For example, if I know historically something has happened that would give me a loss of $2bn today and the model is telling me my largest loss is only $1bn, obviously the model is off.
Or, vice versa, if the largest loss has been $100m and the model is telling me it is going to be $10bn, you know it could be off the other way. So we need to look at other information.”

In the ballpark

Given Clark’s scepticism, just how useful have catastrophe risk models been to the insurance and risk-management industries over the past five years? Clark says when cat models were first introduced, the industry was way off in its estimation of catastrophe losses. “We were not even in the ballpark. The great thing about the cat models is they have got companies in the ballpark and that has been very valuable. But over the past five years, we have not been using the cat models as if we are just in the ballpark. The way I have expressed it before is that over the past 20 years, the models have improved from a handsaw to a chainsaw. A chainsaw is a great tool but it is not a surgical instrument. And you would not try to do brain surgery with it.”

In Clark’s view, that is exactly what the industry is trying to do with the models now. “We think we have a surgical instrument with which to make pinpoint calculations. We don’t. We have a chainsaw, which is still a saw. So that is the problem. We are trying to push the usage of the model beyond where it can go. It’s time we think about other approaches.”

Underwriter judgment

She dismisses the argument that the models are needed because otherwise the insurance industry will be forced to go back to the old way of just relying on seat-of-the-pants underwriting judgment. That, she says, is a big misconception. “If we don’t have a cat model, it does not mean we have to go back to the old ways. What has happened over the past 20 years is we have gone from no models and all underwriter judgment to the other extreme: all models and no underwriter judgment.
Neither extreme is optimal.”

Underwriter judgment, Clark notes, has earned itself a bad name, but there are many things about individual risks and accounts underwriters know that a model will never know. “Underwriters have valuable knowledge and expertise that should be part of the risk assessment and management process. It does not have to be an either/or situation. We believe the ideal approach would use the discipline, structure and scientific basis of the cat models but, where appropriate, the knowledge of underwriters, engineers and loss-control specialists should be included. Science can do a lot but it can’t do everything.”

New dawn: Japan earthquake models did not have a magnitude 9 event plus tsunami modelled – despite such an event being well within the realms of possibility. AP PHOTO/SERGEY PONOMAREV

Insurance Day | Global Markets, incorporating Alternative Insurance Capital, The ReReport and World Insurance Report


Looking beyond thecatastrophe model

THESE days, Karen Clark frequentlyfinds herself having to explain to journal-ists thathercriticismofcertainaspectsofthe catastrophe-modelling sector is notdirected at either the catastrophe modelsor at the risk modellers themselves. Thesituation largely reflects the rise in theprofiles of both Clark and her Boston-based catastrophe risk consultancy,Karen Clark & Company (KC&C) overthe past four years. It does, however,matter a great deal to her that she is notperceived to be disparaging an industryshevirtuallycreated fromscratchin1987when she devised the world’s first hurri-cane cat model and founded the first cat-modelling company, Applied InsuranceResearch (AIR), now AIR Worldwide.

She served as chief executive of AIRuntil 2007, when she left to set up KC&C.From the very beginning, the companyadopted a distinctly independent stancein relation to the risk-modelling indus-try; it is, notably, not slow to challengesome of the industry’s most deeply heldassumptions. It is an attitude that verymuch informs KC&C’s annual reports onthe performance of the near-term (overthe five-year period 2006 through 2010)hurricane models produced by the threemajor cat modellers: AIR, Eqecat andRisk Management Solutions (RMS).

Near-term models, Clark says, wereintroduced in 2006, following thedestructive and costly 2004 and 2005hurricane seasons. In the wake of hurri-cane Katrina there was a lot of “modelbashing”, she notes: “It was felt therewere a lot of factors the models wereunderestimating. At the same time, alot was being written on global warmingand its relationship to hurricanes. Sothe dynamics of the environment weresuch that the modellers really felt theyhad to look inside their models andreview the different components, andthat contributed to introduction of thesenear-term models in 2006 by all of themodelling companies.”

It is a development that reflects theimportance of catastrophe risk manage-ment for the US insurance industry.Catastrophe claims make up the largestcomponent of property losses today. “Inthe US homeowners’ insurance market,30% of the premium dollar is taken up byactual and expected catastrophe losses,”Clark explains.

The third and final KC&C report,which was published in January of thisyear, found the near-term models,designed to project insured losses inthe US from Atlantic hurricanes, havesignificantly overestimated losses forthe period. The two earlier reports came

to the same conclusion for the 2006through2008andthe2006through2009seasons. Interestingly, all the modellingcompanies more or less had the sameprojections for the firstyearof the2006to2010 near-term period. “All the modelspretty much said hurricane activityand losses were going to be 40% aboveaverage,whichisahugeamount,”shesays.

Little harddata

As it turnedout,hurricaneactivity for theprojectionperiodwaswellbelowaverage.For example, hurricanes making landfallin the US between 2006 to 2010 were 53%below average. The losses, Clark notes,were 70% below average. “We had mini-mal losses in every year except 2008. Sohow well did the models do? Our conclu-sion is they have not demonstrated anyskill. They have not shown any ability toaccurately project near-term hurricanelosses. And that is not surprising becausethere is very little hard scientific data tosupport these near-term projections.”

She has no doubt the modellers areusing the best science available. “But wealso know science can lead to numbersthat go awry and that is because there isnot a lot of data underlying that science.Scientists have the same problem actuar-ies do – if you don’t have a lot of data, youhave high uncertainty. So our approachat KC&C is to look at the numbers and atthe facts. There are so few facts we at leastshould be looking at those facts.”

One of Clark’s central arguments isthe problem is not with the models. “Themodels are great and we are not criticis-ing the modellers for developing thenear-term models, which have thepurpose of providing their clients withdifferent views of hurricane risk. Wetotally think that is the right thing to do.However, models are just models. Theydo the best they can. What is inappropri-ate is the extent to which the marketinghype oversold the near-term models andthe science underlying them.”

Global warming

According to Clark, an important factor not being taken into account is that the more sophisticated climate models are projecting a decrease in hurricanes as a result of global warming. “And that is a statement that comes from the Intergovernmental Panel on Climate Change [IPCC] in its most recent report. Now, there is some evidence storms may become more intense over time because of rising sea-surface temperatures. And if that does occur, it is going to be a gradual increase over time. It is not going to be a 40% increase over the course of a year. So we can all agree this is a development we should be monitoring, but we have to be careful how we implement it in our models.”

Clark says all the marketing hype around the near-term models gave the false impression there was a general scientific consensus hurricane losses were increasing and were going to be significantly above average for the period 2006 to 2010. “RMS most aggressively marketed this new model as a replacement for the standard model. Rather than an alternative view, it said all companies should use this model.”

For its part, the industry is more or less compelled to take note of Clark’s interventions. She is by far the most high-profile and decorated figure in the sector. In addition to a number of industry awards, in 2009 the National Association of Insurance Commissioners appointed KC&C as the lead consultant in developing a recommendation on the scope, timeline and potential costs of building a national catastrophe multi-peril model for personal lines risks in the US. On the international front, she was presented with an award certificate for the 2007 Nobel Peace Prize bestowed on the IPCC, an organisation with which she has worked since 1995. The IPCC chairman, RK Pachauri, said the IPCC had provided certificates of the award only to those who contributed substantially to the work of the organisation over the years since its inception.

Novelty

This is a far cry from the 1980s, when catastrophe risk modelling was such a novelty one of Clark’s key roles was to explain its objectives to the wider world, particularly to the insurance industry.

Clark recalls when she was at university in the early to mid-1980s, computers had just started to be used for financial and economic modelling. “I loved using the computer to build models to generate financial information that could be used to make decisions,” she says.

After graduation, she worked with a small group of about six or seven people in the research and development department of an insurance company in Boston. “We were internal consultants. Our role was to come up with ways to help this company deal with problems that were not being addressed by traditional actuarial and underwriting techniques. Catastrophe risk, of course, came under that category. This company had significant coastal exposures and I was given the project of finding a better way of calculating the losses it could sustain from a hurricane. That was how it all started. I just fell in love with catastrophe modelling. One thing led to another and I ended up devoting my whole career to it.”

Not surprisingly, it was tough going for the first few years after she set up AIR in 1987. One of the first places she went to was Lloyd’s. At that time, there were no prominent Bermuda companies and although there were some very big US companies writing property cat business, it was mainly written from London by the Lloyd’s syndicates. One of Clark’s first contracts was with the reinsurance broker EW Blanch (subsequently absorbed into Aon Benfield) to develop a catastrophe risk model the company could use for its clients.

Way off

Clark says before she developed the first hurricane model in the 1980s, insurers and reinsurers were grossly underestimating their potential losses by about a factor of 10. “They were way off,” she notes.

There were a number of reasons for this. First, there had not been a major storm in a highly populated area for decades. Second, in the mid-1980s, the largest loss the insurance industry had experienced was slightly more than $1bn from hurricane Alicia in 1983. Third, in 1986, there was a highly influential study by the US All Industry Research Advisory Council (AIRAC) which focused on the potential for a $7bn insured hurricane loss. “So that number became the industry benchmark for a worst-case scenario,” Clark says. “At the same time, our hurricane model said the insured losses could reach $60bn. This was very different from what the rest of the industry was thinking.”

Tracking exposures

But, according to Clark, the major reason the industry was underestimating its potential losses by a factor of 10 was because companies had stopped tracking their exposures in hazardous areas, particularly along the coastline. “There had been decades when there were no catastrophe events and the property values had grown exponentially, so by the time hurricane Andrew came in 1992, there were literally trillions of dollars of exposure along the Gulf of Mexico and the US East Coast and insurance companies were simply not aware of the magnitude of their exposures.”

Uniquely for the time, Clark’s catastrophe model could simulate events and estimate what the damages could be at the present time based on contemporaneous property values. “That was a really important component of the model. As you know, even today that is an issue in terms of the under-evaluation of potential losses,” she says.

Hurricane Andrew

The insurance market finally embraced catastrophe modelling after hurricane Andrew hit. “I remember it like it was yesterday. It made landfall at about 5 am on August 24, 1992, a Monday morning. We ran scenarios with our hurricane model to try and give our clients some estimate of what the losses could be and by 9 am that morning we issued a statement that insured losses could exceed $13bn. Our clients simply did not believe it. Of course, the storm’s total losses ended up coming in at more than that, between $15bn and $16bn.”

Clark was besieged by phone calls, especially from underwriters in the London market who were convinced the maximum loss figure would not be more than $6bn, particularly as Andrew had made landfall south of Miami. “The response was: ‘A few mobile homes and an Air Force base, how much can it be?’” According to Clark, it would take nearly a year after the hurricane made landfall for the industry to fully appreciate the potential of catastrophe modelling. “But it eventually clicked. The industry realised these models were telling us something very valuable and we needed to wake up and figure out how we can best make use of them.”

The Japanese earthquake exposed the limitations of cat models. Here, the industry’s founding figure talks to Rasaad Jamie about how catastrophe-modelling companies can improve the accuracy of their loss projections.

Rating agencies

The over-reliance on cat models by the rating agencies, particularly their reliance on point estimates, is another important theme for Clark. “The industry has become wedded to these one-in-100 and one-in-250-year numbers. So many decisions are hanging on these point estimates and then, of course, when the point estimates change by 100%, by 200%, everybody is at sea. The rating agencies think they are being consistent because they are using a modelling approach. But different models, different model versions and different levels of data quality all lead to very different numbers.

“The rating agencies claim they make adjustments for these differences, but they can’t really be sure they are making the right adjustments to be able to compare like with like. So one of the messages of KC&C is the rating agencies certainly need a different approach to be able to effectively compare the financial strength of different insurers.” The tendency of the rating agencies, she explains, is to be conservative. “Their responsibility is to give ratings on the financial health of companies, so they are going to be much more focused on the downside, on the question of how badly can this go wrong.”

Clark would strongly advise rating agencies to adopt a robust set of transparent scenarios around characteristic catastrophe events (such as the Great New England Hurricane of 1938 in the north-east of the US) to represent catastrophe risk in each peril region.

These benchmark scenarios could be applied to every company’s portfolio, so the rating agencies are truly comparing like with like. “Another characteristic event for the New England region could be created by increasing the intensity of the 1938 New England Hurricane by 10% or by the amount scientists think is credible.” This is an approach Clark had recommended to the rating agencies previously, but she now thinks, given the turmoil caused by recent model updates, they might be a little bit more open to this idea. “What we have is a set of scientifically defensible scenarios. So while nobody knows what the right answer is in terms of the numbers, we have a set of characteristic events for each peril region that credibly represents the risk,” she says.
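The characteristic-event idea lends itself to a simple illustration. The sketch below is purely hypothetical – the damage-scaling exponent and the dollar figure are assumptions for illustration, not KC&C parameters – but it shows the mechanics of perturbing a benchmark storm’s intensity and re-estimating the loss so every portfolio is tested against the same event:

```python
# Illustrative sketch only: scale a benchmark storm's intensity and
# re-estimate the loss. The exponent and dollar figure are assumptions,
# not published model parameters.

def scaled_loss(base_loss, intensity_factor, exponent=3.0):
    """Rough damage scaling: loss grows as a power of wind intensity.
    The exponent is a hypothetical choice for illustration."""
    return base_loss * intensity_factor ** exponent

# A 1938-style New England hurricane run against a hypothetical portfolio.
base = 10.0e9                       # assumed $10bn modelled loss
stronger = scaled_loss(base, 1.10)  # the same event, intensity up 10%
print(f"Base: ${base / 1e9:.1f}bn; +10% intensity: ${stronger / 1e9:.2f}bn")
```

Because the event set is fixed and transparent, two insurers’ results differ only because their portfolios differ, which is exactly the like-for-like comparison Clark wants the rating agencies to make.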

Clark’s view is these characteristicevents are robust and are not going tochange frequently like the models do.

“And you can monitor the model changes relative to the characteristic event sets. KC&C believes this is the future and is better than what the rating agencies are doing now. At present, a rating agency could be getting information that tells it one company is 100% higher risk than another, when in reality that could be reversed,” she claims.

Over-specification

For Clark, the cat-modelling industry is going down the road of over-specification. In trying to model things that cannot even be measured, the loss estimates end up being highly volatile. “The cat modellers talk about scientific knowledge – about what we know. At KC&C, we talk about what scientists don’t know. The cat modellers need to do the same so those who use the models can get a sense of the uncertainty in the data on which the models are based. What scientists know is minuscule relative to what they don’t know, but it is not necessarily in the modellers’ interests to talk about what is not known.”

But Clark believes to manage catastrophe risk effectively, you need to have at least an idea of what the range of uncertainty is in different peril regions. She sees the Japan earthquake as a good example of why the industry should look at other information beyond the models. She points out the modelling companies did not have a magnitude 9 event in their Japan earthquake models in the seismic region where the main event occurred. “Nor did they have a large-magnitude earthquake combined with a major tsunami. And they certainly did not have a nuclear disaster,” she adds.

“But if you had taken a few smart underwriters before the event and put them in a room for a few days to come up with the most extreme and worst-case scenarios for Japan and given them the historical data relevant to Japan, they probably would have come up with a magnitude 9 or higher magnitude event with an associated tsunami. They would have known approximately every 15 years there is a magnitude 8 or greater earthquake in or around Japan. And while something like this has not happened in Japan, there have been four earthquakes of magnitude 9 or greater since 1950 along the so-called Ring of Fire – the most seismically active region in the world, of which Japan forms a part – the greatest one being a magnitude 9.5 quake that happened in Chile in 1960.”

Clark’s underwriters would also have been told large-magnitude earthquakes in Japan have caused tsunami waves of 30 metres high and greater in at least three historical events. “If we gave our group of non-scientists those facts, I am pretty sure they would have come up with at least a magnitude 9 earthquake, combined with a large tsunami wave, as a worst-case scenario. They may even have thought of the possibility of that [subsequent] nuclear disaster,” she says.

Updates

But many model users, she says, are not doing this type of thinking because they have been lulled into this false sense of security, believing the models have figured it all out for them. “The irony of the whole thing is, now we know there could be a magnitude 9 earthquake, what do we do? Are we going to wait for two or three years for the new earthquake models to come out while the modellers update their models? Why can’t we have a more open approach, so now we know we can have a magnitude 9 earthquake, we can immediately adjust our risk-management decisions to accommodate that fact? Why do we have to wait several years for the new models to come out?”

One reason why it takes the modelling companies so long to update their models is the models are overly complex. “There are only four major model components but there are a great many variables, so a number of experts and scientists are going to have to do more research. Then they need to get the results of their research incorporated into these complex models and they need to test it. For example, one of the things the scientists are supposed to tell us is what the probability of this magnitude 9 earthquake is. Which, of course, is something they don’t know. It’s going to be a scientific guesstimate. They could call it a one-in-100-year, one-in-500-year or one-in-1,000-year event. So anything the scientists give us will just be a best guess because they don’t know.”
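Clark’s point about return-period labels can be made concrete. A “one-in-N-year” event is simply one with an annual exceedance probability of 1/N, so the competing guesstimates she mentions imply very different chances of seeing the event within a planning horizon. A minimal sketch (illustrative only, not any modeller’s actual method, and assuming year-to-year independence):

```python
# Convert return-period labels to probabilities, assuming independent years.

def annual_probability(return_period_years):
    """A 1-in-N-year event has an annual exceedance probability of 1/N."""
    return 1.0 / return_period_years

def prob_in_window(return_period_years, window_years):
    """Chance of at least one exceedance within a window of years."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** window_years

# The guesstimates Clark cites for a magnitude 9 Japan event:
for rp in (100, 500, 1000):
    print(f"1-in-{rp}: annual p = {annual_probability(rp):.4f}, "
          f"p over 30 years = {prob_in_window(rp, 30):.3f}")
```

The spread between the 1-in-100 and 1-in-1,000 labels is a factor of 10 in annual probability, which is why calling it a “scientific guesstimate” matters so much for risk decisions.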

When you think about it, it is a bit crazy. The models, she explains, are in one sense backward-looking tools because they are constantly being calibrated to the last major event, which is usually a couple of years old by the time the new models are released.

Mesmerising

For Clark, the main issue for the industry is the science underlying these models sounds so impressive. “You can go to these presentations and get mesmerised by all the scientific jargon. Companies just get lulled into this false sense of security the modellers have it all figured out, to the extent that even when companies look at numbers coming out of the models that are obviously way off, they still feel compelled to use them in many cases. So at KC&C, we inform companies about what really underlies the models in terms of the hard data versus research.”

She says the modelling companies create so much marketing hype around every model update they make each update sound like a major scientific breakthrough. “So, there are all these things about what the scientists know but many updates are not based on new factual knowledge. The updates incorporate new research but typically it is research that has about the same level of uncertainty as the previous research. So there is the overselling of the science and the overselling of what scientists know.”

Clark says KC&C is helping companies to understand two things: the true nature of catastrophe risk and the limitations of the catastrophe models (ie, what models can be used for and what they cannot be used for). The firm is also focused on helping companies to access information generated outside the cat models so they can be more informed about the scope and potential of their catastrophe losses. The idea is for companies to be less dependent on the seemingly endless cycle of model updates and loss estimates that swing widely up and down.

To illustrate this, she refers to the latest RMS model update. “One of the biggest changes is the US inland hurricane losses are much higher – of the order of 200% to 300% in some cases. So this is an issue for a lot of companies. But there are a lot of companies saying they had known for a long time the RMS inland hurricane loss estimates were too low and they had been adjusting the numbers themselves. So, you have to ask yourself, if so many people in the market knew the RMS inland numbers were too low, did RMS know that? And if RMS knew that too, then why did it take it so many years to fix it?”

She cites another example in Massachusetts, where a previous hurricane model update dramatically changed the wind footprints for hurricanes in north-east storms. This model change significantly raised the cost of insurance in coastal areas, such as Cape Cod. Reinsurance costs soared. Most of the companies pulled out of Cape Cod, so today most homeowners are in the Fair Access to Insurance Requirements (Fair) Plan and not being written by the private market.

“There are cases of homeowners on Cape Cod who used to pay $800 for their insurance and are now paying more than $2,000 for the same property. Now the new model says it is not really as bad as we thought and the coastal numbers are decreasing. What does that say to the homeowners in coastal areas?”

Cat models, she says, ultimately mean real money to real people and that is what people in the industry are missing if they regard a model update as merely another change in the numbers. “It may be fine for reinsurers, but if you are a primary insurer, it is no way to run a business.”

Outside the black box

Clark very much sees her present role as thinking outside the “black box” of cat modelling. As far as she is concerned, catastrophe models have been taken about as far as they can go and the industry is at a point where fresh insight is needed. “While cat models are great, they have some significant limitations. We need to develop other approaches and methodologies.” This use of credible information and tools other than the cat models is another critically important theme for Clark. “There is nothing that says a cat model is always the best tool so we have to use it. We don’t need to limit ourselves to one tool and we can have other approaches.”

Cat models, she says, are very good for reinsurers that tend to write a global book of business. “Reinsurers are typically not into the minutiae of every company’s portfolio, so they can use these models to obtain a general assessment of their risks. The cat models are very good for that because they are comprehensive. But the problem is they are a one-size-fits-all approach, which gives a very general indication of risk, which may not provide the most credible view of risk for a regional or specialised book of business. But with the cat model everybody is stuck with the same solution. So if you are a primary insurer and you have a very localised or a specific book of business, the model-generated loss estimates can be way off. And, at present, there is no way to fix the numbers.”

Here she cites the real-life example of FM Global, which writes a portfolio of industrial and commercial facilities. “The catastrophe models just don’t work for its business, so it commissioned a modeller to create a special model for it. Now, most companies cannot afford to pay the modellers millions of dollars to create their own model. So why don’t we just have an approach that makes it easier for companies to be able to use their own data and then tailor their risk assessment according to their books of business? That is one area where modelling companies can do better.”

The model estimates can also be off in certain peril regions. Since the models are based on historical data, Clark believes insurance companies should have access to the fundamental information about the historical events in the peril regions in which they write property business. They should have a very visual and very scientific representation of how many significant events have actually happened in the regions.

“Everyone should know those statistics. It is not hard to know them. And these statistics should be generated separately from the models so they can be used to benchmark the output from the models. For example, if I know historically something has happened that would give me a loss of $2bn today and the model is telling me my largest loss is only $1bn, obviously the model is off. Or, vice versa, if the largest loss has been $100m and the model is telling me it is going to be $10bn, you know it could be up the other way. So we need to look at other information.”
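The sanity check Clark describes can be sketched in a few lines. Everything here is illustrative – the tolerance factor and the dollar figures are assumptions for the sketch, not a KC&C methodology – but it captures the idea of flagging a model whose largest loss disagrees badly with the largest historical loss trended to today’s exposures:

```python
# Illustrative sketch: benchmark a model's largest loss against the
# largest historical event loss trended to today's property values.
# The tolerance factor is an assumption, not an industry standard.

def benchmark_flag(model_max_loss, historical_max_trended, tolerance=1.5):
    """Flag the model if it disagrees with history by more than a
    factor of `tolerance` in either direction."""
    ratio = model_max_loss / historical_max_trended
    if ratio < 1.0 / tolerance:
        return "model may understate the risk"
    if ratio > tolerance:
        return "model may overstate the risk"
    return "model is broadly consistent with history"

# Clark's two examples: history says $2bn but the model caps at $1bn,
# and history says $100m but the model says $10bn.
print(benchmark_flag(1.0e9, 2.0e9))
print(benchmark_flag(10.0e9, 0.1e9))
```

The key design point is the one she makes in the quote: the historical benchmark must be generated independently of the model, otherwise it cannot serve as a check on the model’s output.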

In the ballpark

Given Clark’s scepticism, just how useful have catastrophe risk models been to the insurance and risk-management industries over the past five years?

Clark says when cat models were first introduced, the industry was way off in its estimation of catastrophe losses. “We were not even in the ballpark. The great thing about the cat models is they have got companies in the ballpark and that has been very valuable. But over the past five years, we have not been using the cat models as if we are just in the ballpark. The way I have expressed it before is that over the past 20 years, the use of the models has improved from a handsaw to a chainsaw. A chainsaw is a great tool but it is not a surgical instrument. And you would not try to do brain surgery with it.”

In Clark’s view, that is exactly what the industry is trying to do with the models now. “We think we have a surgical instrument with which to make pinpoint calculations. We don’t. We have a chainsaw, which is still a saw. So that is the problem. We are trying to push the usage of the model beyond where it can go. It’s time we think about other approaches.”

Underwriter judgment

She dismisses the argument the models are needed otherwise the insurance industry will be forced to go back to the old way of just relying on seat-of-the-pants underwriting judgment. That, she says, is a big misconception. “If we don’t have a cat model, it does not mean we have to go back to the old ways. What has happened over the past 20 years is we have gone from no models and all underwriter judgment to the other extreme: all models and no underwriter judgment. Neither extreme is optimal.”

Underwriter judgment, Clark notes, has earned itself a bad name but there are many things about individual risks and accounts underwriters know that a model will never know. “Underwriters have valuable knowledge and expertise that should be part of the risk assessment and management process. It does not have to be an either/or situation. We believe the ideal approach would use the discipline, structure and scientific basis of the cat models but, where appropriate, the knowledge of underwriters, engineers and loss control specialists should be included. Science can do a lot but it can’t do everything.”

New dawn: Japan earthquake models did not have a magnitude 9 event plus tsunami modelled – despite such an event being well within the realms of possibility (AP Photo/Sergey Ponomarev)

Insurance Day, Global Markets – incorporating Alternative Insurance Capital, The ReReport and World Insurance Report