
The Journal of Credit Risk (39-81) Volume 6/Number 4, Winter 2010/11

    Credit models and the crisis:

default cluster dynamics and the generalized Poisson loss model

    Damiano Brigo

Department of Mathematics, King's College London, Strand,

    London WC2R 2LS, UK; email: [email protected]

    Andrea Pallavicini

    Banca Leonardo, Via Broletto 46, Milan 20121, Italy;

    email: [email protected]

Roberto Torresetti

Quaestio Capital Management, Via del Lauro 14, 20100 Milan, Italy;

    email: [email protected]

We consider collateralized debt obligations (CDOs), analyzing their valuation (both pre-crisis and in-crisis) with the generalized Poisson loss model. The generalized Poisson loss model is an arbitrage-free dynamic loss model capable of calibrating all tranches for all maturities simultaneously. Alternative tranche analysis using the implied copula framework or using historical estimation techniques highlights a multimodal tail in the loss distribution underlying the CDO. An effective description of loss dynamics in terms of the generalized Poisson loss model explains how the multimodal loss-probability distribution can be considered to arise from explicitly modeling the default of subsets of names within the CDO's pool of names. Such default clusters may in turn be interpreted as sectors of the economy. Our discussion is supported by abundant market examples through history, both pre-crisis and in-crisis.

The opinions expressed here are solely those of the authors and do not represent in any way those of their employers.

    1 INTRODUCTION: PRE-CRISIS AND IN-CRISIS CREDIT MODELING

    1.1 Bottom-up models

    A common way of introducing dependence in credit derivatives modeling is by means

of copula functions. Typically, a Gaussian copula is postulated on the exponential random variables driving the default times of the names in the pool. In general, if we


    try to model dependence by specifying dependence across single default times, we are

using the so-called bottom-up framework, and the copula approach typically falls

    within this framework. Such a general procedure cannot be extended in a simple way

    to a fully dynamical model. We cannot do justice to the huge amount of literature on

    copulas in credit derivatives here. We only mention that there have been attempts to

    go beyond the Gaussian copula introduced to the CDO world by Li (2000) that have

    led to the implied (base and compound) correlation framework, some important limits

    of which have been pointed out in Torresetti et al (2006b). Li and Hong Liang (2005)

    also proposed a mixture approach in connection with CDO-squareds. For results on

    sensitivities computed with the Gaussian copula models, see, for example, Meng and

    Sengupta (2008).

    An alternative to copulas in the bottom-up context is to insert dependence among

    the default intensities of single names (see, for example, Chapovsky et al (2007)).

Joshi and Stacey (2006) resort to modeling business time to create a default correlation in otherwise independent single-name defaults using an intensity gamma

    framework. Similarly, but in a firm-value-inspired context, Baxter (2007) introduces

Lévy firm-value processes in a bottom-up framework for CDO calibration. Lopatin

    (2008) introduces a bottom-up framework effective in the CDO context as well, first

    using single-name default intensities as deterministic functions of time and of the pool

    default counting process, then focusing on hedge ratios and analyzing the framework

from a numerical performance point of view, showing that this model is interesting despite the fact that it lacks an explicit modeling of single-name credit-spread

    volatilities.

    Albanese et al (2006) introduce a bottom-up approach based on structural model

    ideas that can be made consistent with several inputs under both the historical and

the pricing measures, and that manages to calibrate CDO tranches.

    At this point, it is useful to recall the concept of the implied copula (introduced

    by Hull and White (2006) as a perfect copula): this is a non-parametric model

    that can be used to deduce the shape of the risk-neutral pool loss distribution from

    a set of market CDO spreads spanning the entire capital structure. The general use

    of flexible systemic factors has been generalized and vastly improved by Rosen and

    Saunders (2009), who also discuss the dynamic implications of the systemic factor

    framework. In the context of one-factor models, though with a more econometric

    flavor, Berd et al (2007) interpret the latent factor as the market return governed

    by an asymmetric generalized autoregressive conditional heteroscedasticity model

    and introduce the implied correlation surface as a mapping between the given loss-

generating model and a Gaussian benchmark. Factors and dynamics are also discussed

    in Inglis et al (2008).

    Our calibration results based on the implied copula already seen in Torresetti et al

    (2006c) and published later in Torresetti et al (2009) and Brigo et al (2010) point out


    that a consistent loss distribution across tranches for a single maturity features modes

    in the tail of the loss distribution. These probability masses on the far-right tail imply

default possibilities for large clusters of names (possibly sectors of the economy). These results were originally published in 2006 on ssrn.com (see Brigo et al (2006a,b)). We find the same features here using a completely different approach.

    The implied copula can calibrate consistently across the capital structure but not

    across maturities since it is a model that is inherently static. The next step is therefore

to introduce the dynamic loss model. This moves us into the so-called top-down

    framework (although dynamic approaches are also possible in the bottom-up context,

    as we have seen in some of the above references).

    1.2 Top-down framework

We could completely avoid single-name default modeling and instead focus on the pool loss and default counting processes, thereby considering a dynamical model at

    the aggregate loss level, associated to the loss itself or to some suitably defined loss

    rates. This is the top-down approach (see, for example, Bennani (2005); Giesecke

et al (2009); Sidenius et al (2008); Schönbucher (2005); Di Graziano and Rogers

    (2005); Brigo et al (2006a,b); Errais et al (2009); Lopatin and Misirpashaev (2007)).

    The first joint calibration results of a dynamic loss model across indexes, tranche

    attachments and maturities (Brigo et al (2006a)) show that even a relatively simple

    loss dynamics such as a capped generalized Poisson process suffices to account for

    the loss-distribution dynamical features embedded in market quotes. This work also

    confirms the implied copula findings of Torresetti et al (2006c), showing that the loss-

distribution tail exhibits a structured multimodal behavior implying non-negligible default probabilities for large fractions of the pool of credit references, which leaves

    the potential for high losses implied by CDO quotes before the beginning of the

    crisis. Cont and Minca (2008) use a non-parametric algorithm for the calibration

of top-down models, constructing a risk-neutral default intensity process for the portfolio

    underlying the CDO to look for the risk-neutral loss process closest to a prior loss

    process using relative entropy techniques (see also Cont and Savescu (2008)).

However, in general, to justify the "down" in "top-down" one needs to show that,

    from the aggregate loss model, one can recover a posteriori consistency with single-

    name default processes when they are not modeled explicitly. Errais et al (2009)

    advocate the use of random thinning techniques for their approach (see also Halperin

    and Tomecek (2008), who delve into more practical issues related to random thinning

    of general loss models, and Bielecki et al (2008), who also build semi-static hedging

    examples and consider cases where the portfolio loss process may not yield sufficient

    statistics).


It is often not clear whether or not a fully consistent single-name default formulation

    is possible given an aggregate model as the starting point.

    There is a special bottom-up approach that can lead to a distinct and rich loss

    dynamics. This approach is based on the common Poisson shock (CPS) framework

    reviewed in Lindskog and McNeil (2003). This approach allows for more than one

    defaulting name in small time intervals, which is in contrast with some of the above-

    mentioned top-down approaches. In bottom-up language we see that this approach

leads to a Marshall-Olkin copula linking the first jump (default) times of single names.

    In top-down language, this model looks very similar to the generalized Poisson loss

    (GPL) model of Brigo et al (2006a), where the number of defaults is not capped.

The problem with the CPS framework is that it allows for repeated defaults, so that one name could default more than once, which is clearly wrong.

The CPS framework has been used in the credit derivatives literature by Elouerkhaoui (2006), for example (see also the references therein). Balakrishna (2006)

    introduces a semi-analytical approach, again allowing for more than one default in

    small time intervals and hinting at its relationship with the CPS framework, that shows

    some interesting calibration results. Balakrishna (2007) then generalizes this earlier

    paper to include delayed default dependence and contagion.

    1.3 Generalized Poisson loss and generalized Poisson

    cluster loss models

Brigo et al (2007) address the repeated default issue in the CPS framework by controlling the cluster default dynamics in order to avoid repetitions. They calibrate the

    obtained model satisfactorily to CDO quotes across attachments and maturities, but

the combinatorial complexity of a non-homogeneous version of this model is forbidding, and the resulting generalized Poisson cluster loss (GPCL) approach is hard to

    use successfully in practice when taking single names into account.

    In the context of the present paper, however, the GPL and GPCL models will still

    be useful for showing how a loss-distribution dynamics consistent with CDO market

    quotes should evolve.

    As explained above, the GPL and GPCL models are dynamical models for loss that

    have the ability to reprice all tranches and all maturities simultaneously. They mainly

    differ in the way that they avoid repeated defaults, and by their stressing of the whole

    pool or of individual clusters as fundamental objects. Here we employ a variant that

models the loss directly rather than by using the default counting process plus recovery.

    The loss is modeled as the sum of independent Poisson processes, each associated

    with the default of a different number of entities and capped at the pool size to avoid

    infinite defaults. A possible interpretation of these driving Poisson processes is that

of defaults of sectors, although the sectors' amplitudes vary in our formulation of the


    model pre-crisis and in-crisis. In the new in-crisis model implementation we fix the

amplitude of the loss triggered by each cluster of defaults a priori, without calibrating it as we did in our earlier GPL and GPCL works. This makes the calibration more

transparent and the calibrated intensities of the sectors' defaults easier to interpret.

    We highlight how these models are able to reproduce the tail multimodal feature,

which implied copula analysis has shown to be indispensable for accurately repricing market

    spreads of CDO tranches on a single maturity.

    We also refer to the related results of Longstaff and Rajan (2008), which point

    in the same direction but add a principal component analysis on a panel of credit

    default swap (CDS) spread changes and also give further comments on the economic

interpretation of the default clusters as sectors. An econometric investigation of cluster

    defaults starting from the Poisson framework can be found in Duan (2009).

Incidentally, we draw the reader's attention to the history of defaults, which points

    to default clusters being concentrated in a relatively short time period (a few months):

    the thrifts in the early 1990s at the height of the loan and deposit crisis, airlines after

    2001 and, more recently, autos and financials give evidence of this. In particular,

from September 7, 2008 to October 8, 2008 (a time window of one month) we witnessed seven credit events affecting major financial entities: Fannie Mae, Freddie Mac,

    Lehman Brothers, Washington Mutual, Landsbanki, Glitnir and Kaupthing. Fannie

    Mae and Freddie Mac conservatorships were announced on the same date (September

    7, 2008) and the appointment of a receivership committee for the three Icelandic

    banks (Landsbanki, Glitnir and Kaupthing) was announced between October 7 and 8.

Moreover, Standard & Poor's (2009) issued a request for comments related to

changes in the rating criteria of corporate CDOs. Until then, agencies had been adopting

    a multifactor Gaussian copula approach to simulate the portfolio loss in the objective

measure. Standard & Poor's proposed changing the criteria so that tranches rated

    AAA would be able to withstand the default of the largest single industry in the

    asset pool with zero recoveries. We believe that this is a move in the direction of

    modeling the loss in the risk-neutral measure via GPL-like processes, given that the

proposed changes to Standard & Poor's rating criteria imply the admittance of the

    possibility that a cluster of defaults in the objective measure could exist as a stressed

    but plausible scenario (see also Torresetti and Pallavicini (2007) for the specific case

    of constant-proportion debt obligations).

    Finally, we comment more generally on dynamical aggregate models and on the

    difficulties encountered in deriving single-name hedge ratios when trying to avoid

complex combinatorial analysis. The framework therefore currently remains incomplete, because obtaining jointly tractable dynamics and consistent single-name hedges

that can be realistically applied on the trading floor remains a problem. We have provided some references for the latest research in this field above. We highlight, though,

    that even simple dynamical models such as our GPL model or the single-maturity


implied copula model are enough to appreciate that the market quotes were implying the presence of large default clusters with non-negligible probabilities well in

    advance of the credit crisis, as documented in 2006 and early 2007 (Brigo et al

    (2006a,b, 2007)).

    Finally, it is important to point out that most of the above discussion and references

(with very few exceptions) center on corporate CDOs, mostly synthetic ones, and

    little literature is available for the valuation of CDOs on other asset classes, with

literature on possibly complex waterfalls and prepayment risk (including collateralized loan obligations, residential mortgage-backed securities and CDOs of residential

mortgage-backed securities) generally relating to the asset class that triggered the crisis. For many such deals the main problem is often the data. The scarce literature in this

    area includes Jaeckel (2008), Papadopoulos and Tan (2007) and Fermanian (2009).

2 MARKET QUOTES

For single names, our reference products will be CDSs, but the most liquid multiname

    credit instruments available in the market are credit indexes and CDO tranches (eg,

Dow Jones iTraxx (DJiTraxx) and CDX). We discuss these instruments below.

    The procedure for selecting the standardized pool of names is the same for the

    two indexes. Every six months a new series is rolled at the end of a polling process

    managed by MarkIt where a selected list of dealers contributes the ranking of the

    most liquid CDS.1

    2.1 Credit indexes

The index is given by a pool of names $1, 2, \ldots, M$, typically $M = 125$, each with notional $1/M$ so that the total pool has unit notional. The index default leg consists

    of protection payments corresponding to the defaulted names of the pool. Each time

    one or more names default, the corresponding loss increment is paid to the protection

buyer until final maturity $T = T_b$ arrives or until all the names in the pool have

    defaulted.

In exchange for loss increase payments, a periodic premium with rate $S$ is paid from the protection buyer to the protection seller until final maturity $T_b$. This premium

    is computed on a notional that decreases each time a name in the pool defaults,

    decreasing by an amount corresponding to the notional of that name (without taking

    out the recovery).

    1 All credit references that are not investment grade are discarded. Each surviving credit reference

underlying the CDS is assigned to a sector. Each sector contributes a predetermined number of

    credit references to the final pool of names. The rankings of the various dealers for the investment

    grade names are put together to rank the most liquid credit references within each sector.


We denote by $\bar{L}_t$ the portfolio cumulated loss and by $\bar{C}_t$ the number of defaulted names up to time $t$ divided by $M$. Since, at each default, part of the defaulted notional is recovered, we have $0 \leq \bar{L}_t \leq \bar{C}_t \leq 1$. The discounted payout of the two legs of the index is given as follows:

$$\mathrm{DefLeg}(0) := \int_0^T D(0,t)\, \mathrm{d}\bar{L}_t$$

$$\mathrm{PremLeg}(0) := S_0\,\mathrm{DV01}(0), \qquad \mathrm{DV01}(0) := \sum_{i=1}^{b} \delta_i\, D(0,T_i)\,(1 - \bar{C}_{T_i})$$

where $D(s,t)$ is the discount factor (often assumed to be deterministic) between times $s$ and $t$ and $\delta_i = T_i - T_{i-1}$ is the year fraction. In the second equation, $\mathrm{DV01}(0)$ denotes the sum over all premium payment dates of the outstanding notional, weighted by the relevant year fraction and discounted back to time 0. The risk-neutral expectation of such a quantity will provide the sensitivity of the premium leg to a unit movement in the spread $S_0$, all other things being equal (hence the name DV01, although usually the spread is moved by 1 basis point (bp)). In the same equation, the actual outstanding notional in each period would be an average over $[T_{i-1}, T_i]$, but we have replaced it with $1 - \bar{C}_{T_i}$ (the value of the outstanding notional at $T_i$) for simplicity, which is commonly done.

    Note that, in contrast with the tranches (see Section 2.2), here the recovery is not

    considered when computing the outstanding notional because it is only the number

    of defaults that matters.

The market quotes the values of $S_0$ that, for different maturities, balance the two legs. If we have a model for the loss and the number of defaults, we may require that the loss and the number of defaults in the model, when plugged into the two legs, lead to the same risk-neutral expectation (and hence price):

$$S_0 = \frac{\mathbb{E}_0\!\left[\int_0^T D(0,t)\, \mathrm{d}\bar{L}_t\right]}{\mathbb{E}_0\!\left[\sum_{i=1}^{b} \delta_i\, D(0,T_i)\,(1 - \bar{C}_{T_i})\right]} \qquad (1)$$

where $\mathbb{E}$ denotes the expectation under the pricing risk-neutral measure.

Assuming deterministic default-free interest rates, we can rewrite:

$$S_0 = \frac{\int_0^T D(0,t)\, \mathrm{d}\mathbb{E}_0[\bar{L}_t]}{\sum_{i=1}^{b} \delta_i\, D(0,T_i)\,(1 - \mathbb{E}_0[\bar{C}_{T_i}])} \qquad (2)$$

    The assumption regarding deterministic default-free interest rates is a practical one

    and allows us to obtain analytical or semi-analytical pricing formulas, as is usually


done in market practice. It can be relaxed to an assumption of independence between defaults and default-free interest rates, with the consequence that the $D(t,T)$ terms of the deterministic case become the zero-coupon bond prices $P(t,T) = \mathbb{E}_t[D(t,T)]$.
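To make (2) concrete, the short sketch below evaluates its right-hand side on a discrete time grid. The discount curve, the hazard-style expected default fraction and the payment grid are illustrative assumptions, not market inputs.

```python
import numpy as np

def index_spread(t, exp_loss, exp_counts, pay_times, discount, year_frac=0.25):
    """Index fair spread, Equation (2), assuming deterministic default-free rates.

    t, exp_loss : grid and expected-loss curve E_0[L_bar_t] (non-decreasing, in [0, 1])
    exp_counts  : expected default fraction E_0[C_bar] at the premium dates
    pay_times   : premium payment dates T_1, ..., T_b
    discount    : function t -> D(0, t)
    """
    mid = 0.5 * (t[1:] + t[:-1])
    def_leg = np.sum(discount(mid) * np.diff(exp_loss))      # int D(0,t) dE_0[L_bar_t]
    dv01 = np.sum(year_frac * discount(pay_times) * (1.0 - exp_counts))
    return def_leg / dv01

# Toy inputs: flat 3% rate, 5-year index, quarterly premiums, 40% recovery.
disc = lambda s: np.exp(-0.03 * np.asarray(s))
grid = np.linspace(0.0, 5.0, 61)
eC = 1.0 - np.exp(-0.009 * grid)                             # assumed expected default fraction
eL = 0.6 * eC                                                # expected loss with R = 40%
pay = np.arange(0.25, 5.01, 0.25)
print("index spread (bp):", 1e4 * index_spread(grid, eL, np.interp(pay, grid, eC), pay, disc))
```

With these toy curves the spread lands near the usual intensity-times-loss-given-default approximation, roughly $0.009 \times 0.6 \approx 54$bp.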

    2.2 Collateralized debt obligation tranches

Synthetic CDOs with maturity $T$ are contracts involving a protection buyer, a protection seller and an underlying pool of names. They are obtained by putting together a collection of CDSs with the same maturity on different names, $1, 2, \ldots, M$, typically $M = 125$, each with notional $1/M$, then tranching the loss of the resulting pool between points $A$ and $B$, with $0 \leq A < B \leq 1$:

$$\bar{L}^{A,B}_t := \frac{1}{B-A}\left[(\bar{L}_t - A)\,\mathbf{1}_{\{A < \bar{L}_t \leq B\}} + (B - A)\,\mathbf{1}_{\{\bar{L}_t > B\}}\right]$$

A useful alternative expression is:

$$\bar{L}^{A,B}_t := \frac{1}{B-A}\left[B\,\bar{L}^{0,B}_t - A\,\bar{L}^{0,A}_t\right] \qquad (3)$$

Once enough names have defaulted and the loss has reached $A$, the count starts. Each time the loss increases, the corresponding loss change rescaled by the tranche thickness $B - A$ is paid to the protection buyer until maturity arrives or until the total pool loss exceeds $B$, in which case the payments stop.

The discounted default leg payout can then be written as:

$$\mathrm{DefLeg}_{A,B}(0) := \int_0^T D(0,t)\, \mathrm{d}\bar{L}^{A,B}_t$$

As usual, in exchange for the protection payments, a premium rate $S^{A,B}_0$, fixed at time $T_0 = 0$, is paid periodically at times $T_1, T_2, \ldots, T_b = T$. Part of the premium can be paid at time $T_0 = 0$ as an upfront $U^{A,B}_0$. The rate is paid on the surviving average tranche notional. If we further assume that payments are made on the notional remaining at each payment date $T_i$, rather than on the average in $[T_{i-1}, T_i]$, the premium leg can be written as:

$$\mathrm{PremLeg}_{A,B}(0) := U^{A,B}_0 + S^{A,B}_0\,\mathrm{DV01}_{A,B}(0), \qquad \mathrm{DV01}_{A,B}(0) := \sum_{i=1}^{b} \delta_i\, D(0,T_i)\,(1 - \bar{L}^{A,B}_{T_i})$$

When pricing CDO tranches, we are interested in the premium rate $S^{A,B}_0$ that sets

    to zero the risk-neutral price of the tranche. The tranche value is computed taking the


TABLE 1 Standardized attachment and detachment points for the DJiTraxx Europe main and CDX.NA.IG tranches.

    DJiTraxx Europe main (%)    CDX.NA.IG (%)
    0-3                         0-3
    3-6                         3-7
    6-9                         7-10
    9-12                        10-15
    12-22                       15-30
    22-100                      30-100

(risk-neutral) expectation (in $t = 0$) of the discounted payout, which is the difference between the default and premium legs above. We obtain:

$$S^{A,B}_0 = \frac{\mathbb{E}_0\!\left[\int_0^T D(0,t)\, \mathrm{d}\bar{L}^{A,B}_t\right] - U^{A,B}_0}{\mathbb{E}_0\!\left[\sum_{i=1}^{b} \delta_i\, D(0,T_i)\,(1 - \bar{L}^{A,B}_{T_i})\right]} \qquad (4)$$

Assuming deterministic default-free interest rates, we can rewrite:

$$S^{A,B}_0 = \frac{\int_0^T D(0,t)\, \mathrm{d}\mathbb{E}_0[\bar{L}^{A,B}_t] - U^{A,B}_0}{\sum_{i=1}^{b} \delta_i\, D(0,T_i)\,(1 - \mathbb{E}_0[\bar{L}^{A,B}_{T_i}])} \qquad (5)$$

The above expression can be easily recast in terms of the upfront premium $U^{A,B}_0$ for tranches that are quoted in terms of upfront fees.
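As a complement, the sketch below maps a loss model's output into tranche quantities: the tranche loss of Equation (3) and the fair running spread of Equation (5). The quarterly year fraction and the interpolation are illustrative assumptions; the expected tranche-loss curve is assumed to come from a model such as the GPL model of Section 3.

```python
import numpy as np

def tranche_loss(pool_loss, A, B):
    """Tranche loss of Equation (3): (min(L, B) - min(L, A)) / (B - A)."""
    L = np.asarray(pool_loss, dtype=float)
    return (np.minimum(L, B) - np.minimum(L, A)) / (B - A)

def tranche_spread(t, exp_tranche_loss, pay_times, discount, upfront=0.0, year_frac=0.25):
    """Fair running spread of Equation (5), given the expected tranche-loss curve on a grid."""
    etl = np.asarray(exp_tranche_loss, dtype=float)
    mid = 0.5 * (t[1:] + t[:-1])
    def_leg = np.sum(discount(mid) * np.diff(etl))           # int D(0,t) dE_0[L^{A,B}_t]
    etl_pay = np.interp(pay_times, t, etl)
    dv01 = np.sum(year_frac * discount(pay_times) * (1.0 - etl_pay))
    return (def_leg - upfront) / dv01
```

Note that the expected tranche loss must be obtained by averaging `tranche_loss` over the loss distribution at each date; applying the tranche function of (3) to the expected pool loss instead would ignore the convexity of the tranche payoff.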

The tranches that are quoted on the market refer to standardized pools, standardized attachment-detachment points $A$ and $B$ and standardized maturities.

    The standardized attachment and detachment differ slightly for the CDX.NA.IG

    and the DJiTraxx Europe main tranches.2

For the DJiTraxx and CDX pools, the equity tranche ($A = 0$, $B = 3\%$) is quoted by means of the fair $U^{A,B}_0$, while assuming $S^{A,B}_0 = 500$bps. The reason that the equity tranche is quoted upfront is to reduce the counterparty credit risk that the protection seller is facing. All other tranches are generally quoted by means of the fair running spread $S^{A,B}_0$, assuming no upfront fee ($U^{A,B}_0 = 0$). Following recent market turmoil, the 3-6% and the 3-7% tranches have been quoted in terms of an upfront amount and a running $S^{A,B}_0 = 500$bps, given the exceptional risk premium priced by the market for this tranche.

    2 The attachment points of the CDX tranches are slightly higher, reflecting the average higher

    perceived riskiness (as measured, for example, by the CDS spread or by balance-sheet ratios) of the

    liquid investment grade North American names.


    3 A FULLY CONSISTENT DYNAMICAL MODEL:

    THE GENERALIZED-POISSON LOSS MODEL

The GPL model can be formulated as follows. Consider a probability space supporting a number $n$ of independent Poisson processes $Z_1, \ldots, Z_n$ with time-varying and possibly stochastic intensities $\lambda_1, \ldots, \lambda_n$ under the risk-neutral measure $\mathbb{Q}$. The risk-neutral expectation conditional on the market information up to time $t$, including the pool loss evolution up to $t$, is denoted by $\mathbb{E}_t$. Intensities, if stochastic, are assumed to be adapted to such information.

Define the stochastic process:

$$Z_t := \sum_{j=1}^{n} \alpha_j\, Z_j(t) \qquad (6)$$

for positive integers $\alpha_1, \ldots, \alpha_n$. In the following, we refer to the $Z_t$ process simply as the GPL process. We will use this process as a driving process for the cumulated portfolio loss $\bar{L}_t$, which is the relevant quantity for our payouts. In Brigo et al (2006a,b), the possible use of the GPL process as a driving tool for the default counting process $\bar{C}_t$ is illustrated instead.

The characteristic function of the $Z_t$ process is:

$$\varphi_{Z_t}(u) = \mathbb{E}_0\!\left[e^{\mathrm{i}uZ_t}\right] = \mathbb{E}_0\!\left[\mathbb{E}_0\!\left[e^{\mathrm{i}uZ_t} \mid \Lambda_1(t), \ldots, \Lambda_n(t)\right]\right]$$

where:

$$\Lambda_j(t) := \int_0^t \lambda_j(s)\, \mathrm{d}s, \qquad j = 1, \ldots, n$$

are the cumulated intensities of each Poisson process. Now, we substitute $Z_t$, obtaining:

$$\varphi_{Z_t}(u) = \mathbb{E}_0\!\left[\prod_{j=1}^{n} \mathbb{E}_0\!\left[e^{\mathrm{i}u\alpha_j Z_j(t)} \mid \Lambda_1(t), \ldots, \Lambda_n(t)\right]\right] = \mathbb{E}_0\!\left[\prod_{j=1}^{n} \varphi_{Z_j(t)\mid\Lambda_j(t)}(u\alpha_j)\right]$$

which can be directly calculated since the characteristic function $\varphi_{Z_j(t)\mid\Lambda_j(t)}$ of each Poisson process, given its intensity, is known in closed form, leading to:

$$\varphi_{Z_t}(u) = \mathbb{E}_0\!\left[\exp\!\left(\sum_{j=1}^{n} \Lambda_j(t)\,(e^{\mathrm{i}u\alpha_j} - 1)\right)\right] \qquad (7)$$

The marginal distribution $p_{Z_t}$ of the process $Z_t$ can be directly computed at any time via an inverse Fourier transformation of the characteristic function of the process.


The characteristic function $\varphi_{Z_t}(u)$ can be explicitly calculated for some relevant choices of Poisson cumulated-intensity distributions (see, for example, Brigo et al (2006a,b)).
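With deterministic intensities the outer expectation in (7) disappears, and the distribution of $Z_t$ can be obtained by a discrete Fourier inversion of the characteristic function, followed by the capping at $M_0$ introduced in Section 3.1 below. A minimal sketch, with placeholder amplitudes and cumulated intensities:

```python
import numpy as np

def gpl_pmf(alphas, Lambdas, n_states):
    """pmf of Z_t = sum_j alpha_j Z_j(t) on {0, ..., n_states-1}, deterministic intensities,
    obtained by inverting the characteristic function (7) on a discrete Fourier grid."""
    u = 2.0 * np.pi * np.arange(n_states) / n_states
    phi = np.exp(sum(L * (np.exp(1j * u * a) - 1.0) for a, L in zip(alphas, Lambdas)))
    # p[m] = (1/N) sum_k phi(u_k) exp(-i u_k m), i.e. a forward DFT divided by N
    return np.clip(np.fft.fft(phi).real / n_states, 0.0, None)

def capped_loss_pmf(alphas, Lambdas, M0):
    """pmf of L_t = min(Z_t, M0): the mass at M0 collects the whole tail of Z_t."""
    pZ = gpl_pmf(alphas, Lambdas, 4 * M0)        # grid well beyond M0 so aliasing is negligible
    pL = np.zeros(M0 + 1)
    pL[:M0] = pZ[:M0]
    pL[M0] = 1.0 - pL[:M0].sum()
    return pL

# Placeholder inputs: three modes, five-year cumulated intensities, M0 = 200.
pL = capped_loss_pmf([1, 10, 40], [0.20, 0.075, 0.0375], 200)
print("P(no jump):", pL[0], "  P(40 or more loss units):", pL[40:].sum())
```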

    3.1 Loss dynamics

The underlying GPL process $Z_t$ is non-decreasing and, given sufficiently large times, it takes arbitrarily large values. The portfolio cumulated loss and the rescaled number of defaults processes are non-decreasing, but limited to the interval $[0,1]$. Thus, we consider a deterministic non-decreasing function mapping $\mathbb{N} \cup \{0\}$ into $[0,1]$ and we define the cumulated portfolio loss process $\bar{L}_t$ as:

$$L_t := L(Z_t) := \min(Z_t, M_0) \qquad \text{and} \qquad \bar{L}_t := \frac{L_t}{M_0} \qquad (8)$$

where $1/M_0$, with $M_0 > M > 0$, is the minimum jump for the loss process. $M_0$ is clearly related to the loss granularity.

Remark 3.1 Note that the loss is bounded within the interval $[0,1]$ by construction, but that the possibility remains that the loss jumps more than $M$ times, where $M$ is the number of names in the portfolio. If this is the case, we may check a posteriori that the probability of such events is negligible. This is the case for all of our examples.

The marginal distribution of the cumulated portfolio loss process $L_t$ can be easily calculated. We obtain:

$$L_t = \min(Z_t, M_0) = Z_t\,\mathbf{1}_{\{Z_t < M_0\}} + M_0\,\mathbf{1}_{\{Z_t \geq M_0\}}$$

Since $Z_t$ has a known distribution, the distribution of $L_t$ can be easily derived as a by-product. The related density (defined on integer values since the law is discrete) is:

$$p_{L_t}(x) = p_{Z_t}(x)\,\mathbf{1}_{\{x < M_0\}} + \mathbb{Q}\{Z_t \geq M_0\}\,\mathbf{1}_{\{x = M_0\}}$$

The density of $\bar{L}_t$ follows directly. Also, the intensity of $L_t$, ie, the density of the absolutely continuous compensator of $L_t$ (see, for example, Giesecke et al (2009)), can be computed directly and is given by:

$$h_L(t) = \sum_{j=1}^{n} \min\!\left(\alpha_j, (M_0 - Z_t)^+\right)\lambda_j(t) \qquad (9)$$

(see Brigo et al (2006a,b) for details). The intensity $h_L$ goes to zero when $Z$ exceeds $M_0$, which corresponds to total loss, as expected. Furthermore, if all the possible integer jump sizes between 1 and $M_0$ are allowed, ie, if $\alpha_j = j$ and $n = M_0$, the intensity $h_L$ jumps whenever the cumulated portfolio loss process $L$ jumps. The intensity


    jumps downward, and this would seem to go in the opposite direction with respect

    to self-excitedness, which is considered a desirable feature of loss models in general.

However, self-exciting features are embedded in our model through the possibility of having several defaults in small intervals, contrary to most approaches

    to loss modeling. Consider, for example, just two names: instead of having the loss

    of one name increase the likelihood of default (intensity) of a second name, we have

    both names defaulting together immediately. This embeds self-excitedness, although

    in an extreme way. Finally, to view the effect of single names on each other, we need

    a more sophisticated formulation based on the common Poisson shocks framework,

    leading to the GPCL model, which is analyzed in Brigo et al (2007) and is presented

    below with a few updates.
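Returning to the intensity (9), its evaluation is immediate once the current value of $Z_t$ is known; the inputs in the snippet below are placeholders.

```python
def loss_intensity(Z_t, alphas, lambdas, M0):
    """Intensity h_L(t) of Equation (9): sum_j min(alpha_j, (M0 - Z_t)^+) * lambda_j(t)."""
    headroom = max(M0 - Z_t, 0)
    return sum(min(a, headroom) * lam for a, lam in zip(alphas, lambdas))

# Close to exhaustion (Z_t = 180 out of M0 = 200) the larger modes are partially capped.
print(loss_intensity(Z_t=180, alphas=[1, 10, 40], lambdas=[0.04, 0.015, 0.0075], M0=200))
```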

Example 3.2 It is worth presenting an example to further clarify the GPL formulation. Consider a GPL model with three independent Poisson processes $Z_1$, $Z_2$ and $Z_3$, multiplied respectively by the sizes 1, 10 and 40, leading to:

$$Z_t = 1\,Z_1(t) + 10\,Z_2(t) + 40\,Z_3(t)$$

with constant deterministic intensities $\lambda_1 = 4\%$, $\lambda_2 = 1.5\%$ and $\lambda_3 = 0.75\%$. Assume a 40% deterministic recovery, and take one scenario where the three processes have each jumped once before the current time. The loss of the standardized DJiTraxx pool of 125 names associated with one jump of $Z_1$, $Z_2$ and $Z_3$ would be 0.48%, 4.8% and 19.2%, respectively. The expected loss in one year for the uncapped loss process is:

$$0.2352\% = (0.0048 \times 0.04) + (0.048 \times 0.015) + (0.192 \times 0.0075)$$

In Figure 1 we plot the resulting default counting distribution for four increasing maturities: 3, 5, 7 and 10 years. We note that the modes on the right

    tail of the distribution become more evident as the maturity increases. These bumps

    are the corresponding probabilities of a jump of a higher amplitude occurring.

    From the distribution we also note that the probability is not simply assigned to

    the higher-amplitude jumps (10 and 40 defaults in our example) but also to a number

    of defaults immediately above: this would be the probability of having jumps of both

    a high amplitude (either 10 or 40) and of a lower amplitude (1 in our case), where

    jumps with lower amplitudes have higher probability.

    Furthermore, we note that some second-order modes start to appear as maturity

increases. These are the bumps corresponding to multiple jumps of the higher-size (10 and 40), smaller-probability GPL components.

Let us now assume we are to price two tranches: 2-4.8% and 10-19.2%. The

    expected tranche loss of these tranches, and, ultimately, their fair spread, will depend

    primarily on the probability mass lying above the tranche detachment.
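A short numerical check of Example 3.2 follows: it rebuilds the default-counting distribution of the three-mode process by convolving the Poisson laws (an alternative to the Fourier inversion above) and verifies the one-year uncapped expected loss of 0.2352%. The grid size is an arbitrary choice.

```python
import numpy as np

def poisson_pmf(lam, n_max):
    """Poisson pmf on {0, ..., n_max} via the recursion p_k = p_{k-1} * lam / k."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-lam)
    for k in range(1, n_max + 1):
        p[k] = p[k - 1] * lam / k
    return p

def counting_distribution(alphas, lambdas, T, n_states=200):
    """Distribution of sum_j alpha_j Z_j(T) for constant deterministic intensities."""
    dist = np.zeros(n_states)
    dist[0] = 1.0
    for a, lam in zip(alphas, lambdas):
        kmax = (n_states - 1) // a                    # tail beyond the grid is truncated (negligible here)
        pj = poisson_pmf(lam * T, kmax)
        mode = np.zeros(n_states)
        mode[np.arange(kmax + 1) * a] = pj            # k jumps of this mode add k*a defaults
        dist = np.convolve(dist, mode)[:n_states]     # independence => convolution
    return dist

alphas, lambdas = [1, 10, 40], [0.04, 0.015, 0.0075]
dist5y = counting_distribution(alphas, lambdas, T=5.0)
print("P(40 or more defaults) at 5y:", dist5y[40:].sum())
# One-year expected loss of the uncapped process (125 names, 40% recovery), as in the text:
el_1y = sum(a / 125 * 0.6 * lam for a, lam in zip(alphas, lambdas))
print(f"E[loss] over one year, uncapped: {el_1y:.4%}")   # 0.2352%
```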


FIGURE 1 Default counting ($M\bar{C}_T$) probability distribution associated with a GPL model featuring three independent Poisson processes with jump amplitudes $\alpha_1 = 1$, $\alpha_2 = 10$ and $\alpha_3 = 40$, with constant deterministic intensities $\lambda_1 = 4\%$, $\lambda_2 = 1.5\%$ and $\lambda_3 = 0.75\%$. [Panels (a)-(c): distribution (in percent) for the 3-, 5- and 7-year maturities.]


FIGURE 1 Continued. [Panel (d): distribution (in percent) for the 10-year maturity.]

    3.2 Model limits

    The GPL model that we have introduced can currently be viewed as a particularly

    simple parametrization of the market-implied loss-distribution dynamics. A positive

    feature is that the loss changes only by positive jumps, which should be the case

    in any sensible loss model. Furthermore, this choice allows us to achieve a good

    calibration to market data, as we will see in the following section. However, we

    are not making explicit assumptions on two important issues: firstly, we have not

    addressed possible ways of making our model consistent with single-name dynamics,

    and secondly, we have not explained how to choose a full-featured pool spread and

recovery dynamics. One possibility would be to make the intensities of the Poisson processes driving $Z$ stochastic and to consider more general transformations of $Z$ to

    obtain the loss process.

    Since we are focusing only on the calibration of CDO tranches, which depend

    only on the loss marginal distribution, we will avoid discussing such problems here

    (we direct the interested reader to Brigo et al (2006a,b) for an extensive analysis

    of candidate spread and recovery dynamics, and to Brigo et al (2007) for further

discussion, including consistency with single-name data). We point out that significant

    progress and testing in loss modeling will only be possible when more liquid market

    quotes for tranche options and forward-start tranches are available.

    3.3 Model calibration

We work with the basic GPL model specification as given by the driving GPL process $Z$ in (6), which we use to model the pool loss through (8). In this basic formulation, each Poisson mode $Z_j$ has a deterministic piecewise-constant intensity $\lambda_j(t)$.


Given that we have modeled the pool loss $\bar{L}_t$ directly, we do not completely characterize the rescaled default counting process $\bar{C}_t$, but instead give only its expectations. Indeed, such expectations are the only information on default counting that is implicit in index market quotes (Equation (1)), whereas tranche quotes (Equation (4)) depend only on the loss and not on default counting explicitly. We therefore assume:

$$\mathbb{E}_0[\bar{C}_t] := \frac{1}{1 - R}\,\mathbb{E}_0[\bar{L}_t], \qquad 0 \leq R < 1 - \mathbb{E}_0[\bar{L}_{T_b}] \qquad (10)$$

where the range of definition of the constant $R$ is taken in order to ensure that, at each time $t$, the expected value of the rescaled number of defaults is greater than or equal to the cumulated portfolio loss, and that both are smaller than or equal to one. Note that we avoid introducing an explicit dynamics for the recovery rate (see Brigo et al (2006a,b, 2007) for an initial discussion on recovery dynamics). Our $R$ here can be interpreted as a kind of average recovery rate.
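As a quick sanity check of (10): for an assumed terminal expected loss, the admissible recovery range and the implied expected default fraction follow directly. The numbers below are placeholders.

```python
# Constraint (10): 0 <= R < 1 - E0[L_bar_{T_b}] guarantees E0[C_bar_t] <= 1 for all t.
exp_loss_Tb = 0.05                  # assumed terminal expected pool loss (5%)
R = 0.30                            # candidate average recovery
assert 0.0 <= R < 1.0 - exp_loss_Tb, "R outside the arbitrage-free range of (10)"
exp_counts_Tb = exp_loss_Tb / (1.0 - R)
print(f"E0[C_bar] at T_b = {exp_counts_Tb:.4f}")   # 0.0714 <= 1, consistent
```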

    3.4 Detailed calibration procedure

The model parameters found using the calibration procedure are the amplitudes $\alpha_j \in \{m \in \mathbb{N} : m \leq M_0\}$, $j = 1, \ldots, n$, and the cumulated intensities $\Lambda_j(T)$, which are real non-decreasing piecewise linear functions in the tranche maturity.

The optimal values for the amplitudes are selected in the following way (a sketch of the resulting search loop is given after the objective function (11) below).

1) Fix the minimum jump size to $1/M_0$ by choosing the integer $M_0 > M > 0$.

2) Find the integer value for $\alpha_1$ by calibrating the cumulated intensity $\Lambda_1$ for each value of $\alpha_1$ in the range $[1, M_0]$, all other modes being set to zero. Calibration is obtained by minimizing the objective function (11) below. Finally, choose the $\alpha_1$ for which the calibration error is at its minimum when using the corresponding $\Lambda_1$. We call this chosen $\alpha_1$ the best integer value for $\alpha_1$.

3) Add the amplitude $\alpha_2$ and find its best integer value by calibrating the cumulated intensities $\Lambda_1$ and $\Lambda_2$, starting from the previous value for $\Lambda_1$ as a guess, for each value of $\alpha_2$ in the range $[1, M_0]$.

4) Repeat the previous step for $\alpha_i$ with $i = 3$ and so on, by calibrating the cumulated intensities $\Lambda_1, \ldots, \Lambda_i$, starting from the previously found $\Lambda_1, \ldots, \Lambda_{i-1}$ as an initial guess, until the calibration error is under a given threshold or until the intensity $\Lambda_i$ can be considered negligible.

5) Check a posteriori that the probability of having more than $M$ jumps is negligible and that the value of $R$ is within the arbitrage-free range given in (10).


TABLE 2 DJiTraxx index and tranche quotes in basis points on May 13, 2005, with bid-ask spreads in parentheses.

    Attachment-       3 years        5 years        7 years        10 years
    detachment
    Index             38 (4)         54 (1)         65 (3)         77 (2)
    Tranche
    0-3               2,060 (100)    4,262 (118)    5,421 (384)    6,489 (124)
    3-6               72 (10)        173 (68)       398 (40)       590 (20)
    6-9               28 (6)         57 (6)         141 (17)       188 (15)
    9-12              13 (2)         31 (5)         72 (20)        87 (15)
    12-22             3 (1)          21 (3)         42 (13)        60 (10)

Index and tranches are quoted through the periodic premium, while the equity tranche is quoted as an upfront premium (see Section 2).

The objective function $f$ to be minimized in the calibration is the squared sum of the errors shown by the model to recover the tranche and the index market quotes, weighted by market bid-ask spreads:

$$f(\alpha, \Lambda) = \sum_i \varepsilon_i^2, \qquad \varepsilon_i = \frac{x_i(\alpha, \Lambda) - x_i^{\mathrm{mid}}}{x_i^{\mathrm{bid}} - x_i^{\mathrm{ask}}} \qquad (11)$$

where the $x_i$ (with $i$ running over the market quote set) are the index values $S_0$ for DJiTraxx index quotes and either the periodic premiums $S^{A,B}_0$ or the upfront premium rates $U^{A,B}_0$ for the DJiTraxx tranche quotes.
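The amplitude search of Section 3.4 combined with the objective (11) can be sketched as the greedy loop below. The `model_quotes` function (mapping amplitudes and cumulated intensities to index and tranche quotes via (1), (4) and (5)) and the quotes container are placeholders, and for brevity a single cumulated intensity per mode is calibrated instead of the piecewise linear term structure across maturities used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def objective(Lambdas, alphas, quotes, model_quotes):
    """Objective (11): squared bid-ask-weighted errors over all index/tranche quotes."""
    x = model_quotes(alphas, Lambdas)                    # model spreads/upfronts (placeholder pricer)
    eps = (x - quotes["mid"]) / (quotes["bid"] - quotes["ask"])
    return np.sum(eps ** 2)

def calibrate_amplitudes(quotes, model_quotes, M0, max_modes=7, tol=1.0):
    """Greedy search of Section 3.4: add amplitudes one at a time, keep the best integer."""
    alphas, Lambdas = [], np.array([])
    for _ in range(max_modes):
        best = None
        for a in range(1, M0 + 1):                       # candidate new amplitude
            x0 = np.append(Lambdas, 0.01)                # warm start from previous fit
            res = minimize(objective, x0,
                           args=(alphas + [a], quotes, model_quotes),
                           bounds=[(0.0, None)] * len(x0))
            if best is None or res.fun < best[0]:
                best = (res.fun, a, res.x)
        err, a_star, Lambdas = best
        alphas.append(a_star)
        if err < tol or Lambdas[-1] < 1e-6:              # stop: good fit or negligible mode
            break
    return alphas, Lambdas, err
```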

    3.5 Calibration results

    The GPL model is calibrated to the market quotes observed weekly from May 6,

2005 to October 18, 2005. Following Albanese et al (2006), we take $R = 30\%$ as our reference value for the recovery rate in the DJiTraxx Europe market for spot and forward contracts. The quality of our calibration below is not altered if we select a value $R = 40\%$ resembling the recovery typically used in simplified quoting mechanisms in the market (see Brigo et al (2006a,b, 2007) for examples of this). We start with $M_0 = 200$, corresponding to a minimum loss jump size of 50bps.


TABLE 3 (a) Calibration error $\varepsilon_i$ in (11) (calculated with respect to the bid-ask spread) for tranches quoted on May 13, 2005; (b) cumulated intensities (integrated up to the tranche maturities) of the GPL model with $M_0 = 200$.

(a)

    Attachment-       3 years   5 years   7 years   10 years
    detachment
    Index             0.0       0.1       0.3       0.0
    Tranche
    0-3               0.0       0.1       0.2       0.2
    3-6               0.0       0.0       0.2       0.0
    6-9               0.0       0.0       0.3       0.1
    9-12              0.1       0.1       0.1       0.4
    12-22             0.0       0.0       0.2       0.3

(b)

    alpha   Lambda(T): 3 years   5 years   7 years   10 years
    1                  1.955     3.726     4.464     7.694
    3                  0.000     0.062     0.305     0.305
    8                  0.016     0.033     0.109     0.109
    12                 0.004     0.013     0.026     0.026
    19                 0.006     0.006     0.017     0.017
    72                 0.000     0.009     0.026     0.049
    185                0.000     0.002     0.002     0.008

Each row corresponds to a different Poisson component with jump amplitude $\alpha$. The recovery rate is 30%.

As a first example, consider the calibration date May 13, 2005 (see Table 2). In Table 3 we list the calibration results and the values of the calibrated parameters. The calibration errors $\varepsilon_i$ are very low for all maturities. Note that a calibration error smaller than one means that the difference between the market quote and the model price is smaller than the bid-ask spread.

Consider, as a second example, the calibration date October 11, 2005 (see Table 4). In Table 5 we list the calibration results and the values of the calibrated parameters. The calibration errors show that the 10-year equity tranche


TABLE 4 DJiTraxx index and tranche quotes in basis points on October 11, 2005, with bid-ask spreads in parentheses.

    Attachment-       3 years      5 years       7 years       10 years
    detachment
    Index             23 (2)       38 (1)        47 (1)        58 (1)
    Tranche
    0-3               762 (26)     3,137 (26)    4,862 (76)    5,862 (74)
    3-6               20 (10)      95 (1)        200 (3)       515 (10)
    6-9               7 (6)        28 (1)        43 (2)        100 (4)
    9-12              -            12 (2)        27 (4)        54 (5)
    12-22             -            7 (1)         13 (2)        23 (3)

    is not correctly priced. We find such mispricing in many calibration examples, in

    particular after October 2005.

For the minimum loss jump size $1/M_0$, besides 50bps, we try the values 2bps and 10bps, corresponding, respectively, to $M_0$ equal to 5,000 and 1,000. As can be seen from Table 6, the 10-year maturity tranches (in our experience the most difficult to calibrate) are stable through the three different loss sizes, suggesting that going below 50bps does not add much flexibility to the model. This is confirmed by further tests. In particular, the difference between the $M_0 = 1{,}000$ calibration and the $M_0 = 5{,}000$ calibration is always small. Furthermore, the behavior of the mean calibration error, ie, of the mean of the absolute values of the $\varepsilon_i$ across time and quotes, for the three different choices of $M_0$ is quite similar and within one bid-ask spread.

    We also note that as the minimum jump size decreases (granularity increases),

the loss distribution becomes noisier, due to the presence of small amplitudes. Furthermore, very small modes, appearing when the minimum jump size is as small as

    a few basis points, may violate the requirement that the loss process jumps fewer

than $M$ times (see Remark 3.1). We also try calibrations with $M_0$ less than 200, ie,

    with a minimum loss jump greater than 50bps. In this case the calibration error grows

    quickly. Indeed, the minimum jump size, in this case, becomes greater than the typical


TABLE 5 (a) Calibration error $\varepsilon_i$ in (11) (calculated with respect to the bid-ask spread) for tranches quoted on October 11, 2005; (b) cumulated intensities (integrated up to the tranche maturities) of the GPL model with $M_0 = 200$.

(a)

    Attachment-       3 years   5 years   7 years   10 years
    detachment
    Index             0.0       0.0       0.1       0.1
    Tranche
    0-3               0.1       0.1       1.2       2.1
    3-6               0.1       0.1       0.3       1.0
    6-9               0.0       0.1       0.3       0.9
    9-12              -         0.4       0.8       0.8
    12-22             -         0.0       0.0       0.0

(b)

    alpha   Lambda(T): 3 years   5 years   7 years   10 years
    1                  0.441     2.498     4.466     7.555
    2                  0.435     0.435     0.435     0.671
    11                 0.004     0.023     0.023     0.023
    22                 0.000     0.001     0.006     0.030
    29                 0.000     0.000     0.001     0.001
    32                 0.001     0.004     0.004     0.004
    192                0.000     0.001     0.005     0.011

The three-year maturity quotes lack two tranches.

TABLE 6 GPL calibration error for different minimum loss sizes $1/M_0$, with respect to the bid-ask spread, for 10-year tranches on October 11, 2005.

    Attachment-detachment    50bps   10bps   2bps
    0-3                      2.1     1.8     1.8
    3-6                      1.0     1.0     1.0
    6-9                      0.9     0.9     0.9
    9-12                     0.8     0.9     0.8
    12-22                    0.0     0.2     0.0

The recovery rate is 30%.


TABLE 7 Values of the Poisson amplitudes $\alpha/M_0$ for different values of the minimum loss jump $1/M_0$.

(a) 50bps

    Poisson amplitude (%):   1       2       3       4        5        6        7
    May 6, 2005              0.50    1.50    4.00    6.00     9.50     39.50    92.50
    September 2, 2005        0.50    1.00    4.00    5.50     12.50    39.00    100.00
    October 11, 2005         0.50    1.00    5.50    11.00    14.50    16.00    96.00

(b) 10bps

    Poisson amplitude (%):   1       2       3       4        5        6        7
    May 6, 2005              0.10    1.50    4.60    5.90     9.60     39.60    53.00
    August 5, 2005           0.20    1.10    1.40    8.10     11.30    49.00    62.40
    October 11, 2005         0.10    0.70    1.00    6.30     11.50    14.50    93.70

(c) 2bps

    Poisson amplitude (%):   1       2       3       4        5        6        7
    May 6, 2005              0.02    1.50    5.26    9.64     17.58    39.64    99.78
    August 12, 2005          0.38    1.06    1.14    7.38     12.24    41.34    99.80
    October 3, 2005          0.02    0.98    1.16    7.52     9.74     43.34    65.16
    October 11, 2005         0.16    0.68    1.00    6.30     10.98    14.46    94.90

Only the calibration dates between May 6, 2005 and October 18, 2005 where the $\alpha/M_0$ values change are listed.

    portfolio loss given when one name defaults. 50bps seems, then, to be a reasonable

    reference value.

    Also, the values of the Poisson amplitudes are quite stable across the calibration

    dates. Indeed, in six months we observe at most four changes in their values, as shown

    in Table 7.

The loss distribution implied by the GPL model is multimodal, and the probability mass moves toward larger loss values as the maturity increases. These features


FIGURE 2 Loss-distribution evolution of the GPL model with a minimum jump size of 50bps at all the quoted maturities up to 10 years, drawn as a continuous line. [Panel (a): loss range 0-0.10; panel (b): loss range 0.10-0.30; curves for the 3-, 5-, 7- and 10-year maturities.]

    are shared by different approaches. For instance, static models, such as the implied

default-rate distribution in Torresetti et al (2006c), suggest multimodal loss distributions, as mentioned in the introduction regarding the implied copula. The evolution

    of the implied loss distribution is shown in Figure 2.

    The dynamic credit correlation model of Albanese et al (2006) shows implied

    loss distributions whose modes tend to group as the maturity increases, leading to a


    distribution approaching normality. The GPL model reproduces this behavior (see,

    for example, Brigo et al (2006a,b)).

4 APPLICATIONS TO MORE RECENT DATA AND THE CRISIS

    In this section we check whether the critical features that we discussed regarding

    implied correlation and the subsequent elements coming from more advanced models

    are still present in-crisis after mid 2007. We will observe that the features are still

    present and are often amplified in the market after the beginning of the crisis.

    4.1 Customizing the generalized Poisson loss model

    to deal with sectors

    We now consider the GPL model to assess how well it performs in-crisis. We change

    the model formulation slightly in order to have a model that is more in line with the

    current market, while maintaining all the essential features of the modeling approach.

Compared with the model described in Section 3, we introduce the following modifications. We fix the jump amplitudes $\alpha_j$ a priori rather than calibrating them through the detailed calibration procedure seen in Section 3.4, and we associate different recoveries to different $\alpha_j$. Fixing the $\alpha_j$ before calibration will make the model less flexible but quicker to calibrate and possibly more stable. To fix the $\alpha_j$ a priori, we reason as follows. Fix the independent Poisson jump amplitudes to the levels just above each tranche detachment, when considering a 40% recovery.

For the DJiTraxx index, for example, this would be realized through jump amplitudes $a_i = \alpha_i/125$, where:

$$\alpha_5 = \mathrm{roundup}\!\left(\frac{125 \times 0.03}{1 - R}\right), \qquad \alpha_6 = \mathrm{roundup}\!\left(\frac{125 \times 0.06}{1 - R}\right)$$

$$\alpha_7 = \mathrm{roundup}\!\left(\frac{125 \times 0.09}{1 - R}\right), \qquad \alpha_8 = \mathrm{roundup}\!\left(\frac{125 \times 0.12}{1 - R}\right)$$

$$\alpha_9 = \mathrm{roundup}\!\left(\frac{125 \times 0.22}{1 - R}\right), \qquad \alpha_{10} = 125$$

and, in order to have more granularity, we add the sizes 1, 2, 3 and 4:

$$\alpha_1 = 1, \qquad \alpha_2 = 2, \qquad \alpha_3 = 3, \qquad \alpha_4 = 4$$

In total we have $n = 10$ jump amplitudes. We then modify the obtained sizes slightly in order to account for CDX attachments that are slightly different. Eventually, we obtain the set of amplitudes:

$$125\, a_i = \alpha_i \in \{1, 2, 3, 4, 7, 13, 19, 25, 46, 125\}$$
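A one-line check of this recipe, rounding each rescaled detachment up to the next integer number of names (recovery and detachment points as stated above):

```python
import math

R, M = 0.40, 125
detachments = [0.03, 0.06, 0.09, 0.12, 0.22]           # DJiTraxx detachment points
alphas = [1, 2, 3, 4] + [math.ceil(M * d / (1 - R)) for d in detachments] + [M]
print(alphas)   # [1, 2, 3, 4, 7, 13, 19, 25, 46, 125], matching the set above
```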


We want to associate a recovery of 40% to all of these amplitudes $\alpha_i$, except $\alpha_{10} = 125$, to which we want to associate a recovery of 0% in order to size the premium quoted by the market for super-senior tranches. We start by considering the default counting process which, by introducing the non-Armageddon indicator $I_t$, can be cast in the following form:

$$\bar{C}_t = 1 - I_t\,(1 - \bar{c}_t) \qquad (12)$$

with:

$$\bar{c}_t := \min\!\left(\sum_{i=1}^{n-1} a_i Z_i(t),\; 1\right), \qquad I_t := \mathbf{1}_{\{Z_n(t) = 0\}}$$

We refer to the component associated with $\alpha_n = \alpha_{10} = 125$ as the Armageddon mode, since whenever $Z_n = Z_{10}$ jumps, the whole pool defaults. We refer to the jump of $Z_n = Z_{10}$ as an Armageddon event. Indeed, whenever the Armageddon component $Z_n$ jumps for the first time, the default counting process $\bar{C}_t$ jumps to the entire pool size and every name in the pool has defaulted, with no more defaults being possible.

We then introduce the stopping time $\hat{\tau}$ as the minimum between the time of the Armageddon jump event and the time when the reduced pool without the Armageddon component has completely defaulted. This is also the time when the full pool has defaulted:

$$\hat{\tau} = \inf\left\{t : \sum_{i=1}^{n} a_i Z_i(t) \geq 1\right\}$$

We note that $I$ is 0 after the Armageddon event, and otherwise it is equal to 1, while $\bar{c}$ is equal to 1 after the reduced pool is over, so, after some algebra, we obtain:

$$\bar{C}_t = \mathbf{1}_{\{\hat{\tau} \leq t\}} + \bar{c}_t\,\mathbf{1}_{\{\hat{\tau} > t\}} \qquad (13)$$

Note that, as expected, $\bar{C}$ depends on $\bar{c}$ only at the terminal time $t$.

To derive the loss process with a zero recovery for the Armageddon event, we first calculate the evolution of the counting process:

$$\mathrm{d}\bar{C}_t = I_{t^-}\,\mathrm{d}\bar{c}_t - \mathrm{d}I_t\,(1 - \bar{c}_{t^-})$$

Here, notation such as $\mathrm{d}\bar{C}_t$ denotes the left increment $\mathrm{d}\bar{C}_t = \bar{C}_t - \bar{C}_{t^-}$, since we are considering right-continuous jump processes. Intuitively, this quantity is zero if $\bar{C}$ does not jump at $t$, whereas it is nonzero if $\bar{C}$ jumps at $t$. We then apply the recovery only to the terms coming from jumps that are not due to the Armageddon event:

$$\mathrm{d}\bar{L}_t = (1 - R)\,I_{t^-}\,\mathrm{d}\bar{c}_t - \mathrm{d}I_t\,(1 - \bar{c}_{t^-})$$


We can now integrate the previous equation to obtain the loss process. To manipulate the above equation we observe three things. Firstly, $I$ is zero after the Armageddon event, otherwise it is equal to one. Secondly, $\mathrm{d}I$ is non-zero only at the Armageddon event. Finally, $\bar{c}$ is equal to one and $\mathrm{d}\bar{c}$ is zero after the reduced pool is over:

$$\bar{L}_t = (1 - R\,\bar{c}_{\hat{\tau}})\,\mathbf{1}_{\{\hat{\tau} \leq t\}} + (1 - R)\,\bar{c}_t\,\mathbf{1}_{\{\hat{\tau} > t\}} \qquad (14)$$

Note that $\bar{L}$ now depends on $\bar{c}$ both at the full-pool exhaustion time $\hat{\tau}$ and at the terminal time $t$.

Remark 4.1 (Interpretation of the model) Whenever the Armageddon component $Z_n$ jumps for the first time, we will assume that the recovery rate associated with the remaining names defaulting in that instant will be zero. The pool loss, however, will not always jump to one, since there is the possibility that one or more names already defaulted before the Armageddon component $Z_n$ jumped, and that they defaulted with recovery rate $R$. If, at a given instant $t$, the whole pool defaults, ie, $\bar{C}_t = 1$, this may happen in one of two ways.

1) $Z_n$ jumped by $t$. In this case, the portfolio has been wiped out with the help of an Armageddon event. Note that, in this case, $\mathrm{d}\bar{C}_t = \mathrm{d}\bar{L}_t$ if $Z_n$ jumps at $t$. In fact, the recovery associated with the pool fraction defaulting in that instant will be equal to zero.

2) $Z_n$ has not jumped by $t$. In this case the portfolio has been wiped out, not with the help of an Armageddon event, but because of the defaults of smaller or larger sectors that do not comprise the whole pool. Note that, in this case, the loss is less than the whole notional, as all these defaults had recovery $R > 0$.

In this way, whenever $Z_n$ jumps at a time when the pool has not yet been wiped out, we can rest assured that the pool loss will be above $1 - R$. We do this because the market in 2008 quoted CDOs with prices assuming that the super-senior tranche would be impacted to a level that was impossible to reach with recoveries fixed at 40%. For example, there was a market for the DJiTraxx five-year 60-100% tranche on March 25, 2008 quoting a running bid spread of 24bps.

In the dynamic loss model, recovery can be made a function of the default rate $\bar{C}$ (or other solutions are possible; see Brigo et al (2006b) for more discussion). Here we use the above simple methodology to allow losses of the pool to penetrate beyond $1 - R$ and hence severely affect even the most senior tranches, in line with market quotations.
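Before turning to the distributional computation of the next section, a small Monte Carlo sketch of Equations (12)-(14) may help fix ideas: it draws the sector Poisson jumps on a time grid, flags the Armageddon mode and builds the default-count and loss paths. The intensities below are placeholders, not calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pool_path(alphas, lambdas, T, n_steps=60, R=0.40, M=125):
    """One path of (C_bar, L_bar) following Equations (12)-(14).
    The last mode is the Armageddon mode and carries zero recovery."""
    dt = T / n_steps
    a = np.array(alphas, dtype=float) / M                      # amplitudes as pool fractions
    jumps = rng.poisson(np.outer(np.full(n_steps, dt), lambdas))
    c_bar, armageddon = 0.0, False
    C_bar, L_bar = [], []
    for step in jumps:
        if not armageddon:
            c_bar = min(c_bar + float(step[:-1] @ a[:-1]), 1.0)   # reduced-pool count c_bar
            if step[-1] > 0:
                armageddon = True                                 # Armageddon event
        if armageddon:
            C_bar.append(1.0)                                     # whole pool defaulted (Eq. 12)
            L_bar.append(1.0 - R * c_bar)                         # Eq. (14): pre-event names recover R
        else:
            C_bar.append(c_bar)
            L_bar.append((1.0 - R) * c_bar)
    return np.array(C_bar), np.array(L_bar)

alphas = [1, 2, 3, 4, 7, 13, 19, 25, 46, 125]
lambdas = [0.04, 0.02, 0.01, 0.01, 0.008, 0.005, 0.003, 0.002, 0.001, 0.0005]   # placeholders
C_path, L_path = simulate_pool_path(alphas, lambdas, T=5.0)
print("final default fraction:", C_path[-1], "  final loss:", L_path[-1])
```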


    4.2 Numerical calculation of loss distribution

We know how to calculate the distribution of both $\bar{C}_t$ and $\bar{L}_t$. The distribution of the counting process may be directly calculated from Equation (12) as the distribution of a reduced GPL, ie, a GPL where the jump $Z_n$ is excluded, whose counting process will be $\bar{c}_t$.

The distribution of the loss process can be calculated starting from Equation (14). A simple approach involves explicitly calculating the forward Kolmogorov equation satisfied by the probability distribution:

$$p_{\bar{L}_t}(x) = \mathbb{Q}\{\bar{L}_t = x\}, \qquad \frac{\mathrm{d}}{\mathrm{d}t}\,p_{\bar{L}_t}(x) = \sum_{y} A_t(x,y)\,p_{\bar{L}_t}(y) \qquad (15)$$

where $x$ and $y$ are generic states of the loss process and where the generator matrix $A_t$ is given by:

$$A_t(x,y) := \lim_{\Delta t \to 0} \frac{\mathbb{Q}\{\bar{L}_{t+\Delta t} = x \mid \bar{L}_t = y\} - \mathbf{1}_{\{x = y\}}}{\Delta t}$$

This procedure is allowed since, at each point in time, starting only from the value of the loss process, it is possible to say whether the Armageddon event has happened, so we are able to reduce the loss process, which depends on the stopping time $\hat\tau$, to a Markov process. In order to do so, we consider the following states for the pool loss process divided by $M$:

$$\bar{L}_t \in \mathcal{A} \cup \mathcal{B}$$

where $\mathcal{A}$ is the set of states where the Armageddon event has not happened, and $\mathcal{B}$ is the set of states where the Armageddon event has happened. The two sets are defined as follows:

$$\mathcal{A} := \left\{0,\ (1-R)\frac{1}{M},\ (1-R)\frac{2}{M},\ \ldots,\ (1-R)\frac{M-1}{M},\ 1-R \right\}$$

$$\mathcal{B} := \left\{ (1-R)\frac{M-1}{M} + \frac{1}{M},\ \ldots,\ (1-R)\frac{1}{M} + \frac{M-1}{M},\ 1 \right\}$$

Note that the two sets are disjoint if $R > 0$, so if we know the value of $\bar{L}$ only at time $t$, we can state whether the Armageddon event has happened.

Example 4.2 Consider a pool of six names with the following GPL modes, with constant deterministic default intensities:

$$Z_t = Z_1(t) + 3\,Z_2(t) + 6\,Z_3(t), \qquad \lambda_1 = 0.010, \quad \lambda_2 = 0.020, \quad \lambda_3 = 0.003$$


TABLE 8 Generator matrix for a simple GPL process with three amplitudes, for a pool of six names with 40% recovery for each mode except the Armageddon mode, whose recovery is zero.

    x \ y      0     10     20     30     40     50     60    100
      0    -3.30   0.00   0.00   0.00   0.00   0.00   0.00   0.00
     10     1.00  -3.30   0.00   0.00   0.00   0.00   0.00   0.00
     20     0.00   1.00  -3.30   0.00   0.00   0.00   0.00   0.00
     30     2.00   0.00   1.00  -3.30   0.00   0.00   0.00   0.00
     40     0.00   2.00   0.00   1.00  -3.30   0.00   0.00   0.00
     50     0.00   0.00   2.00   0.00   1.00  -3.30   0.00   0.00
     60     0.00   0.00   0.00   2.00   2.00   3.00   0.00   0.00
     67     0.00   0.00   0.00   0.00   0.00   0.30   0.00   0.00
     73     0.00   0.00   0.00   0.00   0.30   0.00   0.00   0.00
     80     0.00   0.00   0.00   0.30   0.00   0.00   0.00   0.00
     87     0.00   0.00   0.30   0.00   0.00   0.00   0.00   0.00
     93     0.00   0.30   0.00   0.00   0.00   0.00   0.00   0.00
    100     0.30   0.00   0.00   0.00   0.00   0.00   0.00   0.00

Starting states (y) on columns, arrival states (x) on rows (see Equation (15)). See text for intensity and amplitude data. All values in the table are in percent.

where we have called $Z_3(t)$ the mode corresponding to the Armageddon event. We consider a recovery of 40% for the first two modes and a zero recovery for the last mode. The loss states are $\bar{L}_t \in \mathcal{A} \cup \mathcal{B}$ with:

$$\mathcal{A} := \{0\%,\ 10\%,\ 20\%,\ 30\%,\ 40\%,\ 50\%,\ 60\%\}$$
$$\mathcal{B} := \{67\%,\ 73\%,\ 80\%,\ 87\%,\ 93\%,\ 100\%\}$$

    We can directly calculate the generator matrix since the pool loss may only jump

    via the three GPL modes, and each mode only corresponds to one loss transition. We

    display the matrix entries in Table 8, with starting states on columns and arrival states

    on rows. Note that the main diagonal is calculated to ensure probability conservation

    through time.

If no default has happened by time $t = 0$, the boundary condition for our Kolmogorov equation is $p_{\bar{L}_0}(x) = \mathbf{1}_{\{x=0\}}$, so we can integrate Equation (15) by means of matrix exponentiation:

$$p_{\bar{L}_t}(x) = \sum_{y} \left[\exp\!\left(\int_0^t A_u\, du\right)\right]\!(x, y)\; p_{\bar{L}_0}(y) = \sum_{y} \left[\exp(tA)\right]\!(x, y)\; p_{\bar{L}_0}(y)$$

where the last step is due to the fact that, in this case, the generator matrix does not depend on time. Matrix exponentiation can be computed quickly with the Padé approximation (Golub and Van Loan (1983)), leading to a closed-form solution for the probability distribution $p_{\bar{L}_t}(x)$. This distribution can then be used in the calibration procedure.


FIGURE 3 Loss distribution evolution up to 10 years predicted by the GPL model for a pool of six names. [Three-dimensional plot; axes: years to maturity, attachment points (%) and loss (%).] 40% recovery for each mode except the Armageddon mode, whose recovery is zero. See text for intensity and amplitude data.

In Table 9 and in Figure 3 we show the resulting loss distribution for our example.
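As an illustration, the following sketch (assuming the mode intensities of Example 4.2, ie, 0.010 for the one-name mode, 0.020 for the three-name mode and 0.003 for the Armageddon mode, and using SciPy's Padé-based matrix exponential) rebuilds the generator of Table 8 and recovers the loss distribution of Table 9; the variable names are ours, not part of the model specification.

```python
import numpy as np
from scipy.linalg import expm

M, R = 6, 0.40
lam1, lam3, lamA = 0.010, 0.020, 0.003          # one-name, three-name and Armageddon intensities

# loss states: first the non-Armageddon states (j names defaulted with recovery R),
# then the Armageddon states (zero-recovery wipe-out after j earlier defaults)
states = [(1 - R) * j / M for j in range(M + 1)] + \
         [(1 - R) * j / M + (M - j) / M for j in range(M - 1, -1, -1)]
idx = {round(s, 6): i for i, s in enumerate(states)}

A = np.zeros((len(states), len(states)))        # generator: A[x, y] = rate of jumping y -> x
for j in range(M):                              # only states with defaults still to come can jump
    y = idx[round((1 - R) * j / M, 6)]
    A[idx[round((1 - R) * min(j + 1, M) / M, 6)], y] += lam1     # one further default
    A[idx[round((1 - R) * min(j + 3, M) / M, 6)], y] += lam3     # three further defaults (capped)
    A[idx[round((1 - R) * j / M + (M - j) / M, 6)], y] += lamA   # Armageddon, zero recovery
A -= np.diag(A.sum(axis=0))                     # diagonal makes every column sum to zero

p0 = np.zeros(len(states))
p0[0] = 1.0                                     # no defaults at t = 0
for t in (5.0, 10.0):
    pt = expm(t * A) @ p0                       # solution of the forward Kolmogorov equation
    print(t, np.round(100 * pt, 2))             # matches Table 9 up to rounding
```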

    4.3 Calibration results

Bearing this interpretation of the modes in mind, we selected the GPL amplitudes by choosing the independent Poisson jump amplitudes to be at the level just above each tranche detachment when using a 40% recovery. This led to the specific values of $\alpha_5, \ldots, \alpha_{10}$ that we introduced earlier in the paper. With an eye on the variety of shapes of the historically calibrated implied copula distributions, as summarized in Brigo et al (2010) and in Torresetti et al (2006c), we added four amplitudes (from one to four) corresponding to a small number of defaults.


TABLE 9 Loss distribution numerical values for the 5-year and 10-year maturity dates, predicted by the GPL model for a pool of six names.

    Loss     0 years (%)    5 years (%)    10 years (%)
      0%        100.00          84.79           71.89
     10%          0.00           4.24            7.19
     20%          0.00           0.11            0.36
     30%          0.00           8.48           14.39
     40%          0.00           0.42            1.44
     50%          0.00           0.01            0.07
     60%          0.00           0.46            1.72
     67%          0.00           0.00            0.00
     73%          0.00           0.00            0.02
     80%          0.00           0.07            0.25
     87%          0.00           0.00            0.00
     93%          0.00           0.04            0.12
    100%          0.00           1.38            2.55

40% recovery for each mode, except the Armageddon mode whose recovery is zero.

We now present the goodness of fit of this GPL specification through history. We measure the goodness of fit by calculating, for each date and tranche, the relative mispricing with respect to the quoted bid–ask spreads:

$$\mathrm{Mispr}^{\mathrm{Rel}}_{A,B} = \begin{cases} \dfrac{S^{A,B,\mathrm{theor}}_0 - S^{A,B,\mathrm{ask}}_0}{S^{A,B,\mathrm{ask}}_0} & \text{if } S^{A,B,\mathrm{theor}}_0 > S^{A,B,\mathrm{ask}}_0 \\[2ex] \dfrac{S^{A,B,\mathrm{theor}}_0 - S^{A,B,\mathrm{bid}}_0}{S^{A,B,\mathrm{bid}}_0} & \text{if } S^{A,B,\mathrm{theor}}_0 < S^{A,B,\mathrm{bid}}_0 \\[2ex] 0 & \text{otherwise} \end{cases}$$

where $S^{A,B,\mathrm{theor}}_0$ is the theoretical tranche spread as in Equation (4), in which, for the calculation of the expectations in both the numerator and the denominator, we take the loss distribution resulting from the calibrated GPL.
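A compact way to express this measure in code (a minimal sketch; the function and argument names are ours) is:

```python
def relative_mispricing(s_theor: float, s_bid: float, s_ask: float) -> float:
    """Signed relative distance of the model spread from the bid-ask band; zero inside it."""
    if s_theor > s_ask:
        return (s_theor - s_ask) / s_ask
    if s_theor < s_bid:
        return (s_theor - s_bid) / s_bid
    return 0.0
```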

In Figure 4 and Figure 5 we present the relative mispricing for all tranches, for all maturities and for both indexes throughout the sample (March 2005 to June 2009).

We note that, while the 5-year tranches could be repriced fairly well during the credit crisis for both the DJ iTraxx and the CDX, the 10-year tranche calibrations for both indexes became noticeably less precise following the collapse of Lehman Brothers. More recently, with the stabilization of credit markets, we again see fairly precise calibration results (ie, a relative mispricing in the 2–4% range) for all tranches, for both indexes and for both maturities.


FIGURE 4 Relative mispricing resulting from the GPL calibration: DJ iTraxx. [Two panels, (a) and (b); vertical axis: relative mispricing (%), ranging from -20% to 20%; horizontal axis: March 2005 to June 2009; series: 0–3%, 3–6%, 6–9%, 9–12%, 12–22% and 22–100% tranches and the index.]

To highlight where the problems in calibration come from, we have grouped mispricings across three different categories.

Instrument: we have grouped all tranches, independent of maturity and seniority, into one group, and we have put the remaining calibrated instruments, ie, the 5-year and 10-year indexes, into the other group.

Maturity: we have grouped all tranches and indexes into two groups according to their maturity (5 and 10 years).


FIGURE 5 Relative mispricing resulting from the GPL calibration: CDX. [Two panels, (a) and (b); vertical axis: relative mispricing (%), ranging from -20% to 20%; horizontal axis: March 2005 to June 2009; series: individual tranches and the index.]

Seniority: we have grouped all tranches (leaving out the indexes) into three categories depending on the seniority of the tranche in the capital structure.

1) Equity: the equity tranche for both the DJ iTraxx and the CDX indexes.

2) Mezzanine: comprising the two most junior tranches after the equity tranche. For the DJ iTraxx index this means the 3–6% and 6–9% tranches, whereas for the CDX it means the 3–7% and 7–10% tranches.

3) Senior: comprising the remaining, most senior tranches. For the DJ iTraxx this means the 9–12%, 12–22% and 22–100% tranches, whereas for the CDX it means the 10–15%, 15–30% and 30–100% tranches.


From Figure 6 and Figure 7 we note that the GPL model produced calibrations with a relative mispricing that was larger (though still of relatively small magnitude) in the period from June 2005 to October 2006. For both the DJ iTraxx and the CDX pools this mispricing could be ascribed to the 10-year tranches, in particular to the equity and mezzanine tranches.

We also note that, from October 2008 to June 2009 (ie, after the default of Lehman Brothers), the GPL calibration again resulted in a non-zero but still fairly contained relative mispricing which, in this case, could be ascribed to both the 5-year and 10-year tranches, independent of their positions in the capital structure (equity, mezzanine or senior).

    5 ADDING SINGLE-NAME AND CLUSTER DYNAMICS:

    THE GENERALIZED-POISSON CLUSTER LOSS MODEL

The GPL model introduces a simple mechanism to avoid repeated defaults by capping the default counting process at the number of names in the pool. However, this approach prevents us from extending the model to incorporate single names, since we limit the number of defaults irrespective of which name is defaulting.

In Brigo et al (2007), to overcome this issue, we introduced the GPCL model, starting from the standard CPS framework with repeated defaults and using it as an engine to build a new model for (correlated) single-name defaults and cluster defaults, as well as for the default counting process and the portfolio loss process. Two possible strategies (aside from the GPL capping function) were introduced to deal with single names.

We now review the GPCL strategy based on an adjustment at the single-name level to avoid repeated defaults, leading to (correlated) single-name default processes. This approach yields less transparent cluster dynamics in terms of the original cluster repeated-default processes, but it shows interesting properties at the single-name level. We refer the reader to Brigo et al (2007) for the strategy based directly on cluster dynamics.

    5.1 Common Poisson shock basic framework

In the CPS framework we start from the observation that a single name's default can be triggered by different events or factors. The occurrence of event/factor number $e$ is modeled as a jump of an independent Poisson process $N^e$, $e = 1, \ldots, m$. Each event can be triggered many times, $r = 1, 2, \ldots$, as jumps go on. The $r$th jump of $N^e$ triggers a default event for name $k$ with probability $p^e_{r,k}$. Note that a defaulted name $k$ may default again; we address this limitation later.

    The CPS framework may accommodate systemic and idiosyncratic factors in a

    natural way. Consider, for instance, the following two cases:


FIGURE 6 DJ iTraxx relative mispricing resulting from the GPL calibration, grouped by (a) maturity (5 and 10 years), (b) instrument type (index and tranches) and (c) seniority (equity, mezzanine and senior). [Three panels; vertical axis: relative mispricing (%); horizontal axis: March 2005 to June 2009.]


FIGURE 7 CDX relative mispricing resulting from the GPL calibration, grouped by (a) maturity (5 and 10 years), (b) instrument type (index and tranches) and (c) seniority (equity, mezzanine and senior). [Three panels; vertical axis: relative mispricing (%); horizontal axis: March 2005 to June 2009.]


1) $e$ is a totally systemic factor if $p^e_{r,1} = p^e_{r,2} = \cdots = p^e_{r,n} = 1$;

2) $e$ is a totally idiosyncratic factor if there exists a $k$ such that, for all $j \neq k$, we have $p^e_{r,k} = 1$ and $p^e_{r,j} = 0$.

We define the dynamics of the single-name default process $N_k$, jumping each time name $k$ defaults, as:

$$N_k(t) := \sum_{e=1}^{m} \sum_{r=1}^{N^e(t)} I^e_{r,k}$$

where $I^e_{r,k}$ is a Bernoulli variable with probability $\mathbb{Q}\{I^e_{r,k} = 1\} = p^e_{r,k}$. Note that the process $N_k$ itself turns out to be Poisson. Furthermore, by differentiating the above relationship, we obtain:

$$dN_k(t) = \sum_{e=1}^{m} I^e_{1+N^e(t^-),\,k}\; dN^e(t)$$

so that, if we consider two names $k$ and $h$, the respective default counting processes $N_k$ and $N_h$ are not independent, since their dynamics are explained by the same driving events $N^e(t)$.

The core of the CPS framework involves mapping the single-name default dynamics (which consist of the dependent Poisson processes $N_k$) into a multiname dynamics explained in terms of independent Poisson processes $\tilde{N}_s$, where $s$ is a subset (or cluster) of names of the pool, defined as follows:

$$\tilde{N}_s(t) = \sum_{e=1}^{m} \sum_{r=1}^{N^e(t)} \sum_{s' \supseteq s} (-1)^{|s'| - |s|} \prod_{k' \in s'} I^e_{r,k'}$$

where $|s|$ is the number of names in the cluster $s$. In a summation, $s \ni k$ means that we are adding up across all clusters $s$ containing $k$, $k \in s$ means that we are adding across all elements $k$ of cluster $s$, $|s| = j$ means that we are adding across all clusters of size $j$, and, finally, $s' \supseteq s$ means that we are adding up across all clusters $s'$ containing cluster $s$ as a subset.

The proof of the independence of $\tilde{N}_s$ for different subsets $s$ can be found in Lindskog and McNeil (2003). Note that a jump of the $\tilde{N}_s$ process means that all the names in the subset $s$, and only those names, have defaulted at the jump time. We denote by $\tilde{\lambda}_s$ the intensity of the Poisson process $\tilde{N}_s(t)$, and we assume it to be deterministic for the time being. We refer to Brigo et al (2007) for model extensions.

    5.2 Cluster processes as common Poisson shock building blocks

We do not need to remember the above construction. All that matters for the following developments are the independent cluster-default Poisson processes $\tilde{N}_s(t)$. These


can be taken as fundamental variables from which the (correlated) single-name defaults and the default counting processes follow. The single-name dynamics can be derived from these independent $\tilde{N}_s$ processes in the so-called fatal shock representation of the CPS framework:

$$N_k(t) = \sum_{s \ni k} \tilde{N}_s(t) \qquad \text{or} \qquad dN_k(t) = \sum_{s \ni k} d\tilde{N}_s(t) \qquad (16)$$

where the second equation is the same as the first, but in instantaneous jump form. We now introduce the process $Z_j(t)$, which describes the occurrence of the simultaneous default of any $j$ names whenever it jumps (with jump size 1):

$$Z_j(t) := \sum_{|s| = j} \tilde{N}_s(t) \qquad (17)$$

Note that each $Z_j(t)$, being the sum of independent Poisson processes, is itself Poisson. Furthermore, since the clusters corresponding to the different $Z_1, Z_2, \ldots, Z_M$ do not intersect, the $Z_j(t)$ are independent Poisson processes.

The multiname dynamics, that is, the default counting process $Z_t$ for the whole pool, can easily be derived by carefully adding up all the single-name contributions:

$$Z_t := \sum_{k=1}^{M} N_k(t) = \sum_{k=1}^{M} \sum_{s \ni k} \tilde{N}_s(t) = \sum_{k=1}^{M} \sum_{j=1}^{M} \sum_{s \ni k,\, |s| = j} \tilde{N}_s(t) = \sum_{j=1}^{M} j \sum_{|s| = j} \tilde{N}_s(t)$$

leading to the relationship that links the set of dependent single-name default processes $N_k$ with the set of independent and Poisson-distributed counting processes $Z_j$:

$$\sum_{k=1}^{M} N_k(t) = \sum_{j=1}^{M} j\, Z_j(t) =: Z_t \qquad (18)$$

Hence, the CPS framework offers us a way of consistently modeling the single-name processes along with the pool-counting process, taking into account the correlation structure of the pool, which remains specified within the definition of each cluster process $\tilde{N}_s$. Note, however, that the process $Z_t/M$ is not, properly speaking, the rescaled number of defaults $\bar{C}_t$, since the former can increase without limit while the latter is bounded in the $[0, 1]$ interval. We address this issue in the next section, along with the issue of avoiding repeated single-name defaults.
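To make the fatal-shock representation concrete, the following sketch (with a handful of hypothetical clusters and intensities that are not part of the paper's calibration) draws the time-$T$ counts of the independent cluster processes $\tilde{N}_s$ and aggregates them into $N_k$, $Z_j$ and $Z_T$ according to Equations (16)-(18); note that repeated defaults are still possible at this stage.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 6, 5.0
clusters = {                                    # cluster -> constant intensity (illustrative values)
    frozenset({1}): 0.02, frozenset({2}): 0.02, frozenset({3}): 0.02,
    frozenset({4, 5}): 0.01, frozenset({1, 2, 3}): 0.01,
    frozenset(range(1, M + 1)): 0.003,
}

# time-T count of each independent cluster process
n_jumps = {s: rng.poisson(lam * T) for s, lam in clusters.items()}

N = {k: sum(n for s, n in n_jumps.items() if k in s) for k in range(1, M + 1)}        # Eq. (16)
Z = {j: sum(n for s, n in n_jumps.items() if len(s) == j) for j in range(1, M + 1)}   # Eq. (17)
Z_T = sum(j * z for j, z in Z.items())                                                # Eq. (18)

assert Z_T == sum(N.values())                   # both sides of Equation (18) agree path by path
print(N, Z, Z_T)
```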

    5.3 The single-name adjusted approach

In order to avoid repeated defaults in the single-name dynamics, we can introduce constraints on the single-name dynamics ensuring that each single name defaults only once. Such constraints can be implemented by modifying Equation (16) in order to


allow for only one default. Given the same repeated cluster processes $\tilde{N}_s$ as before, we define the new single-name default processes $N^1_k$, replacing $N_k$, as solutions of the following modification of Equation (16) for the original $N_k$:

$$dN^1_k(t) := (1 - N^1_k(t^-)) \sum_{s \ni k} d\tilde{N}_s(t) = \sum_{s \ni k} d\tilde{N}_s(t) \prod_{s' \ni k} \mathbf{1}_{\{\tilde{N}_{s'}(t^-) = 0\}} \qquad (19)$$

Remark 5.1 (Interpretation) This equation amounts to saying that name $k$ jumps at a given time if some cluster $s$ containing $k$ jumps (ie, $\tilde{N}_s$ jumps) and if no cluster containing name $k$ has ever jumped in the past.

We can compute the new cluster defaults $\tilde{N}^1_s$, consistent with the single names $N^1_k$, as:

$$d\tilde{N}^1_s(t) = \prod_{j \in s} dN^1_j(t) \prod_{j \in s^c} (1 - dN^1_j(t)) \qquad (20)$$

where $s^c$ is the set of all names that do not belong to $s$.

Now we can use Equation (18), with the $N^1_k$ replacing the $N_k$, to calculate how the new counting processes $Z^1_j$ are to be defined in terms of the new single-name default dynamics:

$$\sum_{k=1}^{M} dN^1_k(t) = \sum_{k=1}^{M} (1 - N^1_k(t^-)) \sum_{s \ni k} d\tilde{N}_s(t) = \sum_{k=1}^{M} (1 - N^1_k(t^-)) \sum_{j=1}^{M} \sum_{s \ni k,\, |s| = j} d\tilde{N}_s(t)$$

$$= \sum_{j=1}^{M} \sum_{|s| = j} d\tilde{N}_s(t) \sum_{k \in s} (1 - N^1_k(t^-)) = \sum_{j=1}^{M} \sum_{|s| = j} d\tilde{N}_s(t) \sum_{k \in s} \prod_{s' \ni k} \mathbf{1}_{\{\tilde{N}_{s'}(t^-) = 0\}}$$

This expression should match:

$$dZ^1(t) := \sum_{j} j\, dZ^1_j(t)$$

so that the counting processes are to be defined as:

$$dZ^1_j(t) := \frac{1}{j} \sum_{|s| = j} d\tilde{N}_s(t) \sum_{k \in s} \prod_{s' \ni k} \mathbf{1}_{\{\tilde{N}_{s'}(t^-) = 0\}} \qquad (21)$$


The intensities of the above processes can be calculated directly in terms of the density of the process compensator. By direct calculation, we obtain:

$$h_{N^1_k}(t) = \prod_{s \ni k} \mathbf{1}_{\{\tilde{N}_s(t) = 0\}} \sum_{s \ni k} \tilde{\lambda}_s(t), \qquad h_{Z^1_j}(t) = \frac{1}{j} \sum_{|s| = j} \tilde{\lambda}_s(t) \sum_{k \in s} \prod_{s' \ni k} \mathbf{1}_{\{\tilde{N}_{s'}(t) = 0\}} \qquad (22)$$

where, in general, we denote by $h_X(t)$ the compensator density of a process $X$ at time $t$, referred to as the intensity of $X$, and where $\tilde{\lambda}_s$ is the intensity of the Poisson process $\tilde{N}_s$.

If we consider the repeated Poisson cluster-default building blocks $\tilde{N}_s$ to be exogenously given, the model $(N^1_k, \tilde{N}^1_s, Z^1_j)$ is a consistent way of simulating the single-name processes, the cluster processes and the pool-counting process from the point of view of avoiding repeated defaults. In particular, we obtain:

$$\bar{C}_t := \frac{1}{M} \sum_{k} N^1_k(t) = \frac{1}{M} Z^1_t \le 1$$

Note, however, that the definition of $N^1_k$ in (19), even if it avoids repeated defaults of single names, is not consistent with the spirit of the original repeated cluster dynamics. Consider the following example.

Example 5.2 (Single names versus clusters) Consider two clusters $s = \{1, 2, 3\}$ and $z = \{3, 4, 5, 6\}$. Assume that no names defaulted up to time $t$ except for cluster $z$: at a single past instant preceding $t$, names 3, 4, 5 and 6 (and only these names) defaulted together (ie, $\tilde{N}_z$ jumped at some past instant). Now suppose that, at time $t$, cluster $s$ jumps, ie, names 1, 2 and 3 (and only these names) default, so $\tilde{N}_s$ jumps for the first time. Does name 2 default at $t$?

According to our definition of $N^1_2$, the answer is yes, since no cluster containing name 2 has ever defaulted in the past. However, we have to be careful in interpreting what is happening at the cluster level. Indeed, clusters $z$ and $s$ cannot both default since, in that case, name 3 (which belongs to both clusters) would default twice. So we see that the actual cluster default dynamics of this approach, implicit in Equation (20), does not have a clear intuitive link with the processes $\tilde{N}_s$. This is why, in Brigo et al (2007), we present a second strategy to avoid repeated defaults that is also consistent at the cluster level.
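A small sketch of the single-name rule in Equation (19) (the helper name is ours) makes the example concrete: a name defaults at a cluster jump only if no cluster containing it has jumped before.

```python
def apply_cluster_jump(cluster, jumped_clusters, defaulted):
    """Names newly defaulting when `cluster` jumps, following the rule of Equation (19)."""
    new_defaults = {k for k in cluster if not any(k in s for s in jumped_clusters)}
    jumped_clusters.append(cluster)
    defaulted |= new_defaults
    return new_defaults

jumped_clusters, defaulted = [], set()
print(apply_cluster_jump(frozenset({3, 4, 5, 6}), jumped_clusters, defaulted))  # {3, 4, 5, 6}
print(apply_cluster_jump(frozenset({1, 2, 3}), jumped_clusters, defaulted))     # {1, 2}
# Name 2 defaults, as in Example 5.2, while name 3 (already hit by cluster z) does not default again.
```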

    5.4 Homogeneous pool limit

To simplify the parameters and the combinatorial complexity, we may assume that the cluster intensities $\tilde{\lambda}_s$ depend only on the cluster size $|s| = j$. Then it is possible to


directly calculate the intensity of the pool-counting process $C = Z^1$ as:

$$h_{Z^1}(t) = \left(1 - \frac{Z^1_t}{M}\right) \sum_{j} j \binom{M}{j} \tilde{\lambda}_j$$

where $\tilde{\lambda}_j$ is the common intensity of clusters of size $j$.
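One way to check this expression from Equation (22), under the homogeneity assumption, is the following short count of the surviving-name contributions:

$$\sum_{j} j\, h_{Z^1_j}(t) = \sum_{j} \tilde{\lambda}_j \sum_{|s| = j} \sum_{k \in s} \prod_{s' \ni k} \mathbf{1}_{\{\tilde{N}_{s'}(t) = 0\}} = \sum_{j} \tilde{\lambda}_j\, (M - Z^1_t) \binom{M-1}{j-1}$$

since each not-yet-defaulted name $k$ belongs to $\binom{M-1}{j-1}$ clusters of size $j$; using $M \binom{M-1}{j-1} = j \binom{M}{j}$ then gives the stated formula.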

We see that the pool-counting process intensity $h_{Z^1}$ is a linear function of the counting process $C = Z^1$ itself, as would be expected from general arguments for a pool of independent names (again with homogeneous intensities). In such a pool, the default of one name does not affect the default intensity of the other names, and the pool intensity is the common homogeneous intensity multiplied by the number of outstanding names. Each new default simply diminishes the pool intensity by one common intensity value, and the pool intensity is always proportional to the number (fraction) of outstanding names, $1 - \bar{C}$.

    5.5 Credit default swap calibration

In this section we tackle the single-name CDS calibration using the above strategy for avoiding repeated defaults. In such a strategy, the compensator for the single default event of name $k$ is given by Equation (22). This leads to the following expression for the default leg of a CDS on name $k$ with deterministic recovery $R$, paying protection up to maturity $T$, under deterministic risk-free interest rates, and assuming that the cluster intensities $\tilde{\lambda}_s$ are deterministic:

$$\mathbb{E}_0[\mathrm{defleg}(0)] = \mathrm{LGD} \int_0^T P(0, u)\, \mathbb{E}_0[dN^1_k(u)] = \mathrm{LGD} \int_0^T P(0, u)\, \mathbb{E}_0[h_{N^1_k}(u)]\, du$$

$$= \mathrm{LGD} \int_0^T P(0, u) \prod_{s \ni k} \mathbb{E}_0[\mathbf{1}_{\{\tilde{N}_s(u) = 0\}}] \sum_{s \ni k} \tilde{\lambda}_s(u)\, du = \mathrm{LGD} \int_0^T P(0, u) \prod_{s \ni k} \mathrm{e}^{-\int_0^u \tilde{\lambda}_s(v)\, dv} \sum_{s \ni k} \tilde{\lambda}_s(u)\, du$$

$$= \mathrm{LGD} \int_0^T P(0, u) \exp\!\left(-\sum_{s \ni k} \int_0^u \tilde{\lambda}_s(v)\, dv\right) \sum_{s \ni k} \tilde{\lambda}_s(u)\, du$$

which is the same as the default leg in a standard deterministic-intensity single-name model for the CDS when the intensity of name $k$ at time $t$ is given by:

$$h^{\mathrm{CDS}}_k(t) := \sum_{s \ni k} \tilde{\lambda}_s(t)$$


In particular, if the cluster intensities $\tilde{\lambda}_s$ are also constant in time, besides being deterministic, and if we consider CDSs with continuous payments in the default leg, we can see that single-name consistency with CDSs on name $k$ having fair running spread $S^k_{\mathrm{CDS}}(0, T)$ for maturity $T$ is achieved with:

$$\sum_{s \ni k} \tilde{\lambda}_s = \frac{S^k_{\mathrm{CDS}}(0, T)}{\mathrm{LGD}}$$

Imposing this (linear) constraint on the $\tilde{\lambda}_s$ for all values of $k$ in the CDO calibration ensures single-name consistency with CDSs. Of course, there is no need to assume constant cluster intensities, so the method also works when one tries to fit the term structure of both CDSs and CDOs: it is sufficient to go through the obvious generalization to piecewise-constant cluster intensities.

This highlights a great advantage of this capping strategy for avoiding repeated defaults: it makes single-name CDS calibration very easy to achieve.
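As a small sketch of this constraint (the cluster intensities, spread and LGD below are illustrative placeholders, not calibrated values), the model CDS intensity of a name is the sum of the intensities of the clusters containing it, and matching a market running spread fixes that sum at $S^k_{\mathrm{CDS}}(0,T)/\mathrm{LGD}$:

```python
def cds_intensity(k, cluster_intensities):
    """Model CDS intensity of name k: sum of intensities of clusters containing k."""
    return sum(lam for s, lam in cluster_intensities.items() if k in s)

cluster_intensities = {frozenset({1}): 0.0040,
                       frozenset({1, 2, 3}): 0.0010,
                       frozenset(range(1, 126)): 0.0002}   # 125-name "Armageddon" cluster
S_k, LGD = 0.00312, 0.60                                   # 31.2bp running spread, 60% LGD
print(cds_intensity(1, cluster_intensities), S_k / LGD)    # the two coincide, so the constraint holds
```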

    5.6 Customizing the generalized Poisson cluster loss model to

    deal with sectors

As was previously done for the GPL model, we can split the default counting process and the loss process to factor out the Armageddon event, so that a zero recovery can be assigned to that mode, allowing a better calibration of the super-senior tranches.

We consider the rescaled pool-counting process $\bar{C}_t$ and the corresponding reduced process $\bar{c}^1_t$, which can be defined as:

$$\bar{C}_t := \frac{1}{M} \sum_{j=1}^{M} j\, Z^1_j(t), \qquad \bar{c}^1_t := \frac{1}{M-1} \sum_{j=1}^{M-1} j\, Z^1_j(t)$$

Starting from the counting process, we obtain:

$$d\bar{C}_t = \frac{1}{M} \sum_{j=1}^{M} j\, dZ^1_j(t) = \frac{1}{M} \sum_{j=1}^{M} \sum_{|s| = j} d\tilde{N}_s(t) \sum_{k \in s} \prod_{s' \ni k} \mathbf{1}_{\{\tilde{N}_{s'}(t^-) = 0\}}$$


$$= \mathbf{1}_{\{\tilde{N}_\Omega(t^-) = 0\}}\, \frac{1}{M} \sum_{j=1}^{M-1} \sum_{|s| = j} d\tilde{N}_s(t) \sum_{k \in s} \prod_{s' \ni k,\, s' \neq \Omega} \mathbf{1}_{\{\tilde{N}_{s'}(t^-) = 0\}} + \mathbf{1}_{\{\tilde{N}_\Omega(t^-) = 0\}}\, d\tilde{N}_\Omega(t)\, \frac{1}{M} \sum_{k \in \Omega} \prod_{s' \ni k,\, s' \neq \Omega} \mathbf{1}_{\{\tilde{N}_{s'}(t^-) = 0\}}$$

$$= \mathbf{1}_{\{\tilde{N}_\Omega(t^-) = 0\}}\, d\bar{c}^1_t + \mathbf{1}_{\{\tilde{N}_\Omega(t^-) = 0\}}\, d\tilde{N}_\Omega(t)\, (1 - \bar{c}^1_{t^-})$$

and, by introducing the non-Armageddon indicator $I^1_t$, we obtain:

$$d\bar{C}_t = I^1_{t^-}\, d\bar{c}^1_t - (1 - \bar{c}^1_{t^-})\, I^1_{t^-}\, dI^1_t, \qquad I^1_t := \mathbf{1}_{\{Z^1_M(t) = 0\}} = \mathbf{1}_{\{\tilde{N}_\Omega(t) = 0\}}$$

where $\Omega$ is the set of all possible names.

As for the GPL model, we then introduce the stopping time $\hat\tau$ as the minimum of the Armageddon event time and the time at which the reduced pool is exhausted: namely, the time at which the full pool is exhausted. Now we can integrate in order to obtain the counting process, observing the following: firstly, $I^1$ is zero after the Armageddon event and equal to one otherwise; secondly, $dI^1$ is non-zero only at the Armageddon event; and finally, $\bar{c}^1$ is equal to one and $d\bar{c}^1$ is zero after the reduced pool is exhausted. After some algebra, we obtain:

$$\bar{C}_t = \mathbf{1}_{\{\hat\tau \le t\}} + \bar{c}^1_t\, \mathbf{1}_{\{\hat\tau > t\}}$$

Note that $\bar{C}_t$ depends on the value of $\bar{c}^1$ only at time $t$, and that the equation relating the two counting processes has the same form as the corresponding Equation (13) of the GPL model.

The same line of reasoning used for the GPL model may be repeated for the GPCL loss process as well, by considering a zero recovery for the Armageddon event. We obtain:

$$d\bar{L}_t = (1 - R)\, I^1_{t^-}\, d\bar{c}^1_t - (1 - \bar{c}^1_{t^-})\, I^1_{t^-}\, dI^1_t$$

and, by integrating:

$$\bar{L}_t = (1 - R\,\bar{c}^1_{\hat\tau})\,\mathbf{1}_{\{\hat\tau \le t\}} + (1 - R)\,\bar{c}^1_t\,\mathbf{1}_{\{\hat\tau > t\}}$$

we obtain an equation relating the loss of the pool to the counting process of a reduced GPCL model, which has the same form as the corresponding Equation (14) of the GPL model, allowing the same numerical techniques to be used to calculate the loss distribution.

    6 CONCLUSIONS AND FUTURE DEVELOPMENTS

    In this paper we have considered CDOs. We analyzed their valuation (both pre-crisis

    and in-crisis) using our pre-crisis GPL model, an arbitrage-free dynamic loss model


    capable of calibrating all the tranches for all the maturities simultaneously. We have

    also confirmed the CDO model-implied loss-distribution features that we had already

    highlighted in Brigo et al (2006a): we found a multimodal loss probability distribution

    that can also be obtained independently within the implied copula framework. This

behavior can be associated with the default of clusters of names within the CDO pool, which, in turn, may be interpreted as the default of sectors of the economy. We repe