"CREDIT RISK UNDER THE NEW BASEL CAPITAL ACCORD: A METHODOLOGICAL PROPOSAL". - Pampillón - 2004



CREDIT RISK UNDER THE NEW BASEL CAPITAL ACCORD: A METHODOLOGICAL PROPOSAL.

By: Cristina Ruza and Fernando Pampillón.

    1. INTRODUCTION

Perhaps no industry has been subject to such a high degree of reconfiguration during the last three decades as banking, and it has focused the attention of academics and supervisors since it is continuously facing new challenges. In this context, the necessity of an appropriate prudential regulation to foster banking solvency has been suggested as a key area of concern. After a period of intense preliminary work, in June 2004 the Basel Committee published the text of the New Capital Accord, or Basel II. In spite of the several changes that have been introduced in the text, we will primarily focus on the new treatment of credit risk, because banks are now allowed to use their own internal rating systems to classify their customers according to their real risk profile.

On these grounds, we will further the analysis of the scope and possibilities of those internal rating systems, which are a relevant and contemporary issue.

2. METHODOLOGY OF STUDY: ARTIFICIAL NEURAL NETWORKS.

The recent financial trends are playing a major role in reshaping the operations and structure of financial institutions, and in turn the nature of credit risk. Hence, the development of new tools for systematically assessing credit risk has become a major priority for many banking institutions1. In this study we will apply artificial neural networks (NN) to assess the credit risk of a saving bank's customers, because they have obtained very good results as compared to other linear classification techniques.

1 Two decades ago lending institutions resisted relying upon credit risk predictions based on numerical formulas because of a reluctance to replace the expertise of loan officers and the absence of credit managers



2.1. Concept and elements of neural networks.

A neural network is a computerised system that tries to emulate the way in which information is processed by the biological neurons of the brain. The basic element of a network's architecture is the "artificial neuron": a simple calculating device which, from an input vector of external information, provides a unique response. Figure 1 shows the different elements of a generic artificial neuron:

1- A set of inputs X_j(t).

2- A set of synapses or connecting links attached to neuron i; each weight w_ij indicates the strength at the input of the neuron and controls the strength of the incoming signal between a sending (presynaptic) neuron j and a receiving (postsynaptic) neuron i.

3- An adder that gives the value of the postsynaptic signal depending on the weights and inputs (the most usual one is the weighted sum of inputs and synaptic weights).

4- An activation or transfer function that provides the current activity level of neuron i depending upon its previous activity level and its postsynaptic signal. This acts as a squashing function, since it limits the amplitude of the postsynaptic signal to some limited value.

5- An output function that gives the actual response of neuron i depending on its activity level.
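The five elements above can be condensed into a few lines of Python. The function below is an illustrative toy, not code from the study; it assumes a weighted-sum adder and a sigmoid transfer function, the most common choices.

```python
import math

def neuron_output(inputs, weights, threshold=0.0):
    """A generic artificial neuron: a weighted-sum adder followed by a
    sigmoid activation that squashes the postsynaptic signal into (0, 1)."""
    # Adder: postsynaptic signal from inputs X_j and synaptic weights w_ij
    h = sum(w * x for w, x in zip(weights, inputs)) - threshold
    # Activation/transfer function: limits the amplitude of the response
    return 1.0 / (1.0 + math.exp(-h))
```

For instance, `neuron_output([1.0, 0.0, 1.0], [0.5, -0.3, 0.2])` squashes the weighted sum 0.7 into a response strictly between 0 and 1.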

schooled in quantitative techniques. Nowadays the reality is rather different and most financial institutions recognise their applicability to the credit-granting process.



Figure 1. Generic model of an artificial neuron. Source: Adapted from Rumelhart et al. (1986).

[The figure shows the inputs X_j entering through the synapses w_ij, the adder producing the postsynaptic signal h_i = f(x_j; w_ij), the transfer function a_i = f(h_i), and the output function y_i = f(a_i), which gives the output of the neuron.]

However, a NN is not a single artificial neuron but a system of layers of interconnected neurons, each of which is connected with the previous and the following ones (when they exist). Therefore, the whole architecture of a NN is characterised by features such as the number of layers, the number of nodes within each layer (depending on the kind of inputs and the expected response), and the direction of information propagation2.

One of the main properties of a NN is its capability of learning from its environment and storing associations in order to improve its performance. The learning procedure is an iterative process consisting of modifying the connection strengths in a manner that emulates rule-like behaviour. We should bear in mind that those adjustments make the NN more knowledgeable after each iteration, up to the point when the learning process is interrupted according to some optimisation criterion.

In the next section we will analyse the type of NN that will be applied in this study: the multilayer perceptron or back-propagation network.



2.2. The back-propagation neural network

A back-propagation neural network is a feedforward multilayer model whose principal feature is that it uses a back-propagation algorithm as a mechanism of error correction within a supervised scheme. Here, the input vector proceeds through the network in the forward pass, emerging at the output end as the "actual response". These resulting values are compared to the target output values, so that if they agree no action is taken, and if they differ an error signal is calculated. In the backward pass, this error signal propagates backward through the network in such a way that all connections are modified following the error-correction rule, so as to make the "actual response" move closer to the "target or desired response" in subsequent iterations.

Figure 2 depicts the network architecture, organised into one input layer of source nodes, one or more hidden layers of computation nodes and, finally, one output or exit layer of computation nodes as well. The introduction of a hidden layer gives degrees of freedom to the net and permits it to capture more complex features of the environment to model. Going further, the hidden layer introduces non-linearity into the system, since the transfer function most commonly used is a sigmoid: continuous, differentiable and exhibiting asymptotic properties.

2 The analysis of different types of NN exceeds the aim and scope of this study. For more information see Martín del Brío and Sanz (1997).



Figure 2. Multilayer perceptron and the transfer function of the neuron.

[The figure shows an input layer, a hidden layer and an exit layer, together with the sigmoid transfer function f(x).]

The learning procedure consists of repeatedly presenting related input-output sets so that the back-propagation algorithm can incrementally adjust the connection weights of each neuron. This is strictly an optimisation problem, and the cost function is defined in terms of the mean-square-error criterion.

There are two types of connecting weights to adjust: 1) those connecting the input layer to the hidden layer (w_ij), and 2) those connecting the hidden layer to the output layer (w_jk). To minimise the sum of squared errors the method of "gradient descent" will be used. This method tries to identify in the multidimensional error surface (with the shape of mountains and valleys)3 the direction of steepest descent (i.e. where the error reduces most abruptly) by making adjustments to the connecting weights proportional to the product of the error signal and the input signal (see Figure 3).


3 When dealing with non-linear transfer functions the error surface has a global minimum and perhaps some local minima to which the algorithm could converge.
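As a minimal, self-contained illustration of the steepest-descent idea (not code from the paper), the snippet below descends a one-dimensional quadratic error surface by repeatedly moving against the gradient; the step size `eta` plays the role of the learning rate.

```python
def gradient_descent(grad, w0, eta=0.1, steps=100):
    """Steepest descent: repeatedly adjust w against the gradient of the error."""
    w = w0
    for _ in range(steps):
        w = w - eta * grad(w)  # move in the direction of steepest descent
    return w

# Error surface E(w) = (w - 3)^2, whose (global) minimum is at w = 3
w_min = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

On a non-linear surface such as the one in Figure 3, the same iteration may instead settle in a local minimum, which is exactly the caveat of footnote 3.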


    Figure 3. Error surface.

    Source: Own elaboration.

The back-propagation algorithm starts from an arbitrary point of the error surface (the initially assigned synaptic weights) and moves down successively toward a minimum point.

The error signal is the sum of squared errors over all neurons in the output layer and is defined by:

ECM = (1/2) \sum_{k=1}^{K} (Y_k^d - Y_k)^2        (1)

where Y_k^d represents the desired response at output neuron k, and Y_k represents its actual response (or fact).

The formulae for adjusting the connections are obtained by first differentiating with respect to the weights connecting the output layer to the hidden layer (w_jk), and thereafter with respect to the weights connecting the hidden layer to the input layer (w_ij). On the first stage we get4:

\partial ECM / \partial w_jk = - \sum_{k=1}^{K} (Y_k^d - Y_k) \, \partial Y_k / \partial w_jk        (2)

\partial ECM / \partial w_jk = - \sum_{k=1}^{K} (Y_k^d - Y_k) (\partial Y_k / \partial N_sk) (\partial N_sk / \partial w_jk)        (3)

\partial ECM / \partial w_jk = - (Y_k^d - Y_k) (\partial Y_k / \partial N_sk) Z_j        (4)

where N_sk denotes the postsynaptic signal of output neuron k and Z_j the output of hidden neuron j.

Once the error variation has been calculated, the connecting weights will be updated according to the "delta rule", using a sigmoid function (with range from 0 to +1) as the output function:

w_jk(t+1) = w_jk(t) + \eta (Y_k^d - Y_k) \frac{e^{-N_sk}}{(1 + e^{-N_sk})^2} Z_j        (5)

where \eta is the learning-rate5.

    On the second stage the "chain rule" will be applied for modifying the synaptic

    weights connecting the input layer to the hidden layer as follows:

4 Following Haykin (1994: 145), "the gradient \partial ECM / \partial w_jk represents a sensitivity factor determining the direction of search in weight space for the synaptic weight w_jk".

5 The introduction of a learning rate will impact the performance of the back-propagation algorithm and the rate of convergence to a stable solution.



\partial ECM / \partial w_ij = (\partial ECM / \partial Y_k) (\partial Y_k / \partial w_ij)

\partial ECM / \partial w_ij = - \sum_{k=1}^{K} (Y_k^d - Y_k) (\partial Y_k / \partial N_sk) (\partial N_sk / \partial Z_j) (\partial Z_j / \partial N_oj) (\partial N_oj / \partial w_ij)

\partial ECM / \partial w_ij = - \sum_{k=1}^{K} (Y_k^d - Y_k) (\partial Y_k / \partial N_sk) \, w_jk \, (\partial Z_j / \partial N_oj) X_i        (6)

where N_oj denotes the postsynaptic signal of hidden neuron j.

Proceeding as previously, the adjustments to be made to w_ij are:

w_ij(t+1) = w_ij(t) + \eta \left[ \sum_{k=1}^{K} (Y_k^d - Y_k) (\partial Y_k / \partial N_sk) \, w_jk \right] (\partial Z_j / \partial N_oj) X_i        (7)
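The weight updates of equations (5) and (7) can be condensed into a single training step. The NumPy sketch below is our own illustrative reconstruction, not the authors' code: it uses the sigmoid identity f'(N) = Y(1 - Y) for the derivative terms, and the matrix shapes, variable names and learning rate are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(X, Yd, W_ih, W_ho, eta=0.5):
    """One forward and backward pass for a single-hidden-layer network.
    X: input vector, Yd: desired responses, W_ih/W_ho: weight matrices."""
    # Forward pass
    Z = sigmoid(W_ih @ X)                        # hidden outputs Z_j
    Y = sigmoid(W_ho @ Z)                        # actual responses Y_k
    # Output error term: (Yd - Y) times the sigmoid derivative Y(1 - Y)
    delta_out = (Yd - Y) * Y * (1.0 - Y)
    # Hidden error term, propagated backward through w_jk (chain rule)
    delta_hid = (W_ho.T @ delta_out) * Z * (1.0 - Z)
    # Delta-rule updates: proportional to error signal times input signal
    W_ho = W_ho + eta * np.outer(delta_out, Z)   # cf. equation (5)
    W_ih = W_ih + eta * np.outer(delta_hid, X)   # cf. equation (7)
    return W_ih, W_ho
```

Iterating `backprop_step` on a training pattern drives the actual response toward the desired one, which is precisely the forward/backward scheme described above.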

Among the different procedures for interrupting the training process of the network, in this study we have applied cross-validation. It simply consists of separately calculating both the "learning error" and the "generalisation error". As can be appreciated in Figure 4, we can reduce the learning error to almost zero by increasing the number of iterations. Then, after each iteration we apply the new connecting weights to the



generalisation set of examples until the point of minimum generalisation error, when the learning process is interrupted (see Figure 4 below). Once this point is reached we may let the network deal with the environment by itself, which means that the network will thereafter operate in an unsupervised fashion.
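The stopping rule just described can be phrased as a small driver loop. The helper below is a generic sketch (the callables `update` and `gen_err` are our own names, not the paper's): it tracks the iteration with the lowest generalisation error and interrupts training once no improvement has been seen for a while.

```python
def train_with_early_stopping(update, gen_err, max_iter=5000, patience=200):
    """Interrupt learning at the minimum of the generalisation error,
    as in the cross-validation criterion of Figure 4."""
    best_err, best_iter = float("inf"), 0
    for t in range(1, max_iter + 1):
        update()                        # one weight-adjustment iteration
        e = gen_err()                   # error on the generalisation set
        if e < best_err:
            best_err, best_iter = e, t  # new minimum: remember it
        elif t - best_iter >= patience:
            break                       # no improvement for a while: stop
    return best_iter, best_err
```

In a full implementation one would also store a copy of the weights at `best_iter`, since those are the ones applied thereafter.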

Figure 4. Learning error and generalisation error. Source: Own elaboration.

[The figure plots the learning error and the generalisation error against the number of iterations N; the learning error decreases steadily, while the generalisation error reaches a minimum at which training is interrupted.]

Lastly, it is necessary to make clear that the back-propagation algorithm does not always reach the global minimum of the error surface, but a local minimum instead. Consequently, in spite of any improvement of the learning process (defining an adaptive learning rate, introducing a momentum term6 and so forth) we cannot be sure whether that point is a local or a global minimum7.

3. EMPIRICAL APPLICATION.

For carrying out an empirical application of credit risk assessment, a Spanish saving bank has provided us with information about its credits to small and medium-sized enterprises covering the period from December 1995 up to December 2002. The size of the

6 The momentum term is introduced in order to increase the algorithm's rate of learning and to avoid the danger of instability. Accordingly, the algorithm will accelerate descent in steady downhill directions, while having a stabilising effect when successive changes in weights present different signs.



initial sample comprises information on 4.229 good customers and 537 delinquent customers.

With the purpose of appraising the credit quality of these customers we have followed the criteria of Hale (1983), Tomás et al. (2002) and Checkley (2003), among others. The financial ratios selected as good predictors of credit risk are:

    1. - Capital borrowing = Long-term liabilities / Equity

2. - Interest coverage = Profit before taxes / Financial expenses

    3. - Return on equity = Net profit / Equity

    4. - Profitability = Net profit / Total assets

    5. - Liquidity = (Inventory + Receivables + Cash assets) / Long-term liabilities

    6. - Loan repayment capacity = (Net profit + Charge-offs) / Short-term liabilities

    7. - Fixed assets turnover = Sales / Fixed assets

8. - Working assets turnover = Sales / Working assets

    9. - Analysis of net interest income = Net interest income / Sales

    10. - Self-financing capacity = (Net profit + Charge-offs) / Total assets

    11. - Total sales revenue = Sales revenue / Total assets
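As an illustration only (the field names of the input dictionary are ours, not the paper's), the first four predictors can be computed from financial statement items as follows:

```python
def financial_ratios(s):
    """Compute a few of the eleven predictors from statement items;
    `s` maps (hypothetical) item names to their monetary amounts."""
    return {
        "capital_borrowing": s["long_term_liabilities"] / s["equity"],
        "interest_coverage": s["profit_before_taxes"] / s["financial_expenses"],
        "return_on_equity": s["net_profit"] / s["equity"],
        "profitability": s["net_profit"] / s["total_assets"],
    }
```

The remaining seven ratios follow the same pattern of dividing one statement item (or a sum of items) by another.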

The information has been divided into different samples following a chronological criterion. For the group of good customers we have used the number of years from the financing date, whereas for delinquent customers the criterion is the number of years in advance of the date of delinquency.

Afterwards, an exploratory analysis of the financial ratios was carried out, and we found evidence of a high dispersion of the data as a consequence of the presence of outliers in the sample. Because of that, the financial ratios have been codified according to the following ranges (Table 1).

7 Sometimes the learning algorithm gets trapped at a local minimum and is unable to reach the global minimum. This problem is known as premature saturation of neurons or the "flat spot effect".



TABLE 1. VARIABLES RANGES.

Ratio (variable code) | Very good (1) | Good (2) | Bad (3) | Very bad (4)
Capital borrowing | 0 - 0,50 | 0,50 - 1,25 | 1,25 - 2,50 | + 2,50
Interest coverage | > 10 | 5 - 10 | 2 - 5 | < 2
Return on equity | > 20% | 8 - 20% | 2 - 8% | < 2%
Profitability | > 5% | 2 - 5% | 0,5 - 2% | < 0,5%
Liquidity | > 1,5 | 0,75 - 1,5 | 0,50 - 0,75 | < 0,50
Loan repayment capacity | > 20% | 10 - 20% | 5 - 10% | < 5%
Fixed assets turnover | > 8 | 4 - 8 | 2 - 4 | < 2
Working assets turnover | > 4 | 2 - 4 | 1 - 2 | < 1
Analysis of net interest income | > 25% | 10 - 25% | 0 - 10% | < 0%
Self-financing capacity | > 12% | 5 - 12% | 2 - 5% | < 2%
Total sales revenue | > 3 | 1,5 - 3 | 0,75 - 1,5 | < 0,75
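A simple way to apply the codification of Table 1 is a small helper that maps a ratio onto codes 1-4 given its three cut-off points. The treatment of exact boundary values is our own assumption, and the `higher_is_better` flag handles ratios such as capital borrowing, where lower values are better.

```python
def codify(value, cuts, higher_is_better=True):
    """Map a financial ratio onto the codes of Table 1:
    1 = very good, 2 = good, 3 = bad, 4 = very bad."""
    a, b, c = cuts                 # boundaries ordered from best to worst
    if not higher_is_better:       # e.g. capital borrowing: lower is better
        value, a, b, c = -value, -a, -b, -c
    if value > a:
        return 1                   # very good
    if value > b:
        return 2                   # good
    if value > c:
        return 3                   # bad
    return 4                       # very bad
```

For instance, an interest coverage of 7 falls in the 5 - 10 band of Table 1 and is therefore coded 2.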


Figure 5. Self-organised Kohonen maps. Source: From Olmeda and Barba-Romero (1993: 91).

[The figure shows a presynaptic layer connected through synapses to a postsynaptic layer.]

The computer package that has been used is the SAS System (version 8), particularly the module "Enterprise Miner™" (version 4.1). The map dimension has been specified as 2x1 nodes; the neighbourhood radius is set equal to 1, both during the ordering phase and the convergence phase; the neighbourhood function is Gaussian; the grouping procedure is the batch self-organising map; and 100 iterations were carried out.
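For readers without access to SAS, the grouping step can be imitated in a few lines of NumPy. The sketch below is a simplified, illustrative batch self-organising map on a 2x1 grid with a Gaussian neighbourhood of radius 1; initialising the seeds with the first and last cases is our own simplification, not the Enterprise Miner procedure.

```python
import numpy as np

def batch_som_2x1(data, n_iter=100, radius=1.0):
    """Batch self-organising (Kohonen) map with two nodes on a 2x1 grid."""
    data = np.asarray(data, dtype=float)
    w = np.stack([data[0], data[-1]])    # initial cluster seeds
    grid = np.array([0.0, 1.0])          # node coordinates on the 2x1 map
    bmu = np.zeros(len(data), dtype=int)
    for _ in range(n_iter):
        # Assign every case to its best-matching (closest) node
        d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
        bmu = d.argmin(axis=1)
        # Batch update: neighbourhood-weighted mean of the assigned cases
        h = np.exp(-((grid[:, None] - grid[None, bmu]) ** 2) / (2 * radius ** 2))
        w = (h @ data) / h.sum(axis=1, keepdims=True)
    return w, bmu
```

Each customer ends up assigned to one of the two nodes, which is the role the Kohonen map plays in producing the initial rating categories discussed next.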

From the classification results (Table 2), we can assign each customer to an initial rating category, which will be considered as the "target or desired responses" of the network. The nodes are interpreted as follows: node 1 of good customers, initial rating of 1; node 2 of good customers, initial rating of 2; node 1 of delinquent customers, initial rating of 3; and node 2 of delinquent customers, initial rating of 4.



TABLE 2. KOHONEN MAP RESULTS (2X1, r=1).

Group | Node | N of cases | Dist. between groups | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | V11
Delinquent 0 | 1 | 120 | 0.887 | 0.443 | 0.549 | 0.386 | 0.262 | 0.495 | 0.248 | 0.494 | 0.391 | 0.343 | 0.246 | 0.441
Delinquent 0 | 2 | 150 | | 0.545 | 0.459 | 0.590 | 0.690 | 0.504 | 0.699 | 0.505 | 0.587 | 0.627 | 0.704 | 0.547
Delinquent -1 | 1 | 139 | 0.844 | 0.435 | 0.401 | 0.432 | 0.276 | 0.467 | 0.228 | 0.546 | 0.437 | 0.337 | 0.232 | 0.485
Delinquent -1 | 2 | 194 | | 0.546 | 0.570 | 0.549 | 0.660 | 0.523 | 0.694 | 0.467 | 0.545 | 0.604 | 0.691 | 0.511
Delinquent -2 | 1 | 195 | 0.884 | 0.455 | 0.407 | 0.368 | 0.264 | 0.517 | 0.275 | 0.504 | 0.405 | 0.375 | 0.257 | 0.454
Delinquent -2 | 2 | 220 | | 0.540 | 0.581 | 0.617 | 0.709 | 0.485 | 0.699 | 0.496 | 0.583 | 0.613 | 0.714 | 0.540
Delinquent -3 | 1 | 133 | 0.736 | 0.496 | 0.393 | 0.358 | 0.239 | 0.483 | 0.248 | 0.534 | 0.505 | 0.362 | 0.471 | 0.498
Delinquent -3 | 2 | 185 | | 0.502 | 0.574 | 0.601 | 0.687 | 0.511 | 0.680 | 0.475 | 0.495 | 0.600 | 0.520 | 0.501
Delinquent -4 | 1 | 79 | 0.863 | 0.512 | 0.408 | 0.413 | 0.287 | 0.478 | 0.257 | 0.518 | 0.412 | 0.399 | 0.265 | 0.432
Delinquent -4 | 2 | 84 | | 0.488 | 0.583 | 0.581 | 0.700 | 0.520 | 0.727 | 0.482 | 0.581 | 0.594 | 0.720 | 0.563
Delinquent -5 | 1 | 40 | 0.825 | 0.548 | 0.475 | 0.368 | 0.342 | 0.551 | 0.280 | 0.584 | 0.368 | 0.489 | 0.272 | 0.464
Delinquent -5 | 2 | 41 | | 0.452 | 0.525 | 0.627 | 0.653 | 0.448 | 0.719 | 0.417 | 0.628 | 0.579 | 0.722 | 0.535
Delinquent -6 | 1 | 10 | 0.824 | 0.507 | 0.497 | 0.487 | 0.335 | 0.5 | 0.29 | 0.502 | 0.335 | 0.43 | 0.26 | 0.407
Delinquent -6 | 2 | 10 | | 0.494 | 0.502 | 0.512 | 0.665 | 0.5 | 0.71 | 0.497 | 0.665 | 0.57 | 0.74 | 0.592
Good 1 | 1 | 2025 | 0.919 | 0.454 | 0.367 | 0.354 | 0.260 | 0.479 | 0.279 | 0.486 | 0.410 | 0.386 | 0.269 | 0.433
Good 1 | 2 | 2182 | | 0.542 | 0.633 | 0.634 | 0.721 | 0.519 | 0.703 | 0.512 | 0.582 | 0.610 | 0.713 | 0.561
Good 2 | 1 | 1905 | 0.907 | 0.438 | 0.337 | 0.339 | 0.249 | 0.474 | 0.278 | 0.487 | 0.424 | 0.387 | 0.269 | 0.441
Good 2 | 2 | 2287 | | 0.551 | 0.641 | 0.633 | 0.708 | 0.521 | 0.684 | 0.510 | 0.562 | 0.594 | 0.692 | 0.548
Good 3 | 1 | 2122 | 1.01 | 0.442 | 0.305 | 0.344 | 0.275 | 0.474 | 0.303 | 0.485 | 0.461 | 0.346 | 0.288 | 0.462
Good 3 | 2 | 1899 | | 0.564 | 0.724 | 0.673 | 0.750 | 0.528 | 0.719 | 0.515 | 0.542 | 0.672 | 0.736 | 0.541
Good 4 | 1 | 1797 | 0.911 | 0.433 | 0.321 | 0.342 | 0.259 | 0.471 | 0.291 | 0.506 | 0.429 | 0.403 | 0.280 | 0.457
Good 4 | 2 | 1987 | | 0.559 | 0.665 | 0.642 | 0.717 | 0.525 | 0.688 | 0.494 | 0.563 | 0.588 | 0.698 | 0.543



TABLE 2. KOHONEN MAP RESULTS (2X1, r=1). Continuation.

Group | Node | N of cases | Dist. between groups | V1 | V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | V11
Good 5 | 1 | 1601 | 0.899 | 0.430 | 0.328 | 0.349 | 0.266 | 0.473 | 0.296 | 0.506 | 0.427 | 0.414 | 0.288 | 0.450
Good 5 | 2 | 1704 | | 0.565 | 0.667 | 0.641 | 0.719 | 0.524 | 0.691 | 0.493 | 0.568 | 0.580 | 0.698 | 0.546
Good 6 | 1 | 1313 | 0.896 | 0.441 | 0.316 | 0.339 | 0.261 | 0.450 | 0.294 | 0.512 | 0.449 | 0.408 | 0.294 | 0.468
Good 6 | 2 | 1435 | | 0.553 | 0.672 | 0.647 | 0.717 | 0.545 | 0.687 | 0.488 | 0.546 | 0.584 | 0.688 | 0.528
Good 7 | 1 | 986 | 0.887 | 0.441 | 0.315 | 0.324 | 0.259 | 0.459 | 0.299 | 0.516 | 0.453 | 0.415 | 0.297 | 0.477
Good 7 | 2 | 1101 | | 0.552 | 0.669 | 0.656 | 0.714 | 0.536 | 0.674 | 0.484 | 0.541 | 0.576 | 0.681 | 0.520
Good 8 | 1 | 259 | 0.888 | 0.434 | 0.333 | 0.336 | 0.274 | 0.466 | 0.309 | 0.526 | 0.443 | 0.415 | 0.314 | 0.469
Good 8 | 2 | 255 | | 0.566 | 0.676 | 0.665 | 0.728 | 0.533 | 0.693 | 0.473 | 0.558 | 0.586 | 0.688 | 0.530

Source: Own elaboration.

* The variables are the following: V1: capital borrowing; V2: interest coverage; V3: return on equity; V4: profitability; V5: liquidity; V6: loan repayment capacity; V7: fixed assets turnover; V8: working assets turnover; V9: analysis of net interest income; V10: self-financing capacity; V11: total sales revenue.

With this information we start the learning process of the back-propagation network, for which we use the software Trajan Neural Networks, version 4.0. The network architecture is designed with 11 nodes in the input layer, 6 nodes in the hidden layer and 4 nodes in the output layer. The transfer functions are the following:

- Input layer: identity linear function a_i(t) = h_i(t), with pre-processing8.

- Hidden layer: sigmoid function a_i(t) = 1 / (1 + e^{-h_i(t)}), with a range from 0 to +1.

- Output layer: sigmoid function a_i(t) = 1 / (1 + e^{-h_i(t)}), with a range from 0 to +1 and with post-processing summing up to 1.

Customers' assignment to the different sets of examples is random and the final composition is: training set = 12.558 cases; generalisation set = 6.279 cases; and verification set = 6.278 cases.

8 The pre-processing consists of changing the scale and origin of the inputs in order to avoid a potential problem of neuron saturation that sometimes appears when using logistic transfer functions. Variable values were multiplied by a factor equal to 0,1111 (scale) and then a shift equal to -0,1111 was added.



Learning was carried out fixing an adaptive learning rate with an initial value of 0,7 and a final value of 0,01, and a momentum term set equal to 0,6 was also introduced. The error measure is the mean-square-error. The maximum number of iterations to be performed was limited to 5.000, and we interrupted the learning process at iteration 3.561, which corresponds to the minimum generalisation error.

Table 3 shows the estimated synaptic weights connecting the input layer and the hidden layer (w_ij), as well as the thresholds for each of the presynaptic neurons. Table 4 shows the estimated synaptic weights connecting the hidden layer and the output layer (w_jk) and the threshold values.

Finally, Table 5 presents the classification results obtained by applying the back-propagation network. For the training set, the percentage of cases correctly classified is about 96%. Distinguishing by types of customers, the best performance corresponds to "very good customers" (96.95%), closely followed by "good customers" (95.8%). In general, the financial profile of those customers can be clearly distinguished from the delinquent customers group, which is in itself an important finding. Regarding the delinquent customers, the back-propagation network properly classifies 92.29% of "very bad customers" and 86.21% of "bad customers".

Even though the performance of the net is not perfectly balanced between good and delinquent customers, such a result is reasonable because the sample contains more customers belonging to the first group than to the second one.

If we focus attention on the generalisation set, results are quite similar to the previous ones, with the following percentages of customers correctly classified: "very good", 97.2%; "good", 96.3%; "bad", 83.8%; and "very bad", 92.4%.

Also, the verification or holdout set is composed of cases that are only used once the learning process has finished completely. The primary aim of this set is to externally validate the estimated weights of the network on completely unknown cases. Since the percentages of customers correctly classified are "very good", 97.1%; "good", 95.4%; "bad", 84.4%; and "very bad", 90.3%, there is enough empirical support for the external validation of the network. This is especially important if we take into account that banking institutions' concern is to implement a credit assessment technique with a good capacity for ex ante prediction, with the objective of constructing an efficient credit portfolio according to a solvency criterion (regulatory capital) and risk aversion behaviour (economic capital).



TABLE 3. MATRIX OF ESTIMATED THRESHOLDS AND SYNAPTIC WEIGHTS CONNECTING THE INPUT LAYER AND THE HIDDEN LAYER.

Input layer \ Hidden layer | Neuron 1 | Neuron 2 | Neuron 3 | Neuron 4 | Neuron 5 | Neuron 6
Threshold | -87,39711 | 51,591670 | -62,85674 | -95,83867 | -49,36286 | -82,98805
Capital borrowing | -4,793041 | 19,56604 | -93,18329 | -13,17544 | -71,4148 | -45,03547
Interest coverage | -15,66782 | 4,480033 | -59,78461 | -101,8611 | -8,164654 | 16,98045
Return on equity | -0,7059 | 46,93832 | -45,6084 | -61,79653 | 6,422741 | 15,70674
Profitability | -2,461284 | 55,59102 | -53,55524 | -90,0604 | 6,529964 | 9,363676
Liquidity | -4,578899 | 9,875433 | -29,25736 | -3,746445 | -13,38025 | -19,67534
Loan repayment capacity | -9,094162 | 52,29752 | -42,85632 | -74,79896 | -66,47169 | -45,37585
Fixed assets turnover | -12,37476 | -0,3004 | 7,001947 | -8,315231 | 71,36565 | 15,49148
Working assets turnover | -17,39107 | 16,5868 | -27,84732 | -34,05244 | 41,20108 | 37,27043
Net interest income | -38,54687 | 20,1122 | -49,82619 | -59,61193 | 7,768953 | -64,00431
Self-financing capacity | -24,49053 | 62,35182 | -44,21426 | -93,12161 | 67,64085 | -61,60878
Total sales revenue | 6,737722 | 16,88155 | -6,899281 | -8,396957 | -160,6237 | -55,27595

Source: Own elaboration.

TABLE 4. MATRIX OF ESTIMATED THRESHOLDS AND SYNAPTIC WEIGHTS CONNECTING THE HIDDEN LAYER AND THE OUTPUT LAYER.

Hidden layer \ Output layer | Neuron 1 (Very good) | Neuron 2 (Good) | Neuron 3 (Bad) | Neuron 4 (Very bad)
Threshold | -0,7622 | 15,14503 | 61,3072 | -0,584034
Neuron 1 | 3,665215 | 3,739193 | 71,86225 | -7,404866
Neuron 2 | -9,151066 | 9,179011 | -8,307265 | 3,171639
Neuron 3 | 5,235971 | -5,265758 | -2,605748 | -0,219881
Neuron 4 | 4,183186 | -4,218764 | -5,989688 | -1,066168
Neuron 5 | -3,698388 | 3,556096 | -3,190971 | 0,854727
Neuron 6 | -2,448638 | 9,701068 | -13,52736 | -11,83772

Source: Own elaboration.



TABLE 5. CLASSIFICATION RESULTS OF THE BACK-PROPAGATION NETWORK.

TRAINING SET
 | Very good | Good | Bad | Very bad
Target response - Total | 5.807 | 5.914 | 370 | 467
Target response - Correct cases | 5.630 | 5.666 | 319 | 431
Target response - Wrong cases | 177 | 248 | 51 | 36
Actual response - Very good | 5.630 | 224 | 0 | 0
Actual response - Good | 174 | 5.666 | 40 | 23
Actual response - Bad | 3 | 24 | 319 | 13
Actual response - Very bad | 0 | 0 | 11 | 431
% Correctly classified | 96,95 | 95,80 | 86,21 | 92,29
Error % | 3,05 | 4,20 | 13,79 | 7,71
Global % | 95,93

GENERALISATION SET
 | Very good | Good | Bad | Very bad
Target response - Total | 2.942 | 2.953 | 173 | 211
Target response - Correct cases | 2.860 | 2.844 | 145 | 195
Target response - Wrong cases | 82 | 109 | 28 | 16
Actual response - Very good | 2.860 | 101 | 0 | 0
Actual response - Good | 82 | 2.844 | 22 | 10
Actual response - Bad | 0 | 8 | 145 | 6
Actual response - Very bad | 0 | 0 | 6 | 195
% Correctly classified | 97,21 | 96,30 | 83,81 | 92,41
Error % | 2,79 | 3,70 | 16,19 | 7,59
Global % | 96,26

Source: Own elaboration.



Finally, we have carried out a sensitivity analysis aiming to determine which variables contribute most to improving the network's classification performance. The procedure consists of comparing the error incurred when omitting one variable with the error incurred when all variables are jointly considered. For doing so we construct the following ratio:

R = Error without variable X_j / Error with all the variables

where:

if R ≤ 1, the contribution of variable X_j is negligible;

if R > 1, variable X_j contributes to improving the overall performance.

Thus, an R ratio will be calculated for each of the 11 variables initially considered in order to set out a ranking of variables from higher to lower contribution.

Table 6 presents the sensitivity analysis results. Regarding the training set, the first five variables are: Self-financing capacity, Loan repayment capacity, Profitability, Net interest income and Return on equity. For the generalisation set the results match the previous ones, with the exception of Net interest income, which now occupies the sixth place after Working assets turnover. However, the interpretation of those results should be limited to establishing a ranking of variables, not to removing any of them from the analysis.
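The construction of this ranking takes only a couple of lines; in the sketch below the numbers are hypothetical, merely to show the mechanics of the R ratio, and the function and variable names are ours.

```python
def sensitivity_ranking(error_all, errors_without):
    """Rank variables by R = (error without variable X_j) / (error with all
    variables); the larger R, the larger the variable's contribution."""
    ratios = {v: e / error_all for v, e in errors_without.items()}
    return sorted(ratios, key=ratios.get, reverse=True)
```

Calling it with the full-model error and a dictionary of leave-one-out errors returns the variables ordered from higher to lower contribution.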



    TABLE 6. SENSITIVITY ANALYSIS.

                                    TRAINING SET                    GENERALISATION SET
    Variable                 Error without  Ratio R   Ranking  Error without  Ratio R   Ranking
                             1 variable                        1 variable
    Capital borrowing           0.17208    1.284134      7        0.16293    1.277401      7
    Interest coverage           0.16599    1.238737      9        0.162004   1.270069      8
    Return on equity            0.17766    1.325828      5        0.17354    1.360578      4
    Profitability               0.18192    1.357588      3        0.17955    1.407661      3
    Liquidity                   0.14584    1.088374     11        0.13609    1.066958     11
    Loan repayment capacity     0.19034    1.420403      2        0.181009   1.419061      2
    Fixed assets turnover       0.16311    1.217261     10        0.15672    1.228695     10
    Working assets turnover     0.17677    1.319148      6        0.16626    1.303434      5
    Analysis of net income      0.17797    1.328122      4        0.16563    1.298545      6
    Self-financing capacity     0.19838    1.480466      1        0.18689    1.465207      1
    Total sales revenue         0.16865    1.258584      8        0.15918    1.247978      9

    Source: Own elaboration.
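Since R is defined as the error without one variable divided by the error with all variables, every row of Table 6 implicitly encodes the same all-variables baseline error. A quick consistency sketch over three training-set rows (the baseline of about 0.134 is inferred from the table, not stated in the text):

```python
# error_all = error_without / R should come out the same for every row of Table 6.
rows = {  # (error without the variable, ratio R), training set, Table 6
    "Capital borrowing":       (0.17208, 1.284134),
    "Liquidity":               (0.14584, 1.088374),
    "Self-financing capacity": (0.19838, 1.480466),
}
implied_baseline = [err / ratio for err, ratio in rows.values()]
spread = max(implied_baseline) - min(implied_baseline)
```

The three implied baselines agree to within rounding, which confirms that the reported ratios and omission errors are mutually consistent.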

    4. DISCUSSION AND CONCLUSIONS.

    The first conclusion that can be drawn from the empirical analysis is that the

    classification performance of the network improves on that of alternative linear

    techniques. Since artificial neural networks yield very satisfactory results, the

    application of this kind of technique, originally inspired by biology, to this

    research field is to be recommended.


    In addition, the flexibility of NN is an important advantage given the dynamic

    nature of credit risk. It will therefore be necessary to update the estimated weights

    on a frequent basis in order to ensure that customers' credit risk profiles are

    properly captured at different points in time. On these grounds, we feel confident

    about the possibilities of validating this model under the Basel II Capital Accord,

    in particular under the foundation internal ratings-based approach.

    Nevertheless, we cannot conclude this study without acknowledging the main

    shortcomings of the empirical analysis. First, it should be noted that credit risk

    is directly related to the financial situation of a firm, but it is also influenced

    by the external environment, which is not reflected in the financial statements. It

    is therefore necessary to widen the scope of the analysis by including variables

    related to the economic environment in a broader sense. Other aspects that deserve

    mention are qualitative factors such as customer concentration, degree of

    diversification, senior executives' experience and the age of the company, among others.

    A final comment is that, despite the better results of the NN compared to

    alternative linear techniques, these can still be improved by refining the sample

    so that it is more balanced between good and delinquent customers. Indeed, the

    databases currently available only store information about those bank customers

    considered creditworthy at the time of their credit application. Therefore,

    customers whose credit applications were denied should be monitored in order to

    see whether they complied with their financial obligations or became delinquent

    with another financial institution.

    To do so, banking institutions will need to make a joint effort to pool their

    information; they will then benefit from a more precise classification of customers

    according to their real risk profile. To the extent that banks' internal rating

    systems become more accurate, there will be a direct impact on the regulatory

    capital that banks have to hold to comply with the New Basel Capital Accord.


    BIBLIOGRAPHY

    Altman, E.I.; Marco, G. and Varetto, F. (1994), "Corporate distress diagnosis:

    Comparisons using linear discriminant analysis and neural networks", Journal of

    Banking and Finance, vol. 18, no. 3, pp. 505-529.

    Bank for International Settlements BIS (2004), Basel II: International Convergence of

    Capital Measurement and Capital Standards: A Revised Framework, Basel

    Committee on Banking Supervision, June.

    Barto, A.; Sutton, R. and Anderson, C. (1983), "Neuronlike adaptive elements that can

    solve difficult learning control problems", IEEE Transactions on Systems, Man

    and Cybernetics, no. 13, pp. 834-846.

    Campos, P. and Yagüe, M.A. (2001), "Enfoques cuantitativos para el riesgo de crédito en

    empresas: ratings internos (IRB)", Perspectivas del Sistema Financiero, no. 72,

    pp. 31-42.

    Checkley, K. (2003), Manual para el Análisis del Riesgo de Crédito, Gestión 2000.com,

    Barcelona.

    Coats, P. and Fant, K. (1993), "Recognizing financial distress patterns using a neural

    network tool", Financial Management, vol. 22, no. 3, pp. 142-155.

    Cossin, D. and Pirotte, H. (2001), Advanced Credit Analysis: Financial Approaches and

    Mathematical Models to Assess, Price, and Manage Credit Risk, John Wiley

    & Sons, New York.

    Hale, R.H. (1983), Credit Analysis: A Complete Guide, John Wiley & Sons, New

    York.

    Hassoun, H.M. (1995), Fundamentals of Artificial Neural Networks, Massachusetts Institute of

    Technology Press, Cambridge.

    Haykin, S. (1994), Neural Networks: A Comprehensive Foundation, Prentice Hall

    International, New Jersey.

    Hilera, J.R. and Martínez, V.J. (1995), Redes Neuronales Artificiales: Fundamentos,

    Modelos y Aplicaciones, RA-MA, Madrid.

    Kohonen, T. (1982), "Self-organized formation of topologically correct feature maps",

    Biological Cybernetics, no. 43, pp. 59-69.

    Kohonen, T. (1988), "Learning vector quantization", Abstracts of the First Annual INNS

    Meeting, p. 303.


    Martín del Brío, B. and Serrano, C. (1997), "Self-organizing neural networks for analysis

    and representation of data: Some financial cases", Neural Computing and

    Applications, no. 1, pp. 193-206.

    Martín del Brío, B. and Sanz Molina, A. (1997), Redes Neuronales y Sistemas Borrosos,

    RA-MA, Madrid.

    Olmeda, I. and Barba-Romero, S. (1993), Redes Neuronales Artificiales: Fundamentos y

    Aplicaciones, Servicio de Publicaciones de la Universidad de Alcalá de Henares,

    Madrid.

    Prieto, J. and Santillana, V. (2004), "Nuevo Acuerdo de Capital de Basilea: discriminación

    en riesgos, discriminación en precios", Análisis Local, no. 55, pp. 49-61.

    Refenes, P.A. (1995), Neural Networks in the Capital Markets, John Wiley & Sons,

    New York.

    Ripley, B.D. (1996), Pattern Recognition and Neural Networks, Cambridge University

    Press, Cambridge.

    Ríos, J.; Pazos, A.; Brisaboa, N.R. and Serafín, C. (1991), Estructura, Dinámica y

    Aplicaciones de las Redes Neuronales Artificiales, Centro de Estudios Ramón

    Areces, Madrid.

    Rumelhart, D.; Hinton, G. and Williams, R. (1986), "Learning representations by back-

    propagating errors", Nature, no. 323, pp. 533-536.

    Serrano, C. and Martín del Brío, B. (1993), "Predicción de la quiebra bancaria mediante el

    empleo de Redes Neuronales Artificiales", Revista Española de Financiación y

    Contabilidad, vol. XXII, no. 74, pp. 153-176.

    Tomás, J.; Amat, O. and Esteve, M. (2002), Cómo Analizan las Entidades Financieras a

    sus Clientes, Gestión 2000, Barcelona.

    Trippi, R.R. and Turban, E. (1996), Neural Networks in Finance and Investing, Irwin

    Professional Publishing, Chicago.

    Wessels, L.F.A. and Barnard, E. (1992), "Avoiding false local minima by proper

    initialization of connections", IEEE Transactions on Neural Networks, vol. 3,

    no. 6, pp. 899-905.

    Yang, L. and Yu, W. (1993), "Backpropagation with Homotopy", Neural Computation,

    vol. 5, no. 3, pp. 363-366.