
NUREG/CR-5161
PNL-6462
Vol. 2

Evaluation of Sampling Plans for In-Service Inspection of Steam Generator Tubes

Comprehensive Analytical and Monte Carlo Simulation Results for Several Sampling Plans

Prepared by R. J. Kurtz, P. G. Heasler, D. B. Baird

Pacific Northwest Laboratory
Operated by Battelle Memorial Institute

Prepared for U.S. Nuclear Regulatory Commission

DISTRIBUTION OF THIS DOCUMENT IS UNLIMITED


AVAILABILITY NOTICE

Availability of Reference Materials Cited in NRC Publications

Most documents cited in NRC publications will be available from one of the following sources:

1. The NRC Public Document Room, 2120 L Street, NW, Lower Level, Washington, DC 20555-0001

2. The Superintendent of Documents, U.S. Government Printing Office, Mail Stop SSOP, Washington, DC 20402-9328

3. The National Technical Information Service, Springfield, VA 22161

Although the listing that follows represents the majority of documents cited in NRC publications, it is not intended to be exhaustive.

Referenced documents available for inspection and copying for a fee from the NRC Public Document Room include NRC correspondence and internal NRC memoranda; NRC Office of Inspection and Enforcement bulletins, circulars, information notices, inspection and investigation notices; Licensee Event Reports; vendor reports and correspondence; Commission papers; and applicant and licensee documents and correspondence.

The following documents in the NUREG series are available for purchase from the GPO Sales Program: formal NRC staff and contractor reports, NRC-sponsored conference proceedings, and NRC booklets and brochures. Also available are Regulatory Guides, NRC regulations in the Code of Federal Regulations, and Nuclear Regulatory Commission Issuances.

Documents available from the National Technical Information Service include NUREG series reports and technical reports prepared by other federal agencies and reports prepared by the Atomic Energy Commission, forerunner agency to the Nuclear Regulatory Commission.

Documents available from public and special technical libraries include all open literature items, such as books, journal and periodical articles, and transactions. Federal Register notices, federal and state legislation, and congressional reports can usually be obtained from these libraries.

Documents such as theses, dissertations, foreign reports and translations, and non-NRC conference proceedings are available for purchase from the organization sponsoring the publication cited.

Single copies of NRC draft reports are available free, to the extent of supply, upon written request to the Office of Information Resources Management, Distribution Section, U.S. Nuclear Regulatory Commission, Washington, DC 20555-0001.

Copies of industry codes and standards used in a substantive manner in the NRC regulatory process are maintained at the NRC Library, 7920 Norfolk Avenue, Bethesda, Maryland, and are available there for reference use by the public. Codes and standards are usually copyrighted and may be purchased from the originating organization or, if they are American National Standards, from the American National Standards Institute, 1430 Broadway, New York, NY 10018.

DISCLAIMER NOTICE

This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, or any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for any third party's use, or the results of such use, of any information, apparatus, product or process disclosed in this report, or represents that its use by such third party would not infringe privately owned rights.


NUREG/CR-5161
PNL-6462
Vol. 2

Evaluation of Sampling Plans for In-Service Inspection of Steam Generator Tubes

Comprehensive Analytical and Monte Carlo Simulation Results for Several Sampling Plans

Manuscript Completed: December 1993
Date Published: February 1994

Prepared by R. J. Kurtz, P. G. Heasler, D. B. Baird

Pacific Northwest Laboratory
Richland, WA 99352

Prepared for
Division of Engineering
Office of Nuclear Regulatory Research
U.S. Nuclear Regulatory Commission
Washington, DC 20555-0001
NRC FIN B2097

DISTRIBUTION OF THIS DOCUMENT IS UNLIMITED


Abstract

This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.


Summary

This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation technique were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal measure of sampling performance was its effectiveness for detecting and plugging defective tubes. In this work a defective tube was defined as one with ≥75% through-wall degradation at the time of the inspection.

Significant results from this study are summarized below. This work clearly demonstrated that the most effective strategy for detecting and plugging defective tubes was 100% inspection. The effectiveness of all sampling plans relative to 100% inspection was found to depend on the number and distribution of degraded and defective tubes in the steam generator. All sampling plans were equally effective at detecting and plugging defective tubes when the defective tubes were surrounded by large numbers of degraded tubes. The notable exception to this finding was the standard technical specification sampling plan. The effectiveness of this sampling plan was highly erratic and did not always perform as well as the other sampling plans investigated, even when the defective tubes were surrounded by degraded tubes. When defective tubes were surrounded by a moderate number of degraded tubes the most effective sampling strategy was a 40% initial sample followed by expansion of the inspection around flaw indications found at the first stage. When defective tubes were completely isolated the most effective inspection strategy was 100% inspection. For isolated degradation the effectiveness of the sampling plans investigated was approximately linearly related to the initial sample size.

Inspection system reliability was found to impact sampling plan effectiveness significantly. Improving the probability of detection and flaw sizing reliability increased the effectiveness of all sampling plans studied. Improved flaw sizing reliability was found to produce greater increases in sampling plan effectiveness than improving the probability of detection, but this was probably caused by the specific differences in the probability of detection and flaw sizing curves considered in this study and may not be a general result.

A variant of the sampling plan given in the EPRI PWR Inspection Guidelines (EPRI NP-6201) was found to be a very effective plan. This plan was effective largely because it expanded to 100% inspection almost every time. This plan consists of a 20% initial sample followed by 100% inspection of a "region" of the steam generator if one defective tube or more than 5% of the initial sample was found to be degraded. In this work the "region" for second-stage inspection was assumed to be all remaining tubes in the entire steam generator inspected full length. This is not the plan described in EPRI NP-6201, so the results presented in this report represent an upper bound estimate of the potential effectiveness of that plan.


    Contents

    Abstract ............................................................................... iii

Summary .............................................................................. v
1.0 Introduction ........................................................................ 1.1
2.0 ET Sizing Error and Probability of Detection ............................................... 2.1
2.1 Modeling ET Sizing Error ......................................................... 2.1
2.2 Probability of Exceeding ET Plugging Limit ............................................ 2.3
2.3 Probability of Detection ........................................................... 2.5
3.0 Analytical Evaluation and Comparison of Sampling Plans ...................................... 3.1
4.0 Monte Carlo Simulation Analysis ........................................................ 4.1
4.1 Flaw Distributions ............................................................... 4.1
4.2 Probability of Detection and ET Sizing Models .......................................... 4.7
4.3 Sampling Plans and Second Stage Inspection ............................................ 4.8
4.4 Simulation Methodology .......................................................... 4.11
4.5 Simulation Results .............................................................. 4.11
4.5.1 Effect of Inspection System Reliability .......................................... 4.12
4.5.2 Systematic Versus Random Sampling ........................................... 4.12
4.5.3 Effect of Degraded Tube Threshold ............................................ 4.13
4.5.4 Effect of Initial Sample Size and Flaw Distribution ................................. 4.13
4.5.5 Local Versus Global Expansion Rules ........................................... 4.15
4.5.6 Effect of False Calls ........................................................ 4.15
5.0 Conclusions ........................................................................ 5.1
6.0 References ......................................................................... 6.1
Appendix A: Results for Tube Maps ....................................................... A.1


Figures

2.1 Sizing Results for Typical Team From Surry Steam Generator Study .............................. 2.1
2.2 Sizing Results for Best Performing Team From Surry Steam Generator Study ....................... 2.1
2.3 Average POD Performance for Surry Round Robin Teams ..................................... 2.5
2.4 POD Results for DAARR and Baseline Teams from the Surry Round Robins ....................... 2.5
2.5 Best POD Performance Observed from Surry Round Robin .................................... 2.6
3.1 Example of a 20% Systematic Sampling Grid Pattern Where Exactly One Tube from Each Cluster Would be Included in the Initial Sample ....................................................... 3.2
3.2 Example of a 40% Systematic Sampling Grid Pattern Where Exactly Two Tubes from Each Cluster Would be Included in the Initial Sample ....................................................... 3.2
4.1 Monte Carlo Simulation Tube Map 1 ..................................................... 4.2
4.2 Monte Carlo Simulation Tube Map 1A ................................................... 4.2
4.3 Monte Carlo Simulation Tube Map 3 ..................................................... 4.3
4.4 Monte Carlo Simulation Tube Map 6 ..................................................... 4.3
4.5 Monte Carlo Simulation Tube Map 6A ................................................... 4.4
4.6 Monte Carlo Simulation Tube Map 8 ..................................................... 4.4
4.7 Monte Carlo Simulation Tube Map 8A ................................................... 4.5
4.8 Monte Carlo Simulation Tube Map 13 .................................................... 4.5
4.9 Monte Carlo Simulation Tube Map 13A .................................................. 4.6
4.10 Monte Carlo Simulation Tube Map 20 ................................................... 4.6
4.11 Monte Carlo Simulation Tube Map 21 ................................................... 4.7
4.12 Monte Carlo Simulation POD Curves 1 and 2 ............................................. 4.8
4.13 Monte Carlo Simulation POD Curves 4 and 5 ............................................. 4.8
4.14 Monte Carlo Simulation POD Curves 6 and 7 ............................................. 4.8
4.15 Effect of POD Performance on Sampling Plan Effectiveness .................................. 4.12
4.16 Effect of Sizing Performance on Sampling Plan Effectiveness ................................. 4.12
4.17 Effect of Systematic Versus Random Sampling on Sampling Plan Effectiveness .................... 4.13
4.18 Effect of Degraded Tube Threshold on Sampling Plan Effectiveness ............................ 4.13
4.19 Sampling Plan Effectiveness Versus Initial Systematic Sample Size for POD Curve 4 and Sizing Model 1 . 4.14
4.20 Sampling Plan Effectiveness Versus Initial Systematic Sample Size for POD Curve 5 and Sizing Model 2 . 4.14
4.21 Difference in Sampling Plan Effectiveness (Global Expansion Rule Minus Local Expansion Rule) With POD Curve 4 and Sizing Model 1 ...................................................... 4.15
4.22 Difference in Sampling Plan Effectiveness (Global Expansion Rule Minus Local Expansion Rule) With POD Curve 5 and Sizing Model 2 ...................................................... 4.15
4.23 Effect of False Calls on Sampling Plan Effectiveness for Tube Map 6 ............................ 4.16
4.24 Effect of False Calls on the Total Number of Tubes Inspected for a Steam Generator With No Degradation (Blank Map) .................................................................. 4.16


Tables

2.1 Summary of Individual Team Flaw Sizing Regression Results ................................... 2.2
2.2 Estimated PEL Values for X = 75% and T = 40% ........................................... 2.3
2.3 Estimated PEL Values for Three Plugging Limits ............................................ 2.4
3.1 Values of p Computed from the Given PEL Values by Assuming POD = 0.9 ....................... 3.1
3.1 Values of p for a Range of PEL and POD(deg) Values Assuming that POD = 0.9 for Defective Tubes .... 3.4
4.1 Tube Maps (Flaw Distributions) for Simulation Analyses ...................................... 4.1
4.2 ET Sizing Models Used in Monte Carlo Simulations ......................................... 4.9
4.3 Sampling/Inspection Plans Investigated ................................................... 4.9
4.4 Standard Technical Specification Requirement Inspection Result Categories ....................... 4.10
4.5 Standard Technical Specification Requirement Second-Stage Inspection Criteria .................... 4.10
4.6 Monte Carlo Simulation Steps ......................................................... 4.11
A.1 Results for Tube Map 1 ............................................................. A.1
A.2 Results for Tube Map 6 ............................................................. A.2
A.3 Results for Tube Map 3 ............................................................. A.3
A.4 Results for Tube Map 8 ............................................................. A.3
A.5 Results for Tube Map 13 ............................................................ A.4
A.6 Results for Blank Tube Map .......................................................... A.4
A.7 Results for Tube Map 1A, POD Curve 4/Sizing Models 1 & 2 ................................. A.5
A.8 Results for Tube Map 1A, POD Curve 5/Sizing Models 1 & 2 ................................. A.6
A.9 Results for Tube Map 6A, POD Curve 4/Sizing Models 1 & 2 ................................. A.7
A.10 Results for Tube Map 6A, POD Curve 5/Sizing Models 1 & 2 ................................ A.8
A.11 Results for Tube Map 8A, POD Curve 4/Sizing Models 1 & 2 ................................ A.9
A.12 Results for Tube Map 8A, POD Curve 5/Sizing Models 1 & 2 ................................ A.10
A.13 Results for Tube Map 13A, POD Curve 4/Sizing Models 1 & 2 ............................... A.11
A.14 Results for Tube Map 13A, POD Curve 5/Sizing Models 1 & 2 ............................... A.12
A.15 Results for Tube Map 20, POD Curve 4/Sizing Models 1 & 2 ................................ A.13
A.16 Results for Tube Map 20, POD Curve 5/Sizing Models 1 & 2 ................................ A.14
A.17 Results for Tube Map 21, POD Curve 4/Sizing Models 1 & 2 ................................ A.15
A.18 Results for Tube Map 21, POD Curve 5/Sizing Models 1 & 2 ................................ A.16
A.19 Results for GER Expansion Rule, POD Curve 4/Sizing Model 1 ............................... A.17
A.20 Results for GER Expansion Rule, POD Curve 5/Sizing Model 2 ............................... A.18


Previous Reports in Series

Allen, R. P., R. L. Clark and W. D. Reece. 1984. "Steam Generator Group Project Task 6 - Channel Head Decontamination." NUREG/CR-3841, PNL-4712, Pacific Northwest Laboratory, Richland, Washington.

Alzheimer, J. M., R. A. Clark, W. E. Anderson, J. C. Crowe, and M. Vagins. 1977. "Steam Generator Tube Integrity Program. Quarterly Report, January 1-March 31, 1977." NUREG-0359, Pacific Northwest Laboratory, Richland, Washington.

Alzheimer, J. M., R. A. Clark, C. J. Morris and M. Vagins. 1979. "Steam Generator Tube Integrity Program Phase I Report." NUREG/CR-0718, PNL-2937, Pacific Northwest Laboratory, Richland, Washington.

Bickford, R. L., R. A. Clark, P. G. Doctor, H. E. Kjarma and C. A. LoPresti. 1984. "Eddy Current Round Robin Test on Laboratory Produced Intergranular Stress Corrosion Cracked Inconel Steam Generator Tubes." NUREG/CR-3561, PNL-4695, Pacific Northwest Laboratory, Richland, Washington.

Bowen, W. M., P. G. Heasler, and R. B. White. 1989. "Evaluation of Sampling Plans for In-Service Inspection of Steam Generator Tubes: Modeling of Eddy-Current Reliability Data, Analytical Evaluations, and Initial Monte Carlo Simulations." NUREG/CR-5161, PNL-6462, Vol. 1, Pacific Northwest Laboratory, Richland, Washington.

Bradley, E. R., P. G. Doctor, R. H. Ferris, and J. A. Buchanan. 1988. "Steam Generator Group Project - Task 13 Final Report - NDE Validation." NUREG/CR-5185, PNL-6399, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. and P. G. Doctor. 1985. "Surry Steam Generator Program and Baseline Eddy Current Examination," Vol. 4, p. 260-285, in Proceedings of the Twelfth Water Reactor Safety Research Information Meeting. NUREG/CP-0058, Vol. 4, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A., P. G. Doctor, and R. H. Ferris. 1986. "Surry Steam Generator - Examination and Evaluation," Vol. 2, p. 409-430, in Proceedings of the Thirteenth Water Reactor Safety Research Information Meeting. NUREG/CP-0072, Vol. 2, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. 1984. "Status and Progress of Research on a Removed-from-Service Steam Generator." NUREG/CP-0048, Vol. 4, p. 98-116, in Proceedings of the Eleventh Water Reactor Safety Research Information Meeting, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. and M. Lewis. 1985. "Steam Generator Group Project Annual Report - 1984." NUREG/CR-4362, PNL-5417, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. and M. Lewis. 1985. "Steam Generator Group Project Annual Report - 1983." NUREG/CR-4361, PNL-5017, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. and M. Lewis. 1984. "Steam Generator Group Project Annual Report - 1982." NUREG/CR-3581, PNL-4693, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. and M. Lewis. 1984. "Steam Generator Group Project Semi-Annual Progress Report July - December 1982." NUREG/CR-3580, PNL-4692, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. and R. L. Bickford. 1984. "Steam Generator Tube Integrity Program Leak Rate Tests - Progress Report." NUREG/CR-3562, PNL-4629, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A., R. L. Bickford, A. S. Birks, R. L. Clark, P. G. Doctor, R. H. Ferris, L. K. Fetrow, W. D. Reece, E. B. Schwenk, and G. E. Spanner. 1985. "1984 Branch Annual Report - Steam Generator Integrity Program." NUREG-0975, Vol. 3, PNL-SA-13015, Pacific Northwest Laboratory, Richland, Washington.

Clark, R. A. et al. 1984. "1983 Branch Annual Report - Steam Generator Integrity Program." NUREG-0975, Vol. 2, PNL-SA-11720, Pacific Northwest Laboratory, Richland, Washington.

Doctor, P. G., A. S. Birks, R. H. Ferris, H. Harty and G. E. Spanner. 1988. "Steam Generator Group Project - Task 7: Final Report - Post-Service Baseline Eddy-Current Examination." NUREG/CR-5087, PNL-5940, Pacific Northwest Laboratory, Richland, Washington.

Doctor, P. G., J. A. Buchanan, J. M. McIntyre, P. J. Hof and S. S. Ercanbrack. 1984. "Steam Generator Group Project Progress Report on Data Acquisition/Statistical Analysis." NUREG/CR-3579, PNL-3955, Pacific Northwest Laboratory, Richland, Washington.

Doctor, P. G., H. Harty, R. H. Ferris and A. S. Birks. 1988. "Steam Generator Group Project - Task 9: Final Report - Nondestructive Evaluation Round Robin, Vol. I: Description and Summary Data and Vol. II." NUREG/CR-4849, Vol. I and II, PNL-5868, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J., R. A. Clark, E. R. Bradley, W. M. Bowen, P. G. Doctor, F. A. Simonen, and R. H. Ferris. 1990. "Steam Generator Tube Integrity Program/Steam Generator Group Project - Final Summary Report." NUREG/CR-5117, PNL-6446, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J., and R. A. Clark. 1988. "Compendium and Comparison of International Practice for Plugging, Repair, and Inspection of Steam Generator Tubing." NUREG/CR-5016, CSNI No. 140, PNL-6341, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J., J. M. Alzheimer, R. L. Bickford, R. A. Clark, C. J. Morris, E. A. Simonen and K. R. Wheeler. 1988. "Steam Generator Tube Integrity Program Phase II Final Report." NUREG/CR-2336, PNL-4008, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J., R. L. Bickford, W. M. Bowen, E. R. Bradley, P. G. Doctor, R. H. Ferris, L. K. Fetrow, and F. A. Simonen. 1988. "Surry Steam Generator Program - Final Results - Evaluation of NDE Performance and Technical Basis for Improved ISI and Plugging Criteria," Vol. 2, p. 429-457, in Proceedings of the Fifteenth Water Reactor Safety Research Information Meeting. USNRC Conference Proceeding NUREG/CP-0091, Vol. 2, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J., M. Lewis and R. A. Clark. 1987. "Steam Generator Group Project Annual Report - 1985." NUREG/CR-4848, PNL-5771, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J. and R. A. Clark. 1986. "1985 Branch Annual Report - Steam Generator Integrity Program." NUREG-0975, Vol. 4, PNL-5772, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J. 1987. "1986 Branch Annual Report - Steam Generator Integrity Program." NUREG-0975, Vol. 5, PNL-6110, Pacific Northwest Laboratory, Richland, Washington.

Kurtz, R. J. et al. 1988. "1987 Branch Annual Report - Steam Generator Integrity Program." NUREG-0975, Vol. 6, PNL-6401, Pacific Northwest Laboratory, Richland, Washington.

Morris, C. J., G. H. Lyon, T. J. Davis and C. B. Perry. 1980. "Eddy-Current Inspection of INCONEL-600 Steam Generator Tubes at the Tube Sheet." NUREG/CR-1626, PNL-3472, Pacific Northwest Laboratory, Richland, Washington.

Reece, W. D., G. R. Hoenes and M. A. Parkhurst. 1984. "Steam Generator Group Project Progress Report - Task 3 Health Physics." NUREG/CR-3578, PNL-4711, Pacific Northwest Laboratory, Richland, Washington.

Schwenk, E. B. and K. R. Wheeler. 1984. "Steam Generator Group Project Task 10 - Secondary Side Examination." NUREG/CR-3843, PNL-5033, Pacific Northwest Laboratory, Richland, Washington.

Schwenk, E. B. 1987. "Steam Generator Group Project Task 10 - Secondary Side Examination Final Report." NUREG/CR-4850, PNL-6045, Pacific Northwest Laboratory, Richland, Washington.

Sinclair, R. B. 1984. "Secondary Side Photographic Techniques Used in Characterization of Surry Steam Generator." NUREG/CR-3094, PNL-5053, Pacific Northwest Laboratory, Richland, Washington.

Wheeler, K. R., P. G. Doctor, L. K. Fetrow and M. Lewis. 1984. "Steam Generator Group Project Task 8 - Selective Tube Unplugging." NUREG/CR-3842, PNL-4876, Pacific Northwest Laboratory, Richland, Washington.


    1.0 Introduction

Eddy current (ET) inservice inspections of steam generator tubing are routinely performed as an element in the overall defense-in-depth strategy for ensuring the structural and leak-tight integrity of the reactor coolant pressure boundary. The main objectives of these inspections are to detect evidence of tube degradation so that corrective action(s) may be taken to mitigate tube damage, and to catch most or all degraded tubes that could fail by leak or burst before the next inspection. To attain these objectives a reliable inservice inspection (ISI) must be performed. An element of the process which determines, in part, the overall effectiveness of the inspection is the selection of tubes from the total tube population for inspection. There are two approaches to the selection of tubes for inspection. The first approach is to select all of the tubes, or 100% inspection. This approach is the most effective means for identifying tubes which might fail between ISIs, but may, in some cases, place an economic burden on utilities for marginal gain in safety or operational reliability. The second approach is to select a subset of the tube population for an initial inspection and use the results of that initial inspection to decide if further inspection is needed (i.e., to "expand" the initial inspection). With this approach there is no guarantee that all tubes with "significant" flaws will be inspected. The effectiveness of the inspection is limited, in this case, by the number of tubes selected for initial inspection; the reliability of the inspection equipment, personnel, and procedures; and the level and distribution of degradation in the steam generator, which in conjunction with the inspection system reliability determines the total number of tubes inspected.

The objective of this work was to evaluate and compare sampling plans for the primary or general monitoring ISI. The goal of the primary ISI is to detect any form of tube degradation occurring at any location in the steam generator tube bundle. The evaluations performed were not actual ISIs but consisted of exercising an analytical model or performing Monte Carlo computer simulations of ISIs. This report summarizes the methodology employed, the results obtained, and the conclusions reached from three previous studies. In the first study (Bowen et al. 1989) the eddy current reliability information and evaluation methodology were developed under the NRC-managed Steam Generator Tube Integrity Program. A follow-on effort (Hanlen 1990) was conducted under funding from the Electric Power Research Institute (EPRI) and was jointly sponsored by EPRI and the NRC. The objective of this effort was to evaluate additional sampling plans not considered in the first effort. In particular the sampling plan given in the standard plant technical specifications was evaluated as part of this study. In the final effort(1), Monte Carlo simulation techniques were utilized to evaluate a sampling plan that was similar to the one recommended in the EPRI PWR Steam Generator Examination Guidelines, Revision 2, NP-6201.

Four types of sequential (two-stage) sampling plans were evaluated and compared. Sequential sampling plans involve the selection of an initial sample of tubes for inspection. Depending on the inspection results from the initial sample (i.e., the "expansion rules") the scope of the inspection may be expanded by selecting additional tubes. The first type of sampling plan evaluated was a systematic-sequential plan. In this plan an initial sample of tubes is selected for inspection by superimposing a grid over the tube-sheet array. For example, in a plan involving a 20% initial sample every fifth tube in the steam generator is inspected. The results of the initial sample are used to determine if additional tube inspections should be performed. In this work, additional tubes were inspected if a tube inspected at the first stage was found to have any ET flaw indication. An ET flaw indication was defined as one with some through-wall depth. Inspection was expanded radially around the tube with the ET flaw indication until a two-tube-wide "buffer zone" was observed. The buffer zone consisted of tubes free of any ET flaw indications and completely surrounded the tube which triggered the expansion. The second type of sequential sampling plan evaluated was exactly the same as the first type in all aspects except the initial sample of tubes was selected randomly rather than systematically. The third type of sampling plan was the standard technical specification (STS). In this plan the initial sample consists of 3% of the tube population, which is chosen randomly. Second-stage inspection depends on the results of the initial sample. Different levels of second-stage inspection are performed depending on whether the results of the first-stage inspection are categorized as C-2 or C-3. This categorization depends on the numbers of degraded and/or defective tubes discovered in the initial sample. The last sampling plan evaluated was a variant of one proposed by EPRI. This sampling plan consisted of a systematically selected initial sample (20%, 33.3%, or 40%) followed by second-stage inspection consisting of all remaining tubes in the steam generator. Second-stage inspection was triggered if one or more tubes in the initial sample was classified as defective or if 5% or more of the initial sample of tubes was classified as degraded.

It should be noted that for all the sampling plans evaluated in this report we have reduced an essentially three-dimensional inspection problem to two dimensions. In other words, we assume that the "inspection unit" is an entire tube and not a region of a tube as it often is in actual practice. This approach was taken because the objective of this study was to evaluate primary or general monitoring ISIs. The goal of the primary ISI is to detect tube degradation at any location in the tube bundle so the logical "inspection unit" is the entire tube. The above comments are particularly important in regard to the evaluations performed based on or with the EPRI sampling plan. In the actual EPRI sampling plan second-stage inspection consists of 100% of the "region" of interest. In our evaluations we have assumed that if second-stage inspection was triggered, then all remaining tubes in the steam generator were inspected full length. This is not part of the EPRI sampling plan. The EPRI plan allows for limited second-stage examination depending on the type of steam generator, the tube damage mechanism, and plant history. A potential source of inspection unreliability exists in attempting to limit the extent of second-stage inspection. We have not included this source of unreliability in our calculations so the results given in this report should be considered an upper-bound estimate of the effectiveness of the EPRI sampling plan.

All of the sampling plans evaluated in this report assumed that the inspection always produces information on detection and sizing of flaws for each tube inspected. The consequences of a tube inspection are very specific. If a flaw is detected and sized above the plugging limit, the tube is either plugged or repaired.

The objective of the inspection is to detect evidence of tube degradation and to identify all defective tubes which could fail by leak or burst during reactor operation. In this work a defective tube was defined as one with a true level of degradation ≥75% deep. This definition was based on empirical correlations of tube failure pressure versus flaw depth for flaws greater than one tube diameter in axial length, and includes a 10% depth allowance for flaw growth between inspections. This flaw size corresponds to one which could lead to tube failure under main-steam-line-break loading conditions.

One criterion for comparing sampling plans is the probability of detecting and plugging defective tubes. For a given tube, this probability is a function of two other probabilities: (1) the probability of detection (POD), which is the probability of observing a positive ET indication, and (2) the conditional probability, denoted by PEL, that a positive ET indication will exceed the plugging limit and result in plugging or repairing the tube. Both the POD and PEL are functions of the true size and type of flaw. They also depend upon the reliability of the inspectors, their equipment, and the procedure employed.

In order to perform sampling plan evaluations two pieces of information must be specified: the inspection system performance parameters as characterized by the POD and PEL models, and the true state distribution of degradation in the steam generator. True state degradation distributions consisted of a number of hypothetical "tube maps" specifying the number and distribution of degraded and defective tubes in the steam generator. Each tube was either unflawed or contained one flaw of a certain size. The output of an evaluation produced the total number of tubes inspected and the number of defective tubes detected and plugged. These two results formed the basis for generating other statistics for comparing the various sampling strategies.

Section 2 of this report describes statistical analyses of round robin data to characterize ET sizing error and POD performance. This information was used to select appropriate ET sizing models and POD curves for use in the sampling plan evaluations. Section 3 presents the results of an analytical evaluation performed to compare the effectiveness of 20% and 40% systematic-sequential sampling plans with 100% inspection for one particular distribution of degradation. Section 4 gives the results of Monte Carlo simulations performed to supplement the analytical evaluation. Monte Carlo simulation techniques permitted evaluation of a range of ET system performance characteristics and a spectrum of degradation distributions ranging from isolated defective tubes up to substantial clustering of degraded and defective tubes. Finally, the conclusions from this study are described in Section 5.

(1) Heasler, P. G., R. J. Kurtz, and D. B. Baird. 1992. Evaluation of Steam Generator Two-Stage Sampling Plans by Analytical Methods and Monte Carlo Simulation. PNWD-2004, Battelle, Pacific Northwest Laboratories, Richland, Washington.
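To make the two-stage mechanics described above concrete, the sketch below simulates a 20% systematic first-stage sample and the buffer-zone expansion rule on a hypothetical rectangular tube array. The array size, the indication set, and the helper names (systematic_sample, expand_around) are illustrative assumptions and do not come from the report; the report's actual simulation procedure is described in Section 4.

```python
def systematic_sample(n_rows, n_cols, fraction=0.2):
    """Select every k-th tube along a grid that shifts by two columns per row,
    approximating the 20% systematic pattern of Figure 3.1."""
    step = int(round(1.0 / fraction))
    picked = []
    for r in range(n_rows):
        offset = (2 * r) % step
        picked.extend((r, c) for c in range(offset, n_cols, step))
    return picked

def expand_around(seed, indications, n_rows, n_cols):
    """Grow the inspection region around a flagged tube until a two-tube-wide
    buffer free of ET indications surrounds every flagged tube."""
    inspected, frontier = set(), {seed}
    while frontier:
        tube = frontier.pop()
        inspected.add(tube)
        if tube in indications:          # flagged tubes push the boundary outward
            r, c = tube
            for dr in (-2, -1, 0, 1, 2):
                for dc in (-2, -1, 0, 1, 2):
                    nbr = (r + dr, c + dc)
                    if 0 <= nbr[0] < n_rows and 0 <= nbr[1] < n_cols and nbr not in inspected:
                        frontier.add(nbr)
    return inspected

# Hypothetical 50 x 50 tube array with a small cluster of ET indications.
n_rows = n_cols = 50
indications = {(25, 25), (25, 26), (26, 25)}
first_stage = systematic_sample(n_rows, n_cols)
second_stage = set()
for tube in first_stage:
    if tube in indications:
        second_stage |= expand_around(tube, indications, n_rows, n_cols)
print(len(first_stage), "tubes in the initial sample;",
      len(second_stage), "tubes in the expanded second-stage region")
```

Tubes without indications are inspected but do not propagate the expansion, so the inspected region stops growing once every flagged tube is ringed by two rows of clean tubes, mirroring the buffer-zone rule described above.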


2.0 ET Sizing Error and Probability of Detection

This section summarizes the results of statistical analyses and modeling performed to characterize POD and ET sizing error. The inspection data from the Steam Generator Group Project (SGGP) (Bradley et al. 1988) were used to guide the selection of appropriate POD curves and PEL models for use in the sampling plan evaluations. Because multiple inspection teams were involved, statistical modeling was used to develop a range of estimated POD and PEL values for each specified flaw size. These ranges of values were utilized with probability theory and Monte Carlo simulation techniques to evaluate and compare the various sampling plans. The SGGP results are largely due to wastage and pitting type flaws, but the models of POD and PEL developed from these data are completely general. The results in this report do not relate just to wastage and pitting type flaws but are applicable to any form of tube degradation for which the ET system POD and sizing performance characteristics can be represented by our statistical models.

[Figure 2.1. Sizing Results for Typical Team From Surry Steam Generator Study (ET estimated wall loss versus metallography wall loss, %)]

2.1 Modeling ET Sizing Error

For the purpose of characterizing ET sizing error and estimating PEL values, results from the SGGP round robin studies were utilized. Destructive metallographic analyses of tube segments removed from the Surry Steam Generator were matched with the ET inspection results for 12 participating teams. Only the inspection data for the hot-leg top-of-tubesheet were used for this analysis since this is the region where most of the defects were located in the steam generator and where the vast majority of the metallographic analyses were performed. Because PEL is conditional upon a positive ET indication, the data set for each team was reduced by removing all false calls and nondetections.

[Figure 2.2. Sizing Results for Best Performing Team From Surry Steam Generator Study (ET estimated wall loss versus metallography wall loss, %)]

Figure 2.1 shows the relationship between ET estimated defect depth and metallographic examination results for a typical inspection team using conventional multi-frequency inspection equipment and procedures. The best correlation observed is shown in Figure 2.2. This team used complementary inspection equipment and specially developed frequency mixes to augment their conventional inspection results to achieve improved flaw sizing accuracy and precision.

The statistical methodology used to model ET sizing error is based on the following definitions and assumptions. For a particular team and a particular flaw, define:


X = "true" maximum flaw depth determined from destructive examination (DE),

Y = observed flaw depth as determined from ET inspection.

Assume Y can be modeled as a simple linear function of X but with random error e, so that

Y = a + bX + e    (2.1)

That is, at a specified value of X, the distribution of Y is assumed to have true mean a + bX and variance Var(e), where a, b, and Var(e) are unknown parameters that must be estimated for each team from the inspection data.

The iterative regression algorithm presented by Aitkin (1981) was used to fit the model in Equation 2.1 to the inspection data for each team. This technique is designed to fit models to data with censored observations and produces estimates of a, b, and Var(e). Selected results from this analysis are given in Table 2.1. The columns labeled a, b, and Var(e) display the estimates of these parameters. The SD (or standard deviation) column displays the square root of the estimate of Var(e). Note that the estimate of Var(e) is a measure of the variability of the ET values about the fitted line. The #-Obs column is the number of (X, Y) pairs that are present for each team. Table 2.1 is divided into three sections, corresponding to "groups" of teams. It should be noted that an "ideal" team would have a = 0, b = 1, and Var(e) would be small.

Table 2.1. Summary of Individual Team Flaw Sizing Regression Results

Team    a       b      Var(e)   SD      #-Obs   Group
A       13.59   0.44   285.00   16.88   61      Data Acquisition and Analysis Round Robin
B       15.69   0.40   220.34   14.84   48
C       25.17   0.37   254.58   15.96   59
D       10.32   0.57   294.41   17.16   61
E       22.02   0.40   217.24   14.74   61
U       -4.67   0.85    61.83    7.86   18      Advanced/Alternate Techniques Round Robin
UU      38.30   0.15   334.47   18.29   28
V       12.60   0.68   105.90   10.29   72
VV      20.26   0.24   329.88   18.16   28
W       -7.90   1.01   426.92   20.66   14
X        7.73   0.51   244.96   15.65   62      Baseline
Y       11.85   0.51   323.45   17.98   76
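As an illustration of the kind of fit described above, the following sketch estimates a, b, and Var(e) by maximum likelihood for a linear model with normal errors when the ET readings are only observed within physical bounds. The censoring bounds (0% and 100% wall loss), the synthetic data, and the optimizer choice are assumptions made for the example; the report itself uses the iterative algorithm of Aitkin (1981), which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_censored_linear(x, y, lower=0.0, upper=100.0):
    """Generic Tobit-style maximum-likelihood fit of y = a + b*x + e with
    normal errors when y is only observed within [lower, upper]."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def neg_log_lik(params):
        a, b, log_sd = params
        sd = np.exp(log_sd)
        mu = a + b * x
        ll = np.where(
            y <= lower, norm.logcdf((lower - mu) / sd),          # left-censored
            np.where(y >= upper, norm.logsf((upper - mu) / sd),  # right-censored
                     norm.logpdf(y, mu, sd)))                    # fully observed
        return -np.sum(ll)

    start = np.array([y.mean() - x.mean(), 1.0, np.log(y.std() + 1e-6)])
    res = minimize(neg_log_lik, start, method="Nelder-Mead")
    a, b, log_sd = res.x
    return a, b, np.exp(log_sd) ** 2          # estimates of a, b, Var(e)

# Hypothetical example: noisy ET readings of known metallographic depths.
rng = np.random.default_rng(1)
true_depth = rng.uniform(5, 95, 60)
et_reading = np.clip(12.0 + 0.5 * true_depth + rng.normal(0, 16, 60), 0, 100)
print(fit_censored_linear(true_depth, et_reading))
```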


2.2 Probability of Exceeding ET Plugging Limit

It is of interest to estimate the probability that a non-zero ET value will exceed a specified "plugging limit" T for a flaw with "true" depth X. It is also of interest to determine an ET plugging limit such that the probability of plugging or repairing defective tubes is acceptably high. The fitted linear models described previously provide a means for achieving these objectives.

For a particular fitted model and specified ET plugging limit T, the probability of exceeding the ET plugging limit for a tube with a positive ET indication and a flaw with "true" depth X can be evaluated as follows. The predicted mean ET value is computed from the formula Y = a + bX with the estimates of a and b substituted. The variance of the distribution of ET values at X is the sum of the estimate of Var(e) and the variance of the predicted ET mean value. The probability that an observed ET value will exceed T is estimated from the normal distribution with mean and variance set equal to their estimated values. In other words

Pr(PEL | X) = (1/√(2π)) ∫ from -∞ to z* of exp(-z²/2) dz,   where z* = (Ŷ - T)/SD and Ŷ = a + bX    (2.2)

For this work definitions of degraded and defective tubes were needed to facilitate the evaluation of sampling plan effectiveness and to determine when to trigger second-stage inspection. Consequently, a defective tube was defined as one which contains a flaw of such severity that the tube is unacceptable for continued service. A degraded tube was defined as one which contains a flaw of lesser severity than a defective tube. In terms of through-wall flaw depth, a defective tube was taken as one with through-wall degradation severe enough to cause tube failure by burst or leakage under normal operating or accident loading conditions. To determine the through-wall flaw depth which would result in a tube being classified as defective, test data on tube failure pressure as a function of flaw size and geometry were utilized (Alzheimer et al. 1979 and Kurtz et al. 1991).

The burst mode constitutive equations were used to develop a definition of an unacceptable flaw. These equations show that an 85% through-wall flaw represents an average depth for all flaw types that could fail under main-steam-line-break loading conditions (~2600 psi pressure differential). If a flaw growth rate of 10% per operating cycle is assumed, then a tube with an actual flaw ≥75% through-wall could fail under main-steam-line-break loading conditions by the end of the next operating period. This level of degradation was used to define an unacceptable (i.e., defective) tube condition requiring tube plugging or repair.

If it is assumed that the ET plugging limit is 40%, the estimated PEL values (probability that Y > 40% given that X ≥ 75%) for the fitted models are displayed in Table 2.2. For example, when a tube has a flaw with true depth X = 75%, and a non-zero ET value has been observed by a team with sizing characteristics like the average of the Data Acquisition and Analysis Round Robin (DAARR) teams, the probability is 0.73 that the observed ET value will be greater than T = 40%. Clearly, a range of sizing capabilities is represented by the various models.

Table 2.2. Estimated PEL Values for X = 75% and T = 40%

Team   PEL    Group
A      0.65   Data Acquisition and Analysis Round Robin
B      0.65
C      0.79
D      0.78
E      0.79
U      0.99   Advanced/Alternate Techniques Round Robin
UU     0.70
V      0.99
VV     0.46
W      0.91
X      0.65   Baseline
Y      0.71

The sizing capabilities of the teams can be compared by comparing the fitted models for each team or by comparing PEL values. For example, Table 2.3 displays PEL values for three of the teams with plugging limit values T = 30%, 40%, and 50%.

Table 2.3. Estimated PEL Values for Three Plugging Limits

Plugging Limit   X (%)   Team A   Team UU   Team V
T = 30             0     0.17     0.68      0.05
                  20     0.33     0.73      0.36
                  40     0.53     0.78      0.83
                  60     0.72     0.83      0.99
                  75     0.84     0.86      1.00
                 100     0.95     0.90      1.00
T = 40             0     0.06     0.46      0.00
                  20     0.15     0.53      0.09
                  40     0.30     0.59      0.49
                  60     0.50     0.66      0.90
                  75     0.65     0.70      0.99
                 100     0.85     0.77      1.00
T = 50             0     0.02     0.26      0.00
                  20     0.05     0.32      0.01
                  40     0.13     0.38      0.16
                  60     0.28     0.44      0.63
                  75     0.42     0.49      0.91
                 100     0.67     0.57      1.00

Note that Team UU always has higher PEL values for nondefective tubes (X < 75%) than the other teams. If the plugging limit is set at T = 30% so that Team A and Team UU have a high PEL when X = 75%, all teams have PEL > 0.5 when X > 40%; and Team UU would tend to plug most of the tubes with positive ET indications. If T is increased to 50%, Team V has a high PEL when X = 75%, and would not be likely to plug tubes with true flaw depth X < 40%; but Team A and Team UU have low PEL values when X = 75%.

2.3 Probability of Detection

The results from the SGGP round robin studies were utilized to guide the selection of POD curves to represent a range of NDE system flaw detection reliabilities. Idealized POD curves were used for the sampling plan evaluations described in this report. These curves are presented in Section 4. The idealized curves are intended to represent overall NDE system POD performance for the range of flaw types that might be encountered during an ISI. Even though almost all of the SGGP data pertain to pitting and/or wastage-type defects, the sampling plan evaluation results do not depend explicitly on the flaw type. The results of the work described in this report apply to any NDE inspection system exhibiting the POD characteristics shown in Section 4. To give the reader an understanding of the reasons for selecting the idealized POD curves used in our analysis, an overview of the POD results from the SGGP is presented below.

Estimates of POD were obtained from the SGGP data by matching ET inspection results with the results from both the visual inspections and destructive examinations of tubes removed from the Surry Steam Generator. For each "true flaw size" category, the number of non-zero ET indications divided by the total number of flaws was used as a POD estimate. A POD curve (i.e., a plot of estimated POD versus "true size") was constructed for each inspection team, and an overall POD curve was constructed by combining the data from the DAARR and Baseline teams. The ranges of estimated POD values were used as a basis for evaluating and comparing the performance of sampling plans.

The curve shown in Figure 2.3 gives the average POD performance for seven teams employing conventional Zetec MIZ-12 multi-frequency inspection and DDA-4 analysis equipment. The curve was based on metallographic measurement of the maximum wall-loss for defects from all regions of the steam generator combined. The oscillatory behavior of the curves is due to the small numbers of specimens in each of the incremental wall-loss categories.

[Figure 2.3. Average POD Performance for Surry Round Robin Teams (POD versus metallography wall loss, %)]

Figure 2.4 is a plot of the individual POD estimates for the same seven teams used to develop Figure 2.3. The curve in Figure 2.4 is an approximate 90/90 lower tolerance limit (LTL) for these teams. The teams are assumed to be typical of the total population of teams performing inservice inspection; therefore, if each team in the total population of teams performing inservice inspection had inspected the round robin tube set, we can be 90% confident that 90% of the individual team POD values would be above the LTL. Note that the part of the curve extending from about 65% to 85% wall-loss is flat because the number of specimens with defects in this range is inadequate to provide a meaningful estimate of the LTL. Thus, the LTL at 65% wall-loss was extended as a conservative approximation of the LTL for wall-loss > 65%.

[Figure 2.4. POD Results for DAARR and Baseline Teams from the Surry Round Robins (individual team POD estimates and 90/90 lower tolerance limit versus metallography wall loss, %)]

As shown in Figure 2.5, an apparent improved POD performance was observed for one team that employed alternative inspection methods. The POD curve for this team increased more rapidly at low levels of wall-loss and was higher above 40% wall-loss than the POD curves for other teams. This team employed specially developed frequency mixes to enhance the signal-to-noise ratio and computer data screening techniques.

[Figure 2.5. Best POD Performance Observed from Surry Round Robin (POD versus metallography wall loss, %)]
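The PEL calculation of Equation 2.2 reduces to a normal tail probability and is straightforward to reproduce. The sketch below uses the Team A regression estimates from Table 2.1 and, as a simplification, omits the variance of the predicted mean (the report adds that term to Var(e)); the function name and formatting are illustrative only.

```python
from math import sqrt
from statistics import NormalDist

def pel(x, a, b, var_e, plugging_limit, extra_var=0.0):
    """Equation 2.2: probability that a positive ET reading on a flaw of true
    depth x exceeds the plugging limit.  extra_var stands in for the variance
    of the predicted mean, set to zero here as a simplification."""
    mean = a + b * x                      # predicted mean ET value, a + bX
    sd = sqrt(var_e + extra_var)
    return 1.0 - NormalDist(mean, sd).cdf(plugging_limit)

# Team A estimates from Table 2.1: a = 13.59, b = 0.44, Var(e) = 285.00.
for limit in (30, 40, 50):
    print(f"T = {limit}%: PEL at X = 75% is {pel(75, 13.59, 0.44, 285.00, limit):.2f}")
```

With these inputs the printed values (0.84, 0.65, 0.42) agree with the Team A entries at X = 75% in Tables 2.2 and 2.3.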


    3.0 Analytical Evaluation and Comparison of Sampling PlansTwo statistical evaluation techniques were used to de- performance of sampling plans, PEL values rangingtermine the effectiveness of sampling plans for detect- from 0.50 to 1.0were used in Equation 3.1 to computeing and plugging defective tubes. The first technique the values ofp shown in Table 3.1.consisted of an analytical evaluation which is discussedin this section. In the analytical evaluation, concepts Table 3.1. Values of p Computed from the Given PELfrom sampling theory and probability theory are uti- Values by Assuming POD = 0.9lized. The evaluation was based on the ranges of PODand PEL values estimated from the SGGP data. Thesecond technique involved Monte Carlo simulations tofurther evaluate and compare sampling plans under PEL pmore realistic conditions. The Monte Carlo simulationmethods azadresults are presented in Section 4. 0.50 0.45

    0.60 0.54There are two basic strategies for selecting tubes fi'oma generator for ISI. Either all tubes are inspected 0.70 0.63(100% inspection) or a sample of the tubes is selectedfor inspection. Although there are many possible sam- 0.80 0.72piing plans that could be applied, several types of se- 0.85 0.77quential sampling plans were identified as most appro-priate for analytical evaluation and were evaluated and 0.90 0.81compared with 100% inspection. 0.95 0.86In evaluating and comparing the expected effectiveness 1.00 0.90of sampling plans for detecting and plugging (or repair-ing) defective tubes, it is important to recognize thatthe effectiveness of 100% inspection is the maximumachievable and provides an upper bound for the effec- When a sampling plan is applied to select tubes fortiveness of all sampling plans. Thus, it is of interest to inspection, there is no guarantee that all defective tubesevaluate the effectiveness of 100% inspection and then in a generator will be inspected. Thus, the probabilityuse the results as a basis for evaluating and comparing of detecting and plugging a defective tube is a functionthe effectiveness of sampling plans, of the probability that the defective tube will be inspect-ed. Without further assumptions about the distributionWith 100% inspection, all defective tubes in a generator of defective tubes in a getlerator, Equation 3.1 would bewill be inspected. The inspection of each defective tube multiplied by the probability of inspection. For exam-is assumed to be independent of the inspection of all pie, if POD = 0.9, PEL = 0.7, and if 3% of the tubesother defective tubes. Therefore, the effectiveness of are randomly selected for inspection, then the probabili-100% inspection does not depend on the distribution of ty of inspecting, detecting, and plugging an individualdefective tubes within the generator. For the 100% defective tube isp -- 0.9(0.7)(0.03) = 0.0189. Withinspection case, the joint probability, p, of detecting and 50% random sampling, p = 0.9(0.7)(0.5) = 0.315.plugging an individual defective tube is the product of These values compare with p = 0.9(0.7) for 0.63 forthe POD and the PEL for a defective tube. That is, 100% inspection. By making assumptions about the

    distribution of defective tubes and by considering partic-p = POD(PEL) (3.1) ular types of sequential sampling plans, the effectivenessof sampling/inspection relative to 100% inspection canbe improved considerably over the completely randomIt is assumed that POD - 0.9 for flaws large enough to sampling implied above.classify a tube as defective, then p can be computed

    from Equation 3.1 for a specified PEL value. For the In the analytical work it was assumed that defectivepurpose of evaluating 100% inspection and using the tubes tend to occur in "clusters," which are groups ofresults as a basis for evaluating and comparing the defective and degraded tubes. For the purpose of

    3.1 NUREG/CR-5161, Vol. 2

  • 8/3/2019 10131948

    21/66

    3.0 Analytical Evaluation

For the purpose of evaluating and comparing sampling plans, a "minimum" cluster was assumed, which is a defective tube surrounded by degraded but not defective tubes in the following pattern:

     D
    DFD
     D

where F denotes a defective tube and D denotes a degraded but not defective tube.

It is recognized that in a real generator, clusters could be shaped differently than the one shown above, could be different sizes, and could include more than one defective tube. The above cluster configuration was selected for evaluation purposes because it would be harder to detect than a larger cluster with more than one defective tube. It is also recognized that in some cases a defective tube may be isolated. Thus, in the Monte Carlo simulation portion of this study, various distributions of degraded and defective tubes were examined to evaluate other conditions of clustering, ranging from nearly isolated defectives up to a single large cluster.

The sequential sampling plans that were chosen for analytical evaluation are assumed to proceed as follows:

(a) The initial sample is selected according to a systematic sampling plan that consists of a specified percentage of the tubes in a generator, and each tube in the sample is inspected.

(b) When a positive ET indication is observed, inspection continues in the region immediately surrounding the suspect tube until a two-tube wide "buffer zone" is observed, which is composed of tubes with no ET indications, and which completely surrounds the tube(s) with ET indication(s).

(c) In steps (a) and (b), each tube with an ET indication that exceeds the plugging limit will be plugged or repaired.

By assuming the above cluster configuration, it is possible to define a 20% systematic sampling plan for step (a) that would include exactly one tube from each cluster. It is also possible to define a 40% systematic sampling plan that would include exactly two tubes from each cluster. Examples of 20% and 40% systematic sampling grid patterns that produce this result are shown in Figures 3.1 and 3.2, respectively. Then, if the defective tube in a cluster is not included in the initial sample, there is a chance that the degraded tube(s) will produce a positive ET indication that will trigger additional inspection (step b), which will include the defective tube. Thus, the probability of inspecting and detecting the defective tube, denoted by PI&D, is a function of the POD for degraded but not defective tubes, denoted by POD(deg), as well as the POD for defective tubes (which is assumed to be 0.9).

    X0000X0000X0000X0000X0000
    00X0000X0000X0000X0000X00
    0000X0000X0000X0000X0000X
    0X0000X0000X0000X0000X000
    000X0000X0000X0000X0000X0
    X0000X0000X0000X0000X0000
    00X0000X0000X0000X0000X00
    0000X0000X0000X0000X0000X
    0X0000X0000X0000X0000X000
    000X0000X0000X0000X0000X0

Figure 3.1. Example of a 20% Systematic Sampling Grid Pattern Where Exactly One Tube from Each Cluster Would be Included in the Initial Sample

    XX000XX000XX000XX000XX000
    000XX000XX000XX000XX000XX
    0XX000XX000XX000XX000XX00
    X000XX000XX000XX000XX000X
    00XX000XX000XX000XX000XX0
    XX000XX000XX000XX000XX000
    000XX000XX000XX000XX000XX
    0XX000XX000XX000XX000XX00
    X000XX000XX000XX000XX000X
    00XX000XX000XX000XX000XX0

Figure 3.2. Example of a 40% Systematic Sampling Grid Pattern Where Exactly Two Tubes from Each Cluster Would be Included in the Initial Sample
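The grid patterns in Figures 3.1 and 3.2 can be generated with simple modular-arithmetic rules. The Python sketch below is an illustration only: the row and column offsets are inferred from the printed patterns (they are not stated in the report), and zero-based (row, column) indexing is assumed. It checks that every plus-shaped D/DFD/D cluster contains exactly one sampled tube under the 20% grid and exactly two under the 40% grid.

    # Hypothetical reconstruction of the systematic grids of Figures 3.1 and 3.2.
    def selected_20(row, col):
        # "X" every fifth tube, shifted two columns per row (matches Figure 3.1).
        return (col - 2 * row) % 5 == 0

    def selected_40(row, col):
        # Two adjacent "X" tubes out of every five, shifted three columns per row (Figure 3.2).
        return (col - 3 * row) % 5 in (0, 1)

    def cluster_cells(row, col):
        # Plus-shaped minimum cluster: the defective tube plus its four nearest neighbors.
        return [(row, col), (row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]

    # Check every possible cluster center over a generous interior region.
    for r in range(5, 40):
        for c in range(5, 80):
            assert sum(selected_20(*cell) for cell in cluster_cells(r, c)) == 1
            assert sum(selected_40(*cell) for cell in cluster_cells(r, c)) == 2
    print("20% grid samples exactly one tube per cluster; 40% grid samples exactly two.")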


It should be noted that in actual implementation of a systematic sampling plan, the grid should be "shifted" at each ISI so that every tube in a generator will eventually be inspected.

With the 20% systematic sampling plan shown in Figure 3.1, exactly one tube from each cluster is inspected at the first stage of inspection, and an expression for PI&D for an individual defective tube is derived as follows:

A. POD = 0.9 for a defective tube

B. Pr(defective tube is inspected at first stage) = 0.2

C. POD(deg) = Pr(ET > 0 | degraded but not defective tube)

D. Pr(a degraded tube is inspected at first stage) = 0.8

E. Pr(inspect defective at first stage and observe ET > 0) = A(B) = 0.18

F. Pr(inspect degraded at first stage and observe ET > 0) = C(D) = (0.8)POD(deg)

Detecting a degraded tube at the first stage would trigger the second stage of inspection that would include the defective tube. Therefore,

G. Pr(ET > 0 for def at second stage | ET > 0 for deg at first stage) = 0.9

Then,

    PI&D = Pr(inspect and detect defective at first or second stage)
         = E + F(G)
         = 0.18 + (0.8)(0.9)[POD(deg)]

    PI&D = 0.18 + 0.72[POD(deg)]                                  (3.2)

With the 40% systematic sampling shown in Figure 3.2, exactly two tubes from each cluster are inspected at the first stage of inspection, and an expression for PI&D for an individual defective tube can be derived in a manner similar to the development given above (Bowen et al. 1989). For the 40% plan, the expression for PI&D is:

    PI&D = 0.36 + 1.116[POD(deg)] - 0.54[POD(deg)]^2              (3.3)

The joint probability of detecting and plugging a defective tube is given by

    p = PI&D(PEL)                                                 (3.4)

For evaluating and comparing the sequential sampling plans and 100% inspection, PEL values ranging from 0.50 to 1.0 were considered (the PEL estimates from the fitted ET sizing data range from 0.46 to 0.99), together with values of POD(deg) ranging from 0.5 to 0.7. For the sequential sampling plans with 20% or 40% initial sampling, values of p were computed from Equation 3.4. The resulting values of p are displayed in Table 3.1. For 100% inspection, values of p were computed from Equation 3.1 by assuming POD = 0.9.

Note in Table 3.1 that when POD(deg) = 0.7, the sequential sampling plan with 40% systematic sampling at the initial stage yields values of p that are close to those obtained for 100% inspection. In fact, by setting Equations 3.2 and 3.3 equal to 0.9 and then solving for POD(deg), a value of POD(deg) = 1.0 would be required for the 20% sequential plan to perform exactly like 100% inspection, whereas POD(deg) = 0.77 would be required for the 40% sequential plan to perform exactly like 100% inspection. Numerous tables were computed for each of the three plans which display the probability of leaving a specified number of defective tubes unplugged after inspection when there are n (n ranged from 2 to 20) defective tubes in the generator prior to inspection. These results indicate that when POD(deg) is approximately 0.7, the 40% sequential plan nearly duplicates the performance of 100% inspection. It must be emphasized, however, that these results are dependent upon the cluster assumption discussed previously.


    Table 3.1. Values of p for a Range of PEL and POD(deg) Values Assuming that POD = 0.9 for Defective Tubes

                                                PEL
                        0.50   0.60   0.70   0.80   0.85   0.90   0.95   1.00

    20% Sequential
      POD(deg) 0.50    0.270  0.324  0.378  0.432  0.459  0.486  0.513  0.540
      POD(deg) 0.60    0.306  0.368  0.428  0.490  0.520  0.551  0.581  0.612
      POD(deg) 0.70    0.342  0.410  0.479  0.547  0.581  0.616  0.650  0.684

    40% Sequential
      POD(deg) 0.50    0.392  0.470  0.548  0.626  0.666  0.705  0.744  0.783
      POD(deg) 0.60    0.418  0.501  0.585  0.668  0.710  0.752  0.793  0.835
      POD(deg) 0.70    0.438  0.526  0.614  0.701  0.745  0.789  0.833  0.877

    100% Inspection     0.45   0.54   0.63   0.72   0.77   0.81   0.86   0.90
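The entries of Table 3.1 follow directly from Equations 3.1 through 3.4. A short Python check (illustrative only; the rounding convention is assumed to match the table) is:

    # Reproduce Table 3.1 from Equations 3.2, 3.3, and 3.4 (and Equation 3.1 for 100% inspection).
    pels = [0.50, 0.60, 0.70, 0.80, 0.85, 0.90, 0.95, 1.00]

    def p_20(pod_deg, pel):
        pi_d = 0.18 + 0.72 * pod_deg                        # Equation 3.2
        return pi_d * pel                                   # Equation 3.4

    def p_40(pod_deg, pel):
        pi_d = 0.36 + 1.116 * pod_deg - 0.54 * pod_deg**2   # Equation 3.3
        return pi_d * pel                                   # Equation 3.4

    for pod_deg in (0.5, 0.6, 0.7):
        print("20%", pod_deg, [round(p_20(pod_deg, pel), 3) for pel in pels])
    for pod_deg in (0.5, 0.6, 0.7):
        print("40%", pod_deg, [round(p_40(pod_deg, pel), 3) for pel in pels])
    print("100%", [round(0.9 * pel, 2) for pel in pels])    # Equation 3.1 with POD = 0.9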


4.0 Monte Carlo Simulation Analysis

To supplement the analytical results, it was desirable to evaluate the performance of the sampling plans when the assumption of defect-clustering does not hold. This was accomplished by application of Monte Carlo simulation techniques. A computer program was developed that simulates 100% inspection and various sampling plans of interest.

4.1 Flaw Distributions

A steam generator with the same number of tubes as a Westinghouse Model 51 generator (i.e., 3,388) was assumed, and twelve different tube maps were considered. Assuming one flaw per tube, Table 4.1 gives a brief description of the flaw distributions used in this work. As shown in Table 4.1, each flaw distribution has a specified number of flaws in each percent through-wall category. Tube maps 20 and 21 were provided by EPRI and represent field ISI results for two specific plants with Westinghouse Model 51 and Model 44 steam generators, respectively. The coordinates of the degraded and defective tubes for map 21 were set into the Westinghouse Model 51 geometry. The difference in geometries is slight - 46 rows and 94 columns for the Model 51 generator, and 45 rows and 92 columns for the Model 44 generator.

Figures 4.1 to 4.11 show symbol-coded tube maps that illustrate the flaw distributions represented by the numbers in Table 4.1. The Blank map is not illustrated. In Figures 4.1 to 4.11, each symbol represents a flaw size category (or interval). Rather than assigning the "midpoint" or "average" flaw size to each tube in a particular size category, it was desirable that the flaws within each category represent a random sample from a continuous distribution of flaw sizes.

To accomplish this, it is assumed that the flaw sizes within each category are approximately uniformly distributed. Then, for each tube in a particular flaw size category, true flaw size X was randomly generated from a uniform (or rectangular) distribution with endpoints equal to the upper and lower limits of the flaw size category. All tubes without symbols have no flaws.

    Table 4.1. Tube Maps (Flaw Distributions) for Simulation Analyses

    Map    < 20%   20-49%   50-75%   75-100%   Distribution of Flaws
    1        15       5        5        5      Small Isolated Clusters
    1A        0       5        5        5      Small Isolated Clusters
    3        15       5        5        5      Single Cluster
    6        90      40       40       10      Isolated Clusters
    6A        0      40       40       10      Isolated Clusters
    8        92      38       40       10      Single Large Cluster
    8A        0      38       40       10      Single Large Cluster
    13      120     232      175      116      Predicted Surry
    13A       0     232      175      116      Modified Predicted Surry
    20        0       0        4       12      Isolated Defective Tubes
    21        0       0        1        3      Isolated Defective Tubes
    B         0       0        0        0      Blank (No Degradation)
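The uniform-within-category flaw generation described above can be sketched as follows (illustrative Python; the category endpoints follow Table 4.1, and the counts shown are those of tube map 1A):

    import random

    # Flaw-size categories (percent through-wall) with, for example, the tube map 1A counts.
    categories = {(20.0, 49.0): 5, (50.0, 75.0): 5, (75.0, 100.0): 5}

    def draw_true_flaw_sizes(categories, seed=0):
        """Draw each tube's true flaw size X uniformly between its category endpoints."""
        rng = random.Random(seed)
        sizes = []
        for (low, high), n_tubes in categories.items():
            sizes.extend(rng.uniform(low, high) for _ in range(n_tubes))
        return sizes

    print(draw_true_flaw_sizes(categories))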


[Figure 4.1. Monte Carlo Simulation Tube Map 1: flaw depth categories (%) plotted by tube row and column; graphic not reproduced]


[Figure 4.3. Monte Carlo Simulation Tube Map 3: flaw depth categories (%) plotted by tube row and column; graphic not reproduced]


[Figure 4.5. Monte Carlo Simulation Tube Map 6A: flaw depth categories 20-49%, 50-74%, and 75-100% plotted by tube row and column; graphic not reproduced]

[Figure 4.6. Monte Carlo Simulation Tube Map 8: flaw depth categories (%) plotted by tube row and column; graphic not reproduced]


[Figure 4.7. Monte Carlo Simulation Tube Map 8A: flaw depth categories 20-49%, 50-74%, and 75-100% plotted by tube row and column; graphic not reproduced]

[Figure 4.8. Monte Carlo Simulation Tube Map 13: flaw depth categories (%) plotted by tube row and column; graphic not reproduced]


[Figure 4.9. Monte Carlo Simulation Tube Map 13A: flaw depth categories 20-49%, 50-74%, and 75-100% plotted by tube row and column; graphic not reproduced]

[Figure 4.10. Monte Carlo Simulation Tube Map 20: flaw depth categories 20-49%, 50-74%, and 75-100% plotted by tube row and column; graphic not reproduced]


[Figure 4.11. Monte Carlo Simulation Tube Map 21: flaw depth categories 50-74% and 75-100% plotted by tube row and column; graphic not reproduced]

It should be noted that the flaw distributions were chosen for the purpose of evaluating and comparing the performance of the sampling plans under various degrees of clustering. These distributions are not necessarily realistic, and they are not intended to represent any particular operating steam generators; any similarities are coincidental.

4.2 Probability of Detection and ET Sizing Models

As shown in the analytical evaluation, the effectiveness of a sampling plan depends on the POD, sizing capability, and plugging limit. Thus, to determine the outcome of a tube inspection, it was necessary to input: 1) a POD model that expresses POD as a function of true flaw size X, 2) an ET sizing model that expresses the expected (or mean) ET size as a function of true flaw size X and also provides the standard deviation of individual ET values about the mean value, and 3) an ET plugging limit such that a tube with an ET reading that exceeds the plugging limit will be plugged or repaired. It was of interest to study how changes in any or all of these factors would affect the performance of each sampling plan. To accomplish this, several different POD curves were considered. Figures 4.12 to 4.14 show the POD curves utilized.

Each POD curve defines a POD value for any true flaw size from X = 0% to X = 100% through-wall. The POD curves 1 and 2 in Figure 4.12 were chosen to represent lower (curve 1) and upper (curve 2) bounds on the POD estimates obtained from the Surry inspection data prior to the final POD analysis presented in the Task 13 Report (Bradley et al. 1988). Note, however, that curves 1 and 2 do not include a false call probability; that is, in Figure 4.12, both curves have POD = 0 when the true flaw depth is X = 0%. Although this zero false call probability is not realistic, false calls can only improve the effectiveness of sampling plans. Thus, an evaluation of the effectiveness with zero false call probability will tend to be conservative.

The POD curves 4 and 5 in Figure 4.13 are a refinement of curves 1 and 2 based on later POD estimates.


[Figure 4.12. Monte Carlo Simulation POD Curves 1 and 2: POD versus true flaw depth, %; graphic not reproduced]

[Figure 4.13. Monte Carlo Simulation POD Curves 4 and 5: POD versus true flaw depth, %; graphic not reproduced]

[Figure 4.14. Monte Carlo Simulation POD Curves 6 and 7: POD versus true flaw depth, %; graphic not reproduced]

Curve 4 is a somewhat more optimistic estimate of the lower bound POD performance than curve 1, and curve 5 is a less optimistic upper bound on POD than curve 2. Note also that curves 4 and 5 have a zero false call probability; that is, POD = 0 at X = 0%.

The POD curves 6 and 7 in Figure 4.14 are simply curves 4 and 5 modified to include a 0.05 false call probability. Thus, a comparison of results based on curves 6 and 7 with curves 4 and 5 should reveal how false calls affect the performance of the sampling plans. Also, simulations performed on the blank tube map (no flaws present) and either curve 6 or 7 should provide additional information on the impact of false calls on increased inspection and plugging of tubes.
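The report does not state the equation used to turn curves 4 and 5 into curves 6 and 7; one simple construction that is consistent with the description (POD equal to the 0.05 false call probability at X = 0%) is sketched below in Python, and is offered only as an assumed form, not as the report's actual procedure.

    def add_false_calls(base_pod, false_call_probability=0.05):
        """Assumed construction: rescale a POD curve so that POD(0) equals the false call rate."""
        def pod_with_false_calls(x):
            return false_call_probability + (1.0 - false_call_probability) * base_pod(x)
        return pod_with_false_calls

    # Example with a placeholder base curve (curves 4 and 5 are defined graphically in the report).
    base = lambda x: min(1.0, max(0.0, x / 100.0))
    curve = add_false_calls(base)
    print(curve(0.0), curve(100.0))   # 0.05 at X = 0%, 1.0 at X = 100%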

To study the effect of sizing capability on the effectiveness of the sampling plans, two ET sizing models were considered. Model 1 in Table 4.2 is intended to represent typical sizing capability of SGGP teams. Specifically, the parameters a and b were estimated by averaging the values of a and b (see Equation 2.1) for Teams A, B, C, D, and X. The standard deviation (SD) was estimated by first averaging the Var(e) values for these teams and then taking the square root of the average. Model 2 in Table 4.2 is the fitted model for the best performing SGGP team and is intended to represent an improved level of sizing performance.

4.3 Sampling Plans and Second Stage Inspection

There were 17 sampling plans considered in this study.


    Table 4.2. ET Sizing Models Used in Monte Carlo Simulations

    Model   Equation          SD    Description
    1       14.5 + 0.46(X)    16    Typical SGGP Team
    2       12.6 + 0.68(X)    10    Best Sizing Performance
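Because the ET reading for a detected flaw is modeled as normally distributed about a + bX with standard deviation SD, each sizing model in Table 4.2 implies a probability that a flaw of true depth X will be called above the 40% plugging limit. The Python sketch below (illustrative; it simply evaluates the normal error model described in the text) computes that probability for both models:

    import math

    SIZING_MODELS = {1: (14.5, 0.46, 16.0),   # (a, b, SD): typical SGGP team
                     2: (12.6, 0.68, 10.0)}   # best observed sizing performance

    def prob_called_above_limit(model, x, limit=40.0):
        """P(ET size Y > limit) when Y ~ Normal(a + bX, SD)."""
        a, b, sd = SIZING_MODELS[model]
        z = (limit - (a + b * x)) / sd
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    for x in (50.0, 75.0, 100.0):
        print(x, round(prob_called_above_limit(1, x), 2), round(prob_called_above_limit(2, x), 2))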

    Table 4.3. Sampling/Inspection Plans Investigated

    Type of Plan   Initial % of Total   Expansion Rule         Description
    Systematic     20                   Local 0, Local 2,      Every fifth tube inspected on grid
                                        Global 2
    Systematic     33.3                 Local 0, Local 2,      Every third tube inspected on grid
                                        Global 2
    Systematic     40                   Local 0, Local 2,      Two of every five tubes inspected
                                        Global 2               on grid
    Random         3                    C1, C2, C3             Standard Technical Specifications (STS)
    Random         20                   Local 0, Local 2
    Random         33.3                 Local 0, Local 2
    Random         40                   Local 0, Local 2
    NA             100                  NA                     100% inspection case
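As a cross-check on the total of 17 plans, the combinations in Table 4.3 can be enumerated directly (illustrative Python; the tuples are just shorthand labels for the table rows):

    from itertools import product

    systematic = [(f"Systematic {pct}%", rule)
                  for pct, rule in product((20, 33.3, 40), ("Local 0", "Local 2", "Global 2"))]
    random_plans = [("Random 3%", "STS (C1, C2, C3)")]
    random_plans += [(f"Random {pct}%", rule)
                     for pct, rule in product((20, 33.3, 40), ("Local 0", "Local 2"))]
    plans = systematic + random_plans + [("100% inspection", "NA")]
    print(len(plans))   # 17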

A sampling plan consisted of the type of plan (systematic or random), the initial percent of the total tube population sampled, and the second-stage expansion rule. The 17 sampling plans are given in Table 4.3. Note that four different types of second-stage expansion rules were evaluated in this study.

The local expansion rule (LER), denoted as Local 0 or Local 2, was used in most of the simulations and triggered second-stage inspection when a tube with a positive ET indication was observed. For the Local 0 rule, any tube with an ET indication > 0% triggered additional inspection in the region immediately surrounding the suspect tube until a two-tube wide "buffer zone" was observed, which was composed of tubes with no ET indications and completely surrounds the tube(s) with ET indications > 0%. For the Local 2 expansion rule, second-stage inspection analogous to Local 0 was triggered only when an ET indication > 20% was observed.
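One way to read the Local 0 rule is as a fixed-point expansion: every tube within two tube pitches of a tube with an indication is inspected, and the process repeats until no new indications turn up, at which point a clean two-tube-wide buffer surrounds every indicated tube. The Python sketch below illustrates that reading; the square (Chebyshev-distance) neighborhood and the function names are assumptions made for illustration, not details taken from the report.

    def local_expansion(initial_hits, inspect, rows, cols, threshold=0.0):
        """Expand inspection until a two-tube-wide clean buffer surrounds every indication.

        initial_hits: set of (row, col) tubes already found with ET indications > threshold
        inspect(row, col): returns the ET indication (% through-wall) for a tube
        threshold: 0.0 for the Local 0 rule, 20.0 for the Local 2 rule
        """
        hits = set(initial_hits)
        inspected = set(initial_hits)
        while True:
            # All tubes within two rows and two columns of any indicated tube.
            buffer_zone = {(r + dr, c + dc)
                           for (r, c) in hits
                           for dr in range(-2, 3) for dc in range(-2, 3)
                           if 0 <= r + dr < rows and 0 <= c + dc < cols}
            new_tubes = buffer_zone - inspected
            if not new_tubes:
                return hits, inspected   # buffer around every hit is inspected and clean
            inspected |= new_tubes
            hits |= {tube for tube in new_tubes if inspect(*tube) > threshold}

    # Example: one suspect tube at (10, 10) in a 45 x 92 bundle with no other indications.
    hits, inspected = local_expansion({(10, 10)}, lambda r, c: 0.0, rows=45, cols=92)
    print(len(hits), len(inspected))   # 1 indicated tube, 25 tubes inspected in total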


The global expansion rule (GER) was significantly different than the local expansion rules described above. It triggered second-stage inspection when 1) one or more tubes from the initial sample was classified as defective (i.e., ET indication ≥ 40%), or 2) 5% or more of the initial sample of tubes was classified as degraded (i.e., ET indication > 20% but < 40%). The extent of second-stage inspection mandated by the GER was 100% of the "region" of interest. It should be noted that the GER is similar to, but not the same as, the second-stage expansion rules presented in the report EPRI NP-6201. In this work we have assumed that, if the results of the initial sample triggered second-stage inspection, then all remaining tubes in the steam generator were inspected full-length. This is not the recommendation given in EPRI NP-6201. The EPRI NP-6201 expansion rules allow for limited examination at the second stage depending on the type of steam generator, the tube damage mechanism, and plant history. Since determination of flaw type is largely based on ET indication location within the tube bundle and prior experience, there is a potential that an ET indication may not be correctly classified and the inspection not expanded sufficiently. No attempt was made to model this potential unreliability in the inspection process. Therefore, the evaluations discussed in this report must be taken as an upper bound estimate of the sampling plans using the expansion rules given in EPRI NP-6201.

The last second-stage expansion rule considered was the Standard Technical Specification (STS) requirement. For the STS expansion rule, the level of second-stage inspection also depends on the number and severity of the flaws found during first-stage sampling. The inspection result categories are described in Table 4.4 and the different levels of inspection required for each result category are given in Table 4.5. Note that the criteria have been reduced to one steam generator. In the STS, the criteria affect all of the steam generators at a particular plant.
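The GER trigger described above reduces to a simple test on the first-stage results. A short Python sketch (illustrative only; the 5% test is applied against the initial sample size) is:

    def ger_triggers_second_stage(et_indications, initial_sample_size):
        """Global expansion rule: expand to 100% of the region of interest if any tube is
        called defective (ET indication >= 40%), or if 5% or more of the initial sample
        is called degraded (ET indication > 20% but < 40%)."""
        n_defective = sum(1 for et in et_indications if et >= 40.0)
        n_degraded = sum(1 for et in et_indications if 20.0 < et < 40.0)
        return n_defective >= 1 or n_degraded >= 0.05 * initial_sample_size

    # Example: 678 tubes sampled (about 20% of 3,388), 30 degraded calls, no defective calls.
    print(ger_triggers_second_stage([25.0] * 30, 678))   # False: 30/678 is below 5%
    print(ger_triggers_second_stage([25.0] * 34, 678))   # True: 34/678 exceeds 5%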

Table 4.4. Standard Technical Specification Requirement Inspection Result Categories

    Category   Inspection Result
    C1         Less than 5% of tubes inspected are degraded or none are defective
    C2         Between 5% and 10% of tubes inspected are degraded or less than 1% are defective
    C3         More than 10% of tubes inspected are degraded or more than 1% are defective

    Table 4.5. Standard Technical Specification Requirement Second-Stage Inspection Criteria

    Initial Inspection   Action Required              Second Inspection   Second Action Required
    Result                                            Result
    C1                   None                         NA                  NA
    C2                   Plug defective tubes.        C1                  None
                         Inspect 6% more tubes.       C2                  Plug defective tubes.
                                                                          Inspect 12% more tubes.
                                                      C3                  100% inspection
    C3                   100% inspection              NA                  NA


    Table 4.6. Monte Carlo Simulation Steps

    Step   Description
    1      Read tube map with true flaw sizes X.
    2      Inspect tubes according to sampling plan.
    3      Based on true flaw size and POD curve, assign probability of ET > 0, then randomly
           generate "detected" or "not detected" for each tube.
    4      If ET > 0, use sizing model to generate "ET size" Y from a normal distribution with
           mean a + bX and standard deviation SD. This provides the 100% inspection result.
    5      If "ET size" Y > plugging limit (40%), plug tube.
    6      Compare inspection results with expansion rule and perform second-stage inspection
           as needed.
    7      Repeat Steps 2 through 6, 25 times and tabulate summary statistics. Shift grid as
           necessary for systematic sampling.*

    *Sampling grid not shifted for simulations performed on tube maps 1, 3, 8, and 13.
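Steps 3 through 5 of Table 4.6 amount to two random draws for each inspected tube: a detection draw against the POD curve and, if detected, a sizing draw from the normal error model. The self-contained Python sketch below illustrates that inner loop; the logistic POD curve is a purely hypothetical stand-in, since the report's POD curves are defined graphically rather than by equations.

    import math
    import random

    def stand_in_pod(x, false_call=0.05):
        """Hypothetical POD curve: a false-call floor plus a logistic rise with flaw depth."""
        return false_call + (1.0 - false_call) / (1.0 + math.exp(-(x - 35.0) / 8.0))

    def inspect_tube(x, pod, a=14.5, b=0.46, sd=16.0, plug_limit=40.0, rng=random):
        """Table 4.6, steps 3-5: detect, size, and decide whether to plug a single tube."""
        if rng.random() >= pod(x):                      # step 3: Bernoulli draw against POD(X)
            return {"detected": False, "et_size": None, "plugged": False}
        et_size = rng.gauss(a + b * x, sd)              # step 4: ET size Y ~ Normal(a + bX, SD)
        return {"detected": True, "et_size": et_size,
                "plugged": et_size > plug_limit}        # step 5: plug if Y > 40%

    random.seed(1)
    print(inspect_tube(80.0, stand_in_pod))             # one defective tube, sizing model 1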

4.4 Simulation Methodology

The simulation analysis strategy was to consider various combinations of flaw distribution, POD curve, and ET sizing model with each sampling plan. For a given combination of these parameters, 25 independent applications of all sampling plans were simulated. For each combination, the process outlined in Table 4.6 was followed. This process is classified as a Monte Carlo simulation because of the randomness introduced in steps 3 and 4 of Table 4.6. Consider, for example, a particular tube with true flaw size X > 20%. Assume that this tube is included in the initial 20% systematic sample. In each of the 25 independent applications of the sampling plan, this tube has a chance of being detected in Step 3. However, it could be detected in some of the applications and not others.

In each of the 25 applications, the outcome of step 6 determines whether additional inspection is performed. Thus, the total number of tubes inspected can be expected to differ from application to application. In each of the 25 applications, the outcome of step 3 also determines whether or not the flaw sizing in step 4 is carried out. If step 4 results in detection (ET > 0%), then an ET flaw size is randomly generated from a normal distribution with mean and standard deviation defined by the sizing model. Thus, each time a flaw is detected, a different ET size will be generated. The implication is that the outcome of step 5 for this tube can differ from application to application.

4.5 Simulation Results

The principal measure of sampling plan performance for detecting and plugging defective tubes was the sampling plan effectiveness. The sampling plan effectiveness was defined as the ratio of the mean number of defective tubes plugged to the total number of defective tubes in the tube map. The effectiveness parameter provided a means for comparing the plugging capability of various sampling plans across different tube maps.

To determine the sampling plan efficiency, comparisons were made of the mean number of defective tubes plugged by a sampling plan to the mean number of defective tubes plugged using 100% inspection. These comparisons provided an assessment of how well a sampling plan performed relative to the best possible (100% inspection).
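In these terms, the two performance measures can be computed directly from the per-run tallies. A minimal Python sketch (illustrative; the variable names and example tallies are hypothetical) is:

    def effectiveness(defectives_plugged_per_run, total_defectives_in_map):
        """Mean number of defective tubes plugged divided by the defectives present."""
        mean_plugged = sum(defectives_plugged_per_run) / len(defectives_plugged_per_run)
        return mean_plugged / total_defectives_in_map

    def relative_efficiency(plan_plugged_per_run, full_inspection_plugged_per_run):
        """Mean defectives plugged by the plan relative to 100% inspection."""
        plan_mean = sum(plan_plugged_per_run) / len(plan_plugged_per_run)
        full_mean = sum(full_inspection_plugged_per_run) / len(full_inspection_plugged_per_run)
        return plan_mean / full_mean

    # Example with hypothetical tallies from 5 of the 25 simulated applications:
    print(effectiveness([10, 12, 11, 12, 10], total_defectives_in_map=16))
    print(relative_efficiency([10, 12, 11, 12, 10], [14, 15, 15, 14, 15]))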


The complete output of all simulation results for all cases considered is too voluminous to include in this report. Thus, only a limited subset of the results for each tube map is presented in this section. The limited subset of results is summarized to provide a basis for the comparisons and evaluations of interest. Detailed summaries are included in Appendix A by tube map.

4.5.1 Effect of Inspection System Reliability

The simulation results provide valuable insights into the effect of inspection system reliability on the sampling plan effectiveness for detecting and plugging defective tubes. Figures 4.15 and 4.16 show plots giving the average difference (for all systematic sampling plans) in sampling plan effectiveness for various POD curve/sizing model combinations for eight tube maps. The differences in Figure 4.15 were calculated by subtracting the sampling plan effectiveness for POD curve 4/sizing model X from POD curve 5/sizing model X. These results show the effect of increasing POD performance at constant sizing reliability. Note that in most cases the sampling plan effectiveness increased with increasing POD performance. The average increase in sampling plan effectiveness across all tube maps was about 0.05, which corresponds to the difference in POD between curves 4 and 5 when X ≥ 75%.

[Figure 4.15. Effect of POD Performance on Sampling Plan Effectiveness: average effectiveness difference by tube map (1, 1A, 6, 6A, 8A, 13A, 20, 21); graphic not reproduced]

The results plotted in Figure 4.16 show the effect of improving sizing reliability at constant POD. The differences in Figure 4.16 were calculated by subtracting the sampling plan effectiveness for POD curve X/sizing model 1 from POD curve X/sizing model 2. Note that in all cases the sampling plan effectiveness increased with increasing flaw sizing performance. The average increase in sampling plan effectiveness across all tube maps was about 0.13, which roughly corresponds to the difference in PEL between sizing models 1 and 2 when X ≥ 75%. These results indicate that given a choice of improving sizing reliability from model 1 to model 2 versus the detection reliability from curve 4 to curve 5, changing the sizing reliability would provide the biggest increase in inspection effectiveness.

[Figure 4.16. Effect of Sizing Performance on Sampling Plan Effectiveness: average effectiveness difference by tube map (1, 1A, 6, 6A, 8A, 13A, 20, 21); graphic not reproduced]

4.5.2 Systematic Versus Random Sampling

An objective of the Monte Carlo simulation work was to evaluate the difference in sampling plan effectiveness for different systematic and randomly selected initial sample sizes. Figure 4.17 shows a plot of the difference in systematic and random sampling plan effectiveness for the six tube maps for which these simulations were performed. The plot shows the differences broken down by initial sample size, either 20%, 33.3%, or 40%. If no bar is displayed for a particular tube map and initial sample size, then the difference between the systematic and random sample plans is zero for that combination.


The results indicate that there is no statistically significant difference between systematic and randomly selected initial samples. As a consequence, the remainder of the discussion in this section is focussed on results obtained from the systematic sampling plans. Since the systematic and random sampling plans evaluated in this report gave nearly equivalent results, the conclusions reached apply to both.

[Figure 4.17. Effect of Systematic Versus Random Sampling on Sampling Plan Effectiveness: effectiveness difference by tube map (1A, 6A, 8A, 13A, 20, 21) and initial sample size (20%, 33.3%, 40%); graphic not reproduced]

4.5.3 Effect of Degraded Tube Threshold

Several simulations were performed to evaluate the effect of the definition of a degraded tube. All of the sampling plans have expansion rules which trigger second-stage inspection when some number of tubes from the initial sample have been classified as degraded. Two definitions of a degraded tube were considered in this analysis. In one series of simulations a degraded tube was defined as any tube with an ET indication > 0% but less than 75%. Another set of simulations was performed in which a degraded tube was defined as any tube with an ET indication ≥ 20% but less than 75%. The 20% threshold was selected to reflect common field practice and the difficulty of detecting and sizing < 20% indications. Simulations were performed for six tube maps (1A, 6A, 8A, 13A, 20, and 21), four POD curve/sizing model combinations (4/1, 4/2, 5/1, and 5/2), three systematic sampling plans (20%, 33.3%, and 40%), and the LER to trigger second-stage inspection. Figure 4.18 shows a histogram of the sampling plan effectiveness for systematic sampling plans employing the 20% degraded tube definition minus systematic sampling plans employing the 0% degraded tube definition (i.e., SX2 - SX0). There was essentially no effect of the POD curve/sizing model combination observed, so the average for all four POD curve/sizing model combinations was used to construct Figure 4.18. It is clear from the results that the degraded tube definition did not affect inspection effectiveness significantly. In fact, for four tube maps (8A, 13A, 20, and 21) the average effectiveness difference was zero, and for the other tube maps the average difference was 0.02 or less. The lack of a degraded tube threshold effect is likely due to the specific tube flaw sizes considered for the six tube maps.

[Figure 4.18. Effect of Degraded Tube Threshold on Sampling Plan Effectiveness: average effectiveness difference (SX2 - SX0) by tube map (1A, 6A, 8A, 13A, 20, 21); graphic not reproduced]

4.5.4 Effect of Initial Sample Size and Flaw Distribution

The effect of the initial sample size and flaw distribution on sampling plan effectiveness is shown in Figures 4.19 and 4.20. Simulations were performed for six tube maps (1A, 6A, 8A, 13A, 20, 21), four POD curve/sizing model combinations (4/1, 4/2, 5/1, 5/2), and three systematic sampling plans.


The LER was used as the criterion to trigger second-stage inspection for all sampling plans considered. In addition, the STS was also simulated.

[Figure 4.19. Sampling Plan Effectiveness Versus Initial Systematic Sample Size for POD Curve 4 and Sizing Model 1: effectiveness versus initial systematic sample, %, for the six tube maps, with the STS result noted and a 40% plugging limit; graphic not reproduced]

[Figure 4.20. Sampling Plan Effectiveness Versus Initial Systematic Sample Size for POD Curve 5 and Sizing Model 2: effectiveness versus initial systematic sample, %, for the six tube maps, with the STS result noted and a 40% plugging limit; graphic not reproduced]

Only the results for the worst and best combination of POD curve and sizing model are presented. Figures 4.19 and 4.20 display results for three tube maps in which the flaws are isolated or nearly isolated. In other words, the flaw distributions do not exhibit a significant degree of clustering. The simulation results show that in two out of three cases the sampling plan effectiveness was approximately linearly related to the initial sample size. It is significant to note that the STS sampling plan performed the worst in nearly all cases and resulted in the highest mean number of tubes inspected per defective plugged. It is clear from the results for tube maps 20 and 21 that when flaws are completely isolated, the best strategy for steam generator inspection is 100% inspection, since this is the most effective for identifying defective tubes and gives the lowest number of tubes inspected per defective plugged.

In three tube maps, the flaws are clustered or occur in groups of neighboring tubes. In general, all sampling plans are equally effective (except for the STS) for detecting and plugging defective tubes when the defective tubes are surrounded by large numbers of degraded tubes. That is, enough inspection was triggered by each sampling plan to result in a thorough inspection of the region containing the defective tubes. The effectiveness of the sampling plan in this case depends on the POD curve and sizing model combination, rather than the initial sample size.

For all the systematic sampling plans, if map 13 and map 13A results are excluded, the total number of tubes inspected increased by ≤ 10% of the total number of tubes in the generator. Only when degradation was copious did the systematic sampling plans expand more than 10% beyond the initial sample. It is significant to note that even in these two cases (map 13 and map 13A), the level of inspection for all the sampling plans was considerably less than 100% but still achieved nearly the same effectiveness as 100% inspection.

For the STS, depending on the amount and distribution of tube degradation, the number of tubes inspected for each simulation run was highly erratic, either 3% or 100%. For tube maps with isolated defectives, only the minimum 3% inspection was performed for nearly all 25 runs. For tube maps with large numbers of defectives or more clustering, the initial 3% inspection often expanded to 100% inspection. It is important to observe that in several instances of intermediate cluster-
