
Testing


Destructive testing

In destructive testing, tests are carried out to the specimen's failure in order to understand its structural performance or material behaviour under different loads. These tests are generally much easier to carry out, yield more information, and are easier to interpret than nondestructive testing.

Destructive testing is most suitable, and most economical, for objects that will be mass produced, as the cost of destroying a small number of specimens is negligible. It is usually not economical where only one or very few items are to be produced (for example, a building).

Some types of destructive testing:

Stress tests

Crash tests

Hardness tests

Metallographic tests

Stress testing

Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.

Reliability engineers often test items under expected stress or even under accelerated stress. The goal is to determine the operating life of the item or to identify its modes of failure.[1]

Stress testing should, in general, subject the hardware to exaggerated levels of stress in order to ensure stability when it is used in a normal environment.

Crash test

A crash test is a form of destructive testing usually performed to ensure safe design standards in crashworthiness and crash compatibility for various modes of transportation or related systems and components.

Types

Frontal-impact tests: what most people initially think of when asked about a crash test. These are usually impacts upon a solid concrete wall at a specified speed, but can also be vehicle-to-vehicle tests. SUVs have been singled out in these tests for a while, due to the high ride-height that they often have.

Offset tests: in which only part of the front of the car impacts with a barrier (or another vehicle). These are important, as impact forces remain approximately the same as in a frontal-impact test, but a smaller fraction of the car is required to absorb all of the force. This type of crash frequently occurs when a car turns into oncoming traffic. Such testing is done by the U.S. Insurance Institute for Highway Safety (IIHS), EuroNCAP, and the Australasian New Car Assessment Program (ANCAP).

Side-impact tests: these forms of accident have a very significant likelihood of fatality, as cars do not have a significant crumple zone to absorb the impact forces before an occupant is injured.

Roll-over tests: which test a car's ability (specifically that of the pillars holding the roof) to support itself in a dynamic impact. More recently, dynamic rollover tests have been proposed as opposed to static crush testing.[1]

Roadside hardware crash tests: used to ensure that crash barriers and crash cushions will protect vehicle occupants from roadside hazards, and that guard rails, sign posts, light poles and similar appurtenances do not pose an undue hazard to vehicle occupants.

Old versus new: Often an old and big car against a small and new car, or two different generations of the

same car model. These tests are performed to show the advancements in crashworthiness.

Computer model: Because of the cost of full-scale crash tests, engineers often run many simulated crash

tests using computer models to refine their vehicle or barrier designs before conducting live tests.

Hardness

Hardness is a measure of how resistant solid matter is to various kinds of permanent shape change when

a force is applied. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the

behavior of solid materials under force is complex; therefore, there are different measurements of

hardness: scratch hardness, indentation hardness, and rebound hardness.


Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity,

and viscosity.

Common examples of hard matter are ceramics, concrete, certain metals, and superhard materials, which can

be contrasted with soft matter.

Measuring hardness

There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of

these classes of measurement there are individual measurement scales. For practical reasons conversion

tables are used to convert between one scale and another.

Scratch hardness

Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due

to friction from a sharp object. The principle is that an object made of a harder material will scratch an object

made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through

the film to the substrate. The most common test is the Mohs scale, which is used in mineralogy. One tool for making

this measurement is the sclerometer.

Another tool used to make these tests is the pocket hardness tester. This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. To use it, a weight of known mass is added to the scale arm at one of the graduated markings, and the tool is then drawn across the test surface. The use of the weight and markings allows a known pressure to be applied without the need for complicated machinery.[1]

Indentation hardness


Indentation hardness measures the resistance of a sample to material deformation due to a constant compression load from a sharp object; such tests are primarily used in engineering and metallurgy. The tests

work on the basic premise of measuring the critical dimensions of an indentation left by a specifically

dimensioned and loaded indenter.

Common indentation hardness scales are Rockwell, Vickers, Shore, and Brinell.
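As a rough illustration of how two of these indentation scales are computed from the load and the impression left by the indenter (the standard Brinell and Vickers definitions, which the text above does not spell out), the sketch below uses illustrative numbers only; the function names are mine:

import math

def brinell_hardness(load_kgf, ball_diam_mm, indent_diam_mm):
    # HB from the diameter of the impression left by a hard ball indenter.
    D, d = ball_diam_mm, indent_diam_mm
    return (2 * load_kgf) / (math.pi * D * (D - math.sqrt(D**2 - d**2)))

def vickers_hardness(load_kgf, mean_diagonal_mm):
    # HV from the mean diagonal of the square pyramidal impression.
    return 1.8544 * load_kgf / mean_diagonal_mm**2

# Hypothetical test readings: a 3000 kgf load on a 10 mm ball leaving a
# 4.0 mm impression, and a 30 kgf Vickers test leaving a 0.41 mm diagonal.
print(round(brinell_hardness(3000, 10, 4.0)))   # about 229 HB
print(round(vickers_hardness(30, 0.41)))        # about 331 HV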

Rebound hardness

Rebound hardness, also known as dynamic hardness, measures the height of the "bounce" of a diamond-

tipped hammer dropped from a fixed height onto a material. This type of hardness is related to elasticity. The

device used to take this measurement is known as a scleroscope.[2]

Two scales that measure rebound hardness are the Leeb rebound hardness test and the Bennett hardness scale.
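For reference, the standard Leeb definition (not quoted in the text above) relates the impact body's velocities just before and after impact:

\mathrm{HL} = 1000 \times \frac{v_{\text{rebound}}}{v_{\text{impact}}}

A perfectly elastic material would return nearly the full impact velocity; softer, more plastic materials absorb energy and return a lower rebound velocity, giving a lower HL.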


Physics

Diagram of a stress-strain curve, showing the relationship between stress (force applied per unit area) and strain, or deformation, of a ductile metal.

In solid mechanics, solids generally have three responses to force, depending on the amount of force and

the type of material:

They exhibit elasticity—the ability to temporarily change shape, but return to the original shape when

the pressure is removed. "Hardness" in the elastic range—a small temporary change in shape for a

given force—is known as stiffness in the case of a given object, or a high elastic modulus in the case

of a material.

They exhibit plasticity—the ability to permanently change shape in response to the force, but remain

in one piece. The yield strength is the point at which elastic deformation gives way to plastic

deformation. Deformation in the plastic range is non-linear, and is described by the stress-strain

curve. This response produces the observed properties of scratch and indentation hardness, as

described and measured in materials science. Some materials exhibit

both elasticity and viscosity when undergoing plastic deformation; this is called viscoelasticity.

They fracture—split into two or more pieces.

Strength is a measure of the extent of a material's elastic range, or elastic and plastic ranges together.

This is quantified as compressive strength, shear strength, or tensile strength, depending on the direction of

the forces involved. Ultimate strength is an engineering measure of the maximum load a part of a specific

material and geometry can withstand.
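In the elastic range described above, stress and strain are related linearly by Hooke's law (a standard relation added here for reference, not stated in the text):

\sigma = E\,\varepsilon

where E is the elastic (Young's) modulus; yielding begins once the stress reaches the yield strength, after which deformation follows the non-linear plastic portion of the stress-strain curve.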

Brittleness, in technical usage, is the tendency of a material to fracture with very little or no detectable

deformation beforehand. Thus in technical terms, a material can be both brittle and strong. In everyday

usage "brittleness" usually refers to the tendency to fracture under a small amount of force, which exhibits

both brittleness and a lack of strength (in the technical sense). For perfectly brittle materials, yield

strength and ultimate strength are the same, because they do not experience detectable plastic

deformation. The opposite of brittleness is ductility.

The toughness of a material is the maximum amount of energy it can absorb before fracturing, which is different from the amount of force that can be applied. Toughness tends to be small for brittle materials, because it is elastic and plastic deformation that allows a material to absorb large amounts of energy, and brittle materials deform very little before fracture.


Hardness increases with decreasing particle size. This is known as the Hall-Petch relationship. However,

below a critical grain-size, hardness decreases with decreasing grain size. This is known as the inverse

Hall-Petch effect.
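The Hall-Petch relationship mentioned above is usually written in the standard form (not quoted in the text):

\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}

where \sigma_y is the yield strength, \sigma_0 the friction stress resisting dislocation motion, k_y a material-specific strengthening coefficient, and d the average grain diameter; below a critical d the trend inverts, as noted.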

The hardness of a material to deformation depends on its microdurability, or small-scale shear modulus in any direction, not on rigidity or stiffness properties such as its bulk modulus or Young's modulus. Stiffness is often confused with hardness.[3][4] Some materials are stiffer than diamond (e.g. osmium) but are not harder, and are prone to spalling and flaking in squamose or acicular habits.

Metallography

A micrograph of bronze revealing a cast dendritic structure

Metallography is the study of the physical structure and components of metals, typically using microscopy.

Ceramic and polymeric materials may also be prepared using metallographic techniques, hence the

terms ceramography, plastography and, collectively, materialography.


Preparing metallographic specimens

Hot mounting: The specimens are placed in the mounting press, and the resin is added. The specimens are mounted under

heat and high pressure.


Cold mounting: The specimens are placed in a mounting cup and mounting material is then poured over the specimens. A

vacuum impregnation unit is used for mounting of porous materials.

The MD-System is an example of a reusable pad for use with diamond suspension. A single magnetic platen is positioned

on the grinding and polishing machine to support the preparation pads.

The surface of a metallographic specimen is prepared by various methods of grinding, polishing, and etching. After preparation, it is often analyzed using optical or electron microscopy. Using only metallographic techniques, a skilled technician can identify alloys and predict material properties.

Mechanical preparation is the most common preparation method. Successively finer abrasive particles are used to remove material from the sample surface until the desired surface quality is achieved. Many different machines are available for this grinding and polishing, able to meet different demands for quality, capacity, and reproducibility.

A systematic preparation method is the easiest way to achieve the true structure. Sample preparation must therefore follow rules that are suitable for most materials. Different materials with similar properties (hardness and ductility) respond alike and thus require the same consumables during preparation.


Metallographic specimens are typically "mounted" using a hot compression thermosetting resin. In the

past, phenolic thermosetting resins have been used, but modern epoxy is becoming more popular because

reduced shrinkage during curing results in a better mount with superior edge retention. A typical mounting cycle

will compress the specimen and mounting media to 4,000 psi (28 MPa) and heat to a temperature of 350

°F (177 °C). When specimens are very sensitive to temperature, "cold mounts" may be made with a two-part

epoxy resin. Mounting a specimen provides a safe, standardized, and ergonomic way by which to hold a

sample during the grinding and polishing operations.

A macro etched copper disc

After mounting, the specimen is wet ground to reveal the surface of the metal. The specimen is successively ground with finer and finer abrasive media. Silicon carbide abrasive paper was the first method of grinding and is still used today. Many metallographers, however, prefer to use a diamond grit suspension which is dosed onto a reusable fabric pad throughout the polishing process. Diamond grit in suspension might start at 9 micrometres and finish at one micrometre. Generally, polishing with diamond suspension gives finer results than using silicon carbide papers (SiC papers), especially for revealing porosity, which silicon carbide paper sometimes "smears" over. After grinding the specimen, polishing is performed. Typically, a specimen is polished with a slurry of alumina, silica, or diamond on a napless cloth to produce a scratch-free mirror finish, free from smear, drag, or pull-outs and with minimal deformation remaining from the preparation process.

After polishing, certain microstructural constituents can be seen with the microscope, e.g., inclusions and nitrides. If the crystal structure is non-cubic (e.g., a metal with a hexagonal close-packed crystal structure, such as Ti or Zr), the microstructure can be revealed without etching using crossed polarized light (light microscopy). Otherwise, the microstructural constituents of the specimen are revealed by using a suitable chemical or electrolytic etchant.


Analysis techniques

Many different microscopy techniques are used in metallographic analysis.

Prepared specimens should be examined after etching with the unaided eye to detect any visible areas that

respond differently to the etchant as a guide to where the microscopical examination should be employed. Light

optical microscopy (LOM) examination should always be performed prior to any electron metallographic (EM)

technique, as these are more time-consuming to perform and the instruments are much more expensive.

Further, certain features can be best observed with the LOM, e.g., the natural color of a constituent can be

seen with the LOM but not with EM systems. Also, image contrast of microstructures at relatively low

magnifications, e.g., <500X, is far better with the LOM than with the scanning electron microscope (SEM),

while transmission electron microscopes (TEM) generally cannot be utilized at magnifications below about

2000 to 3000X. LOM examination is fast and can cover a large area. Thus, the analysis can determine if the

more expensive, more time-consuming examination techniques using the SEM or the TEM are required and

where on the specimen the work should be concentrated.

Scanning transmission electron microscope, used in metallography.

Design, resolution, and image contrast

Light microscopes are designed for placement of the specimen's polished surface on the specimen stage either

upright or inverted. Each type has advantages and disadvantages. Most LOM work is done at magnifications

between 50 and 1000X. However, with a good microscope, it is possible to perform examination at higher

magnifications, e.g., 2000X, and even higher, as long as diffraction fringes are not present to distort the image.

However, the resolution limit of the LOM will not be better than about 0.2 to 0.3 micrometers. Special methods are used at magnifications below 50X, which can be very helpful when examining the microstructure of cast specimens, where greater spatial coverage in the field of view may be required to observe features such as dendrites.
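The roughly 0.2 to 0.3 micrometre figure follows from the diffraction (Rayleigh) limit of visible-light optics; as an illustrative calculation with assumed values (green light and a high-numerical-aperture oil-immersion objective):

d \approx \frac{0.61\,\lambda}{\mathrm{NA}} = \frac{0.61 \times 0.55\ \mu\text{m}}{1.4} \approx 0.24\ \mu\text{m}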

Besides considering the resolution of the optics, one must also maximize visibility by maximizing image contrast. A microscope with excellent resolution may be unable to image a structure (that is, there is no visibility) if image contrast is poor. Image contrast depends upon the quality of the optics, coatings on the lenses, and reduction of flare and glare; but it also requires proper specimen preparation and good etching techniques. So, obtaining good images requires both maximum resolution and good image contrast.

Bright field illumination, where sample contrast comes from absorbance of light in the sample.

Dark field illumination, sample contrast comes from light scattered by the sample.


Cross-polarized light illumination, where sample contrast comes from rotation of polarized light through the sample.

Bright and dark field microscopy

Most LOM observations are conducted using bright field (BF) illumination, where the image of any flat feature

perpendicular to the incident light path is bright, or appears to be white. But, other illumination methods can be

used and, in some cases, may provide superior images with greater detail. Dark field microscopy (DF),

although not used much today, provides high contrast images and actually greater resolution than bright field.

In dark field, the light from features perpendicular to the optical axis is blocked and appears dark, while the light from features inclined to the surface, which look dark in BF, appears bright, or "self luminous", in DF. Grain boundaries, for example, are more vivid in DF than in BF.

Polarized light microscopy

Polarized light (PL) is very useful when studying the structure of metals with non-cubic crystal

structures (mainly metals with hexagonal close-packed (hcp) crystal structures). If the specimen is prepared

with minimal damage to the surface, the structure can be seen vividly in crossed polarized light (the optic axis

of the polarizer and analyzer are 90 degrees to each other, i.e., crossed). In some cases, an hcp metal can be

chemically etched and then examined more effectively with PL. Tint-etched surfaces, where a thin film (such as a sulfide, molybdate, chromate or elemental selenium film) is grown epitaxially on the surface to a depth where interference effects are created when examined in BF, producing color images, can be improved with PL. If it is difficult to get a good interference film with good coloration, the colors can be improved by examination in PL using a sensitive tint (ST) filter.

Differential interference contrast microscopy

Another useful imaging mode is differential interference contrast (DIC), which is usually obtained with a system

designed by the Polish physicist Georges Nomarski. This system gives the best detail. DIC converts minor

height differences on the plane-of-polish, invisible in BF, into visible detail. The detail in some cases can be

quite striking and very useful. If an ST filter is used along with a Wollaston prism, color is introduced. The colors

are controlled by the adjustment of the Wollaston prism, and have no specific physical meaning, per se. But,

visibility may be better.

Oblique illumination

DIC has largely replaced the older oblique illumination (OI) technique, which was available on reflected light

microscopes prior to about 1975. In OI, the vertical illuminator is offset from perpendicular, producing shading

effects that reveal height differences. This procedure reduces resolution and yields uneven illumination across

the field of view. Nevertheless, OI was useful when one needed to know whether a second-phase particle was standing above or recessed below the plane-of-polish, and it is still available on a few microscopes. OI can be created on any microscope by placing a piece of paper under one corner of the mount so that the plane-of-polish is no longer perpendicular to the optical axis.

Scanning electron and transmission electron microscopes

If a specimen must be observed at higher magnification, it can be examined with a scanning electron

microscope (SEM), or a transmission electron microscope (TEM). When equipped with an energy dispersive

spectrometer (EDS), the chemical composition of the microstructural features can be determined. The ability to

detect low-atomic-number elements, such as carbon, oxygen, and nitrogen, depends upon the nature of the detector used. Quantification of these elements by EDS is difficult, and their minimum detectable limits are higher than when a wavelength-dispersive spectrometer (WDS) is used, although quantification of composition by EDS has improved greatly over time. The WDS system has historically had better sensitivity (the ability to detect low amounts of an element) and better ability to detect low-atomic-weight elements, as well as better quantification of compositions, compared to EDS, but it was slower to use. In recent years, the speed of WDS analysis has also improved substantially. Historically, EDS was used with the SEM while WDS was used with the electron microprobe analyzer (EMPA). Today, EDS and WDS are used with both the SEM and the EMPA.

However, a dedicated EMPA is not as common as an SEM.

An x-ray diffractometer.

X-ray diffraction techniques

Characterization of microstructures has also been performed using x-ray diffraction (XRD) techniques for many

years. XRD can be used to determine the percentages of various phases present in a specimen if they have

different crystal structures. For example, the amount of retained austenite in a hardened steel is best measured

using XRD (ASTM E 975). If a particular phase can be chemically extracted from a bulk specimen, it can be


identified using XRD based on the crystal structure and lattice dimensions. This work can be complemented by

EDS and/or WDS analysis where the chemical composition is quantified. But EDS and WDS are difficult to

apply to particles less than 2-3 micrometers in diameter. For smaller particles, diffraction techniques can be

performed using the TEM for identification and EDS can be performed on small particles if they are extracted

from the matrix using replication methods to avoid detection of the matrix along with the precipitate.

Quantitative metallography

A number of techniques exist to quantitatively analyze metallographic specimens. These techniques are

valuable in the research and production of all metals and alloys and non-metallic or composite materials.

Microstructural quantification is performed on a prepared, two-dimensional plane through the three-dimensional

part or component. Measurements may involve simple metrology techniques, e.g., the measurement of the

thickness of a surface coating, or the apparent diameter of a discrete second-phase particle (for

example, spheroidal graphite in ductile iron). Measurement may also require application of stereology to

assess matrix and second-phase structures. Stereology is the field of taking 0-, 1- or 2-dimensional

measurements on the two-dimensional sectioning plane and estimating the amount, size, shape or distribution

of the microstructure in three dimensions. These measurements may be made using manual procedures with

the aid of templates overlaying the microstructure, or with automated image analyzers. In all cases, adequate

sampling must be made to obtain a proper statistical basis for the measurement. Efforts to eliminate bias are

required.

An image of the microstructures of ductile iron.

Some of the most basic measurements include determination of the volume fraction of a phase or constituent,

measurement of the grain size in polycrystalline metals and alloys, measurement of the size and size

distribution of particles, assessment of the shape of particles, and spacing between particles.
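A minimal sketch of how one such measurement, volume fraction, can be automated is shown below; it applies the point-count principle (the fraction of regularly spaced grid points falling on a phase estimates that phase's area and, by stereology, volume fraction; this is the approach standardized for manual grids in ASTM E 562) to a segmented image. The function name and the synthetic test image are illustrative only:

import numpy as np

def point_count_volume_fraction(phase_mask, grid_spacing=32):
    # Lay a regular grid of points over a binary image (True = phase of
    # interest) and return the fraction of points that land on the phase.
    rows = np.arange(grid_spacing // 2, phase_mask.shape[0], grid_spacing)
    cols = np.arange(grid_spacing // 2, phase_mask.shape[1], grid_spacing)
    return phase_mask[np.ix_(rows, cols)].mean()

# Synthetic 1024 x 1024 "segmented micrograph" with roughly 12 % second phase.
rng = np.random.default_rng(0)
fake_micrograph = rng.random((1024, 1024)) < 0.12
print(point_count_volume_fraction(fake_micrograph))  # close to 0.12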

Standards organizations, including ASTM International's Committee E-4 on Metallography and some other

national and international organizations, have developed standard test methods describing how to

characterize microstructures quantitatively.


For example, the amount of a phase or constituent, that is, its volume fraction, is defined in ASTM E 562;

manual grain size measurements are described in ASTM E 112 (equiaxed grain structures with a single size

distribution) and E 1182 (specimens with a bi-modal grain size distribution); while ASTM E 1382 describes how

any grain size type or condition can be measured using image analysis methods. Characterization

of nonmetallic inclusions using standard charts is described in ASTM E 45 (historically, E 45 covered only

manual chart methods and an image analysis method for making such chart measurements was described in

ASTM E 1122. The image analysis methods are currently being incorporated into E 45). A stereological method

for characterizing discrete second-phase particles, such as nonmetallic inclusions, carbides, graphite, etc., is

presented in ASTM E 1245.
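For reference, the ASTM grain size number G used in E 112 is defined so that

n = 2^{G-1}

where n is the number of grains per square inch observed at 100x magnification; equivalently, G = 1 + \log_2 n. (This standard relation is not quoted in the text above; it is added here to make the grain size scale concrete.)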

Nondestructive testing

Nondestructive testing or non-destructive testing (NDT) is a wide group of analysis techniques used in science and industry to evaluate the properties of a material, component or system without causing damage.[1] The terms nondestructive examination (NDE), nondestructive inspection (NDI), and nondestructive evaluation (NDE) are also commonly used to describe this technology.[2] Because NDT does not permanently alter the article being inspected, it is a highly valuable technique that can save both money and time in product evaluation, troubleshooting, and research. Common NDT methods include ultrasonic, magnetic-particle, liquid penetrant, radiographic, remote visual inspection (RVI), eddy-current testing,[1] and low coherence interferometry.[3][4] NDT is a commonly used tool in forensic engineering, mechanical engineering, electrical engineering, civil engineering, systems engineering, aeronautical engineering, medicine, and art.[1]



Methods

NDT methods may rely upon use of electromagnetic radiation, sound, and inherent properties of materials to

examine samples. This includes some kinds of microscopy to examine external surfaces in detail, although

sample preparation techniques for metallography, optical microscopy and electron microscopy are generally

destructive as the surfaces must be made smooth through polishing or the sample must be electron

transparent in thickness. The inside of a sample can be examined with penetrating electromagnetic radiation,

such as X-rays or 3D X-rays for volumetric inspection. Sound waves are utilized in the case of ultrasonic

testing. Contrast between a defect and the bulk of the sample may be enhanced for visual examination by the

unaided eye by using liquids to penetrate fatigue cracks. One method (liquid penetrant testing) involves using

dyes, fluorescent or non-fluorescing, in fluids for non-magnetic materials, usually metals. Another commonly

used method for magnetic materials involves using a liquid suspension of fine iron particles applied to a part

while it is in an externally applied magnetic field (magnetic-particle testing). The thermoelectric effect (or use of the Seebeck effect) uses thermal properties of an alloy to quickly and easily characterize many alloys.

The chemical test, or chemical spot test method, utilizes application of sensitive chemicals that can indicate the

presence of individual alloying elements.

Applications

Weld verification

1. Section of material with a surface-breaking crack that is not visible to the naked eye.

2. Penetrant is applied to the surface.

3. Excess penetrant is removed.

4. Developer is applied, rendering the crack visible.

In manufacturing, welds are commonly used to join two or more metal parts. Because these connections may

encounter loads and fatigue during product lifetime, there is a chance that they may fail if not created to

proper specification. For example, the base metal must reach a certain temperature during the welding

process, must cool at a specific rate, and must be welded with compatible materials or the joint may not be

strong enough to hold the parts together, or cracks may form in the weld causing it to fail. The typical welding

defects (lack of fusion of the weld to the base metal, cracks or porosity inside the weld, and variations in weld

density) could cause a structure to break or a pipeline to rupture.

Welds may be tested using NDT techniques such as industrial radiography or industrial CT scanning using X-rays or gamma rays, ultrasonic testing, liquid penetrant testing or eddy-current testing. In a proper weld, these tests would indicate a lack of cracks in the radiograph, show clear passage of sound through the weld and back, or indicate a clear surface without penetrant captured in cracks.
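The "passage of sound through the weld and back" relies on the pulse-echo principle: a reflector (such as a crack or the back wall) at depth d returns an echo after a round-trip time t, so that, with v the sound velocity in the material,

d = \frac{v\,t}{2}

For example, with v of roughly 5900 m/s for longitudinal waves in steel, an echo arriving 10 microseconds after the pulse places the reflector at about 29.5 mm. (This is standard ultrasonic practice, added here for illustration; it is not stated in the text.)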


Welding techniques may also be actively monitored with acoustic emission techniques before production to

design the best set of parameters to use to properly join two materials.[5]

Structural mechanics

Structures can be complex systems that undergo different loads during their lifetime. Some complex structures,

such as the turbomachinery in a liquid-fuel rocket, can also cost millions of dollars. Engineers will commonly

model these structures as coupled second-order systems, approximating dynamic structure components

with springs, masses, and dampers. These sets of differential equations can be used to derive a transfer

function that models the behavior of the system.

In NDT, the structure undergoes a dynamic input, such as the tap of a hammer or a controlled impulse. Key

properties, such as displacement or acceleration at different points of the structure, are measured as the

corresponding output. This output is recorded and compared to the corresponding output given by the transfer

function and the known input. Differences may indicate an inappropriate model (which may alert engineers to

unpredicted instabilities or performance outside of tolerances), failed components, or an inadequate control

system.
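A minimal sketch of the modelling idea described above, using a single-degree-of-freedom mass-spring-damper as the "structure"; all parameter values are illustrative, and the "measured" signal is simply the model prediction plus noise, standing in for real sensor data:

import numpy as np
from scipy import signal

# Model m*x'' + c*x' + k*x = f(t); transfer function X(s)/F(s) = 1/(m s^2 + c s + k).
m, c, k = 2.0, 8.0, 5.0e4                 # illustrative mass, damping, stiffness
model = signal.TransferFunction([1.0], [m, c, k])

t = np.linspace(0.0, 1.0, 2000)
f = np.zeros_like(t)
f[:20] = 100.0                            # approximate hammer tap: 100 N for ~10 ms
_, x_pred, _ = signal.lsim(model, U=f, T=t)

# "Measured" displacement: prediction plus sensor-like noise.
x_meas = x_pred + np.random.default_rng(1).normal(0.0, 1e-5, size=t.size)
residual = np.sqrt(np.mean((x_meas - x_pred) ** 2))
print(f"RMS difference between measurement and model: {residual:.2e} m")

A persistently large residual would be the kind of discrepancy that points to an inadequate model, a failed component, or a control problem.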

Radiography in medicine

Chest radiography indicating a peripheral bronchial carcinoma.

As a system, the human body is difficult to model as a complete transfer function. Elements of the body,

however, such as bones or molecules, have a known response to certain radiographic inputs, such as x-

rays or magnetic resonance. Coupled with the controlled introduction of a known element, such as digested barium, radiography can be used to image parts or functions of the body by measuring and interpreting the response to the radiographic input. In this manner, many bone fractures and diseases may be detected and

localized in preparation for treatment. X-rays may also be used to examine the interior of mechanical systems in manufacturing using NDT techniques.


Notable events in early industrial NDT

1854 Hartford, Connecticut: a boiler at the Fales and Gray Car works explodes, killing 21 people and

seriously injuring 50. Within a decade, the State of Connecticut passes a law requiring annual inspection

(in this case visual) of boilers.

1880 - 1920 The "Oil and Whiting" method of crack detection is used in the railroad industry to find cracks

in heavy steel parts. (A part is soaked in thinned oil, then painted with a white coating that dries to a

powder. Oil seeping out from cracks turns the white powder brown, allowing the cracks to be detected.)

This was the precursor to modern liquid penetrant tests.

1895 Wilhelm Conrad Röntgen discovers what are now known as X-rays. In his first paper he discusses

the possibility of flaw detection.

1920 Dr. H. H. Lester begins development of industrial radiography for metals.

1924 Lester uses radiography to examine castings to be installed in a Boston Edison Company steam pressure power plant.[1]

1926 The first electromagnetic eddy current instrument is available to measure material thicknesses.

1927 - 1928 Magnetic induction system to detect flaws in railroad track developed by Dr. Elmer Sperry and

H.C. Drake.

1929 Magnetic particle methods and equipment pioneered (A.V. DeForest and F.B. Doane.)

1930s Robert F. Mehl demonstrates radiographic imaging using gamma radiation from Radium, which can

examine thicker components than the low-energy X-ray machines available at the time.

1935 - 1940 Liquid penetrant tests developed (Betz, Doane, and DeForest)

1935 - 1940s Eddy current instruments developed (H.C. Knerr, C. Farrow, Theo Zuschlag, and Fr. F.

Foerster).

1940 - 1944 Ultrasonic test method developed in USA by Dr. Floyd Firestone.

1950 The Schmidt Hammer (also known as "Swiss Hammer") is invented. The instrument uses the world’s

first patented non-destructive testing method for concrete.

1950 J. Kaiser introduces acoustic emission as an NDT method.

(Source: Hellier, 2001) Note the number of advancements made during the WWII era, a time when industrial

quality control was growing in importance.

Applications

NDT is used in a variety of settings covering a wide range of industrial activity, with new NDT methods and applications being continuously developed. NDT services[6] are not only integrated with Asset Integrity Management (AIM) solutions, but also with material testing laboratories, and fit seamlessly into supply chain services.


Automotive

Engine  parts

Frame

Aviation  / Aerospace

Airframes

Spaceframes

Powerplants

Propellers

Reciprocating Engines

Gas turbine engines

Rocketry

Construction

Structures

Bridges

Cover Meter

Maintenance, repair and operations

Bridges

Manufacturing

Machine parts

Castings  and Forgings

Fabrication Inspection

Industrial plants such as Nuclear, Petrochemical, Power, Refineries, Pulp and Paper, Fabrication shops, Mine processing, and their Risk Based Inspection programmes.

Pressure vessels

Storage tanks

Welds

Boilers

Heat exchangers

Turbine  bores

In-plant Piping

Full Storage tank Assessment

Shutdown Inspections


In-service Equipment Inspections

Miscellaneous

Pipelines

In-line Inspection  using "pigs"

Pipeline integrity management

Leak Detection

Pipeline Open Data Standard

ASME  Pressure Vessel and Piping as-built Inspections

Piping  and Pressure Vessel Corrosion Monitoring

Railways

Rail Inspection

Wheel Inspection

Tubular NDT, for tubing material

Corrosion Under Insulation  (CUI)

Amusement park  rides

Submarines  and other Naval warships

Wire Rope Testing for Crane Wires, Mooring Wires, and Rope-way Wires

Medical imaging  applications (see also Medical physics)

Methods and techniques

An example of a 3D replicating technique. The flexible high-resolution replicas allow surfaces to be examined and measured

under laboratory conditions. A replica can be taken from all solid materials.


NDT is divided into various methods of nondestructive testing, each based on a particular scientific principle.

These methods may be further subdivided into various techniques. The various methods and techniques, due

to their particular natures, may lend themselves especially well to certain applications and be of little or no

value at all in other applications. Therefore, choosing the right method and technique is an important part of the

performance of NDT.

Acoustic emission  testing (AE or AT)

Blue Etch Anodize  (BEA)

Dye penetrant inspection / Liquid penetrant testing (PT or LPI)

Electromagnetic testing  (ET)

Alternating current field measurement  (ACFM)

Alternating current potential drop measurement  (ACPD)

Barkhausen  testing

Direct current potential drop measurement  (DCPD)

Eddy-current testing  (ECT)

Magnetic flux leakage  testing (MFL) for pipelines, tank floors, and wire rope

Magnetic-particle inspection  (MT or MPI)

Remote field testing  (RFT)

Ellipsometry

Guided wave testing  (GWT)

Hardness testing

Impulse excitation technique  (IET)

Infrared and thermal testing  (IR)

Thermographic inspection

Laser testing

Electronic speckle pattern interferometry

Holographic interferometry

Low coherence interferometry

Profilometry

Shearography

Leak testing  (LT) or Leak detection

Absolute pressure leak testing (pressure change)

Bubble testing

Halogen diode leak testing


Hydrogen leak testing

Mass spectrometer leak testing

Tracer-gas leak testing method (helium, hydrogen and refrigerant gases)

Magnetic resonance imaging  (MRI) and NMR spectroscopy

Metallographic replicas [7] [8]

Near-infrared spectroscopy  (NIRS)

Optical microscopy

Positive Material Identification (PMI)

Radiographic testing  (RT) (see also Industrial radiography and Radiography)

Computed radiography

Digital radiography  (real-time)

Neutron radiographic testing (NR)

SCAR (Small Controlled Area Radiography)

X-ray computed tomography  (CT)

Scanning electron microscopy

Surface Temper Etch  (Nital Etch)

Ultrasonic testing  (UT)

ART (Acoustic Resonance Technology)

Electro Magnetic Acoustic Transducer  (EMAT) (non-contact)

Laser ultrasonics  (LUT)

Internal rotary inspection system  (IRIS) ultrasonics for tubes

Phased array ultrasonics

Time of flight diffraction ultrasonics  (TOFD)

Time of Flight Ultrasonic Determination of 3D Elastic Constants  (TOF)

Vibration Analysis

Visual inspection  (VT)

Pipeline video inspection

Corroscan/C-scan

IRIS - Internal Rotary Inspection System

3D Computed Tomography

Industrial CT Scanning

Heat Exchanger Life Assessment System

RTJ Flange Special Ultrasonic Testing


Personnel training, qualification and certification

Successful and consistent application of nondestructive testing techniques depends heavily on personnel

training, experience and integrity. Personnel involved in application of industrial NDT methods and

interpretation of results should be certified, and in some industrial sectors certification is enforced by law or by

the applied codes and standards.

Definitions

The following definitions for qualification and certification are given in ISO 9712[9] and EN 473[10]:

Certification: "Procedure, used by the certification body to confirm that the qualification requirements for a

method, level and sector have been fulfilled, leading to the issuing of a certificate".

Qualification: "Demonstration of physical attributes, knowledge, skill, training and experience required to

properly perform NDT tasks".

In US standards and codes, while a very similar definition of qualification is included in ASNT SNT-TC-1A,

certification is simply defined as: "Written testimony of qualification".

Training

Non-destructive testing (NDT) training is provided for people working in many industries. It is generally necessary that the candidate successfully completes a theoretical and practical training program and performs several hundred hours of practical application of the particular method in which they wish to be trained. At this point, they may pass a certification examination.[11] NDT training has also become available online from providers such as WorldSpec.org.

Certification schemes

There are two approaches in personnel certification:[12]

1. Employer Based Certification: Under this concept the employer compiles their own Written Practice.

The written practice defines the responsibilities of each level of certification, as implemented by the

company, and describes the training, experience and examination requirements for each level of

certification. In industrial sectors the written practices are usually based on recommended practice

SNT-TC-1A of the American Society for Nondestructive Testing.[13] ANSI standard CP-189 outlines

requirements for any written practice that conforms to the standard.[14]

2. Personal Central Certification: The concept of central certification is that an NDT operator can obtain

certification from a central certification authority that is recognized by most employers, third parties

and/or government authorities. Industrial standards for central certification schemes include ISO 9712,[9] EN 473,[10] and ACCP.[15] Certification under these standards involves training, work experience

under supervision and passing a written and practical examination set up by the independent

certification authority.

In the United States, employer-based schemes are the norm; however, central certification schemes exist as well. The most notable is ASNT Level III (established in 1976-1977), which is organized by the American Society for Nondestructive Testing for Level 3 NDT personnel.[16] NAVSEA 250-1500 is another US central

certification scheme, specifically developed for use in the naval nuclear program.[17]

Central certification is more widely used in the European Union, where certifications are issued by accredited

bodies (independent organizations conforming to ISO 17024 and accredited by a national accreditation

authority like UKAS). The Pressure Equipment Directive (97/23/EC) actually enforces central personnel

certification for the initial testing of steam boilers and some categories of pressure vessels and piping.

[18] European Standards harmonized with this directive specify personnel certification to EN 473. Certifications

issued by a national NDT society which is a member of the European Federation of NDT (EFNDT) are mutually accepted by the other member societies[19] under a multilateral recognition agreement.

Canada also implements an ISO 9712 central certification scheme, which is administered by Natural

Resources Canada, a government department.[20][21][22]

The aerospace sector worldwide sticks to employer-based schemes.[23] In America these are based mostly on AIA-NAS-410[24] and in the European Union on the equivalent and very similar standard EN 4179.[25]

Levels of certification

Most NDT personnel certification schemes listed above specify three "levels" of qualification and/or

certification, usually designated as Level 1, Level 2 and Level 3 (although some codes specify roman numerals,

like Level II). The roles and responsibilities of personnel in each level are generally as follows (there are slight

differences or variations between different codes and standards):

Level 1 are technicians qualified to perform only specific calibrations and tests under close supervision

and direction by higher level personnel. They can only report test results. Normally they work following

specific work instructions for testing procedures and rejection criteria.

Level 2 are engineers or experienced technicians who are able to set up and calibrate testing equipment,

conduct the inspection according to codes and standards (instead of following work instructions) and

compile work instructions for Level 1 technicians. They are also authorized to report, interpret, evaluate

and document testing results. They can also supervise and train Level 1 technicians. In addition to testing


methods, they must be familiar with applicable codes and standards and have some knowledge of the

manufacture and service of tested products.

Level 3 are usually specialized engineers or very experienced technicians. They can establish NDT

techniques and procedures and interpret codes and standards. They also direct NDT laboratories and

have a central role in personnel certification. They are expected to have wider knowledge covering

materials, fabrication and product technology.

Terminology

The standard US terminology for Nondestructive testing is defined in standard ASTM E-1316.[26] Some

definitions may be different in European standard EN 1330.

Indication 

The response or evidence from an examination, such as a blip on the screen of an instrument.

Indications are classified as true or false. False indications are those caused by factors not related to

the principles of the testing method or by improper implementation of the method, like film damage in

radiography, electrical interference in ultrasonic testing etc. True indications are further classified

as relevant and non relevant. Relevant indications are those caused by flaws. Non relevant

indications are those caused by known features of the tested object, like gaps, threads, case

hardening etc.

Interpretation 

Determining if an indication is of a type to be investigated. For example, in electromagnetic testing,

indications from metal loss are considered flaws because they should usually be investigated, but

indications due to variations in the material properties may be harmless and nonrelevant.

Flaw 

A type of discontinuity that must be investigated to see if it is rejectable. For example, porosity in a

weld or metal loss.

Evaluation 

Determining if a flaw is rejectable. For example, is porosity in a weld larger than acceptable by code?

Defect 

A flaw that is rejectable — i.e. does not meet acceptance criteria. Defects are generally removed or

repaired.[26]

Penetrant testing 

Non-destructive test typically comprising a penetrant, a method of excess removal and a developer to

produce a visible indication of surface-breaking discontinuities.[27]


Reliability and statistics

Probability of detection (POD) tests are a standard way to evaluate a nondestructive testing technique in a

given set of circumstances, for example "What is the POD of lack of fusion flaws in pipe welds using manual

ultrasonic testing?" The POD will usually increase with flaw size. A common error in POD tests is to assume

that the percentage of flaws detected is the POD, whereas the percentage of flaws detected is merely the first

step in the analysis. Since the number of flaws tested is necessarily a limited number (non-infinite), statistical

methods must be used to determine the POD for all possible defects, beyond the limited number tested.

Another common error in POD tests is to define the statistical sampling units (test items) as flaws, whereas a

true sampling unit is an item that may or may not contain a flaw.[28] [29] Guidelines for correct application of

statistical methods to POD tests can be found in ASTM E2862, Standard Practice for Probability of Detection Analysis for Hit/Miss Data, and in MIL-HDBK-1823A, Nondestructive Evaluation System Reliability Assessment, a U.S. Department of Defense handbook.
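As a sketch of the statistical step described above (not the prescribed procedure of either document), hit/miss data can be fitted with a maximum-likelihood logistic POD model, from which summary values such as a90 (the flaw size detected with 90 % probability) are derived; all data values here are hypothetical, and the confidence bounds that a full analysis requires are omitted:

import numpy as np
import statsmodels.api as sm

# Hypothetical hit/miss data: each sampling unit is an inspected item
# containing a flaw of known size; 1 = detected, 0 = missed.
flaw_size_mm = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0])
hit          = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1,   1,   1  ])

# Logistic POD model: POD(a) = 1 / (1 + exp(-(b0 + b1*a))), fitted by maximum likelihood.
fit = sm.Logit(hit, sm.add_constant(flaw_size_mm)).fit(disp=False)
b0, b1 = fit.params

a90 = (np.log(0.9 / 0.1) - b0) / b1   # flaw size detected with 90 % probability
print(f"estimated a90 = {a90:.2f} mm")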