
Metrology Basics

Answers to Questions You Were Afraid To Ask Your Mother About....

Metrology (The Science of Measurement)

Weights and Measures

Weights and measures are standard units used to express specific quantities such as weight, capacity, or length. They are defined in terms of units and standards. A unit is the name of a given quantity such as a yard or pound; and a standard, until recently, has been the physical embodiment of a unit, something concrete that can be seen or touched. Now, however, the speed of light in a vacuum is used to define the meter, and only the kilogram standard remains a physical artifact. The science of weights and measures, called metrology, facilitates commercial, industrial, and scientific interaction.

Prehistoric Times

Concepts of size and value were grasped very early by humanoids. Both within and between tribes, interaction of individuals was necessary for basic survival: survival probability improved as a function of tribe population (read: size), and it was understood that the bigger the kill, the more food there would be. This is not to mention the fact that some prey fight back and some don't...


Marc T. Smith
Cayman Systems [email protected]
(513) 777-3394


Long ago, but maybe not so very far away, human beings needed ways to compare things. Between themselves, they probably compared fish, game, vegetables, hunting points and the like. (Who knows - the 'wives' might have compared cave sizes!) The concept of difference implies comparison to a known 'standard'. These first standards might be thought of as concepts or thoughts, as opposed to today, when we have highly transferable standards (e.g., gauge blocks, or 'Jo blocks'). Long ago one might have looked at an antelope and thought: "That one is big enough to feed my family for 5 days. The other one is smaller and will not last 3 days. I will try to subdue the large one so that I may rest for an extra day." Big versus small implies a standard of size. The association between the size of the kill and how many people it could feed was surely made.

Systems of weights and measures evolved gradually and were subject to many influences. Thus, it is difficult to trace a clear or logical course. Counting was probably the earliest form of measure. Among prehistoric communities quantities of the principal product of each tribe often were used as a unit of barter. For example, a farmer might trade 20 handfuls of grain to a shepherd for a sheep. This simple barter system endured for thousands of years.

The development and application of linear measure probably took place between 10,000 and 8000 BC, before the development of measures of weight or capacity. The units of measure in these early systems were based on natural objects. People realized that some simple proportions exist among certain dimensions of the human body, so that it was logical to use parts of the body for standards of linear measure. For example, the Egyptians defined the cubit as the distance between the elbow and the tip of the middle finger.

Early humans also learned that there was a uniformity of weight among similar seeds and grains, and so these became used as standards of weight. The carat, used by modern jewelers as a unit for weighing precious stones, was derived from the carob seed. The grain, which is still used in many areas of the world as a unit of weight, was originally the weight of a grain of wheat or barley. Other arbitrary measures of capacity were used: for example, cupped hands, hollow gourds, pots, and baskets.

All these methods of measurement depended on units that varied greatly. As primitive societies became more sophisticated, the need arose for a standardized system of weights and measures.

Egypt

Probably the earliest known unit of linear measure is the cubit. Used by the Egyptians, Sumerians, Babylonians, and Hebrews as a prime unit, its origin is uncertain. Because the distance between the elbow and the tip of the middle finger varied from person to person (from about 17.6 to 20 in), so did the cubits used among various civilizations. The Egyptians had two cubits, a short one of 0.45 m (17.7 in) and a royal cubit of 0.524 m (20.6 in). The Egyptian royal cubit was divided into units of seven palms, the palm being the width of four fingers. In turn, each palm could be subdivided into four digits, the breadth of the middle finger.
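To make the subdivision concrete, here is a minimal sketch in Python of the royal-cubit conversions described above (the 0.524 m value and the 7-palm, 4-digit subdivisions are the figures quoted in the text; the function name is just for illustration):

```python
# Subdivisions of the Egyptian royal cubit, per the text:
# 1 royal cubit = 7 palms; 1 palm = 4 digits.
ROYAL_CUBIT_M = 0.524    # meters
PALMS_PER_CUBIT = 7
DIGITS_PER_PALM = 4

def digits_to_meters(digits: float) -> float:
    """Convert a length in digits to meters via palms and royal cubits."""
    palms = digits / DIGITS_PER_PALM
    cubits = palms / PALMS_PER_CUBIT
    return cubits * ROYAL_CUBIT_M

print(f"1 digit   = {digits_to_meters(1) * 1000:.1f} mm")    # about 18.7 mm
print(f"28 digits = {digits_to_meters(28):.3f} m (one royal cubit)")
```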


Egyptian weight standards were based on a unit of grain. For example, in the decimal kedet system, 140-150 grains made up a kedet; ten kedets were a deben; and ten debens equaled a sep.

Greco et Roma

The Greeks probably adopted some of their standards from the systems developed by the Egyptians and Babylonians. The Greeks, however, also introduced a new unit, the foot, which they usually divided into 16 fingers. When political, military, and commercial power in the Mediterranean passed from the Greeks to the Romans, the latter adopted and modified the Greek system of measurement.

One of the most important contributions of Rome to the modern system of measurement was the division of the foot into 12 units called inches. Five feet equaled one pace (a pace being one double step), and 1,000 paces equaled a Roman mile. The 5,000 feet of the Roman mile is closely related to the modern mile of 5,280 feet.

The Romans used two different systems of weights. The smallest unit was the uncia, from which the term ounce (abbreviated oz) is derived. One system had 16 unciae, or ounces, to one pondus, or pound. A second system used a pound called the libra (abbreviated lb), which equaled 12 unciae.

Roman systems of weights and measures were among the many customs adopted by the peoples conquered by the Romans throughout Europe and western Asia. With the decline of imperial power in the 3d century AD, however, arbitrary local systems emerged. This led to widespread confusion.


Middle Ages

In the Middle Ages, little uniformity existed among systems. Not only did each country have its own standard but frequently there were differences within the country. In medieval France, for example, weight units varied not only between provinces but also between cities. To complicate matters further, a new monarch might declare the previous monarch's system invalid and initiate his own. By the 9th century there was great confusion; errors, frauds, misunderstandings, and disputes appeared in the marketplace. Several attempts were made to simplify and consolidate the numerous and conflicting systems, but most attempts at reform were ineffectual. France did not achieve complete uniformity in weights and measures until the metric system was established in the early 19th century.

As in France, England lacked standardized systems of weights and measures. In about 1300, London merchants, influenced by the French, adopted a weight system called avoirdupois (from the old French term, aveir-de-peis), meaning "goods of weight." This system, used to weigh bulkier goods, was based on a pound of 7,000 grains or 16 ounces. It is still used in many English-speaking countries.

A second system of weighing was established during the 15th century. The troy system, probably named for the ounce of Troyes, France, was used by jewelers to weigh gold, silver, and precious metals. The troy ounce is made up of 480 grains, with 12 ounces to the pound. Although the actual standards vary, both the troy and avoirdupois pounds are similar to the Roman pounds.

The system that had been developed by medieval apothecaries and used until very recently was similar to the troy system, with 5,760 grains or 12 ounces to the pound. Now, however, most pharmacists and physicians use the metric system.

Metric System

The creation of the METRIC SYSTEM was one of the most significant contributions of the French Revolution. It was initially adopted in 1793 and, though discarded in 1812, was reinstated in 1840 as the only legal and permissible standard system of weights and measures for France.

The metric system is a decimal system based on multiples of ten. Its basic unit, the meter (from the Greek metron, which means "measure"), was first defined as one ten-millionth of the distance from the North Pole to the equator, as computed from a survey of the meridian arc between Dunkerque and Barcelona. From 1960 to 1983 the meter was defined in terms of the wavelength of light given off by krypton-86, an isotope of the element krypton. In 1983 the meter was again redefined as the length of the path traveled by light in a vacuum in 1/299,792,458 of a second. This standard is laboratory reproducible.
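Because the speed of light is now fixed by definition, the 1983 meter can be verified with exact arithmetic. A minimal sketch in Python (using rational arithmetic so the result is exact):

```python
from fractions import Fraction

# Speed of light in a vacuum, exact by definition since 1983 (m/s).
C = 299_792_458

# The meter: the distance light travels in 1/299,792,458 of a second.
t = Fraction(1, C)       # seconds, kept exact as a rational number
print(C * t)             # prints 1 (exactly one meter)

# For scale: in one millisecond light covers nearly 300 km.
print(f"{C * 1e-3:,.3f} m in 1 ms")
```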

Today the acceptance of the metric system is almost worldwide. When Great Britain joined the Common Market in 1973 it became imperative for the country to adopt metric weights and measures. The United States is the only major power that lags behind the rest of the world in accepting the metric system, but the needs of international trade and other pressures are very slowly forcing it toward the process of conversion.


United States

As early as 1790, Thomas Jefferson proposed that the United States adopt a decimal system, but his suggestion was rejected by Congress, and a system based on old English weights and measures continued to be used.

In 1866 the metric system was made legal but not obligatory in the United States. Science and industry made some slow progress in adopting the metric system, but in daily life the traditional standards remained in use. Attempts in the United States to change over to the metric system have not been entirely successful. Most of the resistance is due to psychological inertia. Also, a nationwide change in standards would involve great expense.

International System of Units

Just as primitive humans had to create more sophisticated systems of measure as life became more complex, advances in modern science and technology have created a need for new units of measure. In 1960 the Eleventh General Conference on Weights and Measures in Paris adopted a new and comprehensive system of weights and measures: the International System of Units, or Système International (abbreviated SI). This system was originally based on six units: the meter to measure length, the kilogram for mass, the second for time, the AMPERE for electric current, the kelvin for thermodynamic temperature, and the CANDELA for light intensity. (A seventh base unit, the mole, for amount of substance, was added in 1971.) Not only does this system give unity to worldwide interchange, it also is adaptable to scientific advancement.


Measurement!

Measurement is the process of obtaining quantitative information about the physical world. Methods for the collection of numerical data and for the estimation of measurement inaccuracy are intimately associated with the growth of technology. This article discusses the methods that technology has developed for measuring particular fundamental quantities.

Units & Standards

Any measurement must involve the comparison of the measured quantity with a known standard unit. In absolute measurement, the unit may be the official unit for the quantity considered, such as the meter or the ampere. In relative measurement, a special reference unit is chosen for a given measurement; for example, the brightness of a star is expressed in terms of the brightness of another star.

A length of 3.6 meters means that the measured length is 3.6 times as large as a standard length, in this case the meter. Until 1960, the standard meter was equal to the length of a prototype meter bar kept in Paris. It was then redefined as 1,650,763.73 times the wavelength of the radiation emitted at a specified energy level by krypton-86. In 1983 it was redefined as the length of the path traveled by light in a vacuum during a time interval of 1/299,792,458 of a second. This definition has the great advantage of being reproducible in any well-equipped laboratory, rather than depending on an actual object. The wide variety of units and standards employed worldwide are similarly based on physical quantities.

The presently agreed-on system of units used for scientific work in many countries, known as the International System of Units, or simply the SI system, is based on the mks (meter-kilogram-second) system and contains seven base units: length (meter), mass (kilogram), time (second), temperature (kelvin), luminous intensity (candela), amount of substance (mole), and electric current (ampere). The mole is a dimensionless chemical unit that cannot be measured directly; the others are directly measurable.

Measurement of Basic Nonelectrical Quantities

In addition to the basic dimensionally independent SI quantities, there are many other measurements closely related to these basic quantities. Thus, measurement of thermal conductivity is related to measurement of temperature, even though the two have differing dimensions. In order to measure the heat conduction in a copper rod, for example, one must determine both the rod length and the temperature at each end of the rod. The quantity of heat transmitted from one end to the other within a certain time is then determined. This quantity may be determined by measuring the initial and final temperatures of a known amount of water into which one end of the rod is immersed. The survey below deals with the principles of measuring the standard quantities as well as certain derived quantities.

Length Measurements


Length measurements play a special role in measurement technology, because nearly all other analog measurements (those involving continuous--rather than stepwise, or digital--monitoring) may be reduced to measurements of length. The simplest measurement of this kind is carried out with a ruler, in which case an accuracy of approximately 1 mm can be achieved. A VERNIER caliper will correctly measure to 0.05 mm, while an accuracy of 0.01 mm is possible with a screw MICROMETER. Accurate length comparisons to 0.001 mm are possible with CALIPERS or end gauges. These are then used as length substandards, with which it is only possible to determine whether an object has the same dimensions as the substandard. Measurements of even smaller lengths are performed by the optical enlargement of an image with a microscope; the enlargement may then be measured against an ordinary scale. In this case, absolute measurement requires a knowledge of the magnification of the microscope, and since the precision with which this is known is never very great, the measurement itself cannot be considered very accurate. The most accurate length measurements require the use of an INTERFEROMETER. With this instrument, a measured distance may be compared to a given wavelength of light, which is accurately known. The standard meter is now based on such measurements.

The determination of such quantities as angles, areas, deformation, and velocity depends on accurate measurement of length. Angular measurements can be derived from length measurements if a circular ruler is used. If a straight ruler is used, the values found must be converted by using trigonometric functions. The most sensitive systems for measuring angles use mirrors in which light rays are reflected on scales. It is remarkable that there are no direct methods of measuring the areas of arbitrary surfaces, although the determination of areas for regular surfaces through mathematical relationships (such as length times width for the rectangle) again depends on a knowledge of length. Similarly, the deformation of an object by a force can be measured as a displacement, or a change in length. A well-known example is the stretched-spring principle used in many pointer-type instruments. The pointer rests at a position of equilibrium between an acting force and the restoring force exerted by the spring. Deformations are also used in various pressure gauges and the MANOMETER. Finally, velocity measurements often involve measurements of a path traversed by an object during a known time interval. This is not only true for the classical methods for determining the velocity of light, but also for many more commonplace velocity measurements.

Caliper

{kal'-i-pur}

The caliper is a centuries-old mechanical instrument for manual MEASUREMENT of small lengths. Once constructed of wood and now made of tool steel, a caliper is configured like a drawing compass, with a specially shaped tip on each leg for contacting points or edges of surfaces to determine inside or outside dimensions. A fixed caliper compares measurements against a standard. An adjustable caliper uses a calibrated screw for direct reading and, when equipped with a VERNIER scale, achieves an accuracy of .025 mm (.001 in). Electronic and pneumatic calipers improve both speed and accuracy of measurement and are widely used for automatic gauging of metal and plastic parts.

Sextant

A sextant is an optical instrument used in navigation to measure the angles of celestial bodies above the horizon from the observer's position. In use since the mid-18th century, the sextant is so named because the early instrument had a calibrated arc (the limb) that is one-sixth of a circle (60 degrees). Early marine sextants were hand held and leveled on the horizon. In modern sextants, which use arcs of greater or lesser size than the original type, the light ray from the celestial body is reflected in two mirrors (in series), one of which is adjustable and the other of which is half-silvered. By rotating one mirror and its attached index bar, the image of the body is brought down to the horizon. The rotation measures the altitude on the limb. Air sextants, used in aircraft navigation, use a bubble encased in fluid, a pendulum, or a gyroscope to create an artificial horizon by which altitudes can be measured.

Nomogram

A nomogram, also known as a nomograph or an alignment chart, expresses a relation among three variables as a chart consisting of three (usually parallel) scales. Each scale is calibrated for one variable in such a way that, if the values of any two variables are known, the corresponding value of the remaining variable can easily, though not necessarily precisely, be read off at the point where a straight line joining the two given values intersects the third scale. This straight line is known as an isopleth. In general terms, a nomogram of several variables is a chart in which the value of one variable can quickly be determined when the values of all the remaining variables are specified.

Mass Measurements

The mass m of an object is measured by means of the force W, or weight, exerted upon it by the Earth. This force is related to the mass through the expression W = mg, where g is the known acceleration of gravity. This acceleration may in turn be measured as the distance a falling object covers within a given time frame. The simplest means for measuring mass is to use a spring balance; however, such a device is inherently inaccurate. The measurement of mass is therefore usually carried out as a null measurement, in which one mass is compared to another, known mass. A BALANCE is used for this purpose. Comparison is made through the use of several standard weights, which generally cannot be lighter than several milligrams.
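A minimal sketch of the W = mg relation (standard gravity is assumed; the balance comparison described next sidesteps g entirely, because g acts equally on both masses):

```python
# Mass from weight via W = m * g.
G_STANDARD = 9.80665   # m/s^2, standard acceleration of gravity (assumed)

def mass_from_weight(weight_newtons: float, g: float = G_STANDARD) -> float:
    """Return the mass (kg) corresponding to a measured weight (N)."""
    return weight_newtons / g

# An object weighing 49.03 N has a mass of very nearly 5 kg:
print(f"{mass_from_weight(49.03):.3f} kg")
```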

Smaller differences in weight are compared to a force exerted upon the balance in one of three ways. In a beam balance, the counterforce is related to the inclination of the balance beam. In a torsion balance, the balance beam is returned to its equilibrium position by the torque exerted by a wire. This torsion wire can also be used as the suspension for the beam in a very sensitive system. The principle of a third type, the electrical balance, is similar to that of the torsion balance, but the force (known here as the Lorentz force) is exerted by a specific current that flows through a coil placed in a magnetic field.

A wide variety of measurements are performed as weight determinations. Forces can be measured by letting them act upon one arm of a balance. Accelerations can be measured by determining the forces acting on a known mass in a given time frame. This method also permits the measurement of rotational speeds, since the number of revolutions per second can be determined from a known centrifugal acceleration.

Measurements of volume and density (mass per unit volume) are also closely related to determinations of mass. Knowledge of the density of an object permits direct computation of its volume by weighing. By the same token, density can be determined by weighing a known volume of the substance considered. It is always possible to measure the volume of gases and liquids; if the volume of a solid cannot be determined directly, one can use a pycnometer, in which the solid is immersed in a liquid and the displaced volume of liquid is measured. Finally, the pressure (force per unit area) that causes a column of liquid to rise in a manometer can be used to compute the weight of the raised liquid, if the surface area over which the pressure acts is known and the level difference between the two legs of the manometer is measured.
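A minimal sketch of both relations just described (density from a weighed, known volume, and pressure from a manometer column via the standard relation p = rho * g * h; the sample values are made up for illustration):

```python
# Density and manometer-pressure calculations.
G = 9.80665   # m/s^2

def density(mass_kg: float, volume_m3: float) -> float:
    """Density (kg/m^3) = mass / volume."""
    return mass_kg / volume_m3

def manometer_pressure(level_difference_m: float, fluid_density: float) -> float:
    """Pressure difference (Pa) supporting a liquid column: p = rho * g * h."""
    return fluid_density * G * level_difference_m

# 0.5 kg solid displacing 0.000185 m^3 of liquid in a pycnometer:
print(f"density  = {density(0.5, 0.000185):.0f} kg/m^3")
# 0.25 m level difference in a water manometer (rho = 1000 kg/m^3):
print(f"pressure = {manometer_pressure(0.25, 1000.0):.0f} Pa")
```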

Balance (measurement)

A balance is a mechanical device for weighing. The term is properly applied to an instrument that opposes equal weights in two pans suspended from the ends of a lever that has its fulcrum precisely in the middle. The balance is basically a lever in which equal force is applied to its two arms at points equidistant from the fulcrum. Balances were used by the Egyptians as early as 5000 BC and have been used ever since.

DEVELOPMENT

The most primitive balance was a straight stick suspended at midpoint by a string and having pouches attached at either end. By the Middle Ages much more precise devices were in use. These had a stone or metal fulcrum and balanced metal weighing-pans. In modern times balances are widely used in commerce, science, and industry. The basic concept of balancing forces at the ends of levers is still the most widely used principle in weighing, although not all balances have arms of equal length. Some use a short arm for the load and a long one for the counterbalancing weights. The ratio of the arm lengths, multiplied by the counterbalance weight, gives the true weight of the load. Many such devices are further modified by spring tension, which is used in place of weights; these devices are actually scales rather than balances.

TYPES

Analytical Balance

Instruments for weighing with extreme accuracy in very small amounts are called analytical balances. An accuracy of 1 part in 100,000,000 is theoretically possible using the best equipment. The simplest modern analytical balances are equal-arm balances with two pans, enclosed in a glass case to protect against dust and air currents. The material to be weighed is placed on one pan, and weights are added to the other until the two are in exact balance. When small weighings are made, the swings of the balance pointer to the right and left are averaged, and the deviation from absolute center is used to calculate the precise weight.

The fulcrum at the center of the beam is called a knife-edge. It is a triangular piece of hard material, such as synthetic sapphire, that balances on a smooth surface of the same material. This mounting arrangement is delicate and easily disturbed. It includes pan rests and supporting guides, which can be retracted to allow the pans to swing free during the weighing. A balance such as this can accept weights up to 200 g and weigh with a sensitivity of about 0.1 mg.

Many refinements of this basic design have been made. One such refinement is a damping device that shortens the time required for the swinging of the weighing arm to cease. This damping can be supplied by a partially closed cylinder that entraps air under the swinging arm.

Substitution Balance

In substitution balances, built-in weights counterbalance the load and are removed until the centering pointer comes to rest. The weights may be removed by levers actuated by buttons labeled with the corresponding weight values. The sum of the weights left on the beam may be indicated by a digital readout.


Apothecary Balance

Laboratory or apothecary balances for weighing heavier samples with less accuracy have a platform or swinging pan on the short end of the beam and sliding weights that may be moved along the long end to offset the load by furnishing greater leverage. Double- and triple-beam balances have riders of different weights on separate calibrated bars. Thus the heaviest rider may supply 0 to 100 g; the second, 0 to 10 g; and the third, 0 to 1 g. When all riders are in their highest position, they would exactly offset 111 g in the weighing pan.

The standard weights used with balances are made of special noncorrosive metal alloys, such as chrome-nickel steel, and are manufactured to conform to standards set by the U.S. National Bureau of Standards (now NIST). The tolerance within which they conform to these values is indicated by a grade marking. Weights should always be handled with forceps to keep oil, dirt, or moisture from altering their true values.

Time Measurements

Time measurements are always based on counting periodic phenomena, such as oscillations of atoms and molecules, electromagnetic oscillations in oscillators, and sound or mechanical vibrations. The use of these time standards results in a variety of clocks, including the ATOMIC CLOCK, the quartz crystal clock, and the common watch and pendulum clock.

Time measurement today is so pervasive you might not even realize it. Even when you're lying out in the sun, you have to think about.... yes - Time to Turn so You don't Burn!

Metronome

{met'-roh-nohm}

The metronome is a mechanical timing device used to maintain a tempo for music. It was invented by Johann Maelzel, a German inventor, who began to manufacture it in 1816. (The Dutch inventor Dietrich Nikolaus Winkel also built a metronome at about the same time.) An inverted pendulum powered by a spring is used to produce audible beats, ranging from 40 to 208 beats per minute, usually for the guidance of pianists. In Maelzel's design, an adjustable, sliding weight fits snugly on a calibrated pendulum rod and alters the period of the pendulum. The entire assembly is contained in a pyramid-shaped wood or plastic cabinet. Electrically actuated metronomes are also available today.

Temperature Measurements

There are two fundamental laws on which temperature measurements may be based. The best-known law is that of Robert Boyle and J. L. Gay-Lussac, according to which gas pressure depends on temperature. Temperature measurement thus becomes a pressure measurement that can be performed as a length measurement with the aid of a manometer.
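A minimal sketch of constant-volume gas thermometry (for an ideal gas at fixed volume, pressure is proportional to absolute temperature, so a pressure ratio yields a temperature ratio; the reference point and reading below are illustrative):

```python
# Constant-volume gas thermometry: P/T is constant, so T2 = T1 * (P2 / P1).

def temperature_from_pressure(p_measured: float, p_ref: float,
                              t_ref_kelvin: float) -> float:
    """Infer temperature (K) from a gas-pressure reading at constant volume."""
    return t_ref_kelvin * (p_measured / p_ref)

# Reference: 101,325 Pa at 273.15 K. A reading of 112,000 Pa implies:
t = temperature_from_pressure(112_000, 101_325, 273.15)
print(f"{t:.1f} K ({t - 273.15:.1f} deg C)")
```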

The radiation law, according to which the quantity of radiation emitted by a substance depends on temperature, is less well known. The measurement in this case is that of the radiation intensity as determined by a BOLOMETER. Several other methods of temperature measurement are also known. These make use of the THERMOCOUPLE, the THERMOMETER, bimetals, and vapor-pressure thermometers.

The field of calorimetry is closely related to temperature measurement. It involves determination of the initial and final temperatures of a given, weighed amount of a liquid whose specific heat is known. Calorimetry forms the basis for a determination of specific heat, thermal conductivity, and energy, through the conversion of energy, or work, into heat in a CALORIMETER.
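The underlying relation is Q = m * c * (T_final - T_initial). A minimal sketch (the specific heat of water is the standard value; the sample temperatures are made up):

```python
# Calorimetry: heat absorbed by a weighed liquid of known specific heat.

def heat_absorbed(mass_kg: float, specific_heat_j_per_kg_k: float,
                  t_initial_c: float, t_final_c: float) -> float:
    """Heat (J) absorbed: Q = m * c * delta-T."""
    return mass_kg * specific_heat_j_per_kg_k * (t_final_c - t_initial_c)

# 0.2 kg of water (c = 4186 J/(kg*K)) warming from 20.0 to 23.5 deg C:
print(f"{heat_absorbed(0.2, 4186, 20.0, 23.5):.0f} J")   # about 2930 J
```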

Geissler, Heinrich

{gys'-lur}

The German glassblower Johann Heinrich Wilhelm Geissler, b. May 26, 1814, d. Jan 24, 1879, earned an international reputation as a maker of glass scientific instruments. In the 1850s his standard thermometers, constructed of thin glass capillary tubes and carefully calibrated, were in demand by scientists throughout the world. Geissler also constructed glass tubes filled with rarefied gases and with electrodes sealed into the ends. These "Geissler tubes" became essential for chemists and physicists studying gases and electrical phenomena and eventually led to the discovery of cathode rays.


Measurement of Luminous Intensity

Luminous intensity may be determined by either absolute or relative measurements. Relative measurements involve comparison of the strength of an unknown light source to that of a known (and variable) light source.

This can be done very accurately through a visual null measurement by attenuating the light from the known source until both sources appear to be equally bright. Attenuation is possible through the use of a diaphragm or a light-absorbing prism, or by varying the distance to one of the light sources. Determination of the null point is then followed by the actual measurement.

Absolute light measurements are carried out by means of visual null comparison to a radiation standard, such as a piece of tungsten wire heated electrically to a specified temperature; the amount of radiation thus emitted is known from the radiation laws. Another type of absolute measurement makes use of calibrated radiation-absorption meters.

POLARIZED LIGHT is used with many measuring methods. The measurement of wavelengths, combined with a determination of the corresponding intensities, forms the basis of SPECTROSCOPY, which is of great importance in atomic physics, chemistry, and astronomy.

Turbidimeter

{tur-buh-dim'-uh-tur}

A turbidimeter is an instrument that measures the concentration of suspended particles in solutions, a very important factor in water and environmental control systems. John Tyndall observed (1860) that particles that are otherwise invisible are easily discernible when directly in the path of a strong light beam and viewed from the side; these particles become visible because they reflect some of the incident light. The measurement of the reflected light is directly related to the number of particles in suspension and is the basis of the turbidimeter. A PHOTOMETER may be calibrated in units of turbidity. A dual-beam turbidimeter can compare unknown samples directly against a standard solution. Typical uses of this instrument include control of beverage clarity, tanning operations, water and waste treatment, and determination of bacterial growth rates.

Polarimeter

{poh-luh-rim'-uh-tur}

The polarimeter is an optical instrument used to determine the chemical concentration of certain chemical substances by observing the interaction of these materials with polarized light. The same basic principles that are used in the POLARISCOPE apply to the polarimeter. The principles of polarimetric analysis date back to the early 1800s, when Jean Baptiste Biot pioneered research on optically active molecules. Kinetic polarimetry introduces the element of time to the study of reactions in which optically active materials participate, such as in enzyme assays.


In a simple visual polarimeter, the elements of the light path, starting from its source, are: (1) a light source, such as a sodium, mercury, or cadmium arc; (2) a filter to isolate a narrow band of wavelengths; (3) a polarizer prism system, usually a Nicol prism; (4) a sample tube in which the unknown solution is placed; and (5) a rotatable analyzer system (essentially identical with the polarizer) equipped with a graduated circular-reading scale. Concentration is determined by the amount of rotation, which can be read with a precision of 0.002 degrees.

In a photoelectric polarimeter, photocell sensors, a rotational servo system, and a meter readout replace visual observation and manual manipulation and greatly reduce the time required for measurements. A saccharimeter is a polarimeter specially designed and calibrated to measure sugar solutions; it is a basic tool used in sugar chemistry.

Transit Circle

A transit circle is an instrument designed to determine accurately the exact time at which a celestial object transits, or passes, an observer's celestial meridian, a reference circle in the equatorial COORDINATE SYSTEM used in astronomy. The instrument was invented by the Danish astronomer Ole Roemer in 1689. It consists of a telescope, generally 15 to 25 cm (6 to 10 in) in diameter, mounted so that it can rotate only in a north and south direction along the meridian. The telescope eyepiece contains a fine, movable vertical wire and a stationary grid of vertical wires, the central one of which coincides exactly with the meridian. The movable wire is kept centered on the observed object by means of a micrometer screw, and the exact moment of meridian transit is recorded. In modern times such recordings are made electronically.

Data obtained with transit circles are used to determine the right ascension of a celestial object, to make corrections to clocks used in observatories, and to determine the longitude of an observer's location. The declination, or angular distance of the object above the celestial equator, can also be read from a calibrated circle attached to the horizontal axis. More accurate positional data are obtained, however, with a meridian circle--a variant of the transit telescope--which contains a larger and more accurate declination circle.

Measurement of Electrical Quantities

The measurement of electric current is chiefly based on the 19th-century discovery by Hans Christian OERSTED and Michael FARADAY of the relationship between electricity and magnetism. The fundamental association is the phenomenon, first reported by Oersted in 1820, that an electric current passing through a conductor produces a magnetic field, which in turn exerts a force on other currents near it. Two currents thus always exert a force on each other (the law of Biot and Savart), and the measurement of the force exerted by one on the other gives a measure of the current. This fact is also used in the definition of the unit of current: one ampere is the current that gives rise to a force of 2 × 10^-7 newtons per meter of length between two straight parallel conductors of infinite length, in vacuo, spaced one meter apart and each carrying the current.
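A minimal sketch of the defining relation (the force per unit length between two long parallel conductors is F/L = 2 × 10^-7 * I1 * I2 / d newtons per meter, which is exactly 2 × 10^-7 N/m for two 1 A currents one meter apart):

```python
# Force per meter of length between two long parallel conductors:
# F/L = (mu0 / (2 * pi)) * I1 * I2 / d = 2e-7 * I1 * I2 / d  (N/m).

def force_per_meter(i1_amps: float, i2_amps: float, d_meters: float) -> float:
    """Magnetic force per meter of length between parallel currents."""
    return 2e-7 * i1_amps * i2_amps / d_meters

print(f"{force_per_meter(1.0, 1.0, 1.0):.1e} N/m")    # 2.0e-07, the ampere's definition
print(f"{force_per_meter(10.0, 10.0, 0.1):.1e} N/m")  # 2.0e-04, a more practical case
```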


In practical measuring instruments that make use of this phenomenon, the conductors consist of two coils with a large number of turns. By connecting these coils in different ways, such electrodynamometers as the AMMETER, the VOLTMETER, and the wattmeter may be constructed, as well as a class of instruments known as Ferraris meters. Measurement with an electrodynamometer is based on determining the force acting between the two coils through which current flows. This is effected by measuring the deviation from equilibrium of a free-turning coil opposed by a small spring. In Ferraris meters the two coils are fixed with respect to a piece of metal that is free to turn, and the measurement is based on the eddy currents induced in the metal. In all of these two-coil instruments, the magnetic field is weak when the current intensity is low, so that these meters are not very sensitive. However, they can measure both direct and alternating currents.

Another widely used electromagnetic current-measuring instrument is the moving-coil meter, or GALVANOMETER, in which the current to be measured flows through a coil suspended in a strong magnetic field induced by a permanent magnet rather than by the current itself. Such instruments measure only direct current.

The operation of a hot-wire meter is not based on the action of electromagnetic forces. The current in such an instrument flows through a resistance wire, causing it to heat and expand. The actual measurement is thus not an electrical but a thermal one, related to the electrical quantity. An oscilloscope, based on the electrostatic deflection of a beam of electrons sent between a pair of oppositely charged plates, can also be adapted for current and voltage measurements, but it is primarily of importance in investigations of periodic phenomena.

While certain electrical phenomena such as the photoelectric effect allow for direct measurement of an emitted electric current, other electrical measurements are performed with the aid of an external current or voltage source, so that the resistance, self-inductance, or capacitance can be determined. These measurements are in the end also based on determining the current intensity or voltage. They are performed chiefly with a WHEATSTONE BRIDGE, a circuit that requires a current source, a number of comparison resistances, and a calibrated POTENTIOMETER. The measurement is based on a null measurement of the voltage between two tapping points.

Ohm's law

Stated by Georg Simon Ohm in 1826, Ohm's law establishes the mathematical relationship between voltage (V), current (I), and RESISTANCE (R) as V = IR, for both alternating and direct currents (AC and DC). Sometimes this linear relationship is written as I = GV, where G is called the conductance; the unit of conductance is the mho (inverse ohm). Other essentially equivalent forms of Ohm's law are also used, such as J = σE, where J is the current density (current per unit area), E is the electric field, and the Greek lowercase letter sigma (σ) is the conductivity.

In its original simple form, Ohm's law applied only to steady DC situations. In AC circuit theory, when the circuit contains resistors, inductors, and capacitors, V = IZ, where Z is a complex number called the IMPEDANCE. The standard procedure is to measure the ratio between voltage and current in ohms even if this ratio may sometimes be a complex number. Resistance may also vary with time, or resistance may be nonlinear (depending on the magnitude of the voltage or current). In these cases Ohm's law for the instantaneous resistance R in a purely resistive circuit and the instantaneous impedance Z in a complex circuit is still valid.
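A minimal sketch of both forms, using Python's built-in complex numbers for the AC case (the component values are illustrative):

```python
import cmath
import math

# DC form of Ohm's law: V = I * R.
print(f"DC: V = {0.5 * 220.0:.1f} V")        # 0.5 A through 220 ohms

# AC form: V = I * Z, with Z a complex impedance.
# Series R-L circuit at 60 Hz: Z = R + j * omega * L.
R, L, f = 100.0, 0.5, 60.0
Z = complex(R, 2 * math.pi * f * L)          # about 100 + j188.5 ohms
V = 0.5 * Z                                  # 0.5 A taken as the phase reference
print(f"AC: |V| = {abs(V):.1f} V at {math.degrees(cmath.phase(V)):.1f} deg")
```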

Transducer Measurements

When direct measurements of a particular quantity are impossible, some other measurable quantity can often be found that is linked to the former by some law. Many measuring instruments convert one form of energy into another. This is called transduction; the converter itself is called a TRANSDUCER. Although transducers as a group involve many forms of energy, in practice it is often convenient to convert a quantity into an electrically measurable one. The measurement signal is then available in the form of a current or voltage signal that can be processed in a number of different ways that are either impossible or very difficult to do with mechanical signals. Common electrical transducers include the photoelectric cell, the Geiger counter, the thermocouple, and the piezoelectric crystal.

Pressure

Barometer

The barometer, the single most important meteorological instrument, measures atmospheric pressure.


Mercury Barometer

The invention of the mercury barometer (1643) by Evangelista TORRICELLI depended on his realization that air has weight. He noted that if the open end of a glass tube filled with mercury is inverted in a bowl of mercury, the atmospheric pressure on the bowl of mercury will affect the height of the column of mercury in the glass tube. The greater the air pressure, the higher the mercury column. The atmospheric pressure may be calculated by multiplying the height of the mercury column by the density of mercury and the acceleration due to gravity. At sea level, atmospheric pressure is equal to about 15 lb per sq in, or 29.9 in of mercury. This is equivalent to about 101.3 kilopascals, or 1,013 millibars, the units meteorologists now use.
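A minimal sketch of that computation (the density of mercury, about 13,595 kg/m^3 at 0 deg C, and standard gravity are assumed):

```python
# Atmospheric pressure from a mercury-column height: p = rho * g * h.
RHO_MERCURY = 13_595.0   # kg/m^3, at 0 deg C (assumed)
G = 9.80665              # m/s^2

def pressure_from_column(height_m: float) -> float:
    """Pressure (Pa) exerted by a mercury column of the given height."""
    return RHO_MERCURY * G * height_m

h = 29.9 * 0.0254        # 29.9 in of mercury, converted to meters
print(f"{pressure_from_column(h) / 1000:.1f} kPa")   # about 101.3 kPa
```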

Mercury is ideal for a liquid barometer, since its high density permits a short column, whereas a water barometer would be 10 m (33 ft) tall at one atmosphere. Another advantage of mercury is its negligible vapor pressure. This is important because the few mercury vapor molecules in the empty space above the mercury column will add only slightly to the pressure exerted by the mercury column itself. In a water barometer the vapor would cause an error of 2 percent at 15 deg C (59 deg F).

Aneroid Barometer

Most barometers are of the aneroid type and function without liquid. The aneroid barometer, dating from 1843, consists of a small metal box, almost totally evacuated of air. One side is immovable, and the opposite side is connected to a strong spring to keep the box from collapsing. The movable side will expand if the air pressure decreases and will compress if the air pressure increases. The position of the movable side is indicated by a pointer. An aneroid barometer is checked regularly against a mercury barometer for calibration.

The aneroid barometer can be easily converted into a barograph, or recording barometer, by adding a pen to the pointer. The ink in the pen describes a trace (barogram) on the paper wrapped around a cylinder. The cylinder usually rotates once a day or once a week.

The mercury barometer is used in research laboratories and in the most important weather stations. Aneroid barometers, used in the home, on board ships, and in all weather stations, are also a prominent part of RADIOSONDE instruments.

Instrumentation - Measurement Tools

Although an instrument, broadly defined, includes anything that performs or facilitates an action, in practice an important distinction is made between general instruments, which might range from a wheelbarrow to a scalpel, and scientific instruments. This article will deal with the latter category.

Scientific instruments may be divided into four broad classes according to their use: (1) instruments used in everyday life that depend on scientific or mathematical principles in their operation, including SUNDIALS, CAMERAS, SPEEDOMETERS, CLOCKS, and electrical gadgets; (2) devices employed by scientific practitioners to assist in the examination of natural phenomena, such as the astronomical TELESCOPE, MICROSCOPE, SPECTROSCOPE, and particle DETECTOR; (3) apparatus employed to produce phenomena for laboratory examination, including most chemical apparatus and demonstration apparatus; and (4) CALCULATORS, whose purpose is to facilitate mathematical operations at a simple or advanced level. Historically these four groups have always overlapped.


The history of instrumentation may be seen as the development of specialized instruments for specific tasks in response to refinements of scientific theory requiring greater precision in MEASUREMENT or the isolation of some particular phenomenon. The intimate relation between instrumentation and the generation and testing of scientific theories has always been apparent to the productive scientist. Between the scholar and the craftsman there has always been a close connection, so close that in every period of history examples can be found of the two roles combined by the same person, whether it be the 14th-century mathematician and horologist Richard of Wallingford, the 17th-century physicist, microscopist, and mechanician Robert Hooke, or the 20th-century bacteriologist Alexander Fleming, who discovered penicillin.

EARLY DEVELOPMENT OF PRACTICAL INSTRUMENTS

The earliest scientific instruments were astronomical and were concerned with timekeeping. In Megalithic times the movement of the Sun or Moon in relation to fixed landmarks was deliberately utilized to mark recurrent seasonal points or to aid the counting of the number of moons required to elapse between two events, such as harvesting and sowing. The ancient stone alignments at STONEHENGE and CARNAC may be considered the forerunners of the more elaborate mechanical instruments later used in astronomy and other sciences.

Another early instrument related to the measurement of time was the gnomon, an object set vertically that could be used as a seasonal indicator by the change in the minimum length of the shadow cast by the Sun. Known in ancient Mesopotamia, and apparently transmitted to the Greeks, the gnomon was an important instrument for astronomical operations throughout the Middle Ages and Renaissance. That it could also be used to indicate the time of day by the change in the length of its shadow was known from an early date, and by the 1st century AD it had been recognized that if the gnomon was inclined so as to lie parallel with the Earth's axis it would mark 24 hours that were equal in length with one another. This recognition was of fundamental importance for the subsequent development of sundials.

The development of geometry in the Hellenistic world made possible the development of a much wider range of instruments. Of these perhaps the most important was the ASTROLABE, which combined the functions of analog computer, observational instrument, and time-finder. Developing steadily throughout the following centuries, it was transmitted from Greece to the Islamic world and to Western Europe, where it remained in use until the end of the 17th century. In the Islamic world it retained its usefulness until comparatively recent times.

As a portable instrument for finding the time, the astrolabe was complemented by a range of portable sundials. These devices, which also originated in the Hellenistic world, slowly increased in number (especially after the invention of the magnetic COMPASS, which made portable direction dials possible) during the Middle Ages. The growth of instrument making as an independent manufacturing and retail trade from the 16th to the 18th century led practitioners to develop novel forms of sundials to increase both their reputation and trade. The growing popularity of domestic clocks in this period produced a demand for easily used sundials in order to check and set them. It was not until the advent of the electric telegraph, national systems of time distribution, and radio time signals that the sundial became irrelevant.

Of fundamental importance to the development of ancient instruments was the growth of a tradition of mechanical technology at Alexandria from the 3d century BC to the 1st century AD, associated with the names of Ctesibius, Philo, Hero, and Archimedes. This technology produced interesting gadgets, many of which employed automated figures and were controlled by water, strings, and levers. A parallel development produced geared or mechanical models of the planetary system, an example of which has survived in the remarkable Antikythera mechanism. Transmitted to China and to the Islamic world, this technology gave rise to a series of outstanding, complicated, water-driven astronomical models and clocks with hour sounding and automated figures.

Although direct contact between the West and this tradition was limited, some transmission did take place, and with the independent invention in Europe of an escapement to regulate weight-driven machines, the attempt to produce automatic astronomical models was given fresh vigor. The subsequent investigations into gearing and gear ratios for the production of accurate astronomical clocks and models were a major factor in the development of industrial machinery in the West, because the clock and instrument makers who built these elaborate devices provided a reservoir of skills not only for making other kinds of clockwork instruments such as ships' logs and timing mechanisms, but also for much of the early industrial machinery.

The range of instrumentation available to the medieval West was not large. For the mechanical operations of arithmetic some aid was offered by the ABACUS, while simple forms of protractors, levels, dividers, and set squares were available to the surveyor, the mason, and the geometer.

DEVELOPMENT OF LABORATORY INSTRUMENTS

Little is known about the only substantial group of medieval laboratory apparatus, the glass and earthenware utensils of the alchemist and the apothecary. Beginning in the late 16th and 17th centuries, under the impact of new scientific theories, systematic attempts were made to investigate nature, and to quantify these investigations through accurate measurements. New devices were developed and adapted to increasingly specialized tasks. New instruments stimulated new theories and vice versa. Thus the investigations of Evangelista Torricelli into air pressure led to the invention of the BAROMETER, which immediately made possible a wide new field of investigation into meteorological phenomena.

The most important developments during the scientific revolution of the 17th and 18th centuries were the invention of optical instruments such as the telescope and microscope, electrical research instruments such as the LEYDEN JAR and electroscope, and research into the pendulum (1610-80). Machines for calculation were also developed from the 17th century onwards. Some, such as Samuel Morland's and Blaise Pascal's devices, employed mechanical principles; others, such as Napier's bones, depended on inspection only. Both the SLIDE RULE and the sector appeared in the early 17th century, the former being comparatively rare (except as an instrument of gauging) until the late 18th century, when it gradually replaced the sector. For accurate measurement the MICROMETER and the VERNIER scale were both of the first importance.

With increasing specialization, instruments began to appear in dozens of different forms, with different calibrations or shapes to fulfill different tasks. The 18th century saw the steady development of more specialized forms of instruments sharing common principles, rather than the invention of entirely new classes of instruments. The 19th century saw an enormous proliferation of instruments, including the introduction of those based on electromagnetism, such as the GALVANOMETER, AMMETER, and VOLTMETER. Not until the appearance of X-ray, radio, and nuclear instrumentation were innovations made that compared in importance to the appearance of optical, electrical, and electromagnetic apparatus.

Measurement Errors

The result of a measurement and the actual value of the quantity to be measured are often not precisely equal. The difference between these two values may be due either to random errors or to systematic errors. Random errors are those that occur in the act of measurement itself; systematic errors occur as a result of instrument faults and calibration mistakes.

Errors can generally be classified under three sources:

1. Process environment
2. Equipment limitation
3. Operator fallibility

Random Errors

In order to obtain a meaningful measurement, one must always specify the precision with which it is made--that is, the limits between which the measured quantity lies. The interval within which the real measurement value lies determines the absolute error of the measurement. For instance, the absolute error is approximately 1 mm when a length is measured with a ruler 1 m long. The relative error is equal to the absolute error divided by the value measured, and is usually expressed as a percentage. Two values, the arithmetic mean and the dispersion (the width, or spread, of the distribution curve), characterize the distribution of the possible measurement results. The arithmetic mean of the probability distribution coincides with the real value of the quantity measured when there are no systematic errors.

Two guidelines can be established for random errors: (1) repeating a measurement yields information on the magnitude of the random errors, and (2) repeating a measurement reduces the random error in the final result in proportion to the square root of n, where n is the number of measurements taken (the error in the mean scales as 1/sqrt(n)). The random error decreases rapidly at the beginning, but then more slowly. Once systematic errors begin to predominate, random error cannot be further reduced.
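A minimal sketch demonstrating the square-root rule with simulated noisy readings (the true value and noise level are made up):

```python
import random
import statistics

random.seed(1)
TRUE_VALUE, SIGMA = 100.0, 1.0   # true length (mm) and per-reading error

def mean_of_n(n: int) -> float:
    """Average of n simulated readings with Gaussian random error."""
    return statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(n))

# The spread of the mean over many trials shrinks roughly as 1/sqrt(n):
for n in (1, 4, 16, 64):
    means = [mean_of_n(n) for _ in range(2000)]
    print(f"n = {n:3d}: std of the mean = {statistics.stdev(means):.3f} mm")
```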

Systematic Errors

It is more difficult to estimate and reduce the magnitude of systematic errors. The measuring process must be analyzed carefully in each case. Every type of measurement has its own characteristic systematic errors, but some of the most prominent may be enumerated here:

1. Null-point errors, caused by a measuring error in the null condition or by a faulty null setting of the instrument, will often result in a constant shift of all measured values.

2. Calibration errors result when the conditions under which the reference measurement (calibration) is taken do not approximate the conditions of the actual measurement as nearly as possible. For instance, a ruler calibrated at a certain temperature will show a constant relative deviation if it is used at a different temperature (see the sketch after this list).

3. The measuring instrument itself nearly always influences the magnitude of the signal to be measured. Measuring an electric potential difference lowers the actual voltage, because the measuring instrument places a load on the voltage source.

4. HYSTERESIS and lost motion are phenomena in which the indication of a measuring instrument depends on its previous reading.

5. Parallax errors result from the fact that the pointer in most dial instruments is located at a slight distance from the scale, and the reading thus depends on the angle from which it is taken.
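
Returning to item 3, the loading effect can be sketched with a simple voltage-divider model; the source and meter resistances below are assumed values chosen only for illustration:

```python
# Sketch of error source 3: meter loading, modeled as a voltage divider.
# All resistance and voltage values are assumed for illustration.
V_SOURCE = 10.0      # volts, true open-circuit potential difference
R_SOURCE = 10_000.0  # ohms, internal resistance of the source

for r_meter in (100e3, 1e6, 10e6):   # meter input resistance, ohms
    v_read = V_SOURCE * r_meter / (r_meter + R_SOURCE)
    print(f"R_meter = {r_meter:>12,.0f} ohm -> reads {v_read:.4f} V "
          f"(systematic error {V_SOURCE - v_read:.4f} V)")
```

The higher the meter's input resistance relative to the source, the smaller the systematic loading error becomes.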

The methods for dealing with random errors are easier to use than those for dealing with systematic ones. The best way to correct for systematic errors is therefore to convert them to random ones. This can be done by introducing as many variations as possible into the measuring method and instrument. For instance, it is easy to overlook a systematic error that is introduced when the same slow stopwatch is used for several time measurements (an example of calibration error). However, repeating the measurements with a number of independently calibrated stopwatches will cause the calibration errors to assume a random distribution. Application of a completely different measuring method is a time-consuming but thorough way of detecting systematic errors.
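
The stopwatch example can be sketched numerically; the bias and noise magnitudes below are assumed, and the point is only that independent calibrations scatter the systematic offset around the true value, where averaging can then reduce it:

```python
import random
import statistics

# Sketch: one slow stopwatch biases every reading the same way; several
# independently calibrated watches turn that bias into random scatter.
random.seed(2)
TRUE_TIME = 60.0   # seconds, hypothetical true interval

# Ten timings with the same slow watch: the -0.5 s bias never averages out.
one_watch = [TRUE_TIME - 0.5 + random.gauss(0, 0.1) for _ in range(10)]

# Ten timings, each with a different watch whose calibration error is random.
many_watches = [TRUE_TIME + random.gauss(0, 0.5) + random.gauss(0, 0.1)
                for _ in range(10)]

print(f"same watch:   mean = {statistics.fmean(one_watch):.2f} s")
print(f"ten watches:  mean = {statistics.fmean(many_watches):.2f} s")
```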

Calibration

Calibration is the comparison of an instrument against a ‘known’ standard. This will be discussed in class.

Heterodyne Principle - Frequency Calibration

{het'-ur-oh-dyn}

If two signals of different frequencies are electronically mixed together, the output of the mixer stage contains--among other components--the sum and difference frequencies of the input. This mixing process, also known as frequency conversion, frequency translation, or heterodyning, is widely used in communications RECEIVERS. In fact, the ordinary AM broadcast-band RADIO is known as a superheterodyne receiver.

Radio waves, which are a form of electromagnetic waves, are generated by many different radio stations (transmitters) and all arrive at the antenna. Each transmitter is assigned its own carrier frequency, which is varied by the audio program material prior to transmission by a process called modulation. To hear a given station, it is necessary to tune the receiver so that it will accept the signal transmitted by that station. This signal is very weak and must be strengthened, or amplified, so that it will be strong enough to drive a loudspeaker. Before the introduction of the heterodyne principle in the mid-1920s, it was necessary to tune each amplifying stage to the incoming frequency in what was called a tuned radio frequency (TRF) receiver--an awkward procedure with many disadvantages. Heterodyning overcame them by allowing all of the stages of amplification following the mixer to operate at a single fixed frequency.

Assume a listener wants to hear a radio station that broadcasts at a radio frequency of 610 kilohertz (kHz). When the dial is turned to this frequency, the radio-frequency (RF) amplifier will be tuned to 610 kHz and the local oscillator will be tuned simultaneously to produce a frequency of 1,065 kHz. These two signals are fed to a nonlinear circuit (involving a diode, electron tube, or transistor) called a mixer or converter. The output contains frequencies of 1,675 kHz (the sum of the two frequencies) and 455 kHz (the difference). Both of these frequencies contain the same intelligence (program material) as the incoming 610-kHz frequency; for practical considerations, however, only the difference frequency is used. Therefore, the amplifiers succeeding the RF amplifier and mixer are all tuned to the frequency of 455 kHz. These amplifiers are called intermediate-frequency (IF) amplifiers because they amplify the difference frequency, which is intermediate--in between the audio frequencies of the intelligence and the incoming carrier frequency. No matter what frequency is being received at the antenna, the local oscillator is always made to produce a signal 455 kHz higher than the incoming frequency, so the mixer output will always produce a difference frequency of 455 kHz. All radio manufacturers use the same difference frequency in order to standardize manufacturing. The principle of heterodyning is also used in FM radio and television receivers, where the difference frequencies are 10.7 MHz and 4.5 MHz, respectively.
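
The mixing arithmetic above can be checked with a short numerical sketch: an idealized multiplying mixer, with the sample rate and duration chosen only for convenience, produces exactly the 455 kHz and 1,675 kHz components described:

```python
import numpy as np

# Sketch of an idealized multiplying mixer: a 610 kHz carrier times a
# 1,065 kHz local oscillator yields 455 kHz and 1,675 kHz components.
FS = 10_000_000                     # sample rate, Hz (chosen for convenience)
t = np.arange(10_000) / FS          # 1 ms of signal -> 1 kHz FFT bins

rf = np.cos(2 * np.pi * 610_000 * t)     # incoming carrier
lo = np.cos(2 * np.pi * 1_065_000 * t)   # local oscillator
mixed = rf * lo                          # nonlinear (multiplying) stage

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / FS)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print([int(f) for f in peaks])      # [455000, 1675000]
```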

The principle of heterodyning may also be used for calibrating an unknown frequency by comparing it with a variable frequency of known accuracy. As the variable frequency approaches that of the unknown, the difference frequency becomes audible even if the two original frequencies are far above the audio range. The difference frequency will decrease until, when the two input frequencies are the same, a condition of "zero-beat" is obtained.
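
A minimal sketch of the zero-beat procedure, using a hypothetical unknown frequency and a stepped reference oscillator; the beat note is simply the absolute difference between the two:

```python
# Sketch of zero-beat calibration: step a reference oscillator of known
# accuracy across the band and find where the beat note disappears.
f_unknown = 5_000_230.0   # Hz, frequency to be calibrated (hypothetical)

candidates = range(4_999_000, 5_001_001, 10)   # reference steps, Hz
best = min(candidates, key=lambda f: abs(f_unknown - f))
print(f"zero beat at reference = {best} Hz, so unknown ~ {best} Hz")
```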

The Metric System

The modern metric system of units and standards of measure is rooted in 17th- and 18th-century efforts to establish a simple, easily used system of weights and measures universally acceptable to the countries of the world. These efforts were motivated by two guiding principles. First, many hoped for the definition of a single unit of measure that could serve as the basis for the logical construction of a complete and consistent system of units of measure. Second, a growing number of people favored decimal relationships among units of the same quantity; that is, multiples by factors of ten or submultiples by factors of one-tenth were considered the most convenient means of obtaining systematic units of measure of a practical size for all needs.

The forces driving toward a change from diverse and essentially unrelated customary systems of measure included rapidly growing international commerce and the changing political structure of Europe and its colonial dependencies. Within the new national structures it became necessary to accommodate many incompatible ways of doing business. Moreover, the growth of scientific investigation not only created new demands for accuracy and uniformity in measurements, it also provided the vision for a universally acceptable scientific basis for a system of measurements. The customary systems, handed down mainly from the Babylonians, Egyptians, Greeks, and Romans, were based on unrelated objects and phenomena, including human anatomy, with no practical hope for uniformity within integrated communities, states, or aggregated nations.

Origins of the Metric System

The birth of the metric system occurred in the climate of bold reform and scientific rationalization that prevailed in France during the latter part of the 18th century. In April 1790, Charles Maurice de Talleyrand, then Bishop of Autun, placed before the National Assembly of France a plan based on a unit of length equal to the length of a pendulum that would make one full swing per second. The French Academy of Sciences organized special committees to study the related issues. While many scientists favored the concept of a unit of length derived from a pendulum, there were many recognized practical difficulties. These included variations with temperature and the different values of gravitational force at different places on the surface of the Earth. After scientific consideration of the alternatives, the committee recommended a new unit of length equal to one ten-millionth of the length of the arc from the equator to the North Pole, or a quadrant of the Earth's meridian circle. In May 1793 this unit was given the name metre, derived from the Greek word metron, meaning "a measure." From the same word came the name of the new system. The unit of mass, the kilogram, was defined as the mass of water contained by a cube whose sides are one-tenth the unit of length. The unit of volume, the liter, was defined in the same way; thus the unit of length became the basis for the system. At that time the units of length, mass, area, volume, and time satisfied the needs of commerce. The new Republic of France adopted the recommendations of the French Academy in 1795.

Development of the Metric System

The French Academy of Sciences also recommended, for practical reasons, that the primary reference standard for the unit of length be realized from the definition of the unit by a very precise measurement of the arc of meridian between Dunkirk, France, and Barcelona, Spain. The length of the arc from the equator to the North Pole was then to be inferred from astronomical measurements of angle. The survey was completed in November 1798, and platinum artifact reference standards for the meter and the kilogram were constructed in June 1799. These two standards, deposited in the French National Archives in Paris, later came to be known as the Meter of the Archives and the Kilogram of the Archives.

The introduction of the metric system in France met with the usual resistance to change. In 1812 the old units of measure were restored by Napoleon, Emperor of France. In 1840 the metric system again became mandatory in France, and it has remained so ever since. Meanwhile, the use of the metric system spread slowly to other European countries and even to the United States, where it became legal, but not mandatory, in 1866. The international acceptance of the metric system was implemented by the Diplomatic Conference of the Meter, convened by the French government on Mar. 1, 1875, and attended by delegates from 20 countries. This conference produced the Treaty of the Meter, signed on May 20, 1875, by the delegates of 17 countries--including the United States, the only English-speaking country to sign.

The metric treaty provided the institutional machinery needed to promote the refinement, diffusion, and use of the metric system. The International Committee for Weights and Measures, widely known as CIPM (Comite International des Poids et Mesures), was established under the broad supervision of the General Conference on Weights and Measures, CGPM (Conference General des Poids et Mesures), consisting of delegates from member countries. The first General Conference met in September 1889 to approve new international metric prototype reference standards to redefine length and mass. These prototypes were based on the Archives standards. The First CGPM also ratified the equality (within known uncertainties) of a number of national prototype standards for length and mass and distributed these standards to the member nations. This was the beginning of the diffusion of a uniform metric system throughout the world. The Metric Convention also established the INTERNATIONAL BUREAU OF WEIGHTS AND MEASURES, BIPM (Bureau International des Poids et Mesures), to carry out the scientific work of the International System under the supervision of CIPM.

Metric Expansion Throughout the World

Following the reinstitution of the metric system in France in 1840, the use of the system expanded slowly into parts of Germany, Italy, Greece, the Netherlands, and Spain. After 1850 the growing interest in large international commercial exhibitions accelerated the expansion of the use of the metric system as a common language of measurements, and by 1880 the major European countries and most of South America had adopted it. At the beginning of the 20th century the metric system was officially in use in 35 countries, and the only large industrialized countries not included in that number were the British Commonwealth countries and the United States. Both the United States (in 1875) and Great Britain (in 1884) had, however, become signatory nations of the Treaty of the Meter, thereby recognizing the importance of a common international basis for their national systems of measurement.

The metric displacement of customary measurement systems in the major English-speaking countries of the world has developed much more slowly. Changes in the patterns of international trade and the importance of new markets in developing--as well as developed--countries have nevertheless brought about a practical regard for the necessity of uniform units of measure on an international scale. The shift toward metric conversion was well established in English-speaking countries by the middle of the 20th century. Official action to adopt the system for nationwide everyday use was finally taken, after the establishment of the International System of Units in 1960, by Great Britain (1965), South Africa (1968), New Zealand (1969), Canada (1970), and Australia (1970). As the final quarter of the 20th century approached, only the United States, Liberia, and Burma remained uncommitted to the mandatory use of the metric system in daily life.

The Metric System in the United States

In the United States, there had been much official and scientific interest in the development of the metric system since the earliest days of the nation. President Washington urged Congress to take action toward uniform measurements throughout the land. Thomas Jefferson and John Quincy Adams, during their terms as secretary of state, carried out comprehensive studies that included consideration of the merits of the metric system then developing in France. Following an additional special study by the newly organized National Academy of Sciences in January 1866, Congress enacted legislation authorizing (but not mandating) the use of the metric system in the United States. This legislation was signed into law by President Andrew Johnson on July 28, 1866.

The Act of 1866 was an important turning point in the history of measurements in the United States. By making it lawful to employ weights and measures of the metric system, the Act made a first step toward eventually harmonizing the U.S. measurement system with those of other nations. The Act also defined by law the relationships to be used in calculating the values of customary units of measurement used in the United States from the corresponding metric units. Moreover, in that same year a joint resolution authorized and directed the secretary of the treasury to furnish each state with one set of standard metric weights and measures.

The United States was an important participant in the Convention of the Meter held in Paris in 1875. Following its signing of the Metric Convention on May 20, 1875, the nation received its prototypes of the standard meter bar and standard kilogram in 1893. These became the nation's official fundamental standards for length and mass. In 1901 the U.S. NATIONAL BUREAU OF STANDARDS was established for the purpose of serving the worlds of science and technology. Despite its efforts, little progress was made toward wider U.S. acceptance of metric units.

Following World War II, however, and particularly following the USSR's successful launching (1957) of the first space satellite, Sputnik--which opened the age of space exploration--a renewed interest in the metric system developed in the United States. By 1968 the spread of metric measurements throughout the world was nearly complete. Arguments for conversion based on expanding foreign markets were becoming increasingly persuasive. Recognizing these trends, Congress, on Aug. 9, 1968, authorized the secretary of commerce to undertake an intensive study to determine the advantages and disadvantages of increased U.S. use of the metric system. The resulting report, The U.S. Metric Study (1970-71), concluded that the nation eventually would join the rest of the world in using the metric system and urged a carefully planned transition to this use. On the recommendation of the study, Congress enacted the Metric Conversion Act of 1975 and established the U.S. Metric Board "to coordinate the voluntary conversion to the metric system." The Office of Metric Programs then replaced the Metric Board in 1982.

Despite such efforts by the federal government, no state has yet enacted legislation mandating the adoption of SI units, and popular use of the metric system was still almost nonexistent by the early 1990s. The pressure to adopt the system that has a greater likelihood of success is instead coming from the business community, in the cause of international competition and trade. Organizations such as the European Economic Community, for example, have threatened to restrict U.S. imports that do not conform to metric standards, and some nations on occasion have already rejected shipments outright for such reasons. Rather than trying to maintain dual inventories for domestic and foreign markets, a number of U.S. corporations have chosen to go metric. (For example, motor vehicles, farm machinery, and computer equipment are manufactured to metric specifications.) As business goes, so probably will go the nation. The Omnibus Trade and Competitiveness Act of 1988 required almost all federal agencies to use metric units in their procurements, grants, and business activities by 1992.

Base and Derived Units

When the metric system was first conceived, one of the goals was the definition of a single unit from which the essential system of measurements could be constructed. Indeed, it was thought that the unit of length, the meter, should be regarded in this way, and much scientific effort went into the careful selection of an acceptable definition. It was also necessary to rely on the properties of pure water in order to define a unit of mass, the kilogram. Thus, the measurement system required for trade and commerce in the 18th century rested on the definitions of two units; units for other necessary quantities, such as area and volume, were derived from them. The ultimate goal of a complete system of measurements logically derived from the definition of a single unit was not realizable when the metric system was first established, and it is not realizable today. Nevertheless, the fundamental idea persisted, and the modern metric system was founded on six base units, designated by the 11th CGPM (1960) as the International System of Units, with the international abbreviation SI. The SI base units--expanded to seven in 1971--are independent by convention; they are the meter, kilogram, second, ampere, kelvin, mole, and candela. It is possible, in principle, for industrial nations to maintain complete systems of measurement that are equivalent within acceptable limits of uncertainty by comparing national standards for the SI base units to those maintained by the International Bureau of Weights and Measures, BIPM (Bureau International des Poids et Mesures), in Sevres, France.
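
One way to make the base/derived relationship concrete is to write each derived unit as a vector of exponents over the seven base units. The short sketch below covers a few familiar examples:

```python
# Sketch: derived SI units as exponent vectors over the seven base units
# (meter, kilogram, second, ampere, kelvin, mole, candela).
BASE = ("m", "kg", "s", "A", "K", "mol", "cd")

DERIVED = {
    "newton": (1, 1, -2, 0, 0, 0, 0),    # force:     kg.m/s^2
    "joule":  (2, 1, -2, 0, 0, 0, 0),    # energy:    N.m
    "watt":   (2, 1, -3, 0, 0, 0, 0),    # power:     J/s
    "volt":   (2, 1, -3, -1, 0, 0, 0),   # potential: W/A
}

def spell(exponents):
    """Render an exponent vector as a product of powers of base units."""
    return "*".join(f"{u}^{e}" for u, e in zip(BASE, exponents) if e)

for name, exps in DERIVED.items():
    print(f"{name:6s} = {spell(exps)}")
```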

Future Trends

The seven SI base units constitute a complete set in the sense that all the other necessary units of measure can be logically derived from them. In a practical sense, these seven also constitute an irreducible set within which no member can be derived from any combination of the others. It is, however, possible that advances in science and technology will result in a reduction of the number of SI base units. Since 1967 the SI unit for time, the second, has been defined as exactly 9,192,631,770 periods of the microwave radiation corresponding to the transition between the two hyperfine levels of the ground state of the undisturbed cesium-133 atom. From 1960 to 1983 the SI unit for length, the meter, was defined as exactly 1,650,763.73 wavelengths of one of the spectral lines of krypton-86. By 1983, however, even this wavelength standard came to be considered insufficiently reproducible, and the meter was redefined as the length of the path traveled by light in a vacuum during a time interval of 1/299,792,458 of a second. That is, the standard unit of length is now defined in terms of the speed of light.
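
Both definitions can be checked with a line of arithmetic each; the sketch below confirms that the 1983 definition makes the speed of light exact and derives the krypton-86 wavelength implied by the 1960 definition:

```python
# The 1983 definition makes the speed of light exact by construction,
# and the 1960-83 definition pins down the krypton-86 wavelength.
C = 299_792_458                       # m/s, exact since 1983

one_meter = C * (1 / 299_792_458)     # path of light in 1/299,792,458 s
print(one_meter)                      # 1.0

kr86_wavelength = 1 / 1_650_763.73    # meters, from the 1960 definition
print(f"Kr-86 line = {kr86_wavelength * 1e9:.2f} nm")   # ~605.78 nm
```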

Modern methods for the measurement of luminous energy provide another example of advances that may, in principle, reduce the number of necessary SI base units. The unit of luminous intensity, the candela, was long defined in terms of the radiation from a defined small area of a platinum body at a specified high temperature; since 1979 it has been defined in terms of monochromatic radiation of a specified frequency and radiant intensity. It has become possible to measure such radiation by direct comparison to equivalent small amounts of electrical power. Therefore, electrical units--watts--are, in principle, sufficient for the measurement of optical radiation flux, as well as of electrical power.
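
Assuming the standard photopic weighting (a maximum luminous efficacy of 683 lumens per watt at 555 nm), the link between optical watts and photometric units can be sketched as follows:

```python
# Sketch linking radiant power (watts) to luminous flux (lumens) under
# the standard photopic weighting; K_M is the maximum luminous efficacy.
K_M = 683.0   # lumens per watt, at 555 nm

def luminous_flux_lm(radiant_watts: float, v_lambda: float) -> float:
    """v_lambda: photopic eye response, 1.0 at 555 nm, ~0 outside 380-780 nm."""
    return K_M * v_lambda * radiant_watts

print(luminous_flux_lm(0.001, 1.0))   # 1 mW of 555 nm light -> ~0.683 lm
```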

Recent advances using X rays to sense the positions of atoms in pure samples of perfect crystalline structures have made it possible to determine the number of atoms in a known amount of substance with great accuracy. On this basis, it may also become practical to derive the SI mole directly from the kilogram, contributing thereby to further simplification of the SI base units. Such advances even point the way to a possible redefinition of the kilogram in terms of the mass of a selected universally available atom. The present kilogram is the only SI base unit that is still defined in terms of an artifact kept at Sevres.
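
A back-of-envelope sketch of the idea, using assumed textbook values for the Avogadro constant and the molar mass of silicon-28 (the kind of perfect crystal probed in such X-ray experiments):

```python
# Back-of-envelope count of atoms in a kilogram of silicon-28.
AVOGADRO = 6.02214e23    # atoms per mole (assumed textbook value)
MOLAR_MASS = 27.977      # g/mol, silicon-28 (assumed textbook value)

atoms_per_kg = 1000.0 / MOLAR_MASS * AVOGADRO
print(f"{atoms_per_kg:.3e} atoms per kg of Si-28")   # ~2.153e+25
```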

In the case of special units in different disciplinary fields, it is clearly desirable to encourage a trend toward uniform practice. For example, the units used to measure the physiological effects of optical radiation include a factor for the average efficiency of the human eye, while the corresponding units used in physics and engineering for the same quantity do not. Similar examples exist in the case of other units that are used for physiological responses, including acoustic power and energy, and ionizing radiation dose. Those who are concerned with the refinement of the modern metric system seek ways to harmonize such diverse measurement practices while at the same time avoiding any tendency to make the system less useful to those who have special needs. The objective is to reduce the potential for confusion and error arising from the limitations of measurement language used in widely different fields of scientific and technological specialization.
