QUANTITY OF MEASUREMENT INFORMATION*

V. I. Rabinovich and M. P. Tsapenko

Translated from Izmeritel'naya Tekhnika, No. 4, pp. 7-10, April, 1963

The volume and importance of numerous flows of information increase with the extension of scientific research and the development of automatic control systems. Exchange of information occurs between the studied (or controlled) events and the systems of information processing, and also within these systems. A pressing need has arisen in this connection for establishing unified methods of investigating different flows of information at the various stages of their transition (from collection and transmission to utilization). Information-transmitting systems are at present successfully analyzed and synthesized by means of the theory of information. It would appear that the application of this method to systems intended for receiving measuring information [1] can be of some value.**

The principal basic proposition which demonstrates the expediency of using the theory of information (as well as the theory of probability and mathematical statistics) for investigating the processes and means of measurement consists in the possibility of applying the mathematical apparatus of these disciplines so as to allow for the random nature of the errors arising in measurements and of the measured quantities themselves. Yet another important consideration can be cited. In order to make the investigations comprehensive it is necessary to compare the degrees to which the numerous factors affect the measuring process. Such a possibility is provided by an integral evaluation of the process or means of measurement, including all the factors which can be accounted for. The apparatus of information theory provides the required generalized evaluation in the form of the quantity of information or a certain function of it. It can be assumed that such an approach to the investigation of the measuring process will lead to new, more perfect measuring equipment and methods of measurement and to a more rational utilization of existing equipment. Let us note, however, that the new results provided by the application of research methods characteristic of the theory of information do not in any way render less valuable the achievements which have already been attained in measurement techniques, mainly in determining the intrinsic characteristics of measuring equipment.

Since Shannon's work [4] the transmission of information has been considered in the theory of information both in the discrete and continuous forms. In the first instance it is assumed that the groups of input and output signals are finite, and in the second case that they form sets with an infinite number of conditions.

Each measurement can produce at the instrument output only one result out of a finite set irrespective of the nature of the input (measured) quantity. This is due to the operation of the quantizing device and to the existence of a reference quantity, which has a discrete nature [5, 6]. Measurement results are expressed in the form of numerical values of the measured quantity with a certain degree of accuracy. Let us note that the above considerations hold both for digital and pointer instruments. In the latter instance the operations of dividing the instrument scale into discrete values and quantizing are performed by the operator. Moreover, the minimum discernible quantizing interval is selected by the operator and is determined by his physiological capacity, the conditions of the experiment, and the characteristic of the instrument's output device.

A schematic which reflects the most important functional peculiarities of measuring instruments is shown in Fig. 1. The measured quantity x undergoes several transformations. Absolute transformation errors y, which are random by nature, arise in intermediate converter 1 and are added to x. The input of quantizing device 2 is fed

* This article is written on the basis of the material contained in a paper read by the authors at the All-Union Conference on Automatic Control and Electrical Measuring Methods, held in September, 1962, in Novosibirsk.

**This was indicated, in particular, by the first results obtained in the papers [2, 3 and others] and in the papers of V. V. Sidel'nikov, M. I. Lanin, and S. M. Mandel'shtam; B. M. Pushnoi and V. I. Chistyakov; Yu. P. Drobyshev; and the authors of the present article, which were read at the conferences held in June, 1962, in Leningrad, and September, 1962, in Novosibirsk.


with a "mixture ~ z = x + y. The values of z are compared in this device with the discrete values of a reference

quantity fed from source 4, A logica l reduction of a number of such comparison results shows to which interval of

the reference-quanti ty values a given value of zcorresponds. The measurement result is then displayed on output

device 3 after d ig i ta l coding.

Fig. 1

Below we examine a method of determining the quantity of information obtained in the measurement process under the following conditions.

1. Since the measured quantity and the intermediate-converter error may assume unpredictable values, they should naturally be considered as random quantities. The laws of probability distribution of these quantities are assumed to be invariable with time, i.e., the above processes are taken to be stationary (in measurement technology, processes of such a type are often encountered).

2. The measurement results are assumed to be independent. In other words, it is assumed that the value of the preceding measurement result does not reduce the quantity of information obtained in the succeeding measurement. This holds provided the quantity of information obtained in a given measurement is not affected by the results of the preceding measurements. In the case of independent measurements the quantity of information per measurement is at a maximum, other conditions being equal.

3. The errors of the intermediate converter are assumed to be independent of the measured parameter values and additive to them. This assumption is generally accepted in the theory of errors.

4. In order to find the basic relationship it is first assumed that the errors in forming the reference-quantity values and the dead zone of the quantizing device are negligibly small. The errors of the output device, on the other hand, are produced, as a rule, only by equipment defects.

The theory of information deals with situations in which the appearance of an event (out of a number of possible events in a given situation) cannot be predicted unambiguously; in other words, in which there exists an element of chance. In such cases the most complete description of the situation consists in characterizing each event by the probability of its appearance. If the appearance of a fixed event in a given situation changes the probability of the appearance of events in other situations, such situations are known as statistically dependent. The quantity of information is a measure which determines the degree of this dependence. The quantity of information about one of the situations obtained by observing events in another situation increases with an increase in the correlation of the two situations. The quantity of information becomes equal to zero if the two situations are completely independent, and it becomes maximum if there exists a mutually single-valued functional relationship between the events of the two situations under consideration.
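The following short Python sketch, added in editing and not part of the original article, illustrates these two limiting cases with assumed joint probability tables: the information is zero when the two situations are independent, and it is maximal when the events are linked by a single-valued functional relationship.

import numpy as np

def mutual_information(p_xy, base=2.0):
    """Quantity of information linking two situations, computed from an
    assumed joint probability table p_xy[i, j]."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of the first situation
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of the second situation
    mask = p_xy > 0.0
    ratio = p_xy[mask] / (p_x @ p_y)[mask]
    return float((p_xy[mask] * np.log(ratio) / np.log(base)).sum())

# Independent situations: the quantity of information is zero.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))   # 0.0
# Single-valued functional relationship: the information is maximal (1 bit here).
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))       # 1.0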

Fig. 2

The above reasoning can be interpreted with respect to the measurement process in the following manner. The appearance of certain measurement results is related to elements of chance. Hence, the measurement results can be considered as random events, and the measurement experiment as a situation in which they can be effected. Measurement results (see Fig. 1) appear as a consequence of comparing the reference-quantity values with those of a sum consisting of the measured value x and the error y. Nevertheless, it is possible by means of these results to evaluate x in its "pure" form. The impossibility of predicting which value of x will appear at the input of the measuring instrument provides a reason for considering the appearance of these values as a random event. The quantity of information obtained during measurements determines how completely it is possible to estimate the values of the measured quantity from the results of the measurement experiment. Since the quantity of information represents a statistical evaluation, it is necessary to know, for its computation, the probability characteristics of the measured quantity x and the error y. A natural requirement for the function which determines the quantity of information during measurements consists in the possibility of calculating it, provided that the probability distribution laws of the measured quantity x and the error y are given (arbitrarily) and the remaining parameters of the measuring instrument


are known. The possibility of calculating this function must not depend on the number of values which the measured quantity can assume, since in practice this number is hardly ever known. It is also obvious that in determining the quantity of information it is necessary to use parameters which reflect the actual properties of the measuring instrument and the measured quantity.

According to [4, 7, 8] the quantity of information about the measured value x can be determined from the formula

I(x/z) = H(x) - H(x/z),   (1)

where H(x) and H(x/z) are, respectively, the (unconditional) entropy of the measured value x for y = 0 and the conditional entropy of x for the condition z, i.e., when y ≠ 0.

The concept of entropy is basic to the theory of information. It characterizes the degree of uncertainty in the situation under investigation. The entropy is determined by the number of possible events in a given situation and the probabilities of their appearance. In order to serve as a function characterizing the degree of uncertainty, the entropy must meet the following requirements:

1) it must vary continuously for a monotonic variation of the argument;

2) the entropy of a set of independent situations must equal the sum of their entropies;

3) it must be at a maximum for an equal probability of all the possible events in a given situation;

4) the entropy must equal zero if one of the events is certain and the remaining events impossible.

It has been shown in [4] that the above requirements are satisfied by a single function

H(a) = -\sum_{k=1}^{N} p_k \log p_k,   (2)

where p_k is the probability of the k-th event and N is the number of events which can occur in the given situation.

The base of the logarithm determines the measurement unit of entropy.
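As an editorial illustration (not part of the original article), formula (2) can be evaluated directly; the following Python sketch assumes base-2 logarithms, so the entropy is expressed in bits.

import math

def entropy(probs, base=2.0):
    """Entropy of a finite set of event probabilities, formula (2).

    probs -- the probabilities p_k of the N possible events (summing to 1);
    base  -- base of the logarithm, which fixes the unit of entropy (2 gives bits).
    """
    return -sum(p * math.log(p, base) for p in probs if p > 0.0)

# Four equally probable events give the maximum entropy, log2(4) = 2 bits
# (requirement 3 above); a certain event gives zero entropy (requirement 4).
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(entropy([1.0, 0.0, 0.0, 0.0]))      # 0.0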

It has already been mentioned that in calculating the quantity of information it is necessary to use parameters which reflect the actual properties of the measuring instrument and the measured quantity. This condition is met in the following manner when the entropy of the measured value x is being determined. The probability distribution law of the values of the measured quantity is taken as one of its characteristic parameters, and the quantization interval Δ along axis z, which is determined by the reference quantity of the measuring instrument, is taken as the other parameter. This interval Δ serves as a distinctive measure which determines the entropy. By taking these considerations into account we find that the first term in (1) becomes equal to

H_\Delta(x) = -\sum_{i=1}^{N} p(x_i) \log p(x_i).   (3)

In this expression and henceforth, subscript i is assigned to the current values of x, and subscript j to the current values of z. Moreover, each quantization interval is assigned the subscript of its right-hand boundary. Then x_i is the value of the measured quantity determined by the corresponding value of the reference quantity, p(x_i) is the probability of this value of x appearing, and N is the number of intervals Δ contained in the instrument measuring range D. It is obvious that N = D/Δ.

Let us assume that the probability distribution law of the measured quantity is given (Fig. 2). The form in which it is presented may vary: it can be analytical or graphical, continuous or discrete. This form determines the method for calculating the probability p(x_i). It is the probability density f(x) which is given most frequently. Then we have

p(x_i) = \int_{(i-1)\Delta}^{i\Delta} f(x)\, dx,   (3a)

and (3) assumes the form


H_\Delta(x) = -\sum_{i=1}^{N} \int_{(i-1)\Delta}^{i\Delta} f(x)\, dx \, \log \int_{(i-1)\Delta}^{i\Delta} f(x)\, dx.   (4)
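A small numerical sketch, added in editing, of how formulas (3), (3a) and (4) can be evaluated: the measuring range D is divided into N = D/Δ intervals, p(x_i) is obtained by integrating the density f(x) over each interval, and the entropy of these probabilities is taken. The particular density, range and interval used here are arbitrary assumptions, not values from the article.

import numpy as np
from scipy import integrate

def quantized_entropy(f, D, delta, base=2.0):
    """H_delta(x) according to formulas (3), (3a) and (4).

    f     -- probability density of the measured quantity on [0, D];
    D     -- measuring range of the instrument;
    delta -- quantization interval, so that N = D / delta.
    """
    N = int(round(D / delta))
    # p(x_i) = integral of f(x) over the i-th interval, formula (3a).
    p = np.array([integrate.quad(f, i * delta, (i + 1) * delta)[0]
                  for i in range(N)])
    p = p[p > 0.0]
    return float(-(p * np.log(p) / np.log(base)).sum())

# Assumed example: x uniform on a range D = 10 with delta = 1 gives
# log2(10), i.e., about 3.32 bits.
D, delta = 10.0, 1.0
print(quantized_entropy(lambda x: 1.0 / D, D, delta))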

In the absence of errors y this formula would determine the quantity of information. It is only natural that, owing to transformation errors, there remains a certain additional uncertainty with respect to the actually existing values of x after the results of the measurement experiment have been obtained. The conditional entropy H_Δ(x/z) serves as a measure of this uncertainty. It represents the sum of the particular conditional entropies H_Δ(x/z_j) weighted by the appropriate probabilities:

H_\Delta(x/z) = \sum_{j=1}^{N} p(z_j) H_\Delta(x/z_j),   (5)

where z_j is one of the possible results of the measurement experiment and p(z_j) is its probability.

The formula for computing H_Δ(x/z_j) has the form

H_\Delta(x/z_j) = -\sum_{i=1}^{N} p(x_i/z_j) \log p(x_i/z_j),   (6)

where p(x_i/z_j) is the conditional probability that the value of the measured quantity falls within the i-th interval if it is known that the measurement result corresponds to the j-th interval of z.

Fig. 3

The determination of the values of p(x_i/z_j) is linked with the necessity of plotting the particular probability distribution laws of z for the condition that x remains within the i-th quantization interval. The distribution law of the sum of two independent random quantities x and y (their composition) is traced by means of methods used in the theory of probability [7, 9]. For an analytical method of presenting the distribution laws of x and y, their composition can be obtained in the form of the probability densities f(z/x_i) (Fig. 3). These compositions represent the resultant distribution law for the simultaneous effect of the measured quantity x (within interval i) and the intermediate-converter error y. Let us note that the plotting is done on the basis of the reference-quantity scale (Fig. 3). Each composition of this type can occur with the probability p(x_i). Moreover, for determining the conditional entropy it is necessary to know, with respect to the intervals j, the probability values of separate sections of these compositions for the condition that x is within interval i. Obviously, these probabilities are

p(z_j/x_i) = \int_{(j-1)\Delta}^{j\Delta} f(z/x_i)\, dz.   (7)
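As an editorial sketch of formula (7) (not from the original article), the composition density f(z/x_i) can be built numerically as the convolution of the distribution of x, here assumed uniform within its interval, with an assumed normal converter-error density, and then integrated over the j-th interval of z. Both distribution choices and the parameter values are assumptions for illustration only.

import numpy as np
from scipy import integrate, stats

def p_z_given_x_interval(j, i, delta, err_pdf):
    """p(z_j/x_i) per formula (7): probability that z = x + y falls in the
    j-th interval, given that x lies in the i-th interval.

    x is taken as uniformly distributed within its interval (a simplifying
    assumption); err_pdf is the density of the converter error y. The
    composition density f(z/x_i) is the convolution of the two densities.
    """
    def f_z(z):
        # Convolution of the uniform density on ((i-1)*delta, i*delta)
        # with the error density err_pdf, evaluated at the point z.
        val, _ = integrate.quad(
            lambda x: (1.0 / delta) * err_pdf(z - x),
            (i - 1) * delta, i * delta)
        return val
    val, _ = integrate.quad(f_z, (j - 1) * delta, j * delta)
    return val

# Example with an assumed normal error of standard deviation 0.3 * delta.
delta = 1.0
err = stats.norm(loc=0.0, scale=0.3 * delta).pdf
print(p_z_given_x_interval(j=3, i=3, delta=delta, err_pdf=err))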

By substituting the relationships well known in the theory of probability

p(z_j/x_i)\, p(x_i) = p(x_i/z_j)\, p(z_j)

and

p(z_j) = \sum_{i=1}^{N} p(z_j/x_i)\, p(x_i)

for corresponding values in (5), we find that

H_\Delta(x/z_j) = -\sum_{i=1}^{N} \frac{p(z_j/x_i)\, p(x_i)}{p(z_j)} \log \frac{p(z_j/x_i)\, p(x_i)}{\sum_{l=1}^{N} p(z_j/x_l)\, p(x_l)}.   (8)


Expressing it in terms of a general conditional entropy we can write

H_\Delta(x/z) = -\sum_{j=1}^{N} \sum_{i=1}^{N} p(z_j/x_i)\, p(x_i) \log \frac{p(z_j/x_i)\, p(x_i)}{\sum_{l=1}^{N} p(z_j/x_l)\, p(x_l)}.   (9)

By replacing \log \frac{p(z_j/x_i)\, p(x_i)}{\sum_{l=1}^{N} p(z_j/x_l)\, p(x_l)} in (9) by the sum of \log p(x_i) and \log \frac{p(z_j/x_i)}{\sum_{l=1}^{N} p(z_j/x_l)\, p(x_l)}, taking into account that \sum_{j=1}^{N} p(z_j/x_i) = 1, and substituting the values thus found for H_Δ(x/z) and H_Δ(x) in (1), we obtain the final formula for the quantity of measured information:

I(x/z) = \sum_{j=1}^{N} \sum_{i=1}^{N} \int_{(j-1)\Delta}^{j\Delta} f(z/x_i)\, dz \int_{(i-1)\Delta}^{i\Delta} f(x)\, dx \, \log \frac{\int_{(j-1)\Delta}^{j\Delta} f(z/x_i)\, dz}{\sum_{l=1}^{N} \int_{(j-1)\Delta}^{j\Delta} f(z/x_l)\, dz \int_{(l-1)\Delta}^{l\Delta} f(x)\, dx}.   (10)

Thus, both terms of formula (1), which determines the quantity of measured information, have now been found.
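A compact numerical sketch, added in editing, of how formula (1) with the terms (3) and (5)-(9) can be evaluated once the probabilities p(x_i) of formula (3a) and p(z_j/x_i) of formula (7) are known; the small probability vector and transition matrix below are assumed purely for illustration.

import numpy as np

def measurement_information(p_x, p_z_given_x, base=2.0):
    """I(x/z) = H(x) - H(x/z), formula (1), for a quantized instrument.

    p_x         -- vector of probabilities p(x_i), i = 1..N, formula (3a);
    p_z_given_x -- matrix with element [j, i] = p(z_j/x_i), formula (7).
    """
    p_x = np.asarray(p_x, dtype=float)
    P = np.asarray(p_z_given_x, dtype=float)

    def h(p):
        p = p[p > 0.0]
        return float(-(p * np.log(p) / np.log(base)).sum())

    # Unconditional entropy of the measured quantity, formula (3).
    H_x = h(p_x)
    # p(z_j) = sum_i p(z_j/x_i) p(x_i); conditional entropy, formulas (5)-(6),
    # with p(x_i/z_j) obtained from the relationship used before formula (8).
    p_z = P @ p_x
    H_x_given_z = 0.0
    for j in range(len(p_z)):
        if p_z[j] <= 0.0:
            continue
        p_x_given_zj = P[j, :] * p_x / p_z[j]
        H_x_given_z += p_z[j] * h(p_x_given_zj)
    return H_x - H_x_given_z

# Small assumed example with 4 intervals: without errors the matrix would be
# the identity and I(x/z) would equal H(x) = 2 bits; the errors "smear" the
# matrix and reduce the quantity of information obtained per measurement.
p_x = [0.25, 0.25, 0.25, 0.25]
P = [[0.8, 0.1, 0.0, 0.0],
     [0.2, 0.8, 0.1, 0.0],
     [0.0, 0.1, 0.8, 0.2],
     [0.0, 0.0, 0.1, 0.8]]
print(measurement_information(p_x, P))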

The present article is too short to show that in the overwhelming majority of practical cases the expression for I(x/z) can be simplified substantially. Simplifications become possible, for instance, owing to the fact that:

1) the distribution law of x can be represented with sufficient accuracy in the form of a stepped curve (as shown by the dotted lines in Fig. 2). This simplifies considerably the calculation of p(x_i), since it is then possible to assume that p(x_i) ≈ f(x = iΔ)Δ, where f(x = iΔ) is the probability density of x when its value is equal to iΔ. Moreover, the plotting of the particular compositions is simplified owing to the fact that one of the quantities becomes uniformly distributed;

2) the distribution laws of the errors y are usually assumed to be either uniform or normal, and the formulas for the compositions of such distributions with a uniform distribution (for x in a given interval) are well known and convenient for computation [7, 9 and others]; a short sketch of both simplifications is given below.
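A brief sketch of these two simplifications (an editorial addition, with assumed densities and parameter values): item 1) replaces p(x_i) by f(iΔ)Δ, and item 2) uses the well-known closed form for the composition of a uniform distribution (x within its interval) with a normal error, expressed through the normal distribution function.

import numpy as np
from scipy import stats

# Item 1): stepped approximation p(x_i) ~ f(i*delta) * delta.
def approx_p_x(f, N, delta):
    p = np.array([f(i * delta) * delta for i in range(1, N + 1)])
    return p / p.sum()          # renormalize the small residual of the approximation

# Item 2): composition of x, uniform on the i-th interval, with a normal
# error y of standard deviation sigma, expressed through the normal CDF.
def f_z_given_xi(z, i, delta, sigma):
    lo, hi = (i - 1) * delta, i * delta
    return (stats.norm.cdf((z - lo) / sigma)
            - stats.norm.cdf((z - hi) / sigma)) / delta

delta, sigma = 1.0, 0.3
f = stats.norm(loc=5.0, scale=2.0).pdf      # assumed density of the measured quantity
print(approx_p_x(f, N=10, delta=delta))
print(f_z_given_xi(z=3.2, i=3, delta=delta, sigma=sigma))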

In conclusion it should be noted that this article deals in a general way with the method of computing the quantity of information obtained by means of measuring devices. We propose to show in future how the quantity of information can be determined for the most important cases.

LITERATURE CITED

1. K. B. Karandeev, "Measuring information systems and automation," Vestnik AN SSSR, 1961, No. 10.
2. V. Madoni, "Application of the information theory in telemetering," in: Technique of Transmitting Measurements by Radio from Rockets and Missiles [Russian translation], Voenizdat, M., 1959.
3. M. Klein, G. Morgan, and M. Aronson, Digital Techniques for Computation and Control [Russian translation], IIL, M., 1960.
4. K. Shannon, "Statistical theory of electrical signal transmission," in: Theory of Electrical Signal Transmission in the Presence of Noise [Russian translation], IL, 1953.
5. M. P. Tsapenko, Izmerit. tekh., 1961, No. 5.
6. K. B. Karandeev, V. I. Rabinovich, and M. P. Tsapenko, Izmerit. tekh., 1961, No. 12.
7. E. S. Venttsel', Theory of Probability [in Russian], Fizmatgiz, M., 1962.


8. A. M. Yaglom and I. M. Yaglom, Probability and Information [in Russian], Fizmatgiz, M., 1960.
9. B. V. Gnedenko, Course on the Theory of Probability [in Russian], Fizmatgiz, M., 1961.
