
PARAMETER IDENTIFICATION UNDER DIFFICULT CONDITIONS – HOW MUCH INFORMATION IS CONTAINED IN THE OBJECTIVE FUNCTION?

O. GUTSCHKER

BTU Cottbus, Department of Applied Physics / Thermophysics, D-03013 Cottbus, Germany

Tel.: +49-355-692447 e-mail: [email protected]

ABSTRACT

Parameter identification techniques are a powerful tool for determining the thermophysical properties of building components. However, their correct application often demands a certain degree of experience. Especially with short and/or heavily disturbed data sets, the formal use of parameter identification software can lead to completely wrong results. This paper illustrates these problems by means of some simulation calculations and encourages a discussion about possible solutions.

Keywords: Parameter identification; Dynamic analysis; Global minimum; Autocorrelation

1. INTRODUCTION

Modern buildings consist of a great number of different components, which have to be coordinated with each other. It is therefore important to know the thermophysical properties of the individual parts. Most of these parameters can only be determined experimentally under natural climate conditions, either in special test cells or in existing buildings. Parameter identification methods are often used for the analysis of such experiments. The basic procedure is mostly similar, see e.g. [1], [2], [3]. First the respective component is described by a model, and a set of (differential) equations with an adequate number of parameters is derived from this model. From this set of equations an objective function is deduced, and the unknown parameters are estimated by minimizing this objective function.

The practical application of this approach leads to some problems. The first is a numerical one – the search for the global minimum of the objective function is not a trivial task, especially if the number of free parameters is high (say 10). Different procedures are described in the literature, but all of them have assets and drawbacks, depending on the particular problem, see e.g. [4]. Fig. 1 shows two examples of an objective function that depends on two parameters. The function on the left is relatively smooth, but the function on the right shows a lot of local minima, which make the search for the global one very difficult.

Figure 1. Different shapes of an objective function.
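To make this general procedure concrete, the following minimal Python sketch minimizes a sum-of-squares output error for a simple toy model. The toy model, the synthetic data and all names are illustrative assumptions only; they are not the building-component model or the software used in this paper.

    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-in for the component model: an exponential step response with
    # two free parameters (gain and time constant).
    def simulate(params, t):
        gain, tau = params
        return gain * (1.0 - np.exp(-t / tau))

    # Synthetic "measurement": the true response plus noise.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 200)
    measured = simulate([2.0, 3.0], t) + rng.normal(0.0, 0.05, t.size)

    # Objective function: sum of squared deviations between measured and
    # calculated values (the output error).
    def output_error(params):
        return np.sum((simulate(params, t) - measured) ** 2)

    # Estimate the unknown parameters by minimizing the objective function.
    result = minimize(output_error, x0=[1.0, 1.0], method="Nelder-Mead")
    print(result.x)   # close to the true values [2.0, 3.0]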


The second problem is a substantial one. All of the models and equations used are only approximations, and all measured values have an error range, caused by noise, drifts, calibration errors, weathering, outside influences etc. This casts some doubt on whether the true result really lies at the global minimum of the objective function.

The next problem follows from the previous one: how reliable are the results? Most of the commercially available software packages point out in their manuals that users should critically check the results, try different models and starting values etc. ([5]-[9]). For an automated processing of such measurements, e.g. as part of a controller, this approach is unsuitable. There is a need for reliable criteria which yield unambiguous results.

In this paper the mentioned problems are illustrated by means of simple simulation calculations, with the aim of initiating a discussion about possible solutions.

2. THE TEST CASE

For the purpose of this paper a very simple case is considered: the one-dimensional heat flow through a homogeneous wall. This measurement problem is very common in practice, and the analysis of such measurements should normally not cause problems. However, this case is considered here because the described problems appear in a similar way in more complicated situations as well. The model used is shown in fig. 2.

Figure 2. Model used for the simulations.

Measured values are the surface temperatures on both sides of the wall and the heat flux on the internal side. From the 5 parameters H1-2, H2-3, H3-4, C2 and C3 the global properties of interest, the U-value (heat transfer coefficient) and C (heat capacity), can be derived:

U = 1 / (1/H1-2 + 1/H2-3 + 1/H3-4)

C = C2 + C3

First of all a clean, ideal data set with the following parameters was created:

H1-2 = 5 W/K
H2-3 = 1 W/K
H3-4 = 8 W/K
C2 = 0.1 MJ/K
C3 = 0.4 MJ/K

The application of the parameter identification procedure to this data set should result exactly in these values and in the global results Utrue = 0.757 W/m2K, Ctrue = 0.5 MJ/K.
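The two relations can be evaluated directly; a minimal Python sketch with the parameter values given above (variable names chosen here for readability; the per-square-metre U-value additionally depends on the wall area, which is not given here):

    # Series combination of the three conductances; the capacities simply add up.
    H12, H23, H34 = 5.0, 1.0, 8.0   # W/K
    C2, C3 = 0.1, 0.4               # MJ/K

    U = 1.0 / (1.0 / H12 + 1.0 / H23 + 1.0 / H34)   # overall heat transfer coefficient
    C = C2 + C3                                     # total heat capacity
    print(U, C)   # roughly 0.75 W/K and 0.5 MJ/K for the values above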


Now the clean data set was disturbed by 5 different levels of noise. To all of the "measured" values, normally distributed random values with a standard deviation of 1 to 5 K were added. Then an attempt was made to identify the (in this case already known) parameters from these noisy data sets. Fig. 3 shows as an example the original curve of the internal temperature in comparison with the disturbed one (highest noise level).

Figure 3. Internal temperature – original curve below, disturbed curve above.
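A minimal sketch of how such disturbed data sets can be generated (T_clean stands for a hypothetical array of the clean, simulated temperature values; it is not data from this paper):

    import numpy as np

    rng = np.random.default_rng(0)
    T_clean = np.zeros(1000)   # placeholder for the clean, simulated temperature curve

    # Five disturbed data sets: normally distributed random values with a
    # standard deviation of 1..5 K added to the "measured" values.
    noisy_sets = {sigma: T_clean + rng.normal(0.0, float(sigma), T_clean.size)
                  for sigma in (1, 2, 3, 4, 5)}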

3. THE OBJECTIVE FUNCTION

The simplest form of an objective function is the sum of the squares of the deviations between measured and calculated values – the output error. In this case the surface temperature of the internal side of the wall was used. The minimization of this output error should in theory lead to the true parameter set.

For the first calculations 4 of the 5 parameters were fixed at their true values, and only H2-3 was varied. The result is a continuous function – the output error as a function of the U-value. The minimum of this function should be at U = 0.757 W/m2K. Fig. 4 shows this function – for the clean data set as well as for the 5 noisy data sets.
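The one-parameter scan described here could be sketched as follows; simulate_wall is a hypothetical routine that solves the model of fig. 2 for a given parameter set and is not part of this paper, so the lines that need it are left commented out:

    import numpy as np

    def output_error(T_measured, T_calculated):
        # sum of the squared deviations between measured and calculated values
        return np.sum((T_measured - T_calculated) ** 2)

    def u_value(H12, H23, H34):
        return 1.0 / (1.0 / H12 + 1.0 / H23 + 1.0 / H34)

    # Fix four parameters at their true values and vary only H2-3; each grid
    # point yields one value of the continuous output error function.
    H23_grid = np.linspace(0.5, 2.0, 200)
    # errors = [output_error(T_measured, simulate_wall(5.0, h, 8.0, 0.1, 0.4))
    #           for h in H23_grid]
    # u_axis = [u_value(5.0, h, 8.0) for h in H23_grid]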


Figure 4. Only one free parameter – the output error is a continuous function.

At first view the curves look as expected – the more disturbed the data, the flatter the minimum of the corresponding curve, but in all cases it appears to be located at the true result. However, the zoom in fig. 5 shows that the minima are shifted – the more disturbed the data, the more the minimum is shifted towards higher U-values.

Figure 5. Zoom of fig. 4 – the minima are shifted.


Thus it can be stated beyond doubt that, in the case of noisy data, the minimum of the output error function does not necessarily point to the true result.

4. THE AUTOCORRELATION FUNCTION

It is often recommended to analyse the residuals, i.e. the differences between calculated and measured values. In the ideal case this function should be zero everywhere; any deviation means an error in the model or an error in the data. Measurement errors are never avoidable – therefore the residuals are never zero. But in the best case the residuals should contain white noise only, i.e. they should show no regularities. In that case the maximal possible amount of information has been extracted from the measurement data.

A measure to characterize the noise is the autocorrelation function of the residuals. In the ideal case it should be zero. Therefore, a further possible strategy in parameter identification could be to minimize the autocorrelation function of the residuals. In order to investigate whether this approach is more suitable for the test case in this paper, the autocorrelation function was calculated for the 5 noisy data sets. Again, only the parameter H2-3 was varied. Fig. 6 shows the results.

Figure 6. The autocorrelation of the residuals is also a continuous function in the case of only one free parameter.
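The exact autocorrelation measure used for these calculations is not specified here; one common normalized form, given purely as a minimal sketch, is:

    import numpy as np

    def residual_autocorrelation(residuals, max_lag=50):
        # Normalized autocorrelation coefficients of the residuals for lags
        # 1..max_lag; for pure white noise all coefficients are close to zero.
        r = residuals - residuals.mean()
        denom = np.sum(r * r)
        return np.array([np.sum(r[:-k] * r[k:]) / denom
                         for k in range(1, max_lag + 1)])

    # A scalar criterion, e.g. the mean of the coefficients, could then be
    # minimized in the same way as the output error:
    # score = residual_autocorrelation(residuals).mean()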

The picture is similar to that in fig. 4: the more disturbed the data, the flatter the corresponding curve in the region around the minimum, and the minimum is shifted slightly towards higher U-values. Thus the autocorrelation function is not a clear criterion either. But a zoom of the interesting area shows a little surprise.


Figure 7. Zoom of figure 6 – the curves have an intersection point exactly at the true value.

All of the 5 curves intersect in one point, and exactly at this point also lies the true U-value. The value of the autocorrelation function at this point is not zero, but slightly negative. An explanation for this behaviour has not been found yet.

In all of the previous calculations 4 of the parameters were fixed and only one of them was varied. The advantage of this simplification is that the output error function as well as the autocorrelation function are continuous functions and can be displayed easily. In the following part all of the 5 parameters were varied, i.e. they were generated by uniformly distributed random numbers. Then the output error function and the autocorrelation function were calculated for each of the parameter sets. However, these functions are no longer continuous – therefore the "graphs" are not smooth curves but rather clusters of points, which already suggest that it will be very difficult to find a minimum. Fig. 8 shows the relationship between the estimated U-value and the output error function, now (and also in all of the following figures) only for the case of the most disturbed data.
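The random generation of the parameter sets could look like the following sketch; the sampling ranges, the number of samples and the simulation routine are assumptions made for illustration only:

    import numpy as np

    rng = np.random.default_rng(2)

    def u_value(H12, H23, H34):
        return 1.0 / (1.0 / H12 + 1.0 / H23 + 1.0 / H34)

    # Draw all five parameters from uniform distributions and evaluate both
    # criteria for every candidate set (calls to the hypothetical simulation
    # routine are left commented out).
    samples = []
    for _ in range(10000):
        H12, H23, H34 = rng.uniform(0.5, 10.0, 3)   # W/K
        C2, C3 = rng.uniform(0.01, 1.0, 2)          # MJ/K
        # err = output_error(T_measured, simulate_wall(H12, H23, H34, C2, C3))
        # ac = residual_autocorrelation(T_measured - T_calculated).mean()
        samples.append((u_value(H12, H23, H34), C2 + C3))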


Figure 8. Five free parameters – now the function must be displayed as a point cluster.

Of course, also in this case a minimum can be identified, and again it lies at a slightly higher value than the true U-value. Fig. 9 shows the autocorrelation function.

Figure 9. The autocorrelation function, too, can only be displayed as a point cluster.


The autocorrelation function also shows a minimum near the true U-value. But a comparison of fig. 8 and fig. 9 shows a difference – the output error function ascends steeply at lower U-values, whereas the autocorrelation function ascends at higher U-values. A suitable combination of both functions could therefore lead to better results.

From experience it is known that the determination of the heat capacity C is more difficult than the determination of the U-value. Fig. 10 illustrates this.

Figure 10. For the heat capacity the identification of the minimum is more complicated.

Close to the true result a minimum can be found, but possibly this is only a local one. At higher C-values there are a lot of further local minima, and maybe also the global one. This phenomenon can often be observed when using parameter identification software: after a short time the software finds a reasonable solution, but after a longer time a "better" solution is found – with a very questionable result but with a lower value of the objective function. Fig. 10 explains this behaviour: the apparently "better" solution is only caused by stochastic fluctuations in the measurement data. Obviously, this behaviour makes the search for the true solution very difficult, especially in complicated cases where the order of magnitude of the solution is unknown. Fig. 11 shows the autocorrelation function in dependence on the estimated C-value.


Figure 11. The autocorrelation function does not show a sharp minimum either.

The autocorrelation function, too, shows only a very blurred minimum, and again there is a risk of identifying the minimum at a completely wrong "solution".

Intermediate conclusion: By a combined evaluation of the output error function and of the autocorrelation function it should be possible in most cases to identify the U-value with sufficient confidence. The determination of the C-value remains problematic. However, once the U-value has been determined, all points in figures 10 and 11 which correspond to a wrong U-value can be erased – in the expectation of sharper minima. The last two figures show the results. Figures 12 and 10 as well as figures 13 and 11 show the same relationships, but in figures 12 and 13 only those points were considered whose U-values lie within 2% of the true value.
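The selection used for figures 12 and 13 amounts to a simple filter on the sampled parameter sets; a minimal sketch, reusing the hypothetical quantities from the previous snippets:

    # Keep only parameter sets whose estimated U-value lies within 2 % of the
    # true value; their C-values and objective values are then plotted again.
    U_TRUE = 0.757
    selected = [(u, c) for (u, c) in samples if abs(u - U_TRUE) / U_TRUE <= 0.02]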


Figure 12. The same function as in fig. 10, but only with parameter sets which belong to the correct U-value.

Figure 13. The same function as in fig. 11, but only with parameter sets which belong to the correct U-value.


Both pictures show that the problem of the fluctuating points, which could lead to a wrong solution, is still not solved. However, with some "imagination" an accumulation of points can be detected in both pictures. This accumulation has the shape of a triangle which "points" to the true solution. It is a matter of further research to what extent this purely visual impression can be evaluated mathematically.

5. CONCLUSIONS

It could be clearly shown that the minimum of the objective function does not necessarily give the parameter set with the true solution. Especially with short and/or heavily disturbed data sets, the formal use of parameter identification software can lead to completely wrong results.

This paper raises more questions than it gives answers. It was not the aim to leave a pessimistic impression. Although the model case used is relatively simple, the perturbation of the data was very high, so that the difficulties in the analysis of the data are explainable. The questions and problems should encourage engineers, mathematicians and statisticians to contribute to solutions.

ACKNOWLEDGEMENTS

The work described in this paper was partly supported by the EU projects IQ-TEST (ERK6-CT-1999-20003) and DAME-BC (ENK6-CT-2002-80650) as well as by the national German project "Electrochromic windows" (0327233G).

REFERENCES

[1] Madsen H, Holst J. Estimation of continuous-time models for the heat dynamics of a building. Energy and Buildings 1995;22:67-79.

[2] Baker P. Analysis of Round Robin Tests carried out for the IQ-TEST Thematic Network using the PASLINK test cell environment. Workshop “Dynamic Analysis and Modelling Techniques for Energy in Buildings”, Ispra, Italy, 2003:153-63.

[3] Park CS, Augenbroe G, Messadi T et al. Calibration of a lumped simulation model for double-skin facade systems. Energy and Buildings 2004;36:1117-30.

[4] Wetter M, Wright J. A comparison of deterministic and probabilistic optimization algorithms for nonsmooth simulation-based optimization. Building and Environment 2004;39:989-999.

[5] Jiménez MJ. Application of Dynamic Analysis to estimate the Heat Transfer Coefficient of a wall from In-situ measurements. Workshop “Dynamic Analysis Methods Applied to Energy Performance Assessment of Buildings”, Warsaw, Poland, 2004:207-85.

[6] van Dijk HAL, van der Linden GP. MRQT user guide, TNO Building and Construction Research, Delft, The Netherlands, 1993.

[7] Madsen H, Melgaard H. CTSM User Guide, Institute of Mathematical Statistics and Operations Research, Technical University of Denmark, Lyngby.

[8] MATLAB / System Identification Toolbox 6.1.2., User manual, The Mathworks.

[9] Gutschker O. LORD – Modelling and identification software for thermal systems, user manual, BTU Cottbus, 2004.