
Design of Experiments and Sequential Analysis used as an Optimization Technique of a Refrigerator Cabinet

Axel J. Ramm Whirlpool Corporation, Joinville, Santa Catarina, Brazil.

Abstract

Global competitive pressures are causing organizations to find ways to better meet the needs of their customers, to reduce costs, and to increase productivity.

Today the appliance industry, particularly refrigeration, operates in a very competitive scenario, and there is a strong effort to achieve cost reductions without degrading quality in order to remain competitive in the core business.

With respect to manufacturing process variability, refrigerator cabinets are the most affected components. They are basically built of external sheet metal parts and plastic inner liners, with the internal volume filled with polyurethane foam.

This paper presents some techniques to better represent refrigerator cabinets in finite element models. Statistical tools are used to support numerical model calibration and the evaluation of sources of variation. With a calibrated model in hand, an optimization analysis based on the Design of Experiments technique is carried out in ANSYS. Design of Experiments with sequential analysis is then directly compared to the subproblem approximation method of the ANSYS optimization routine, driven through APDL (ANSYS Parametric Design Language), which is an essential step in the optimization process.

The projected cost savings covered by this paper are about US$1,250,000 per year, and the knowledge generated will be extremely useful for refrigerator cabinet design guidelines.

Introduction

Reducing the time and cost of product and process development is a key concern in today's competitive industrial environment. Since refrigerator cabinets offer increasing opportunities for cost reduction, a systematic and complete study is required to achieve their structural optimization, reducing product cost without decreasing quality.

The main reason for guaranteeing cabinet robustness is that a lack of stiffness can cause insulation loss due to door misalignment, as well as aesthetic degradation, directly affecting the quality perceived by consumers.

Concerning cabinet robustness, the current production cabinets are approved under a “Product Test Method” which establishes a distortion limit for a fully assembled refrigerator cabinet at specified angular openings of its fully loaded door. In the same manner, there is a specification limit for the door deflection relative to the cabinet (door drop). Once the door is loaded, the cabinet distorts and the door moves downwards relative to its original position. Over long periods this phenomenon can be amplified by the viscoelastic behavior of the polyurethane foam, whose main consequence is creep, that is, an increasing deformation under sustained load with a strain rate that depends on the stress. Therefore, the main focus was given to the response variable “door drop”.

On the other hand, it is very difficult to evaluate an accurate value of cabinet deflection or door drop, since there is considerable variation between products of the same model. The manufacturing process is subject to a great number of noise factors that directly affect cabinet stiffness. Likewise, the product test conditions can significantly influence the structural assessment results.

This scattered environment makes it possible to discern only fairly large changes in cabinet stiffness, since a design modification can only be distinguished if its results remain robust against all sources of variation. The initial step of the proposed study therefore focuses on appraising sources of variation in order to determine data dispersion (standard deviation from the mean at a given confidence level) and to provide conditions to plan systematic laboratory tests that support finite element model calibration.

Considering all the relevant structural components and fixations, a finite element model of the overall refrigerator cabinet was built. In addition, the refrigerator and freezer doors were modeled in order to provide a proper loading distribution and to serve as a reference to evaluate the door drop due to cabinet distortion.

Proper finite element model validation is essential before starting the cabinet optimization study, since knowledge about variation has been gained and the structural behavior of the numerical model is then quantitatively in accordance with the real conditions.

The essential technique used in this study was a sequential strategy based on Design of Experiments.

Design of Experiments is a technique used to plan experiments, consisting of a series of tests of a system in which the levels of factors and background variables are changed and the effect of those changes on the response variable is observed.

The sequential nature of learning should be considered in planning experiments. As knowledge is gained sequentially, experiments will be refined by using new levels for the factors (inference space), adding some new factors or eliminating the negligible ones.

To set up a factorial design, the investigator determines the factors to be studied and the levels for each. A full factorial design consists of all possible combinations of the factors and levels, making it possible to obtain the maximum experiment resolution, since there is no aliasing between main effects and higher-order interactions.

The number of runs required by a full factorial design increases geometrically with the number of factors, and an increasing amount of the data is used to estimate higher-order interactions. These interactions are usually negligible and therefore of little interest to the experimenter. Fractional factorial designs are an important class of experimental designs that allow the size of experiments to be kept practical while still enabling the estimation of important effects. The drawback of fractional factorials is the confounding of factors and their interactions, which depends on the experiment resolution.
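As a simple illustration of how quickly full factorial run counts grow, and of how a half-fraction is generated from a design generator, the following Python sketch (not part of the original study, which used Minitab and ANSYS) enumerates both designs with the standard library.

```python
# Illustrative sketch: run counts for 2-level full factorial designs and a
# half-fraction built from a generator, using only the Python standard library.
from itertools import product

def full_factorial(k):
    """All 2**k combinations of k two-level factors coded as -1/+1."""
    return [list(run) for run in product((-1, +1), repeat=k)]

def half_fraction(k):
    """2**(k-1) runs: the k-th factor is generated as the product of the
    first k-1 columns, a typical way to define a half-fraction."""
    runs = []
    for base in product((-1, +1), repeat=k - 1):
        generated = 1
        for level in base:
            generated *= level
        runs.append(list(base) + [generated])
    return runs

if __name__ == "__main__":
    for k in (3, 5, 10):
        print(f"{k} factors: full factorial = {len(full_factorial(k))} runs, "
              f"half fraction = {len(half_fraction(k))} runs")
    # 10 factors: 1024 vs. 512 runs; heavier fractionation (such as the
    # 16-run design used later in the paper) trades resolution for far
    # fewer runs.
```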

All of the most important steps of the Design of Experiments sequential analysis are presented in this paper, from fractional factorial designs and parameter selection up to the final optimum solution. Afterwards, a traditional ANSYS optimization technique, specifically the Subproblem Approximation Method, is used as a comparison with the study based on sequential analysis.

Refrigerator Cabinet Geometry

The product considered in this analysis is a 450-liter double-door refrigerator which has been in production for over two years.

The refrigerator cabinet construction basically comprises external sheet metal parts, polyurethane foam filling and internal plastic liners. The metal parts are: wrapper (with roll-formed reinforcements), back panel, bottom deck, intermediary rail, front rail, compressor mounting plate, glider rails and hinges (upper, center and bottom). The cabinet inner liners are made of plastic material that adheres to the polyurethane foam and internally forms the refrigerator and freezer compartments.

Figures 01 and 02 show some of the most important refrigerator cabinet parts.

Figure 01. Cabinet Components – Front View

Figure 02. Cabinet and Doors Assembly – Opened and Closed Doors

Model Description and Modeling Techniques

The finite element model is composed of solid and shell elements. The polyurethane foam, EPS mullion, hinges and levelers were modeled with solid tetrahedron elements (SOLID45), and the other parts, basically thin plates, were modeled with SHELL181 elements. All clinch joint and screw connections were modeled with Beam 181 elements.

All material properties were considered linear isotropic, and the input data was taken from laboratory tests and supplier technical reports. In particular for the polyurethane foam, since it is susceptible to a high level of variation (from position to position in the cabinet and due to injection process fluctuation), the elastic properties were evaluated in a laboratory test device and the elasticity modulus was included as a factor in the analysis, with the factor levels taken as the means of its minimum and maximum values.

In order to adequately represent the real loading conditions, the mass of all cabinet parts was evaluated. All evaluated masses were distributed at their proper positions, so that the total mass and center of gravity of the real fully assembled cabinet ended up very similar to those of the finite element model.

The cabinet was constrained at its bottom base, simulating a real operating condition. Since all cabinet supports are able to rotate, only translations were restricted at three base locations. Figure 03 shows the applied constraints.

Figure 03. Boundary Conditions.

The door attachments to the hinges were modeled with constraint equations, which allowed translations and rotations to be correctly transferred from the doors to the cabinet. Thus, it was possible to support each door only at its bottom position and to release the rotations about the hinge pins. Two additional constraints were therefore set at the upper liner of each door to eliminate rigid body motion (figure 03).

Following the Product Test Method, the masses corresponding to the cabinet and door loading were distributed at their corresponding locations. The acceleration due to gravity was applied as the load boundary condition. Figure 04 shows the mass distribution in the cabinet and door liners as applied to the finite element model.

Figure 04. Cabinet Loading (Mass) Distribution.

For the door drop assessment, the analytical model was considered with its doors in the closed position. The response variable was defined as the difference between door drop and cabinet drop, so that only the relative displacement between door and cabinet is considered. Figures 05 and 06 show the evaluation scheme for cabinet door drop.

Figure 05. Analyzed Finite Element Model: Cabinet Distortion due to Door Loading.

Figure 06. Cabinet Door Drop Evaluation.

Sources of Variation Evaluation

Only when a numerical model is calibrated can it be used to correctly assess structural performance and afterwards run an optimization analysis.

The first step of the model calibration was the definition of possible sources of variation which could introduce errors into the test data collection. From the Test Process Map, four factors were selected as the most influential: different products of the same model, test setup, door opening velocity and time to read the data, with repeated measurements taken for each configuration. The Sample Tree Diagram is presented in figure 07.

Figure 07. Experiment Sample Tree Diagram.

The total cabinet lateral displacement (sway X) was considered to be the response variable. Regarding the test conditions, the doors were initially closed (zero position), the signal then started to be recorded and the doors were opened. Figure 08 shows a sample of the signal collected from one experiment run. The sum of the absolute values of the signal peak and valley is taken as the cabinet sway X.

Figure 08. Signal Collected from a Laboratory Test.
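For reference, the sway X value defined above can be computed from a recorded signal as in the short Python sketch below; the array contents and units are illustrative assumptions, not data from the paper.

```python
# Minimal sketch: Sway X as the sum of the absolute peak and valley of the
# recorded lateral-displacement signal (array values and units are assumed).
import numpy as np

def sway_x(signal_mm: np.ndarray) -> float:
    """Sum of the absolute values of the signal peak and valley, in mm."""
    return abs(signal_mm.max()) + abs(signal_mm.min())

# Example with a made-up signal: peak +3.1 mm, valley -1.4 mm -> sway = 4.5 mm
example = np.array([0.0, 1.2, 3.1, 2.0, -0.5, -1.4, -0.2])
print(sway_x(example))
```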

In the Sample Tree Diagram it can be seen that data collection started with the random choice of four products of the same model. Each product was tested under two different setups (placing the cabinet in the test position), two door opening velocities (fast and slow) and two reading times for the cabinet deflection (15 seconds and 3 minutes). Each configuration was measured three times, totaling 96 measurements.

For the data analysis an important statistical tool was used: Control Charts. Control Charts for Averages and Ranges are a simple and effective way to present data as a basis for process stability evaluation, making it possible to track both the process level and the process variation at the same time, as well as to detect the presence of special causes. Figure 09 shows the Sample Mean (Averages) and Sample Range charts for the displacement measured at the cabinet upper lateral corner.

[Xbar-R Chart of Sway X: Sample Mean chart with X̄ = 4.5094, UCL = 4.6283, LCL = 4.3904; Sample Range chart with R̄ = 0.1162, UCL = 0.2993, LCL = 0.]

Figure 09. Sample Mean and Sample Range Charts.
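The control limits shown in figure 09 follow the standard Xbar-R formulas for subgroups of three repeated measures. A minimal Python sketch of that calculation, using the usual Shewhart constants for subgroup size 3 and simulated data rather than the paper's measurements, is given below.

```python
# Sketch of the Xbar-R limit calculation behind a chart like figure 09,
# assuming subgroups of size 3 (the three repeated measures per configuration).
import numpy as np

# Standard Shewhart constants for subgroup size n = 3
A2, D3, D4 = 1.023, 0.0, 2.575

def xbar_r_limits(subgroups: np.ndarray):
    """subgroups: (k, 3) array, one row per subgroup of 3 measures."""
    xbar = subgroups.mean(axis=1)                          # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)      # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {
        "Xbar": (xbarbar, xbarbar - A2 * rbar, xbarbar + A2 * rbar),  # CL, LCL, UCL
        "R": (rbar, D3 * rbar, D4 * rbar),                            # CL, LCL, UCL
    }

# Usage with simulated data (the paper's measured values are not reproduced here)
rng = np.random.default_rng(0)
data = rng.normal(4.5, 0.15, size=(32, 3))
print(xbar_r_limits(data))
```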

The process is said to be SPC (stable, predictable and consistent) if all the points in the Range Chart fall inside the control limits. Analyzing the Sample Range Chart in figure 09, the process stability can be verified.

Another important step is the MSE (Measurement System Evaluation). The measurement system is said to have enough discrimination if the Range chart shows the required number of resolution levels. The capacity to distinguish between subgroups is confirmed if more than 50% of the points in the Sample Mean chart fall outside the control limits. Looking at the Sample Mean chart in figure 09, these requirements are also verified for the measurement process.

Figure 10 shows a very important graph, the Variability Chart, which plots the mean for each factor level side by side. Along with the data, it displays the mean, range and standard deviation of the data in each category, showing how they change across categories.

This graph is also very helpful for identifying systematic effects between factors and levels. An interesting observation is that the measurements at 3 minutes are always greater than those at 15 seconds, which can be explained by the creep of the polyurethane foam inside the cabinet.

Page 10: 2006 Int Ansys Conf 120

Figure 10. Variability Chart and Components of Variation (COV) Calculation.

To quantify the contribution of each factor, a components of variation (COV) analysis was employed. The COV study showed that the variation between products is the most significant compared with the other selected factors. This indicates that, regardless of how the setup, velocity, time and measurement levels are configured, the product-to-product variation will always stand out. The bottom part of figure 10 shows that 71.7% of the total variation in the response variable is due to differences among products (manufacturing).
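For illustration, a simplified components-of-variation estimate can be obtained with a one-way random-effects model treating product as the only random factor; the Python sketch below uses the method-of-moments formulas with synthetic data and ignores the additional nested factors of the actual study.

```python
# Simplified sketch of a components-of-variation estimate: a one-way
# random-effects model with "product" as the only random factor (the paper's
# full study has more factors; this only illustrates the calculation).
import numpy as np

def product_variance_share(groups):
    """groups: list of 1-D arrays, one per product, with repeated measures."""
    k = len(groups)
    n = len(groups[0])                      # assume balanced data
    grand = np.concatenate(groups).mean()
    ms_between = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    var_product = max((ms_between - ms_within) / n, 0.0)   # method of moments
    var_error = ms_within
    return 100.0 * var_product / (var_product + var_error)

# Synthetic example: 4 products, 3 repeated measures each
rng = np.random.default_rng(1)
products = [rng.normal(mu, 0.1, 3) for mu in rng.normal(4.5, 0.25, 4)]
print(f"{product_variance_share(products):.1f}% of variation due to products")
```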

In order to ensure better data reliability for the numerical model calibration, it was decided to increase the number of samples of different products of the same model. Therefore, 16 products were randomly selected from the manufacturing line and tested considering only two factors: products and measurements. Figure 11 shows the control charts for this experiment.

[Xbar-R Chart of X: Sample Mean chart with X̄ = 4.442, UCL = 4.569, LCL = 4.314; Sample Range chart with R̄ = 0.1247, UCL = 0.3211, LCL = 0.]

Figure 11. Sample Mean and Sample Range Charts.

Figure 12 shows the data distribution of the collected samples with a normal fit overlaid.

[Histogram of X with normal fit: mean = 4.442, StDev = 0.2842, N = 48.]

Figure 12. Normal Fit Distribution for the Collected Samples.

The calculated mean and standard deviation were 4.442 mm and 0.2842 mm respectively; that is, Sway X = 4.442 ± 0.2842 mm.

Numerical Model Calibration

For the numerical model calibration, the product whose response variable value (sway X) was nearest to the sample mean was selected.

The product support at the rear corner of the base on the hinge side was removed, because this location lifts off the floor and ends up not transferring any load to the floor.

From this step on the response variable considered in the analysis was Door Drop, as shown in figures 05 and 06.

Initially, the cabinet was set on the test platform and displacement transducers were positioned at the specified locations. An additional gauge was placed at the cabinet upper position in order to measure the cabinet door drop.

In order to reproduce the loading boundary condition of the numerical model, all loaded shelves were removed from the refrigerator, the doors were closed and all displacement gauges were set to the zero reference. The data acquisition system was then set to start recording the signal. The refrigerator doors were opened and the shelves with distributed mass were carefully placed at their original positions. The doors were then closed and the signal recording was stopped after 20 seconds of measurement. Figure 13 shows the output data signal: the lateral displacement is shown in blue and the back displacement in red.

Figure 13. Data Signal from a Laboratory Calibration Test.

All measurements were repeated five times to make sure the acquired data was reliable. These values showed a good correlation with the finite element model data (see figure 14).

Figure 14. Correlation Between Real Data and Numerical Model.

Design of Experiment and Sequential Analysis Optimization

The first optimization approach was to execute sequential virtual DOEs (Designs of Experiments). The selection of factors and levels was based on 6-Sigma tools. 6-Sigma is a framework based on the QFD (Quality Function Deployment) method, whose philosophy is to use engineering knowledge and critical thinking to evaluate the design factors that could possibly affect the response variable.

The goal was to determine the effects of the cabinet component design on door drop. These relationships are generically termed Y = f(x) at Whirlpool Corporation, Y being the dependent variable (door drop in this case) and the x's being the independent variables, i.e. component geometry and thickness (stiffnesses).

Specialized statistical software (Minitab) was also used to calculate the significance of the assigned factors and the interactions among them.

All finite element analyses executed during this procedure were performed in the ANSYS environment, taking advantage of APDL batch mode to run the DOE treatments.
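A hypothetical sketch of such a batch driver is shown below; the executable name, command-line flags, parameter names and file names are assumptions for illustration only and would need to be adapted to the actual APDL model.

```python
# Hypothetical driver for running DOE treatments in ANSYS batch mode.
# The executable name, command-line flags, parameter names and file names
# are assumptions, not taken from the paper; adapt to the local installation.
import subprocess

def run_treatment(run_id: int, levels: dict) -> None:
    """Write a parameter file with the factor levels, then call the solver."""
    with open(f"params_{run_id}.inp", "w") as f:
        for name, value in levels.items():
            f.write(f"{name} = {value}\n")        # e.g. IR_THK = 0.55 (assumed name)
        f.write("/INPUT, cabinet_model, inp\n")   # assumed main APDL model file
    subprocess.run(
        ["ansys", "-b", "-i", f"params_{run_id}.inp", "-o", f"run_{run_id}.out"],
        check=True,
    )

# Example: one treatment of a fractional factorial run
run_treatment(1, {"IR_THK": 0.55, "BP_STIFF": 1, "SCR": 1})
```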

The design variables considered in the analysis were: SCR (screw fixation between intermediary rail and wrapper front flange), BP (back panel stiffness), IR (intermediary rail thickness), CMP (compressor mounting plate thickness), GR (glider rail thickness), GRL (glider rail lateral ribs), BD (bottom deck thickness) and FR (front rail thickness). The state (response) variable used to control the cabinet stiffness was the door drop under static load application, and the cost function to be minimized was the mass of the cabinet sheet metal parts.

Assuming that the most prominent factor responsible for product-to-product manufacturing variation was the polyurethane foam stiffness, an additional factor, PU (polyurethane foam stiffness), was included in the analysis.

The initial design was a fractional factorial with 10 factors (design variables) and 16 runs, resulting in a resolution III experiment, in which main effects are confounded with two-factor interactions. The FRD (Factors Relationship Diagram) is shown in figure 15.

Figure 15. Factors Relationship Diagram for the First DOE.

The graph presented in figure 16 is called the Normal Probability Plot of the effects. Points that do not fall near the line usually signal important effects. Important effects are larger and further from the fitted line than unimportant effects, which tend to be smaller and centered around zero.

Figure 16. Normal Probability Plot for the First DOE.

The Pareto chart (figure 17) allows you to look at both the magnitude and the importance of an effect. This chart displays the absolute value of the effects, and draws a reference line on the chart. Any effect that extends past this reference line is potentially important.

Figure 17. Pareto Plot for the First DOE.

A Main Effects Plot shows the mean response at each level of a factor; Minitab plots these means and connects them with a line. A main effect occurs when the mean response changes across the levels of a factor. Main effects plots can be used to compare the relative strength of the effects across factors.

Figure 18. Main Effects Plot for the First DOE.
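The main effects plotted in figure 18 are, for each factor, the difference between the mean response at the high level and at the low level of the coded design. The Python sketch below illustrates that calculation on a small synthetic 2^3 example; the factor names and response values are only placeholders.

```python
# Sketch of the main-effect estimates behind a plot like figure 18: for each
# factor, the mean response at the high level minus the mean at the low level
# of a coded (-1/+1) design. Factor names and responses are illustrative.
import numpy as np

def main_effects(design: np.ndarray, y: np.ndarray, names):
    """design: (runs, factors) matrix of -1/+1 codes; y: responses per run."""
    effects = {}
    for j, name in enumerate(names):
        effects[name] = y[design[:, j] == +1].mean() - y[design[:, j] == -1].mean()
    return effects

# Tiny 2^3 example with a synthetic response
design = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
y = 5.0 + 0.8 * design[:, 0] - 0.3 * design[:, 1] + 0.05 * design[:, 2]
print(main_effects(design, y, ["SCR", "IR", "BP"]))
```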

In order to increase data reliability, it was decided to increase the resolution by performing a fold-over (mirror) of the experiment. By analyzing both experiments (original, figure 15, and fold-over, figure 19) together, the resulting resolution was increased to IV, leaving main effects aliased only with three-way interactions.

Figure 19. Factors Relationship Diagram for the Fold-Over Experiment.
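A full fold-over simply appends the sign-reversed design matrix to the original fraction; analyzing both halves together separates main effects from two-factor interactions. The sketch below, using a small illustrative fraction rather than the paper's 16-run design, shows the construction.

```python
# Sketch of a full fold-over: append the sign-reversed runs to the original
# fraction. Analyzing both halves together clears main effects of two-factor
# interaction aliasing (resolution III -> IV).
import numpy as np

def fold_over(design: np.ndarray) -> np.ndarray:
    """design: (runs, factors) matrix of -1/+1 codes; returns the combined design."""
    return np.vstack([design, -design])

original = np.array([[-1, -1,  1],
                     [ 1, -1, -1],
                     [-1,  1, -1],
                     [ 1,  1,  1]])   # a small 2^(3-1) fraction with C = AB
combined = fold_over(original)
print(combined.shape)   # (8, 3): the original 4 runs plus their mirror image
```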

Figure 20. Normal Probability Plot for the Fold-Over Experiment.

With the increased experiment resolution, the most significant difference was that factor K (confounding structure K = AB = CE = DH = FG), which appeared significant in the first DOE, was in fact revealed to be the interaction AB in the fold-over experiment (resolution IV).

Figure 21. Pareto Plot for the Fold-Over Experiment.

Figure 22. Main Effects Plot for the Fold-Over Experiment.

Considering a mean value for PU (polyurethane foam stiffness) and setting aside the factors irrelevant to the response variable (door drop), namely compressor mounting plate thickness (CMP), glider rail thickness (GR), glider rail lateral ribs (GRL), bottom deck thickness (BD) and front rail thickness (FR), it was decided to run a full factorial experiment (resolution ∞, i.e. no aliasing).

Figure 23. Factors Relationship Diagram for the Full Factorial Experiment.

Figure 24. Normal Probability Plot for the Full Factorial Experiment.

Figure 25. Pareto Plot for the Full Factorial Experiment.

Figure 26. Main Effects Plot for the Full Factorial Experiment.

The chart presented in figure 27 is called Interactions Plot. An Interactions Plot is a plot of means for each level of a factor with the level of a second factor held constant.

An interaction between factors occurs when the change in response from the low level to the high level of one factor is not the same as the change in response at the same two levels of a second factor. That is, the effect of one factor is dependent upon a second factor. You can use interactions plots to compare the relative strength of the effects across factors.

Figure 27. Interactions Plot for the Full Factorial Experiment.
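The interaction effect behind such a plot can be estimated, for a two-level design, as half the difference between the effect of one factor at the high and low levels of the other. The Python sketch below illustrates this with synthetic values.

```python
# Sketch of a two-factor interaction estimate for a coded 2-level design:
# half the difference between the effect of A at B = +1 and at B = -1.
import numpy as np

def interaction_effect(a: np.ndarray, b: np.ndarray, y: np.ndarray) -> float:
    effect_a_at_b_high = y[(a == 1) & (b == 1)].mean() - y[(a == -1) & (b == 1)].mean()
    effect_a_at_b_low = y[(a == 1) & (b == -1)].mean() - y[(a == -1) & (b == -1)].mean()
    return 0.5 * (effect_a_at_b_high - effect_a_at_b_low)

# Synthetic example: a strong A*B interaction shows up as non-parallel lines
a = np.array([-1, 1, -1, 1])
b = np.array([-1, -1, 1, 1])
y = np.array([5.0, 5.2, 5.1, 6.0])
print(interaction_effect(a, b, y))   # 0.35
```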

The most sensitive factor regarding mass (cost) is the cabinet wrapper. Reducing its thickness from 0.6 mm to 0.55 mm (8.3%) increases the cabinet lateral displacement by 5.2% and the door drop by 3.4%.

The presence of a screw connector between the intermediary rail and the wrapper was the most significant factor in keeping the door drop at a reasonable level. To compensate for the increase in door drop due to the wrapper thickness reduction (3.4%), the possibility of using two screw connectors between the wrapper front flanges and the intermediary rail was studied. This design change produced a 12% reduction in door drop, making up for the wrapper stiffness loss.

APDL Optimization Techniques

The ANSYS program offers two optimization methods to accommodate a wide range of optimization problems: the subproblem approximation method and the first order method.

The subproblem approximation method can be described as an advanced zero-order method in that it requires only the values of the dependent variables (objective function and state variables), and not their derivatives. There are two concepts that play a key role in the subproblem approximation method: the use of approximations for the objective function and state variables, and the conversion of the constrained optimization problem to an unconstrained problem. The conversion is done by adding penalties to the objective function approximation to account for the imposed constraints.

For this method, the program establishes the relationship between the objective function and the DVs by curve fitting. This is done by calculating the objective function for several sets of DV values (that is, for several designs) and performing a least squares fit between the data points. The resulting curve (or surface) is called an approximation. Each optimization loop generates a new data point, and the objective function approximation is updated. It is this approximation that is minimized instead of the actual objective function.

State variables are handled in the same manner. An approximation is generated for each state variable and updated at the end of each loop.

The search for a minimum of the unconstrained objective function approximation is then carried out by applying a Sequential Unconstrained Minimization Technique (SUMT) at each iteration.

The first step in minimizing the constrained problem is to represent each dependent variable by an approximation, represented by the ^ notation. For the objective function, and similarly for the state variables,

$\hat{f}(x) = f(x) + \text{error}$

$\hat{g}(x) = g(x) + \text{error}$

$\hat{h}(x) = h(x) + \text{error}$

$\hat{w}(x) = w(x) + \text{error}$

The most complex form that the approximations can take on is a fully quadratic representation with cross terms. Using the example of the objective function,

$\hat{f} = a_0 + \sum_{i}^{n} a_i x_i + \sum_{i}^{n} \sum_{j}^{n} b_{ij} x_i x_j$

You can control curve fitting for the optimization approximations. You can request a linear fit, quadratic fit, or quadratic plus cross terms fit. By default, a quadratic plus cross terms fit is used for the objective function, and a quadratic fit is used for the SVs.

A weighted least squares technique is used to determine the coefficients a_i and b_ij.
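As an illustration of this curve-fitting step, the sketch below fits a fully quadratic surface with cross terms to a set of sampled designs by ordinary least squares; the unweighted fit and the sample data are simplifying assumptions, since ANSYS applies its own weighting internally.

```python
# Sketch of the curve fit behind the subproblem approximation: a fully
# quadratic surface with cross terms fitted to sampled designs by least
# squares (unweighted here for simplicity; ANSYS uses a weighted fit).
import numpy as np
from itertools import combinations_with_replacement

def quadratic_basis(x: np.ndarray) -> np.ndarray:
    """[1, x1..xn, all products xi*xj with i <= j] for one design point x."""
    terms = [1.0] + list(x)
    terms += [x[i] * x[j] for i, j in combinations_with_replacement(range(len(x)), 2)]
    return np.array(terms)

def fit_quadratic(X: np.ndarray, f: np.ndarray) -> np.ndarray:
    """X: (samples, n) design points; f: objective values. Returns coefficients."""
    A = np.array([quadratic_basis(x) for x in X])
    coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
    return coeffs

def predict(coeffs: np.ndarray, x: np.ndarray) -> float:
    return float(quadratic_basis(x) @ coeffs)

# Usage: fit to a few sampled designs of a 2-variable toy problem
rng = np.random.default_rng(2)
X = rng.uniform(0.4, 0.8, size=(12, 2))
f = 3.0 + 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]
c = fit_quadratic(X, f)
print(predict(c, np.array([0.6, 0.6])))
```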

With function approximations available, the constrained minimization problem is recast as follows:

Minimize

$\hat{f} = \hat{f}(x)$

subject to

$\underline{x}_i \le x_i \le \overline{x}_i \quad (i = 1, 2, 3, \ldots, n)$

$\hat{g}_i(x) \le \overline{g}_i + \alpha_i \quad (i = 1, 2, 3, \ldots, m_1)$

$\underline{h}_i - \beta_i \le \hat{h}_i(x) \quad (i = 1, 2, 3, \ldots, m_2)$

$\underline{w}_i - \gamma_i \le \hat{w}_i(x) \le \overline{w}_i + \gamma_i \quad (i = 1, 2, 3, \ldots, m_3)$

where $\alpha_i$, $\beta_i$ and $\gamma_i$ are tolerances on the state variable limits.

The next step is the conversion from a constrained problem to an unconstrained one. This is accomplished by means of penalty functions, leading to the following subproblem statement.

Minimize

$F(x, p_k) = \hat{f} + f_0 p_k \left[ \sum_{i=1}^{n} X(x_i) + \sum_{i=1}^{m_1} G(\hat{g}_i) + \sum_{i=1}^{m_2} H(\hat{h}_i) + \sum_{i=1}^{m_3} W(\hat{w}_i) \right]$

in which X is the penalty function used to enforce design variable constraints, and G, H and W are penalty functions for the state variable constraints. The reference objective function value, f_0, is introduced in order to achieve consistent units. A sequential unconstrained minimization technique (SUMT) is used to solve this equation at each design iteration. The subscript k reflects the use of sub-iterations performed during the subproblem solution, whereby the response surface parameter is increased in value (p_1 < p_2 < p_3, etc.) in order to achieve accurate, converged results.
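The sketch below illustrates the same idea in Python: the surrogate objective plus exterior penalties is minimized repeatedly while the penalty weight is increased, in the spirit of SUMT. The penalty form, parameter values and the toy problem are illustrative assumptions and do not reproduce the ANSYS internals.

```python
# Sketch of the penalty conversion and a SUMT-style loop: minimize the
# surrogate objective plus penalties on violated constraints, increasing the
# penalty weight each sub-iteration. Illustrative only; the actual ANSYS
# penalty functions are internal to the program.
import numpy as np
from scipy.optimize import minimize

def penalized(x, f_hat, g_hats, g_limits, lb, ub, p):
    """Surrogate objective plus quadratic exterior penalties with weight p."""
    value = f_hat(x)
    for g, g_max in zip(g_hats, g_limits):               # state variable constraints
        value += p * max(0.0, g(x) - g_max) ** 2
    value += p * np.sum(np.maximum(0.0, lb - x) ** 2)    # design variable bounds
    value += p * np.sum(np.maximum(0.0, x - ub) ** 2)
    return value

def sumt(f_hat, g_hats, g_limits, lb, ub, x0, p0=1.0, growth=10.0, iters=5):
    x, p = np.asarray(x0, dtype=float), p0
    for _ in range(iters):                                # p0 < p1 < p2 ... as in the text
        res = minimize(penalized, x, args=(f_hat, g_hats, g_limits, lb, ub, p))
        x, p = res.x, p * growth
    return x

# Toy usage: minimize a mass-like objective x1 + x2 subject to a
# stiffness-like constraint 1/(x1*x2) <= 5 and bounds 0.3 <= xi <= 1.0
x_opt = sumt(lambda x: x[0] + x[1],
             [lambda x: 1.0 / (x[0] * x[1])], [5.0],
             lb=np.array([0.3, 0.3]), ub=np.array([1.0, 1.0]),
             x0=[0.8, 0.8])
print(x_opt)
```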

At the end of each loop, a check for convergence (or termination) is made. The problem is said to be converged if the current, previous, or best design is feasible and any of the following conditions are satisfied:

• The change in objective function from the best feasible design to the current design is less than the objective function tolerance;
• The change in objective function between the last two designs is less than the objective function tolerance;
• The changes in all design variables from the current design to the best feasible design are less than their respective tolerances;
• The changes in all design variables between the last two designs are less than their respective tolerances.

Convergence does not necessarily indicate that a true global minimum has been obtained. It only means that one of the four criteria mentioned above has been satisfied. Therefore, it is your responsibility to determine if the design has been sufficiently optimized. If not, you can perform additional optimization analyses.
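As a simple illustration, the convergence test described above can be expressed as in the sketch below, assuming a single tolerance per quantity and that the designs being compared are feasible.

```python
# Sketch of the convergence tests listed above: the loop stops when the
# objective or all design variables change by less than their tolerances,
# either relative to the best feasible design or to the previous design
# (assuming the current, previous or best design is feasible).
import numpy as np

def converged(f_curr, f_prev, f_best, x_curr, x_prev, x_best, tol_f, tol_x):
    obj_vs_best = abs(f_curr - f_best) < tol_f
    obj_vs_prev = abs(f_curr - f_prev) < tol_f
    dv_vs_best = np.all(np.abs(np.asarray(x_curr) - np.asarray(x_best)) < tol_x)
    dv_vs_prev = np.all(np.abs(np.asarray(x_curr) - np.asarray(x_prev)) < tol_x)
    return obj_vs_best or obj_vs_prev or dv_vs_best or dv_vs_prev

# Example: a small objective change between the last two designs triggers convergence
print(converged(10.02, 10.03, 9.90, [0.55, 0.60], [0.56, 0.61], [0.50, 0.58],
                tol_f=0.05, tol_x=0.02))
```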

Subproblem Approximation Method Procedure

The same problem previously addressed by Design of Experiments and Sequential Analysis was submitted to the ANSYS Subproblem Approximation optimization.

The Design Variables considered in the analysis were: SCR (screw fixation between intermediary rail and wrapper front flange), BP (back panel stiffness), IR (intermediary rail thickness), CMP (compressor mounting plate thickness), GR (glider rail thickness), GRL (glider rail lateral ribs), BD (bottom deck thickness) and FR (front rail thickness). The State Variable was Door Drop and the Cost Function to be minimized was Cabinet Mass.

The results of the ANSYS Subproblem Approximation technique are listed in figure 28 in comparison with the results obtained by Design of Experiments and Sequential Analysis.

Figure 28. Comparison of DOE Optimization Technique and Subproblem Approximation Method.

Conclusions

The study started by evaluating sources of variation from manufacturing and structural tests which could influence the response variables. As knowledge about variation was gained, refrigerator cabinet finite element modeling was carried out and calibrated against real test data.

The Design of Experiments technique is a very powerful tool to learn about response variable sensitivities at low cost. Sequential analysis makes it possible to carry information about factor effects and interactions across experiments. This feature is crucial for eliminating non-significant factors, pursuing robust design guided by factor interactions and increasing experiment resolution.

The results given by the two optimization approaches were very similar, as depicted in figure 28. However, each method has its own advantages.

The final optimized design provided a 15% mass saving per product, resulting in a cost reduction of about US$1,250,000 per year.

References

[1] WHEELER, D. J., CHAMBERS, D. S., Understanding Statistical Process Control, Second Edition, SPC Press, Knoxville, Tennessee, 1992;

[2] MINITAB, Minitab Statistical Software, Release 13.31 for Windows, 2000;

[3] MONTGOMERY, D. C., Design and Analysis of Experiments, John Wiley & Sons, Inc., New York, 1997;

[4] ANSYS Inc., ANSYS Manual, Design Optimization, 2005;

[5] ARORA, J. S., Introduction to Optimum Design, McGraw-Hill, 1989;

[6] VANDERPLAATS, G. N., Numerical Optimization Techniques for Engineering Design with Applications, McGraw-Hill, 1984.