
UNIVERSITY OF CINCINNATI

_____________ , 20 _____

I, ______________________________________________, hereby submit this as part of the requirements for the degree of:

________________________________________________

in:

________________________________________________

It is entitled:

________________________________________________

________________________________________________

________________________________________________

________________________________________________

Approved by:
________________________
________________________
________________________
________________________
________________________

Design Optimization of Mechanical Components

A Thesis submitted to the

Division of Research and Advanced Studies

of the University of Cincinnati

in partial fulfillment of the

requirements for the degree of

MASTER OF SCIENCE

in the Department of Mechanical, Industrial and Nuclear Engineering

of the College of Engineering

2002

by

Dinar V. Deshmukh

B. E., University of Pune, Pune, 1999

Committee Chair : Dr. David F. Thompson

Abstract

The need for high performance, low cost designs in the engineering industry drives designers to search for a utopian design. Design optimization therefore assumes greater importance in product development than any other single aspect. Usually, for a design problem under consideration, there are multiple parameters that need to be optimized, and often these are conflicting in nature. The designer needs to arrive at a suitable design that not only fulfills all the requirements but also meets all the optimization objectives to the greatest extent possible.

With this goal, an integrated approach to optimization is presented here with the scope of

designing mechanical components in mind. The method presented incorporates a goal

programming approach called the Compromise Decision Support Problem Technique combined

with a Branch and Bound Algorithm to solve multi-objective, constrained non-linear problems

with mixed variables. The approach is initially presented qualitatively, and later illustrated with

practical design problems.

The approach is well suited to solving real-life engineering problems that are highly constrained, have multiple objectives, and must yield designs that adhere to industry standards. We have applied the approach to designing a helical compression spring and a three-stage gear train, components that find universal application in mechanical engineering. The nature of Pareto Optimal solutions for multiple objective problems is explored and a trade-off analysis is proposed. The results obtained are compared with existing solutions in the literature.

Acknowledgement

First and foremost I would like to express my gratitude to Dr. David F. Thompson for

introducing me to ‘optimization’. The search that started as a graduate course has developed into

this thesis. I would like to thank him for his constant encouragement and input that has

contributed substantially to my work. Without his guidance, I would not have been able to

explore the rich and diverse subject of optimal design engineering.

I would also like to thank Dr. Ed Berger and Dr. Sam Anand for serving on my thesis

committee. Their appraisal of this work has gone into improving it beyond what I could have

envisioned. I thank them for their support and encouragement.

I express my thanks towards my colleagues in the Department of Mechanical, Industrial

and Nuclear Engineering who were always present for valuable discussion and much needed

input. They provided me with a congenial work environment and a friendly banter that

encouraged me throughout my stay at the University of Cincinnati. I also thank the staff at the

Engineering Library at the University of Cincinnati for their assistance in procuring many of the

references, which would not have been accessible otherwise.

I would like to thank my parents who have provided me with an excellent opportunity to

seek knowledge. Without their blessings I would not have been able to pursue my academic

goals. My thanks would be incomplete without mentioning the support and encouragement of Bela, who has been an inspiration throughout.


TABLE OF CONTENTS

Sr. No. Title
A List of Figures
B List of Tables
C List of Symbols
1 The Design Process
2 Single Objective Optimization
3 Multiple Objective Optimization
4 The Compromise Decision Support Problem Technique
5 The Spring Design Problem
6 The Gear Train Design Problem
7 Conclusion
References
Appendix 1
Appendix 2


LIST OF FIGURES

Sr. No. Title
1.1 Conventional Design Process
1.2 Design Optimization Process
3.1 Geometrical Interpretation of the Weighting Objectives Method
3.2 Weighting Objectives Method for a Non-Convex Problem
3.3 Graphical Definition of Pareto Optimal
3.4 (a) Convex Feasible Set
3.4 (b) Non-Convex Feasible Set
4.1 Graphical Representation of the Compromise DSP
4.2 Progress of the Branch and Bound Algorithm
5.1 Helical Compression Spring
6.1 Surface Fatigue Life Factor
6.2 Three Stage Gear Train
6.3 Lewis Geometry Factor
6.4 Pareto Optimal Curves for Constant Torque
6.5 Pareto Optimal Curves for Constant Speed Ratio


LIST OF TABLES

Sr. No. Title
5.1 Allowable Wire Diameters for ASTM A228
5.2 Comparison of Optimal Solutions
5.3 Percentage Improvement
5.4 Comparison of Solution Time
6.1 Input Parameters
6.2 Material Properties and Constants
6.3 Pareto Optimal Solutions for Constant Torque
6.4 Pareto Optimal Solutions for Constant Speed Ratio


LIST OF SYMBOLS

The list of variables below is used to describe a general optimization problem in this

work. Other symbols specific to individual problems are listed as and when they are used.

xi Design Variable

X Design Vector

di Deviation Variable (Compromise Decision Support Problem)

f Objective Function

Ai Actual Value of System Goal

Gi Desired Value of System Goal

gi Inequality Constraints

hi Equality Constraints

xlb Lower Bound on Design Variables

xub Upper Bound on Design Variables

wi Weights Attached to Individual Objective Functions


Chapter 1

The Design Process

1.1 Introduction

The process of design in general may be broadly termed as the process of converting

information that characterizes the needs and requirements for a product into knowledge about a

product. This definition can be applied to products as well as processes. By product we mean a

tool necessary to fulfill a specified objective with a given set of parameters. A process, however, is the method used to arrive at such a product by choosing among the various alternatives

available [1].

The job of a designer is not simply limited to computing various parameters associated

with a design. Increasingly, decisions taken during the design process define the primary

responsibility of the designer. Decisions in design are invariably multileveled and

multidimensional in nature. They involve information that may come from different sources and

disciplines. There may not be a single parameter that defines merit and performance. Problems

that have more than one performance parameter are often known as multi-objective optimization

problems. All the information required to arrive at a decision may not be available. Some of the

information used in arriving at a decision may be hard, that is, based on scientific principles and

some information may be soft, based on the designer's judgment and experience.

There are constraints that determine the feasibility of a design or that of a decision. In the

end, there might not even be a feasible solution to the problem. However, in most real-life

scenarios, problems are generally not well defined and there exists great ambiguity in


determining the design parameters. As such there may not be benchmarks to choose one design

over the other. This is usually the case in 'open systems', systems that cannot be analyzed independently of the environment in which they exist. However, for a closed-form, well-formulated

problem, the designer may arrive at a suitable solution or a family of solutions.

In general, the design can be classified into the following three families [1] :

• Original Design : An original solution principle is determined for a desired system and used

to create the design of a product. The design specification for the system may require the

same, similar or a new task altogether.

• Adaptive Design : An original design is adapted to different conditions or tasks; thus, the

solution principle remains the same but the product will be sufficiently different so that it can

meet the changed tasks that have been specified.

• Variant Design : The size and/or arrangement of parts or subsystems of the chosen system

are varied. The desired tasks and solution principle are not changed.

However, in any case, the basic structure of the design process remains the same.

Conventional design has been largely based on the knowledge gained by the designer over years

of experience. The designer can use this knowledge in the form of heuristics to arrive at a

satisfactory solution to a design problem. The structure of conventional design can be illustrated

in the flow-chart below [2].


(Flow chart: Analyze the System → Collect Information to Describe the System → Estimate Initial Design → Check Performance Criteria → Is the Design Satisfactory? If yes, STOP; if no, Change the Design Based on Experience/Heuristics and repeat.)

Fig. 1.1 Conventional Design Process [2]

1.2 Motivation

Mechanical design optimization remains a heavily researched topic to date. Over the years, many optimization methods have been proposed. They span a vast range of problems

such as linear programming, unconstrained non-linear optimization, constrained non-linear

optimization, single and multi-objective optimization and such. Presenting all the well-known

optimization methods is beyond the scope of this work; however relevant topics are addressed in

subsequent chapters.


There exist well-documented solution methods for certain standard types of problems.

The Simplex Algorithm can solve linear programming problems. Variations of linear programming, such as integer and discrete-valued linear programming, have also been addressed in the literature. There exist optimization methods for unconstrained non-linear optimization, such as

the Steepest Descent Method, Conjugate Gradient Method and Quasi-Newton Methods.

However, these methods are useful only in an academic sense because almost all engineering problems are constrained non-linear optimization problems.

Such problems present a challenge to researchers since solution methods for constrained

optimization involve line-search methods. Usually an initial guess is presented and a systematic

search procedure is implemented for better and better solutions, until an 'optimal solution' is

found that cannot be improved further. Thus the optimization scheme is an iterative one and

involves at least some degree of trial and error. As the problem complexity is increased, the

search procedure becomes tedious and may not guarantee a solution in all cases. However, the

rigorous formulation of the design problem helps the designer to better understand the overall

scope of the problem. The conventional design process is inexact and time consuming. In

contrast, an optimization procedure is fast and accurate. It also does not rely on the designer's

experience as much as the conventional design process does. The overall structure of an

optimization approach to design is presented in the flow-chart below [2].


(Flow chart: Identify (i) the design variables, (ii) the objective function to be optimized, (iii) the constraints to be satisfied → Collect Data to Describe the System → Estimate Initial Design → Analyze the System → Check the Constraints → Does the Design Satisfy the Convergence Criteria? If yes, STOP; if no, Update the Design Using an Optimization Scheme and repeat.)

Fig. 1.2 Design Optimization Process [2]

Optimization problems with multiple objectives present an even greater challenge. Firstly

the existing problem needs to be reformulated so that it can be solved using existing line-search

methods. The designer has to rank or weigh all the objectives in some order of importance. The


actual solution procedure involves a higher degree of complexity, and this affects the

performance of the solution algorithm greatly. These objectives are usually conflicting in nature,

and hence the designer needs to arrive at an optimum trade-off between all the objectives.

In industry, many mechanical parts have been standardized. Dimensions such as bolt sizes, shaft diameters, gear pitches and rivet sizes are specified by a governing organization such as ASTM. Hence, any design on paper has to adhere to these standards before it can be

put into practice. If the designer arrives at a design that does not meet these standards, it

cannot be implemented. Hence the designer has an added constraint of obtaining the best

possible solution and at the same time adhering to existing standards.

Thus, the field of engineering optimization is a rich and diverse one. Researchers have come up with efficient algorithms that can solve the most complex of problems. However, a given algorithm may not be easily adapted to other applications. Each optimization problem presents

the designer with a new set of challenges and hence demands newer solution methods. Hence

there exists the need for a solution method that can be applied to any generic optimization

problem and is not restricted to only a certain class of problems.

In this work we present an integrated approach to solving constrained non-linear

engineering optimization problems with multiple objectives. In addition we address the issue of

standardization of mechanical components, which calls for a mixed variable solution algorithm.

Our approach is to use the Compromise Decision Support Problem method to initially solve the

multi-objective non-linear constrained optimization problem. Then we proceed to use the Branch

and Bound Algorithm to solve the mixed variable problem.
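The branch and bound idea can be sketched compactly. The following is a minimal, hypothetical Python illustration, not the thesis's algorithm (which is developed in Chapter 4): a one-variable quadratic with an integrality requirement, where each node solves a continuous relaxation over its bounds, an integral relaxed optimum updates the incumbent, a fractional one triggers branching, and any node whose relaxation bound cannot beat the incumbent is pruned.

```python
import math

# Minimal branch-and-bound sketch for a single integer variable
# (hypothetical problem, not the thesis's algorithm):
# minimize f(x) = (x - 2.6)^2 with x an integer in [0, 5].

def solve_relaxation(lo, hi):
    """Continuous minimizer of f over [lo, hi] (closed form for this f)."""
    x = min(max(2.6, lo), hi)
    return x, (x - 2.6) ** 2

def branch_and_bound(lo, hi, best=(None, math.inf)):
    if lo > hi:                        # empty subproblem
        return best
    x, bound = solve_relaxation(lo, hi)
    if bound >= best[1]:               # prune: relaxation cannot beat incumbent
        return best
    if abs(x - round(x)) < 1e-9:       # relaxed optimum is already integral
        return (round(x), bound)
    best = branch_and_bound(lo, math.floor(x), best)   # branch x <= floor(x)
    best = branch_and_bound(math.ceil(x), hi, best)    # branch x >= ceil(x)
    return best

print(branch_and_bound(0, 5))  # best integer design and its objective value
```

In our approach the same bounding-and-pruning logic is applied to the mixed variable design problems, with the compromise DSP solution playing the role of the continuous relaxation.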


As mentioned earlier, the motivation for this work has come from the need for a sound

optimization algorithm that is both efficient as well as universally applicable. We have presented

some test problems and their solutions to illustrate our approach. We do not claim that this is an ideal optimization technique; rather, we present a solution methodology for problems that do not have any existing standard solution procedures. The mixed variable, multi-objective constrained non-linear optimization problem is the most widely occurring kind of problem in mechanical component design. This work primarily addresses the approach as

applied to mechanical component design. However, there exist a large variety of problems with

essentially the same numerical complexity. Some of them include system design, circuit design,

control systems, transportation engineering, chemical engineering, process control etc.

1.3 Outline

With this introduction to the scope of this work, the remaining document is outlined as

follows :

• Chapter 2 presents an introduction and overview of single objective non-linear constrained optimization problems. The types of problems and the theory associated with their solution are briefly presented here.

• Chapter 3 deals with multi-objective non-linear constrained optimization. Various existing

methods to reformulate the problem as a single objective optimization problem are outlined.

The concept of Pareto Optimality is also introduced.

• Chapter 4 presents our solution methodology, which is the Compromise Decision Support

Problem, combined with the Branch and Bound Algorithm. Initially a qualitative discussion

is presented followed by a mathematical formulation.


• With our methodology presented, we proceed to apply it to a 'Spring Design Problem' in Chapter 5, which is a well-discussed problem in past literature. This is a single objective optimization problem, which has discrete, integer and continuous design variables [3, 4]. Though this is a relatively 'small' problem, it effectively illustrates the application of our

approach.

• The approach is applied to a multi-objective optimization problem in Chapter 6. The problem

selected is the one presented in Thompson et al. [5]. This is a richer problem than the one

presented earlier, and validates our approach as applied to design optimization of mechanical

components.

• Finally, we present a discussion of our methodology and the results obtained in Chapter 7. Some conclusions and the scope for future work are presented.


Chapter 2

Single Objective Optimization

2.1 Optimization

After our initial analysis, we are in a position to consider various alternatives that are

effective solutions to the design problem at hand. We therefore have some knowledge of both the

performance requirements (demand) and the extent to which the product as designed will satisfy

the requirements (capability). This set of alternatives defines what is called the feasible set of

solutions. The design problem then changes to one of choosing between these alternatives, based

on a set of parameters decided by the designer. The relative merits and shortcomings of these

alternatives can then be analyzed simultaneously and the most suitable alternative can be made

available for selection [6].

The method of 'fine-tuning' a feasible solution can be one of trial and error. It can be based solely on the judgment and experience of the designer. The design parameters associated with the product are varied sequentially and their effect on the performance parameters is evaluated. However, this is not the best possible approach when the number of design alternatives

is not limited. This method can be cumbersome when the feasible design space is large or the

product has a large number of design parameters.

Hence it is essential that we clearly define a set of objectives, which may be used as a

parameter to compare alternate solutions. It is necessary to obtain a set of values of the design

parameters that yield a feasible solution and at the same time optimize the given set of

objectives. It becomes important that the optimization problem can be effectively expressed


through hard information in terms of mathematical equations. In words, the optimization

problem can be described as [7] :

Given : An alternative that is to be improved through modification.

Assumptions used to model the domain of interest.

The system parameters.

The goals/objectives for the design.

Find : The values of the independent system variables (also known as the design

variables).

Satisfy :

The system constraints that must be satisfied for the solution to be

feasible.

The system goals that must achieve an optimal value.

Bounds : The lower and upper bounds on the system variables.

Minimize : The set of goals and their associated priority levels or relative weights,

which is a measure of the system performance.

2.2 Types of Single Objective Optimization Problems

2.2.1 Linear Single Objective Problem :

This is the simplest type of optimization problem. The system constraints are written in

terms of the system variables. In engineering, system constraints are invariably inequalities.

The system constraints and bounds must be satisfied for feasibility. A constraint invariably

has two or more system variables. A bound contains only one system variable and is always

parallel (geometrically) to the axis represented by the system variable. Rarely is a constraint


specified in terms of a single system variable. In this case the constraint plays the same role

as a bound in the design space [8].

The set of all combinations of the system variables that satisfy all constraints and bounds

simultaneously is called the set of feasible solutions and the space consisting of the feasible

solutions is called the feasible design space. A solution that results in the violation of any of

the constraints or bounds is called an infeasible solution. A constraint or bound that does not

border the feasible design space is called a redundant constraint or bound.

For a two dimensional optimization problem, with linear constraints and bounds, the

feasible design space is a polygon. It can be shown that the optimal solution to this problem

lies at one of the corner points of this polygon. The most common solution procedure for

these problems is the Simplex Method.
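A small numerical sketch makes this concrete. The linear program below is hypothetical (the coefficients are made up for illustration, and SciPy's linprog is used here, although the thesis itself used Matlab); its optimum falls on a corner point of the feasible polygon, as stated above.

```python
from scipy.optimize import linprog

# Illustrative 2-D linear program (made-up coefficients):
#   maximize 3*x1 + 5*x2, i.e. minimize -3*x1 - 5*x2
#   subject to  x1 + 2*x2 <= 14
#               3*x1 >= x2      ->  -3*x1 + x2 <= 0
#               x1 - x2 <= 2
#   bounds      x1 >= 0, x2 >= 0
c = [-3, -5]
A_ub = [[1, 2], [-3, 1], [1, -1]]
b_ub = [14, 0, 2]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # optimal corner point of the feasible polygon: (6, 4)
```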

2.2.2 Linear Single Goal Problem :

A linear single objective optimization problem can be easily formulated as a goal

achievement problem. A target value is first assigned to the objective, which can then be

written as a goal. Then, depending on the objective, an appropriate deviation variable is

included in the optimization function. A deviation variable measures the difference between

a target value or desired goal, and the achievable value of the objective function [8].

The objective is then to minimize this deviation variable without violating any constraints

or bounds. The optimal solution is one that satisfies all the constraints and bounds and

achieves the goal as far as possible. Sometimes the exact goal cannot be achieved, as the


feasible design space is limited. In this case, the deviation variable is minimized and the

solution to this problem is the solution to the original problem.
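The construction can be illustrated with a small, hypothetical linear goal problem (the numbers are made up): a target value is assigned to the original objective, underachievement and overachievement deviation variables are appended to the decision vector, and their sum is minimized subject to the original constraints.

```python
from scipy.optimize import linprog

# Goal formulation (illustrative): objective f(x) = 4*x1 + 3*x2, target T = 30.
# Introduce deviation variables d_minus, d_plus >= 0 with
#     4*x1 + 3*x2 + d_minus - d_plus = 30,
# and minimize d_minus + d_plus subject to x1 + x2 <= 5, x1, x2 >= 0.
# Decision vector: [x1, x2, d_minus, d_plus]
c = [0, 0, 1, 1]                       # total deviation to be minimized
A_eq = [[4, 3, 1, -1]]
b_eq = [30]
A_ub = [[1, 1, 0, 0]]
b_ub = [5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)
```

Here the goal of 30 is unattainable in the feasible region (the best attainable value of f is 20, at x = (5, 0)), so the underachievement variable absorbs the shortfall and the minimized total deviation, 10, measures how far the goal is missed.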

2.3 Mathematical Introduction to Non-linear Programming

Whenever a mathematical model is available to simulate a real-life application, a

straightforward idea is to apply mathematical optimization algorithms for minimizing a so-called

cost function subject to constraints. A typical example is the minimization of the weight of a

mechanical structure under certain loads and constraints for admissible stresses, displacements,

or dynamic responses. Highly complex industrial and academic design problems are solved today by means of non-linear programming algorithms; traditional empirical approaches have no chance of producing equally good results.

Non-linear programming is a direct extension of linear programming, when we replace

linear model functions by non-linear ones. Numerical algorithms and computer programs are

widely applicable and commercially available in form of black box software. However, to

understand how optimization methods work, how corresponding programs are organized, how

the results are to be interpreted, and, last not least, what are the limitations of the powerful

mathematical technology, it is necessary to understand at least the basic terminology. Thus, we

present a brief introduction into optimization theory; in particular we introduce optimality

criteria for smooth problems [2].

In this review, we consider only smooth, i.e. differentiable constrained non-linear

programming problems. A general non-linear constrained optimization problem can be expressed

as :


min f(x), x ∈ Rn

subject to gi(x) ≤ 0, i = 1, 2, …, m (2.1)

hj(x) = 0, j = 1, 2, …, n

xl ≤ x ≤ xu

Here, x is an n-dimensional parameter vector, also called the vector of design variables, and f(x) is the objective function or cost function to be minimized under the non-linear inequality and equality constraints given by gi(x) and hj(x). It is assumed that these functions are continuously differentiable in Rn. The above formulation implies that we do not allow any discrete or integer variables. Beyond this, we do not require any further mathematical structure of the model functions.

To facilitate the subsequent notation, we assume that upper and lower bounds xu and xl

are not handled separately, i.e. that they are considered as general inequality constraints.
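As an illustration of formulation (2.1), the sketch below solves a small made-up instance (not one of the thesis's design problems, and using SciPy's SLSQP solver rather than the Matlab toolbox used in this work). Note that SciPy expects inequality constraints in the form c(x) ≥ 0, so the g(x) ≤ 0 constraint is negated.

```python
from scipy.optimize import minimize

# Illustrative instance of problem (2.1) (made-up functions):
#   min f(x) = (x1 - 2)^2 + (x2 - 1)^2
#   s.t. g1(x) = x1^2 - x2 <= 0    and    h1(x) = x1 + x2 - 2 = 0
f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
cons = [{"type": "ineq", "fun": lambda x: x[1] - x[0] ** 2},  # -g1(x) >= 0
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2}]   # h1(x) = 0
res = minimize(f, x0=[0.5, 0.5], method="SLSQP", constraints=cons,
               bounds=[(0, 3), (0, 3)])
print(res.x)  # converges to x* = (1, 1), where both constraints are active
```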

We now define some notations for the first and second derivatives of a differentiable

function [2].

1. The gradient of a real-valued function f(x) is

∇f(x) ≡ [ ∂f(x)/∂x1 ∂f(x)/∂x2 … ∂f(x)/∂xn ]T (2.2)

2. One further differentiation gives the Hessian matrix of f(x)

∇2f(x) ≡ [ ∂2f(x)/∂xi∂xj ], i, j = 1, 2, …, n (2.3)


3. The Jacobian matrix of a vector-valued function F(x) = (f1(x), . . . , fl(x))T is also written

in the form

∇F(x) = ( ∇f1(x), …, ∇fl(x) ) (2.4)

4. The fundamental tool for deriving optimality conditions and optimization algorithms is

the so-called Lagrangian function

L(x, u) ≡ f(x) + Σj=1…m uj gj(x) (2.5)

defined for all x ∈ Rn and u = (u1, . . . , um)T ∈ Rm. The purpose of L(x, u) is to link

objective function f(x) and constraints gj(x), j = 1, . . . , m. The variables uj are called the

Lagrangian multipliers of the non-linear programming problem.

2.4 Convexity

The characterization of minimum and maximum points, whether global or local, is related to the concavity and convexity of functions. A univariate concave function has a negative second derivative everywhere and guarantees a global maximum. A univariate convex function has a positive second derivative everywhere, yielding a global minimum [2].

In general we can only expect that an optimization algorithm computes a local minimum

and not a global one, i.e. a point x* with f(x*) ≤ f(x) for all x ∈ P ∩ U(x*), where U(x*) is a

suitable neighborhood of x*. However, each local minimum of a non-linear programming

problem is a global one if the problem is convex, for example, if f is convex, gi convex for i = 1, 2, …, m, and hj linear for j = 1, 2, …, n. These conditions force the feasible region P to be a convex set.

A function f : Rn → R is called convex if f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) for all x, y ∈ Rn and λ ∈ (0, 1), and concave if we replace '≤' by '≥' in the above inequality.

For a twice differentiable function f(x), convexity is equivalent to the property that ∇2f(x) is positive semidefinite, i.e. zT∇2f(x)z ≥ 0 for all z ∈ Rn. Convexity of an optimization problem is important mainly from the theoretical point of view, since many convergence, duality and other theorems can be proved only for this special case. In practical situations, however, we hardly have a chance to test whether a numerical problem is convex or not. Discussion of

convexity is also presented in Chapter 3, with Pareto Optimal Solutions under consideration. For

our case, we presume that the problems are convex and proceed to solve them.
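In practice one can at least spot-check convexity numerically: sample points, form a finite-difference Hessian, and test whether its eigenvalues are nonnegative. The helper below is an illustrative sketch (the function and the name numerical_hessian are our own, not from the thesis); such sampling can refute convexity but never prove it.

```python
import numpy as np

def numerical_hessian(f, x, h=1e-5):
    """Central finite-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h ** 2)
    return H

# Spot-check: f(x) = x1^2 + 2*x2^2 is convex, so the Hessian should be
# positive semidefinite (all eigenvalues >= 0) at every sampled point.
f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
for x in np.random.default_rng(0).uniform(-5, 5, size=(10, 2)):
    eigvals = np.linalg.eigvalsh(numerical_hessian(f, x))
    assert np.all(eigvals >= -1e-4)  # PSD up to finite-difference noise
print("no non-convexity detected at the sampled points")
```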

2.5 Karush-Kuhn-Tucker Conditions

For developing and understanding an optimization method, the subsequent theorems are

essential. They characterize optimality and are therefore also important for testing a current

iteration with respect to its convergence accuracy.

2.5.1 Theorem 1 (necessary second order optimality conditions)

Let f, gi and hj be twice continuously differentiable for i = 1, 2, …, m and j = 1, 2, …, n. Let x* be a local minimizer of the non-linear optimization problem. Then there exists a u* ∈ Rm such that


ui* ≥ 0, i = 1, 2, …, m

gi(x*) ≤ 0, i = 1, 2, …, m

hj(x*) = 0, j = 1, 2, …, n

∇xL(x*, u*) = 0

ui* gi(x*) = 0, i = 1, 2, …, m (First Order Conditions) (2.6)

and

sT∇x2L(x*, u*)s ≥ 0 for all s ∈ Rn with ∇gi(x*)T s = 0 for every active constraint gi (Second Order Conditions) (2.7)

The first order conditions are called the Karush-Kuhn-Tucker conditions. They say that at a local solution the gradient of the objective function can be expressed as a linear combination of the gradients of the active constraints. Moreover, the second order condition implies that the Hessian of the Lagrangian function is positive semidefinite on the tangent space defined by the active constraints.
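The first order conditions can be checked numerically at a candidate solution. The sketch below uses a made-up one-constraint problem and the convention L(x, u) = f(x) + u·g(x) for a g(x) ≤ 0 constraint (sign conventions for the Lagrangian vary between texts): the multiplier is recovered from the stationarity condition by least squares, and its sign and the complementary slackness product are then inspected.

```python
import numpy as np

# Numerically verify the first order KKT conditions at a candidate point,
# using the convention L(x, u) = f(x) + u*g(x) for a g(x) <= 0 constraint.
# Illustrative problem (not from the thesis):
#   min f(x) = x1^2 + x2^2   subject to   g(x) = 1 - x1 - x2 <= 0
# The known solution is x* = (0.5, 0.5) with multiplier u* = 1.
grad_f = lambda x: np.array([2 * x[0], 2 * x[1]])
grad_g = lambda x: np.array([-1.0, -1.0])
g = lambda x: 1 - x[0] - x[1]

x_star = np.array([0.5, 0.5])
# Stationarity: grad f(x*) + u * grad g(x*) = 0 -- recover u by least squares.
u = np.linalg.lstsq(grad_g(x_star).reshape(-1, 1),
                    -grad_f(x_star), rcond=None)[0].item()
stationarity = np.linalg.norm(grad_f(x_star) + u * grad_g(x_star))
print(u, stationarity, u * g(x_star))  # expect u >= 0, both residuals ~ 0
```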

2.5.2 Theorem 2 (sufficient second order optimality conditions)

Let f, gi and hj be twice continuously differentiable for i = 1, 2, …, m and j = 1, 2, …, n. Let x* ∈ Rn and u* ∈ Rm be given, so that the following conditions are satisfied:

ui* ≥ 0, i = 1, 2, …, m

gi(x*) ≤ 0, i = 1, 2, …, m

hj(x*) = 0, j = 1, 2, …, n

∇xL(x*, u*) = 0

ui* gi(x*) = 0, i = 1, 2, …, m (First Order Conditions) (2.8)

and

sT∇x2L(x*, u*)s > 0 for all s ∈ Rn with s ≠ 0, ∇gi(x*)T s = 0 for every active constraint gi with ui* > 0, and ∇hj(x*)T s = 0, j = 1, 2, …, n (Second Order Conditions) (2.9)

Then x* is an isolated local minimum of f on P.

This introduction to non-linear optimization does not define the complete scope of the

existing optimization techniques, but serves as a first step to solving optimization problems.

There are many optimization software packages available today that do not demand a rigorous mathematical formulation of the problem from the user. We have used the Matlab® Optimization Toolbox for solving the optimization problems in this work.


Chapter 3

Multiple Objective Optimization

3.1 Introduction

Optimal design engineering has been an important research field. Most engineering design problems are characterized by multiple conflicting objectives. We have identified that it is the designer's job not only to find a solution to the engineering problem, but also to make a decision regarding which solution to opt for in case there are multiple solutions available. The multi-objective problem presents an inherent difficulty in the solution process. Unlike a single objective optimization problem, which has a single 'optimal solution', the multi-objective problem does not in general have a single optimal solution. We can present two approaches, which

broadly illustrate the design processes used by a designer to model a multi-objective

optimization problem [9].

3.1.1 A Priori Articulation Of Preference

This method is preferred when adequate knowledge about the system parameters is

available a priori. This information is embedded in a suitable optimization scheme and an

optimal solution is obtained. The approach can be divided into three stages.

a. Conception : In this stage, which is an initial analysis stage, various design concepts are

created. Possible solutions to the problems are proposed and a broad search space is

identified in which the solution can be expected to lie.


b. Formulation : At this stage, a performance parameter is formulated. It is important to

have adequate information about the system parameters at this stage; otherwise the objective

of the optimization process cannot be formulated.

c. Analysis and Optimization : After the problem has been formulated, any suitable

optimization scheme can be applied and the solution obtained.

Thus, in this approach, the problem is completely formulated and defined before the

designer has any idea regarding the optimal solution. However, all the three stages require the

designer to take decisions. If the optimal solution obtained is not suitable, then the designer can

go back and make suitable changes at any stage.

3.1.2 A Posteriori Articulation Of Preference

This approach to decision making in multi-objective design is applicable when not all the system parameters are known to the designer. This approach can also be categorized into

three stages.

a. Conception : This stage is very similar to the Conception stage in the earlier approach. As

earlier, design concepts are generated and a search space is defined. However, design

objectives are also identified at this stage. Different design concepts warrant different

design objectives and vice versa. However, the design objectives cannot be quantified or

ranked in any order, since adequate system information may not be available.

b. Analysis and Optimization : With knowledge regarding the design objectives available to

the designer, he proceeds to the analysis stage. The design concepts are analyzed using a

suitable optimization process and a rich set of good designs is obtained. Thus, the


designer obtains a large number of solutions to the problem instead of only one solution

as in the earlier case.

c. Selection : With a large selection set available for the designers to consider, the final task

is simply to select the most suitable design. The designs may be evaluated by themselves

or the various objective functions associated with the designs can be evaluated.

If one particular design (Design A) is found to be "superior" to another design (Design B)

in all respects, then the designer can discard Design B in favor of Design A. In this case, Design

A is said to dominate Design B. However, there may be designs which are better than other

designs in some respects but not all. In this case, the designs are called non-dominated designs.

"A set of non-dominated designs from the universe of feasible designs is called the Pareto Optimal Set". This is a rather crude understanding of Pareto Optimality. The concept of Pareto Optimal solutions is discussed extensively later in the text [10].
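The dominance test described above can be sketched in a few lines of Python; the objective vectors of the candidate designs are invented for illustration, and smaller objective values are taken to be better:

```python
# Sketch of the dominance test: design a dominates design b if it is
# at least as good in every objective and strictly better in at least
# one objective (smaller values are better here).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Invented objective vectors (f1, f2) for four candidate designs.
designs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]

# The non-dominated designs form the Pareto Optimal Set.
pareto_set = [d for d in designs if not any(dominates(o, d) for o in designs)]
print(pareto_set)
```

Design (3.0, 3.0) is dominated by (2.0, 2.0) and drops out; the three remaining non-dominated designs form the Pareto Optimal Set.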

The "A Priori Articulation Of Preference" method is preferred when all the necessary

system information is available beforehand. This results in a well-defined problem formulation

and a single optimal solution to the problem. However, this is not always the case in practical

engineering problems. It is easier for the designer to make a decision for the selection of a

suitable design when he is presented with a set of non-dominated solutions obtained by the "A Posteriori Articulation Of Preference" method. It helps the designer in understanding the scope

of the problem better. However, the Pareto Optimal set of solutions can be very large for many

real-life engineering problems. Hence the task of selection may become overwhelming.


3.2 Non-linear Multi-Goal Optimization Techniques [11]

In a multi-goal problem, we wish to find a set of values for the design variables that

optimizes a set of objective functions. The set of variables that produces the optimal outcome is

designated as the optimal set and denoted by x*. The optimal set is referred to as the Pareto

optimal set and it yields a set of possible answers. A set of points is said to be Pareto optimal if,

in moving from one point (point A) to another (point B) in the set, any improvement in one of the objective

functions from its current value would cause at least one of the other objective functions to

deteriorate from its current value. The Pareto optimal set yields an infinite set of solutions, from

which the engineer can choose the desired solution. In most cases, the Pareto optimal set is on

the boundary of the feasible region [12, 13].

The majority of engineering design problems are multi-objective, in that there are several

conflicting design aims, which need to be simultaneously achieved. If these design aims are

expressed quantitatively as a set of n design objective functions

fi ( x ) : i = 1 . . . n (3.1)

where x denotes the design parameters chosen by the designer, the design problem could be

formulated as a multi-objective optimization problem :

min x∈X fi(x) , for i = 1 . . . n (3.2)

where X denotes the set of possible design parameters x. In most cases, the objective functions

are in conflict, so the reduction of one objective function leads to the increase in another.

Consequently, the result of the multi-objective optimization is known as a Pareto-optimal

solution. A Pareto-optimal solution has the property that it is not possible to reduce any of the


objective functions without increasing at least one of the other objective functions. The problem

with multi-objective optimization is that there is generally a very large set of Pareto-optimal

solutions. Consequently, there is a difficulty in representing the set of Pareto-optimal solutions

and in choosing the solution that is the best design.

Some of the other techniques used to solve these types of problems are discussed below

[14]. All of them involve converting the goal attainment problem to a single objective problem,

with a suitable transformation of the objective function. However all these methods differ from

the Compromise Decision Support Problem formulation as the objective function is still a non-

linear function of the design variables.

3.3 Method of Weighting Objectives [15]

This method takes each objective function and multiplies it by a fraction of one, the "weighting coefficient", which is represented by wi. The modified functions are then added

together to obtain a single cost function, which can easily be solved using any Single Objective

Optimization method. Mathematically, the new function is written as

f(x) = Σ wi fi(x) , i = 1, ..., k (3.3)

where 0 ≤ wi ≤ 1 and Σ wi = 1.


If the problem is convex, then a complete set of non-inferior or Pareto solutions can be

found. However, if the problem is not convex, then there is no guarantee that this method will

yield the entire Pareto set.

In this method, the weighting coefficients are assumed beforehand. The coefficients are

then varied to yield a set of feasible optima, the Pareto Optimal set. The designer is expected to

pick the values of the variables from this set of solutions. However, as mentioned earlier, the weights for each of the objectives depend on the judgment of the designer, which can only be obtained from experience.
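The weight-sweeping procedure of this method can be sketched as follows; the two objectives, the grid over the design space, and the eleven weight choices are invented for illustration. Since the problem is convex, the sweep traces the Pareto Optimal set between the individual optima:

```python
# Sketch of the Method of Weighting Objectives (Eq. 3.3): sweep the
# weighting coefficients and minimize the scalarized sum each time.
def f1(x):
    return x ** 2          # first objective, minimum at x = 0

def f2(x):
    return (x - 2) ** 2    # second objective, minimum at x = 2

xs = [i / 100 for i in range(-100, 301)]   # discretized design space

pareto_points = []
for k in range(11):
    w1 = k / 10
    w2 = 1 - w1                            # weights sum to one
    # minimize the single scalar cost w1*f1 + w2*f2 over the grid
    x_star = min(xs, key=lambda x: w1 * f1(x) + w2 * f2(x))
    pareto_points.append(round(x_star, 2))

# Every weighted-sum minimizer lies between the individual optima,
# i.e. on the Pareto set [0, 2] of this convex problem.
print(sorted(set(pareto_points)))
```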

3.4 Geometrical Interpretation of the Weighting Objectives Method [16]

Consider the convex space of objective functions for a two-objective optimization

problem. The problem is as follows

min f1

min f2 (3.4)

Upon transformation to a weighting objective problem, the above problem becomes

min f = w1 f1 + w2 f2 (3.5)

This new function f is the equation for line L1 (Fig. 3.1). Line L1, with a slope of -w1/w2,

is drawn in the space of objective functions between the origin and the feasible space of

objective functions. The optimum point associated with these two weighting objectives would be

achieved by moving the line L1 away from the origin, toward the feasible region, until it is just

tangent to the boundary of the feasible region. This tangent point is the optimum point, x*, for a

prescribed w1 and w2.


Fig 3.1 Geometrical Interpretation of the Weighting Objectives Method

3.5 Weighting Objectives Method for a Non-Convex Problem

This method will guarantee the entire set of non-inferior solutions for a function that is

convex. For non-convex functions, it may not be possible to locate the entire Pareto set. Consider

the space of objective functions F shown in Fig. 3.2. Lines L1, L2, and L3 all have the same slope; using the method of weighting functions, only point A will be identified as a minimum. In reality, points A, B, and C are all non-inferior solutions of the Pareto set, shown as a bold line.


Fig 3.2 Weighting Objectives Method for a Non-Convex Problem

3.6 Hierarchical Optimization Method [1]

This method allows the designer to rank the objectives in a descending order of

importance. Each objective function is then minimized individually subject to a constraint that

does not allow the minimum for the new function to exceed a prescribed fraction of a minimum

of the previous function.

Once the k objectives have been ordered from f1 (most important) to fk (least important),

the solution procedure goes as follows :


Step 1 : Find the optimum point for f1 (that is, find x*(1)), subject to the original set of

constraints. Assume that all other objective functions do not exist. Repeat Step 2 for j = 2, 3, ...,

k.

Step 2 : Find the optimum point of the jth objective function fj subject to an additional constraint :

fj(x(j)) ≤ (1 ± εj-1/100) fj-1(x(j-1)) , for j = 2, 3, ..., k (3.6)

where εj is the assumed coefficient of the function increment, expressed in percent.

Note: This method reduces to the Lexicographic method when εj = 0.
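The two-stage procedure above can be sketched as follows; the ranked objectives, the discretized design space, and the coefficient eps are invented for illustration:

```python
# Sketch of the Hierarchical Optimization Method over a grid.
def f1(x):
    return (x - 1) ** 2 + 1    # highest-ranked objective, minimum 1 at x = 1

def f2(x):
    return (x - 3) ** 2        # second-ranked objective, minimum at x = 3

xs = [i / 100 for i in range(0, 401)]   # discretized design space

# Step 1: minimize f1 alone, as if the other objective did not exist.
x1 = min(xs, key=f1)
f1_star = f1(x1)

# Step 2: minimize f2, allowing f1 to worsen by at most eps percent of
# its optimum (the additional constraint of Eq. 3.6).
eps = 25.0
allowed = [x for x in xs if f1(x) <= (1 + eps / 100) * f1_star + 1e-12]
x2 = min(allowed, key=f2)

print(x1, x2)   # → 1.0 1.5: the compromise moves toward f2's optimum
```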

3.7 Global Criterion Method [1]

Here the designer finds the vector of decision variables that is the minimum of a global criterion. The global criterion is usually defined as a measure of how close the designer can get to the ideal vector f°. The quantity f° is the ideal solution, which is sometimes replaced with a so-called demand level fd specified by the designer (if the ideal solution is unknown). The

scalar objective function for this method is usually written as :

f(x) = Σ [ (fi° - fi(x)) / fi° ]^P , i = 1, ..., k (3.7)

The value of P is set by the designer.
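The method can be sketched as follows; the objectives, the grid, and the choice P = 2 are invented for illustration:

```python
# Sketch of the Global Criterion Method (Eq. 3.7) over a grid.
def f1(x):
    return x ** 2 + 1          # ideal value 1, attained at x = 0

def f2(x):
    return (x - 2) ** 2 + 2    # ideal value 2, attained at x = 2

xs = [i / 100 for i in range(-100, 301)]   # discretized design space

# Ideal vector: each objective minimized on its own.
ideals = [min(f1(x) for x in xs), min(f2(x) for x in xs)]
P = 2   # exponent set by the designer

def global_criterion(x):
    # summed relative deviation from the ideal vector, raised to P
    return sum(((fo - f(x)) / fo) ** P for f, fo in zip((f1, f2), ideals))

# The minimizer is a compromise strictly between the individual optima.
x_star = min(xs, key=global_criterion)
```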


3.8 Goal Programming Method (Compromise Decision Support Problem)

This is perhaps the most well known method of solving Multi-Goal problems. This

method was originally developed by Charnes and Cooper and Ijiri in 1965. For this method, the

designer must construct a set of goals (which may or may not be realistic) that should be

obtained (if possible) for the objective functions. The user then assigns weighting factors to rank

the goals in order of importance. Finally a single objective function is written as the

minimization of the deviations from the above stated goals [17, 18].

A "goal constraint" is slightly different from a "real constraint" in goal programming problems. A "goal constraint" is a constraint that the designer would like to be satisfied, but a slight deviation above or below this constraint is acceptable. This method is discussed in detail

later in Chapter 4.

3.9 Trade-off Method [1]

This method is also known as the constraint method or the ε - constraint method. The

steps in the solution of a problem are as follows.

Step 1 : Convert

[ ])(),...,(min 1 xfxfXx k∈

to

)(min*)( xfXx

xf rr ∈=

riandkiforxf

ts

ii ≠=≤ ,...,1)(

..

ε (3.8)


Then, find the minimum of the rth objective function, i.e., find x* such that

fr(x*) = min x∈X fr(x) (3.9)

subject to the original set of constraints and an additional set of constraints represented as

fi(x) ≤ εi for i = 1, ..., k and i ≠ r (3.10)

where εi is an assumed value that the designer would prefer not to exceed.

Step 2 : Step 1 is repeated for various values of εi until a set of acceptable solutions is compiled.

This method allows the designer to determine the complete Pareto set of optimal points, but only

if all possible values of εi are used.
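Steps 1 and 2 can be sketched as follows; the objectives and the assumed εi values are invented for illustration:

```python
# Sketch of the Trade-off (ε-constraint) Method: minimize one chosen
# objective subject to an upper limit ε on the other, then sweep ε.
def f1(x):
    return x ** 2

def f2(x):
    return (x - 2) ** 2

xs = [i / 100 for i in range(-100, 301)]   # discretized design space

pareto = []
for eps in (0.25, 1.0, 2.25, 4.0):               # assumed ε levels
    feasible = [x for x in xs if f1(x) <= eps]   # constraint of Eq. (3.10)
    pareto.append(min(feasible, key=f2))         # minimize f2 (Eq. 3.9)

print(pareto)   # → [0.5, 1.0, 1.5, 2.0]: one non-inferior point per ε
```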

3.10 Method of Distance Functions [1]

Another form of the global criterion is the method of distance functions. Here, the distance between the ideal solution and the present solution is minimized. A family of functions is defined as

Lp(f) = [ Σ | fi° - fi(x) |^p ]^(1/p) , i = 1, ..., k ; 1 ≤ p ≤ ∞ (3.11)

Example :

L1(f) = Σ | fi° - fi(x) | , i = 1, ..., k

L2(f) = [ Σ ( fi° - fi(x) )² ]^(1/2) , i = 1, ..., k

L∞(f) = max i | fi° - fi(x) | (3.12)


One can see that if L2(f) is minimized, then so is the distance between the final solution

and the ideal solution. It is usually recommended that a relative deviation is examined, rather

than the actual deviation from the ideal solution. In this case,

Lp(f) = [ Σ | (fi° - fi(x)) / fi° |^p ]^(1/p) , i = 1, ..., k ; 1 ≤ p ≤ ∞ (3.13)

This method will yield one non-inferior solution to the problem. If one uses a set of

weighting values, then a set of non-inferior solutions can be found.

There are two disadvantages to this method.

i. The ideal solution should be known, otherwise a demand level is assumed.

ii. If the wrong demand level is chosen, the result will be non-Pareto solutions.

An incorrectly chosen demand level would lead to a non-Pareto solution using this method. For this reason it is very important to exercise great care in picking the demand level.
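The metrics of Eq. (3.12) can be sketched as follows; the ideal vector and the candidate objective vectors are invented for illustration:

```python
import math

# Sketch of the L1, L2 and L∞ distance-to-ideal metrics of Eq. (3.12).
ideal = [1.0, 2.0]                               # assumed ideal vector
candidates = [[1.5, 2.5], [1.0, 3.0], [2.2, 2.0]]

def L1(f):
    return sum(abs(fo - fi) for fo, fi in zip(ideal, f))

def L2(f):
    return math.sqrt(sum((fo - fi) ** 2 for fo, fi in zip(ideal, f)))

def Linf(f):
    return max(abs(fo - fi) for fo, fi in zip(ideal, f))

# Minimizing L2 picks the candidate closest to the ideal solution.
best = min(candidates, key=L2)
print(best)   # → [1.5, 2.5]
```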

3.11 Pareto Optimal Solutions

A multi-objective problem is one in which a designer's goal is to simultaneously

optimize (maximize or minimize) two or more objectives. As in the case of a single objective

problem, we find the optimal set of design variables that satisfies all the constraints and optimizes each of the objectives, which usually are in conflict with each other. In this case, the term

optimize refers to a solution that gives values of each of the objectives as desired by the designer.

Hence to express these conflicting objectives, Pareto presented a concept of non-inferior

solutions in 1906. This idea of Pareto solutions still remains the most important concept in multi-

objective optimization.


Most real-world search and optimization problems involve multiple objectives (such as minimizing fabrication cost and maximizing product reliability, among others) and should ideally be formulated and solved as multi-objective optimization problems. However, the task of multi-

objective optimization is different from that of single-objective optimization in that in multi-

objective optimization, there is usually no single solution that is optimum with respect to all

objectives. The resulting problem usually has a set of optimal solutions, known as Pareto-optimal

solutions or non-inferior solutions. Since there exists more than one optimal solution and since

without further information no one solution can be said to be better than any other Pareto-

optimal solution, one of the goals of multi-objective optimization is to find as many Pareto-

optimal solutions as possible [9, 12, 13].

The basic concept is that a design vector x* is considered to be Pareto Optimal if no

objective can be improved without worsening at least one of the other objectives. As such, there

is no single solution (utopia point) for the multi-objective optimization problem. We obtain a set

of non-inferior solutions called the Pareto Optimal Solution Set. This set defines a curve (for two

objectives, surface for three objectives and hyper surface for more than three objectives) in the

objective space, called the Pareto Optimal Curve [2].

A set of points is said to be Pareto optimal if, in moving from one point (point A) to another (point

B) in the set (Fig. 3.3), any improvement in one of the objective functions from its current value

would cause at least one of the other objective functions to deteriorate from its current value.

Note that, based on this definition, point C is not Pareto Optimal. The Pareto optimal set yields

an infinite set of solutions, from which the engineer can choose the desired solution.


Fig. 3.3 Graphical Definition of Pareto Optimal

3.12 Pareto Optimality for the Weighted Objective Method [16]

We have used the Weighted Objective Method to solve the multi-objective problems

considered in this work. Hence we present a detailed discussion of Pareto Optimality using this

method. The performance of the Weighted Objective Method for multi-objective optimization

depends on the feasible solution set for the problem. The solution space can be classified as

being convex or non-convex. The convexity of the solution space is an important parameter

when a selection is to be made from an available set of Pareto Optimal solutions. Consider the

two figures below.



Fig. 3.4 (a) Convex Feasible Set (Top) (b) Non-Convex Feasible Set (Bottom)

In Fig. 3.4 (a) the black circles represent the Pareto Solutions obtained by the Weighted

Objective Method. The lines represent the weighted sums of all the objective functions. Different

slopes of the lines are due to different weights assigned to each of the objective functions. We

can observe that by systematic variation of the weights, we can trace the boundary of the

Feasible Region and obtain all possible Pareto Optimal Solutions. Hence an optimization of the


scalar valued weighted sum of the objective functions is sufficient for Pareto Optimality of the

multi-objective problem, if it is assumed that the weighted sum increases monotonically with respect to each objective function. Another observation can be made that the Pareto Optimal Solutions

lie on the boundary of the Feasible Region. Hence we conclude that solution of the problem

formulated using the Weighted Objective Method guarantees Pareto Optimality if the Feasible

Region is convex.

Now consider Fig. 3.4 (b), in which case, the Feasible Region is not a convex set. As in

the earlier case we can obtain Pareto Optimal solutions (black circles) by systematically varying

the weights associated with each of the objectives. However, this does not guarantee that all the

Pareto Optimal solutions can be determined by this method. As we obtain the solutions from left

to right in the figure, the slope of the weighted sum of the objective function becomes less and

less negative. However, we are bound to miss some of the solution points, such as the white

circle. Thus, the Weighted Objective method cannot guarantee all Pareto Optimal solutions when

the Feasible Region is non-convex.
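The limitation can be demonstrated numerically; the three points of the front are invented for illustration, with point C placed in a non-convex "dent" of the front:

```python
# Sketch of the non-convexity limitation: point C is non-dominated but
# sits in a dent of the front, so no choice of weights ever selects it.
front = {"A": (0.0, 1.0), "B": (1.0, 0.0), "C": (0.6, 0.6)}

selected = set()
for k in range(101):                       # sweep w1 from 0 to 1
    w1 = k / 100
    w2 = 1 - w1
    cost = lambda name: w1 * front[name][0] + w2 * front[name][1]
    selected.add(min(front, key=cost))     # weighted-sum minimizer

print(sorted(selected))   # → ['A', 'B']: C is never found
```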


Chapter 4

The Decision Support Problem Technique

4.1 Introduction

With this understanding of the design process in mind, it becomes increasingly important

to develop a tool that assists the designer in the decision making process. The Decision Support

Problem (DSP) Technique is one such tool that provides support for human judgment in the

overall design process. The Decision Support Problem Technique consists of three principal

components: a design philosophy, an approach for identifying and formulating Decision Support

Problems (DSPs), and the software [1, 6, 7, 20].

Formulation and solution of Decision Support Problems provide a means for making the

following types of decisions [1] :

• Selection : The indication of a preference, based on multiple attributes, for one among several feasible alternatives.

• Compromise : The improvement of a feasible alternative through modification.

• Coupled or hierarchical : Decisions that are linked together - selection/selection, compromise/compromise and selection/compromise decisions may be coupled.

• Conditional : Decisions in which the risk and uncertainty of the outcome are taken into account.

• Heuristic : Decisions made on the basis of a knowledge base of facts and rules of thumb; Decision Support Problems that are solved using reasoning and logic only.


The primary goal of the Decision Support Problem is to assist the designer in arriving at

the best possible solution. This is achieved in three major stages [1] :

• In the first step we use the available soft information to identify the more promising "most-likely-to-succeed" concepts. This is accomplished by formulating and solving a preliminary selection Decision Support Problem.

• Next, we establish the functional feasibility of these most-likely-to-succeed concepts and develop them into candidate alternatives. The process of development includes engineering analysis and design; it is aimed at increasing the amount of hard information that can be used to characterize the suitability of the alternative for selection.

• In the third step we select one (or at most two) candidate alternatives for further development. This is accomplished by formulating and solving a selection Decision Support Problem. The selection Decision Support Problem has been designed to use both the hard and the soft information that is available.

4.2 Mathematical Formulation of the Compromise Decision Support Problem [1, 7] :

4.2.1 System Variables :

These are the parameters that completely define the design problem. When represented as

a row vector, the system variables are called the design vector. System variables are independent

of the other parameters and can be modified as required by the designer to alter the state of the

system. System variables that define the physical attributes of an artifact are always nonzero and

positive.

X = (x1, x2,... xn), xi ≥ 0 (4.1)


4.2.2 System Constraints :

These constraints are a function of one or many system variables. The constraint can be

linear or non-linear; equality or inequality. Invariably these are rigid constraints and always have

to be satisfied for a feasible solution of the design problem. In the Compromise Decision Support

Problem formulation D(X) is the demand or expectation of the system and C(X) is the capacity or

capability of the system to meet this demand.

Ci(X) ≥ Di(X); i = 1, 2, 3 ..., m (4.2)

4.2.3 System Bounds :

For all physical systems, the design variables are non-negative. However in addition to

that condition, some system variables have certain minimum and maximum values that cannot be

violated. This condition may be expressed as an inequality, either ≥ or ≤ as the case may be.

xi ≥ min(xi) and/or xi ≤ max(xi) (4.3)

4.2.4 System Goal :

The objective of the optimization problem is to achieve a certain goal Gi(X) which is a

function of the system variables. This is the value that the designer aspires to achieve. However, the

actual value of this system goal obtained is Ai(X) which is the actual attainment. These two

quantities can be related in the following ways :

a. Ai(X) ≤ Gi(X) :


In this case, the actual value of the goal is less than or equal to the desired expectation.

e.g. Speed of an electric motor. Overachievement of this system goal may result in unsafe

operation and hence is not desirable.

b. Ai(X) ≥ Gi(X) :

In this case, the actual value of the goal is greater than or equal to the desired expectation.

e.g. Muzzle velocity of a bullet. If the bullet leaves the gun with a lower speed, it may not

accurately hit the target.

c. Ai(X) = Gi(X) :

In this case, the actual value of the goal is equal to the desired expectation.

4.2.5 Deviation Variables :

Mathematically the deviation variable is defined as :

di = Gi(X) - Ai(X) (4.4)

Since we have seen earlier that the actual value of the goal Ai can be greater than or less than the desired value of the goal Gi, it follows that the deviation variable di can be either negative or positive. However this does not comply with our earlier statement that all variables should be non-negative. Hence the deviation variable is expressed as :

di = di- - di+ (4.5)

where,

di- · di+ = 0 (4.6)


and,

di- , di+ ≥ 0 (4.7)

With this transformation, the system goal can be expressed as :

Ai(X) + di- - di+ = Gi ; i = 1, 2, ..., m (4.8)

where,

di- , di+ ≥ 0 and di- · di+ = 0 (4.9)

This condition enforces all the deviation variables to be positive. Also, since their product is zero, at least one of the deviation variables is always zero. Which deviation variable out of di- and di+ goes to zero is decided by the following rules :

1. If Ai < Gi (i.e. underachievement of the goal), then di- > 0, di+ = 0

2. If Ai > Gi (i.e. overachievement of the goal), then di+ > 0, di- = 0

3. If Ai = Gi (i.e. exact achievement of the goal), then di- = 0, di+ = 0

The deviation variables can be thought of as a degree of latitude that the designer has in achieving the system goal G. As the attained goal A becomes closer to the desired goal G, the deviation variables tend towards zero.
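The rules above can be sketched as follows; the goal and attainment values are invented for illustration:

```python
# Sketch of the deviation variables of Eqs. (4.5)-(4.9): for a given
# attainment A and goal G, at most one of d_minus, d_plus is nonzero.
def deviations(A, G):
    d_minus = max(G - A, 0.0)   # underachievement of the goal
    d_plus = max(A - G, 0.0)    # overachievement of the goal
    return d_minus, d_plus

for A, G in [(80.0, 100.0), (120.0, 100.0), (100.0, 100.0)]:
    dm, dp = deviations(A, G)
    assert dm * dp == 0.0       # Eq. (4.6): the product is zero
    assert A + dm - dp == G     # Eq. (4.8): the goal as an equality
    print(A, G, dm, dp)
```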


Fig. 4.1 Graphical Representation of the Compromise DSP [1]

Fig. 4.1 is a graphical representation of the Compromise Decision Support Problem for a

two variable problem. The arrows indicate the direction of feasibility for each constraint and

bound; the area shaded in gray is the resulting feasible design space. There is a distinct

difference between system variables (xi) and deviation variables (di). As seen in Fig. 4.1, the ith

system variable is the distance from the origin in the ith dimension. However, the deviation

variable originates from the curve (or surface) of the system goal G. Hence it does not have a

fixed origin, but is merely the deviation from the system goal curve (or surface).


Traditional optimization problems have an objective function, which is a function of the

design variables under consideration. This function is then either minimized or maximized to

achieve a desired goal. In the Compromise Decision Support Problem setup, the objective

function is a function of the deviation variables only. Hence to achieve a desired goal G, our

objective shall always be to minimize the objective function. The smaller the objective function, the closer the system is to the desired goal, and hence the better the performance.

However, there are numerical issues with the minimization of the objective function,

which need to be addressed. Goals are not equally important to a designer. Therefore to solve the

problem, given a designer�s preferences, the goals are rank-ordered into priority levels. Within a

priority level it is imperative that the deviation variables are of the same order of magnitude.

Otherwise while optimizing the objective function, the deviation variable with the larger

numerical value shall take precedence and dominate the optimization process. Hence

normalization of the system goal is necessary.

The procedure to overcome this issue is to normalize the goal attainment level (Ai) with

respect to the desired level (Gi). The different cases to be considered are :

1. If the objective is to maximize the attainment level (Ai) so that it reaches the desired goal

(Gi) from below, consider the ratio Ai/Gi. Since Ai is always going to be less than Gi, this

ratio is always less than 1.

Ai(X) ≤ Gi → Ai(X)/Gi ≤ 1 (4.10)

Hence, the transformed system goal can be expressed as an equality by introducing the

deviation variables at this stage.


Ai(X)/Gi + di- - di+ = 1 (4.11)

With this transformation, the deviation variables vary between 0 and 1.

2. If the objective is to minimize the attainment level (Ai) so that it reaches the desired goal

(Gi) from above, consider the ratio Gi/Ai. Since Ai is always going to be more than Gi,

this ratio is always less than 1.

Ai(X) ≥ Gi → Gi/Ai(X) ≤ 1 (4.12)

Hence, the transformed system goal can be expressed as an equality by introducing the

deviation variables at this stage.

Gi/Ai(X) - di- + di+ = 1 (4.13)

Again, the deviation variables vary between 0 and 1. However in this case, the signs of

the deviation variables are exchanged to account for the inverse ratio.

3. If the objective is to exactly achieve the system goal (Gi), two cases arise.

a. If the system goal is approached from below by Ai, then use

Ai(X)/Gi + di- - di+ = 1 , and minimize the sum (di- + di+) (4.14)

b. If the system goal is approached from above by Ai, then use

Ai(X)/Gi - di- + di+ = 1 , and minimize the sum (di- + di+) (4.15)

4.3 The Objective (Deviation) Function [1] :

In the compromise Decision Support Problem formulation the aim is to minimize the

difference between that which is desired and that which can be achieved. This is done by

minimizing the deviation function, Z(d-, d+), which is always written in terms of the deviation

variables. It might not be feasible in all cases to achieve the desired goal. Hence the designer has

to make a compromise on the achievement level of the desired goal. However this compromise is

to be as small as possible and this then becomes the objective of the Compromise Decision

Support Problem. Some linear or non-linear combination of the deviation variables (di-, di+) can be used to form the objective function. The value of this function can then be used as an

indicator of the degree to which the desired goals are achieved.

The deviation variables themselves are deviations from each of the system objectives.

Normalization of the system objectives makes the deviation variables comparable. However all

the system objectives may not be equally important, and hence all the deviation variables are not

to be minimized to the same extent. Hence an appropriate function needs to be defined which

correctly represents the relative importance of all the deviation variables. Two different

approaches are presented here which define the objective function of the Compromise Decision

Support Problem [1].


4.3.1 Archimedean Approach (Weighted Objectives Method) :

In this case, the deviation variables are simply multiplied by a weight and the objective

function is then only a linear combination of these products. It can be mathematically

represented as :

Z(d-, d+) = Σ wi (di- + di+) , i = 1, ..., m (4.16)

where each of the individual weights represents the relative importance of the system objective

associated with that deviation variable. The weights wi satisfy the following conditions :

∑wi = 1, wi ≥ 0 for all i (4.17)

We know that one of the deviation variables associated with each system objective is always zero. Hence the weight wi is effectively applied to only one of the deviation variables, either di- or di+.

Though this seems to be a relatively simple way to formulate the objective function for

the Compromise Decision Support Problem, determining the actual weights for the deviation

variables is an important task, which can only be achieved through experience.
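The Archimedean deviation function can be sketched as follows, combining the normalization of Eq. (4.11) with the weighted sum of Eq. (4.16); the goals, attainments and weights are invented for illustration:

```python
# Sketch of the Archimedean (weighted) deviation function.
goals = [100.0, 50.0]     # desired levels G_i
attained = [90.0, 55.0]   # achieved levels A_i(X)
weights = [0.7, 0.3]      # relative importance, summing to one

Z = 0.0
for G, A, w in zip(goals, attained, weights):
    r = A / G                    # normalized attainment A_i/G_i
    d_minus = max(1.0 - r, 0.0)  # normalized underachievement
    d_plus = max(r - 1.0, 0.0)   # normalized overachievement
    Z += w * (d_minus + d_plus)  # weighted deviation function Z

print(round(Z, 4))   # → 0.1
```

Because both goals are normalized before weighting, the two deviations are of the same order of magnitude and neither dominates the sum.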

4.3.2 Preemptive Approach (Lexicographic Method) :

In this approach, the weighted sum of the deviation variables is replaced by a rank

ordering of the deviation variables. The highest ranked objective is minimized first and then the

next and so on until the last. The preemptive approach can be mathematically represented as :

Z = [ f1(di-, di+), f2(di-, di+), ..., fk(di-, di+) ] (4.18)


The method of approach is to achieve the first objective f1, then the second one, and so on. However, this is a qualitative approach as compared to the Archimedean approach. It is preferred when little mathematical information is available about the objectives; since no numerical weights need to be formulated, the rank ordering itself serves as a good indicator of goal achievement.
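The preemptive ranking can be sketched using lexicographic tuple comparison; the candidate designs and their deviation values are invented for illustration:

```python
# Sketch of the Preemptive (lexicographic) approach: each design's
# deviations are listed in priority order, and tuple comparison
# examines the highest-priority deviation first.
designs = {
    "design-1": (0.0, 0.30),   # meets the top goal exactly
    "design-2": (0.0, 0.10),   # also meets it, smaller second deviation
    "design-3": (0.05, 0.0),   # misses the top goal; never preferred
}

# Lexicographic minimum: first-priority deviations are compared first,
# and ties are broken by the next priority level, and so on.
best = min(designs, key=designs.get)
print(best)   # → design-2
```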

4.4 Branch and Bound Algorithm

Branch and Bound is a general exhaustive search method. Suppose we wish to minimize

a function f(x), where x is restricted to some feasible region defined by explicit mathematical

constraints. To apply branch and bound, we must have a means of computing a lower bound on

an instance of the optimization problem and a means of dividing the feasible region of a problem

to create smaller sub problems. There must also be a way to compute an upper bound for at least

some instances; for practical purposes, it should be possible to compute upper bounds for some

set of nontrivial feasible regions [8, 21].

The method starts by considering the original problem with the complete feasible region,

which is called the root problem. Essentially, the Branch and Bound Algorithm generates a tree-

like structure to identify and solve a set of increasingly constrained sub-problems derived from

the root problem. The lower-bounding and upper-bounding procedures are applied to the root

problem. If the bounds match, then an optimal solution has been found and the procedure

terminates. Otherwise, the feasible region is divided into two or more regions, each a strict sub-region of the original, which together cover the whole feasible region; ideally, these sub-problems partition the feasible region. These sub-problems become children of the root search

node. The algorithm is applied recursively to the sub-problems, generating a tree of sub-problems. If an optimal solution is found to a sub-problem, it is a feasible solution to the full

problem, but not necessarily globally optimal. Since it is feasible, it can be used to prune the rest

of the tree: if the lower bound for a node exceeds the best known feasible solution, no globally

optimal solution can exist in the subspace of the feasible region represented by the node. If we

know the solution to the continuous variable problem, that can serve as a lower bound for the

Branch and Bound Algorithm. This is because adding additional constraints (which are added

when the problem is branched into sub-problems) shall make the feasible design region smaller,

never greater. Hence the solution of the Branch and Bound problem can never be better than that of the continuous variable problem. The search proceeds until all nodes have been solved

or pruned, or until some specified threshold is met between the best solution found and the

lower bounds on all unsolved sub problems [22].

A large number of real-world planning problems called combinatorial optimization

problems share the following properties: They are optimization problems, are easy to state, and

have a finite but usually very large number of feasible solutions. The problems in addition share

the property that no polynomial-time method for their solution is known. All of these problems are

NP-hard.

Branch and Bound is by far the most widely used tool for solving large-scale NP-hard combinatorial optimization problems. It is, however, an algorithmic paradigm, which has to be tailored to each specific problem type, and numerous choices exist for each of its components. Even so, principles for the design of efficient Branch and Bound algorithms have emerged over the years.


Solving NP-hard discrete optimization problems to optimality is often an immense job

requiring very efficient algorithms, and the Branch and Bound paradigm is one of the main tools in their construction. The Branch and Bound Algorithm searches the complete

space of solutions for a given problem for the best solution. However, explicit enumeration is

normally impossible due to the exponentially increasing number of potential solutions. The use

of bounds for the function to be optimized combined with the value of the current best solution

enables the algorithm to search parts of the solution space only implicitly. At any point during

the solution process, the status of the solution with respect to the search of the solution space is

described by a pool of as yet unexplored subsets of the solution space and the best solution found so far. Initially only one subset exists, namely the complete solution space, and the best objective value found so far is ∞. The unexplored subspaces are represented as nodes in a dynamically generated search tree, which initially contains only the root, and each iteration of a classical Branch and Bound algorithm processes one such node. The iteration has three main components: selection of the

node to process, bound calculation, and branching. The sequence of these may vary according to

the strategy chosen for selecting the next node to process. If the selection of the next sub problem is based on the bound value of the sub problems, then the first operation of an iteration after

choosing the node is branching, i.e. subdivision of the solution space of the node into two or

more subspaces to be investigated in a subsequent iteration. For each of these, it is checked

whether the subspace consists of a single solution, in which case it is compared to the current

best solution keeping the best of these. Otherwise the bounding function for the subspace is

calculated and compared to the current best solution. If it can be established that the subspace

cannot contain the optimal solution, the whole subspace is discarded, else it is stored in the pool

of live nodes together with its bound. The search terminates when there are no unexplored parts


of the solution space left, and the optimal solution is then the one recorded as "current best". This

is known as the eager strategy for node evaluation, since bounds are calculated as soon as nodes

are available. The alternative is to start by calculating the bound of the selected node and then

branch on the node if necessary. The nodes created are then stored together with the bound of the

processed node. This strategy is called lazy and is often used when the next node to be processed

is chosen to be a live node of maximal depth in the search tree [21].
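The eager, best-first scheme described above can be sketched compactly. The following Python sketch is illustrative only and is not the implementation used in this thesis: a toy one-dimensional integer problem stands in for the feasible region, and the exact interval minimum plays the role of the bounding function.

```python
import heapq

def branch_and_bound(lo, hi, f, lower_bound):
    """Eager best-first Branch and Bound: minimize f over the integers in [lo, hi]."""
    best_x, best_val = None, float("inf")   # incumbent starts at +infinity
    heap = [(lower_bound(lo, hi), lo, hi)]  # root problem: the complete feasible region
    while heap:
        bound, a, b = heapq.heappop(heap)   # select the live node with the lowest bound
        if bound >= best_val:               # fathom: no better solution in this subspace
            continue
        if a == b:                          # leaf: a single feasible solution
            if f(a) < best_val:
                best_x, best_val = a, f(a)
            continue
        mid = (a + b) // 2                  # dichotomic branching into two strict subregions
        for sub in ((a, mid), (mid + 1, b)):
            sb = lower_bound(*sub)
            if sb < best_val:               # eager: bound each child before storing it
                heapq.heappush(heap, (sb, *sub))
    return best_x, best_val

f = lambda x: (x - 3.7) ** 2                # toy objective for illustration
def lower_bound(a, b):                      # exact minimum of f over the real interval [a, b]
    return 0.0 if a <= 3.7 <= b else min(f(a), f(b))

print(branch_and_bound(0, 10, f, lower_bound))   # x = 4 minimizes f over the integers
```

Because the heap is keyed on the bound, this realizes the best first search (BeFS) strategy: no node whose bound is worse than the incumbent is ever expanded.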

The problem is to minimize a function f(x) of the variables (x1, …, xn) over a region of feasible solutions, S :

min { f(x) : x ∈ S } (4.19)

The function f(x) is called the objective function and may be of any type. The set of

feasible solutions is usually determined by general conditions on the variables, e.g. that these

must be non-negative integers or binary, and special constraints determining the structure of the

feasible set.

The term sub problem is used to denote a problem derived from the originally given

problem through addition of new constraints. A sub problem hence corresponds to a subspace of

the original solution space.

The solution of a problem with a Branch and Bound Algorithm is traditionally described

as a search through a search tree, in which the root node corresponds to the original problem to

be solved, and each other node corresponds to a sub problem of the original problem. Given a

node Q of the tree, the children of Q are sub problems derived from Q through imposing


(usually) a single new constraint for each sub problem, and the descendants of Q are those sub

problems, which satisfy the same constraints as Q and additionally a number of others. The

leaves correspond to feasible solutions, and for all NP-hard problems, instances exist with an

exponential number of leaves in the search tree. To each node in the tree a bounding function g

associates a real number called the bound for the node. For leaves the bound equals the value of

the corresponding solution, whereas for internal nodes the value is a lower bound for the value of

any solution in the subspace corresponding to the node. Usually the bounding function g is

required to satisfy the following three conditions:

1. g(Pi) ≤ f(Pi) for all nodes Pi in the tree

2. g(Pi) = f(Pi) for all leaves in the tree

3. g(Pi) ≥ g(Pj) if Pj is the parent of Pi

These state that g is a bounding function, which for any leaf agrees with the objective

function, and which provides closer and closer (or rather not worse) bounds when more

information in terms of extra constraints for a sub problem is added to the problem description

[23].


Fig 4.2 Progress of the Branch and Bound Algorithm

The search tree is developed dynamically during the search and consists initially of only

the root node. For many problems, a feasible solution to the problem is produced in advance

using a heuristic, and the value hereof is used as the current best solution (called the incumbent).

In each iteration of a Branch and Bound Algorithm, a node is selected for exploration from the

pool of live nodes corresponding to unexplored feasible sub problems using some selection

strategy. If the eager strategy is used, a branching is performed: Two or more children of the

node are constructed through the addition of constraints to the sub problem of the node. In this

way the subspace is subdivided into smaller subspaces. For each of these the bound for the node

is calculated, possibly with the result of finding the optimal solution to the sub problem. In case

the node corresponds to a feasible solution or the bound is the value of an optimal solution, the


value hereof is compared to the incumbent, and the best solution and its value are kept. If the

bound is no better than the incumbent, the sub problem is discarded (or fathomed), since no

feasible solution of the sub problem can be better than the incumbent. In case no feasible

solutions to the sub problem exist the sub problem is also fathomed. Otherwise the possibility of

a better solution in the sub problem cannot be ruled out, and the node (with the bound as part of

the information stored) is then joined to the pool of live sub problems. If the lazy selection

strategy is used, the order of bound calculation and branching is reversed, and the live nodes are

stored with the bound of their parent as part of the information.

The bounding function is the key component of any Branch and Bound Algorithm in the

sense that a low quality bounding function cannot be compensated for through good choices of

branching and selection strategies. Ideally the value of a bounding function for a given sub

problem should equal the value of the best feasible solution to the problem, but since obtaining

this value is usually in itself NP-hard, the goal is to come as close as possible using only a

limited amount of computational effort. A bounding function is called strong if, in general, it

gives values close to the optimal value for the sub problem bounded, and weak if the values

produced are far from the optimum. One often experiences a trade off between quality and time

when dealing with bounding functions: The more time spent on calculating the bound, the better

the bound value usually is. It is normally considered beneficial to use as strong a bounding

function as possible in order to keep the size of the search tree as small as possible.

The strategy for selecting the next live sub problem to investigate usually reflects a trade

off between keeping the number of explored nodes in the search tree low, and staying within the

memory capacity of the computer used. If one always selects among the live sub problems one of


those with the lowest bound, called the best first search strategy, BeFS, no superfluous bound

calculations take place after the optimal solution has been found.

A sub problem P is called critical if the given bounding function when applied to P

results in a value strictly less than the optimal solution of the problem in question. Nodes in the

search tree corresponding to critical sub problems have to be partitioned by the Branch and

Bound Algorithm no matter when the optimal solution is identified - they can never be discarded

by means of the bounding function. Since the lower bound of any subspace containing an

optimal solution must be less than or equal to the optimum value, only nodes of the search tree

with lower bound less than or equal to this will be explored. After the optimal value has been

discovered only critical nodes will be processed in order to prove optimality. The preceding

argument for optimality of BeFS with respect to number of nodes processed is valid only if eager

node evaluation is used since the selection of nodes is otherwise based on the bound value of the

parent of each node. BeFS may, however, also be used in combination with lazy node

evaluation.

All branching rules in the context of the Branch and Bound Algorithm can be seen as

subdivision of a part of the search space through the addition of constraints, often in the form of

assigning values to variables. If the subspace in question is subdivided into two, the term

dichotomic branching is used, otherwise it is called polytomic branching. Convergence of the

Branch and Bound Algorithm is ensured if the size of each generated sub problem is smaller than

the original problem, and the number of feasible solutions to the original problem is finite.

Normally, the sub problems generated are disjoint - in this way the problem of the same feasible

solution appearing in different subspaces of the search tree is avoided.


Chapter 5

The Spring Design Problem

5.1 Introduction

Design of helical compression springs for various engineering applications has been a

long-standing optimization problem. Springs find applications in almost all machines in some

form or the other. Springs can be designed for static loading or dynamic loading or both. Usually

there are geometrical constraints, which limit the size of the spring. Also there are standard wire

diameters of the spring from which a designer has to make a choice [3, 4].

From the optimization point of view, three main design variables for the spring are

considered. As mentioned earlier the wire diameter (d) is one important system variable.

However this is not a continuous design variable but a discrete one. Another system variable is

the coil diameter (D). This is a continuous variable; however it does depend upon the wire

diameter selected. The last system variable under consideration is the number of turns (N) of the

spring coil. Again, this is not a continuous variable but can take only integer values. These three

design variables completely define the geometry of the spring. When a suitable material for the

spring is chosen all the other spring characteristics such as spring rate, free length, solid length

etc. are completely defined [3, 4].


Figure 5.1 Helical Compression Spring

Helical compression springs can be found in numerous mechanical devices. They are

used to exert force, to provide flexibility, and to store or absorb energy. To design a helical

spring, design criteria such as fatigue, yielding, surging, buckling, etc., should be taken into

consideration. To obtain a solution that meets various mechanical requirements, an optimization study should be performed. Thus the spring design problem is a challenging one

considering the three different types of design variables involved in the design process. Using a

certain algorithm, a continuous solution to the spring design problem may be obtained. However

this might not be a viable solution because of the discrete nature of the wire diameter and the

integer nature of the number of turns.


5.2 Compromise DSP Formulation for the Spring Design Problem

The theoretical Compromise Decision Support Problem Formulation presented in

Chapter 4 is applied to the problem of designing a helical spring. Our main objective is to

minimize the weight (i.e. solid volume) of the spring, subjected to certain geometrical

constraints. Other restrictions on the design arise due to material strength and spring loading

parameters. We present a problem that is well discussed in literature, and is used to test the

efficiency of various optimization algorithms. The spring is a helical compression spring with a

circular cross-section of the wire, manufactured from music wire spring steel ASTM A228.

Hence the wire diameter of the spring can only take the following discrete values given in Table

5.1 below [3, 4]. The ends of the spring are to be squared and ground.

0.0090 0.0095 0.0104 0.0118 0.0128 0.0132

0.0140 0.0150 0.0162 0.0173 0.0180 0.0200

0.0230 0.0250 0.0280 0.0320 0.0350 0.0410

0.0470 0.0540 0.0630 0.0720 0.0800 0.0920

0.1050 0.1200 0.1350 0.1480 0.1620 0.1770

0.1920 0.2070 0.2250 0.2440 0.2630 0.2830

0.3070 0.3310 0.3620 0.3940 0.4375 0.5000

Table 5.1 : Allowable Wire Diameters for ASTM A228 (in.)


5.3 Design Parameters for the Spring

As mentioned earlier, there are three design variables associated with the spring design

problem :

1. The coil diameter (D), which is a continuous variable.

2. The wire diameter (d), which is a discrete variable and can take values from Table 5.1 above.

3. The number of turns (N), which can take integer values only.

The design limitations or geometric constraints can be listed as follows [3] :

1. The preload (Fp) is 300 lb.

2. Maximum working load (Fmax) is 1000 lb.

3. Maximum allowable deflection under preload (∂pm) is 6 in.

4. Maximum deflection from preload position to maximum loading (∂w) is 1.25 in.

5. Maximum free length of the spring (lmax) is 14 in.

6. Maximum outside diameter of the spring (Dmax) is 3 in.

7. Minimum wire diameter (dmin) is 0.2 in.

The objective is to minimize the solid volume of the spring, which is given by

f(x) = (π² / 4) d² D (N + 2) (5.1)

The material constants associated with the spring are :

1. Allowable shear stress (S) = 189,000 psi.

2. Shear Modulus (G) = 1.15 × 10⁸ psi.

Other parameters associated with the spring are :

Spring Index : C = D / d (5.2)

Wahl's Correction Factor : Cf = (4C − 1) / (4C − 4) + 0.615 / C (5.3)

Working Load Deflection : ∂ = Fmax / K (in.) (5.4)

Free Length : lf = ∂ + 1.05 (N + 2) d (5.5)

Preload Deflection : ∂p = Fp / K (5.6)

Since we have only one objective (i.e. Minimization of volume), we define a single

deviation variable d1. Also, we assume the target value for the spring volume, VTV, to be the optimal volume of the spring obtained from the continuous variable problem. From a simple non-linear

constrained optimization of this problem, we observe that this goal cannot be achieved (due to

discrete variables), hence the deviation variable shall be a measure of how far we are from this

goal [1].

5.4 Constraint and Goal Formulation for the Compromise DSP

The constraints can be expressed in mathematical form as follows [5] :

g1 = 8 Cf Fmax D / (π S d³) ≤ 1 (Shear Stress) (5.7)

g2 = lf / lmax ≤ 1 (Free Length) (5.8)

g3 = dmin / d ≤ 1 (Wire Diameter) (5.9)

g4 = (D + d) / Dmax ≤ 1 (Outside Diameter) (5.10)

g5 = 3 / C ≤ 1 (Winding Limit) (5.11)

g6 = ∂p / ∂pm ≤ 1 (Preload Deflection) (5.12)

g7 = (Fmax − Fp) / { K [ lf − 1.05 (N + 2) d − ∂p ] } ≤ 1 (Working Deflection Consistency) (5.13)

g8 = (Fmax − Fp) / (K ∂w) ≤ 1 (Deflection Requirement) (5.14)
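The constraint set can be treated as a simple feasibility check. The sketch below again assumes the standard spring rate formula K = G d⁴ / (8 D³ N), which this section does not state explicitly, and evaluates the design (D, d, N) = (1.0, 0.283, 3), the Compromise DSP solution reported later in Table 5.2.

```python
import math

# Feasibility check for a candidate spring (D, d, N): all g_i <= 1 means feasible.
# ASSUMPTION: spring rate K = G*d**4 / (8*D**3*N) (not stated in this section).
G, S = 1.15e8, 189_000.0            # shear modulus and allowable shear stress, psi
F_max, F_p = 1000.0, 300.0          # maximum working load and preload, lb
l_max, D_max, d_min = 14.0, 3.0, 0.2
dpm, dw = 6.0, 1.25                 # deflection limits, in.

def constraints(D, d, N):
    C = D / d
    Cf = (4*C - 1) / (4*C - 4) + 0.615 / C
    K = G * d**4 / (8 * D**3 * N)
    lf = F_max / K + 1.05 * (N + 2) * d
    dp = F_p / K
    return [
        8 * Cf * F_max * D / (math.pi * S * d**3),          # g1: shear stress
        lf / l_max,                                         # g2: free length
        d_min / d,                                          # g3: wire diameter
        (D + d) / D_max,                                    # g4: outside diameter
        3 / C,                                              # g5: winding limit
        dp / dpm,                                           # g6: preload deflection
        (F_max - F_p) / (K * (lf - 1.05*(N + 2)*d - dp)),   # g7: working deflection consistency
        (F_max - F_p) / (K * dw),                           # g8: deflection requirement
    ]

g = constraints(1.0, 0.283, 3)
print(all(gi <= 1.0 + 1e-6 for gi in g))
```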

In the Compromise Decision Support Problem formulation, the goal can be expressed as :

F : (π² / 4) d² D (N + 2) − (VTV + d1) = 0 (5.15)

where VTV is the target value for the optimization goal and d1 is the deviation variable.

The upper and lower bounds for the design variables can be defined as :

Coil Diameter : DLB ≤ D ≤ DUB ; with DLB = 1.0, DUB = 6.0 (5.16)

Wire Diameter : dLB ≤ d ≤ dUB ; with dLB = 0.0, dUB = 0.5 (5.17)

Number of Turns : NLB ≤ N ≤ NUB ; with NLB = 3.0, NUB = 30.0 (5.18)

5.5 Branch and Bound Algorithm

The Branch and Bound algorithm is especially useful in solving constrained non-linear

integer programming problems, in spite of being an exhaustive searching algorithm. However it

can be also used to solve problems with discrete valued design variables with a suitable

modification. This approach is briefly explained here. We use the current problem under

consideration as an example to illustrate the procedure [21].

We have a discrete valued design variable, the wire diameter (d). The set of values that this variable can take is listed in Table 5.1. We define the wire diameter to be a binary (0-1)

combination of this set. The wire diameter can thus be expressed as the following matrix

multiplication :

d = [0.0090 0.0095 0.0104 … 0.5000][x_status] (5.19)

where x_status is a column vector, with as many rows as elements in the discrete valued set.

Only one element of this column vector is 1, all the rest being zero. This x_status vector is appended

to the existing design vector. All the elements in the x_status vector are integer-valued variables.

Two additional linear equality constraints are added to any other existing constraints. The

first is the definition of the discrete variable in terms of elements of x_status vector. From the

above formulation of wire diameter we have,

d - {0.0090*x_status(1) + 0.0095*x_status(2) + 0.0104*x_status(3) + … + 0.5000*x_status(n)}

= 0 (5.20)


where n is the length of the discrete valued set.

The second equality constraint pertains solely to the x_status vector. The sum of all the

elements of the x_status vector must be 1. Since all elements of the x_status vector are integer

valued variables, this constraint ensures that only one element of the vector shall have a value of

1, whereas the rest shall all be zeros at any given iteration stage. Hence the discrete valued

variable shall always take a value from the allowable discrete value set.

x_status(1) + x_status(2) + x_status(3) + … + x_status(n) = 1 (5.21)

Thus the Branch and Bound algorithm, which can be effectively used to solve integer

optimization problems, can also be used to solve optimization problems with discrete valued

variables.
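The selection encoding of Eqs. (5.19)-(5.21) can be sketched directly. The shortened value set below is purely for illustration; the full problem uses all 42 diameters of Table 5.1.

```python
# Binary selection encoding of the discrete wire diameter, Eqs. (5.19)-(5.21).
# A shortened value set is used here purely for illustration.
values = [0.0090, 0.0095, 0.0104, 0.0118, 0.0128]

def decode(x_status):
    # Eq. (5.21): exactly one selector element may be 1, the rest must be 0
    assert sum(x_status) == 1 and all(s in (0, 1) for s in x_status)
    # Eq. (5.20): d is the inner product of the value set with the selector
    return sum(v * s for v, s in zip(values, x_status))

d = decode([0, 0, 1, 0, 0])
print(d)   # selects the third allowable diameter, 0.0104
```

In the full formulation, Eqs. (5.20) and (5.21) enter the Branch and Bound problem as two linear equality constraints over the integer-valued x_status elements.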

However, the procedure becomes extremely sensitive to the starting vector. It is

important to choose a suitable starting vector for the optimization procedure, since the

performance of the Branch and Bound algorithm depends on it. It can be assumed that the

solution to the mixed (continuous, integer, discrete) variable problem will be close to the solution

of the same problem with continuous variables. We initially solve the continuous variable

problem, and use the solution as a starting vector for the Branch and Bound algorithm.

5.6 Results and Discussion

We present results by Sandgren and Kannan & Kramer who have solved the same

problem using different optimization techniques [3, 4]. Also the solution obtained using

continuous variables is presented purely for comparison. We have used the Branch and Bound

algorithm in conjunction with the Compromise Decision Support Problem formulation. The


starting vector for Sandgren and Kannan & Kramer is different from what we have used. As

mentioned earlier, we use the solution of the continuous variable problem as the starting vector

for the branch and bound algorithm.

Design Variables        Starting Values {1}   Continuous Solution {2}   Sandgren Solution {3}   Kannan & Kramer Solution {4}   Compromise DSP Solution
Coil Diameter D (in.)   2.000                 1.000                     1.180                   1.329                          1.000
Wire Diameter d (in.)   0.375                 0.269                     0.283                   0.283                          0.283
Number of Turns N       10                    3                         10                      7                              3
Solid Volume V (in³)    8.327                 0.891                     2.799                   2.365                          0.988

Table 5.2 Comparison of Optimal Solutions

We find that the Compromise Decision Support Problem solution is superior to both the

Sandgren Solution and Kannan & Kramer Solution. It is also only slightly greater than the

Continuous Solution. The reduction in spring volume (objective function) can be expressed in

terms of percentage improvement in Table 5.3 below.

Improvement of Solution using Compromise DSP and Branch & Bound Algorithm over:

Sandgren Solution   Kannan & Kramer Solution   Continuous Solution
64.7017 %           58.2241 %                  -10.8866 %

Table 5.3 Percentage Improvement
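The percentage improvements follow directly from the solid volumes reported in Table 5.2:

```python
# Percentage improvement of the Compromise DSP volume (0.988 in^3) over the
# volumes reported in Table 5.2; reproduces the entries of Table 5.3.
volumes = {"Sandgren": 2.799, "Kannan & Kramer": 2.365, "Continuous": 0.891}
v_dsp = 0.988

for name, v in volumes.items():
    improvement = (v - v_dsp) / v * 100.0    # positive: the DSP volume is smaller
    print(f"{name}: {improvement:.4f} %")
```

The negative entry against the continuous solution simply reflects that the discrete-variable optimum can never beat the continuous relaxation.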


We also mentioned that the efficiency of the Branch and Bound Algorithm depends on

the initial guess vector. Hence we present results for different random starting vectors. A

measure of the efficiency of the algorithm can be assessed from the number of cycles (i.e.

iterations) needed to find an optimal solution.

Starting Vector   Initial Guess Values (D, d, N)   Number of Cycles   Execution Time for Branch & Bound Algorithm
{1}               (2.000, 0.375, 10)               121                24.305 s
{2}               (1.000, 0.269, 3)                19                 4.236 s
{3}               (1.180, 0.283, 10)               28                 6.670 s
{4}               (1.329, 0.283, 7)                9                  2.093 s

Table 5.4 Comparison of Solution Time

As expected, we find that the starting guess vector does affect the performance of the Branch and Bound Algorithm. The starting vector {1} is farthest from the actual solution (in terms of objective function value) and it requires the most iterations. Our procedure of using the continuous solution as a starting vector for the Branch and Bound Algorithm performs better than all other starting vectors except {4}, which is the solution obtained by Kannan & Kramer.

This could be considered as an exceptional case, since there are no set rules for using a specific

starting vector. But we can see the obvious advantage of using {2} (Continuous Solution) as a

starting vector as opposed to {1}.


Our procedure of combining the Compromise Decision Support Problem formulation,

and then using the Branch & Bound Algorithm to obtain a solution for a mixed variable problem is

shown to be superior to other methods previously published. However, the problem under

consideration is a relatively small sized problem. Hence any general statements regarding the

superiority of this method over other methods cannot be made at this stage. However, its

application to solve mixed variable problems is suitably illustrated. In the subsequent chapter we

present its application on a fairly large sized problem, with a larger design vector as well as more

constraint equations. Also, the spring design problem is a single goal optimization problem. The

"Gear Train Design" problem, which is presented in Chapter 6, is a multiple goal optimization

problem, which applies even more aspects of the Compromise Decision Support Problem

approach.


Chapter 6

The Gear Train Design Problem

6.1 Introduction

Multi-stage power transmission units find a variety of applications in today's world.

Almost all automobile and aerospace applications make use of a multi-stage gearbox as a

primary power transmission unit. In this regard, the design of a multi-stage gearbox is of

considerable importance. Usually, the necessary reduction ratio (speed ratio) and transmitted

torque are parameters that control the design of a gearbox.

A lot of research has gone into developing suitable algorithms for non-linear constrained

optimization. Many researchers [24-28] have presented different approaches to optimize a Gear

Train Design Problem in some form or the other. We do not present their approaches here for the

sake of brevity, however it suffices to say that Gear Train Design is a rich and diverse design

problem. Recently, the focus of optimization algorithms has shifted from traditional methods to

evolutionary methods such as expert systems, fuzzy logic, genetic algorithms and such.

The focus of gearbox design can be said to be the strength of the mating gears, especially

the teeth. The tooth bending failure and surface fatigue failure of gear teeth account for almost

all cases of gearbox failures. It can be assumed that one of these two mechanisms is responsible

for failure of the gear. If failure by either of these mechanisms is equally likely, then such a gear

design can be considered to be an optimal one.

With this idea of an optimal gear design, the goal of the gear train design problem

becomes one of reduction of the weight of the gearbox. This need for weight reduction is even


more prominent in aerospace applications, which can have a large penalty associated with

increased weight. It is also an important aspect in the development of today's fuel-efficient cars,

which cannot do without a power transmission unit such as a gearbox. As mentioned earlier,

usually a multi-stage gearbox is employed for this purpose. It is intuitive to consider that the

weight of the gearbox shall increase with the number of stages (two-stage or three-stage). This is

because, the number of mating gear pairs shall increase, as well as supplemental elements such

as connecting shafts, bearings etc. However, for a desired constant reduction ratio, the speed

ratio in every stage becomes smaller as the number of stages increase. If it were desired to have a

very large reduction ratio in a single stage, the volume of the gear set would be very large. A

smaller reduction ratio in each stage would mean that the gears could be made smaller, and

hence result in a more compact gearbox [5].

With this background of the gear train design problem, we can conclude that it is a

multiple objective optimization problem. The primary objective would be to minimize the weight

(i.e. volume) of the complete gear train. It is also desired that tooth bending fatigue failure and

surface fatigue failure of the gear teeth should be equally likely for an optimal design. For a

multi-stage gearbox it is also desired that the strength of the mating gears in each pair should be

comparable, because the weakest gear pair would determine the failure criteria for the complete

gearbox.

6.2 Tradeoff Analysis

It is to be expected that in general, stronger gears would result in greater weight of the

gears. The surface fatigue life factor (CLi) of a gear represents the 99% reliability lifetime. A plot

of the variation of the surface fatigue life factor with the lifetime expressed in number of cycles


is shown in Fig. 6.1. In traditional gear design approaches, a desired lifetime for the gears is

chosen and this determines the surface life fatigue factor for the gear. However, in our

formulation, the surface fatigue life factor is one of the design parameters. It is desired to

maximize the lifetime (thereby minimizing the surface fatigue life factor) for a given loading and

geometry of the gearbox.

Fig 6.1 Surface Fatigue Life Factor [29]

Hence a tradeoff analysis is proposed between the surface fatigue life factor (CLi) and the

optimal volume of the gearbox. This results in what is commonly referred to as the Pareto

Optimal Curve. We expect that the two objectives in the scope of this problem i.e. minimization

of volume and maximization of lifetime (or minimization of the surface fatigue life factor) are conflicting

in nature. Hence there exist multiple "optimal designs" for a given loading and geometry. These "optimal" solutions form the tradeoff curve, which is the Pareto Optimal Set.
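Extracting such a set from a batch of candidate designs is a simple nondominance filter. The sketch below uses hypothetical (volume, surface fatigue life factor) pairs purely for illustration; both objectives are treated as minimized.

```python
# Pareto filtering of candidate gearbox designs, each scored by
# (volume, surface fatigue life factor C_L) -- both to be minimized.
# The sample points are hypothetical, for illustration only.
designs = [(30.0, 1.10), (24.0, 1.25), (40.0, 0.95), (26.0, 1.30), (24.5, 1.20)]

def pareto_front(points):
    front = []
    for p in points:
        # p is dominated if some other point is no worse in both objectives
        # (and distinct from p, hence strictly better in at least one)
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

print(pareto_front(designs))   # (26.0, 1.30) is dominated by (24.5, 1.20)
```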


6.3 Problem Formulation

The scope of our study is limited to three-stage gear train design, though the

methodology presented here can be applied to any multi-stage gearbox. The basic problem

formulation is obtained from Thompson et al. [5]. However, the paper presents the formulation as a general non-linear optimization problem. We reformulate the problem to adhere to the Compromise Decision Support Problem method. Also, all variables were considered to be continuous in Thompson's work. In this section we outline the nature of the design variables and

present a Compromise Decision Support Problem formulation that is suitable for the Branch and

Bound Algorithm.

The nomenclature used in this section is the same as that found in Juvinall and Marshek

[29]. Though an effort is made to define all the parameters used in this formulation, the readers

may refer to the original text for more details. We present the design methodology for a three-

stage gear train, with the understanding that this can be extended to any multi-stage gearbox

design. Fig 6.2 illustrates the geometric and design parameters associated with a three-stage gear

train.


Fig. 6.2 Three Stage Gear Train

6.4 Input Parameters :

In general the gear train design depends on the torque transmission capacity of the

gearbox and the overall speed reduction ratio desired. The input torque to the gear train (Tin) and

the overall speed ratio (e) are the input parameters for our design problem. Keeping these two

input parameters constant, we generate the Pareto Optimal Curves. Choosing different values for

these parameters can generate different sets of Pareto Optimal Curves. The values we consider

in this problem are tabulated below.


Input Torque (Tin, lb-in.)   Overall Speed Ratio (e)
80                           0.15
120                          0.1
180                          0.0667
270                          0.05

Table 6.1 Input Parameters

6.5 Material Parameters :

The following are the material properties and constants regarding the gear design. These

constants are selected considering the scope of the problem we are dealing with. It is necessary

to choose appropriate values depending on the nature of the loading, manufacturing processes

used for generating the gear profile and material of the gear [5].


Description                           Symbol   Value     Units
Bending Reliability Factor (99.00%)   kr       0.814     none
Elastic Coefficient                   Cp       2300      √psi
Mean Stress Factor                    kms      1.4       none
Mounting Factor                       Km       1.6       none
Overload Factor                       Ko       1.0       none
Pressure Angle                        φ        20        degree
Shaft Length (2-stage)                Ls       8.0       in.
Shaft Length (3-stage)                Ls       4.0       in.
Surface Fatigue Strength              Sfe      190,000   psi
Surface Reliability Factor (99.00%)   Cr       1.0       none
Torsional Stress Limit - Shaft        τmax     25,000    psi
Velocity Factor                       Kv       2.0       none

Table 6.2 Material Properties and Constants

6.6 Design Vector :

The overall design of an individual gear is determined by its diameter and diametral pitch

(English units). In the English units, usually the diametral pitch (P) is taken to be an integer

value. This is analogous to the standard module for the SI system. The number of teeth for a gear

is also an integer parameter, however it is not a design variable in our formulation. With this

understanding the design vector is as given below.

x = [ dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2 ] (6.1)


We have a predetermined overall reduction ratio (e) for the gear train. Hence one of the

six gear diameters can be expressed as a function of the overall reduction ratio and the remaining

five gear diameters. We choose to consider the gear diameter of the third pair (dg3) as a

dependent variable instead of an independent design variable. It is given by,

dg3 = dp1 dp2 dp3 / (e dg1 dg2) (6.2)

We observe that all the design variables are continuous variables except the diametral

pitch (P) which is an integer variable.
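The dependency of Eq. (6.2) can be expressed as a small helper; this is a sketch, and the function name is our own. It is checked on a train of three identical 1:3 stages, for which e = (1/3)³ and the recovered third-stage gear must again be three times its pinion.

```python
# Sketch of the dependent-variable relation in Eq. (6.2); helper name is ours.
def dependent_gear_diameter(dp1, dg1, dp2, dg2, dp3, e):
    """d_g3 = (dp1 dp2 dp3) / (e dg1 dg2), from Eq. (6.2)."""
    return (dp1 * dp2 * dp3) / (e * dg1 * dg2)

# Three identical 1:3 stages give e = (1/3)**3; the dependent diameter
# recovered for the third gear must again be three times its pinion.
e = (1.0 / 3.0) ** 3
dg3 = dependent_gear_diameter(1.0, 3.0, 1.0, 3.0, 1.0, e)
print(round(dg3, 9))  # 3.0
```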

Upper Bound :

The upper bound on the variables is given by the following vector.

xub = [10 10 20 50 500 10 10 20 50 500 10 20 50 500 10 10 ] (6.3)

Since we are using the English units, a maximum gear (or pinion) diameter of 10 in.

should be a practical upper bound. For the scope of our problem, with the maximum value of

input torque as 270 lb-in. and largest speed reduction (e = 0.05), this is an acceptable limit. The

Brinell Hardness value of 500 BHN is a material upper bound, considering the gear surfaces are

hardened to the maximum extent possible. The diametral pitch also has an upper bound of 50 in⁻¹,

which is chosen by trial and error.

Lower Bound :

The lower bound on the variables is given by the following vector.

xlb = [0 0 0 0 200 0 0 0 0 200 0 0 0 200 0 0] (6.4)


A lower limit of the Brinell Hardness value is chosen to be 200 BHN arbitrarily. All the

remaining variables are simply considered to be non-negative and hence they have a lower bound

of zero.

6.7 Constraints :

In this design problem, as was the case in the spring design problem, we have both

geometric constraints as well as material constraints. They are categorized below and expressed

in terms of the design variables. They are formulated according to the Compromise Decision

Support Problem formulation, and are not in their original form.

6.7.1 Bending Fatigue Constraints :

This constraint takes into account the torque acting on the gear tooth. Since ours is a

speed-reduction gearbox, the torque acting on the gear tooth successively increases with each

stage. The constraints for each of the 3 stages are given by,

g1(x) = kb P1 / (b1 J(P1 dp1) dp1 Sn'(H1) Cs(H1)) ≤ 1.0 (Stage 1) (6.5)

g2(x) = kb P2 dg1 / (b2 J(P2 dp2) dp2 dp1 Sn'(H2) Cs(H2)) ≤ 1.0 (Stage 2) (6.6)

g3(x) = kb P3 dg1 dg2 / (b3 J(P3 dp3) dp3 dp1 dp2 Sn'(H3) Cs(H3)) ≤ 1.0 (Stage 3) (6.7)

where the lumped constant kb is given by,

kb = 2 Tin Kv Ko Km / (kr kms) (6.8)

The Lewis Geometry Factor, J(P, dp) is a function of the pinion tooth number (which is

expressed as the product of diametral pitch, P and diameter of the pinion, dp). The plot of J

versus tooth number N is given below.

Fig. 6.3 Lewis Geometry Factor [5]

The curve is approximated by a 3rd order polynomial function, given by,

J = (2.175602×10⁻⁷)N³ − (7.902098×10⁻⁵)N² + (7.935120×10⁻³)N + 0.223833

N = Pi dpi ; i = 1, 2, 3 (6.9)

The fatigue strength surface factor Cs is a function of the Brinell Hardness value, and is

given by,

Cs = (−8.333333×10⁻⁴)H + 0.933333 (6.10)
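The Stage 1 bending constraint of Eq. (6.5) can be checked numerically against the first appendix design (T = 120 lb-in, e = 0.05) using the fits of Eqs. (6.9) and (6.10). This is a sketch with our own function names; the endurance limit Sn' is not tabulated in this chapter, so the standard steel estimate Sn'(H) = 250H psi is assumed here.

```python
def lewis_J(N):
    """Cubic fit of the Lewis geometry factor versus tooth number N, Eq. (6.9)."""
    return 2.175602e-7 * N**3 - 7.902098e-5 * N**2 + 7.935120e-3 * N + 0.223833

def surface_factor(H):
    """Linear fit of Cs versus Brinell hardness H, Eq. (6.10)."""
    return -8.333333e-4 * H + 0.933333

def bending_g1(P1, dp1, b1, H1, T_in, Kv=2.0, Ko=1.0, Km=1.6, kr=0.814, kms=1.4):
    """Stage 1 bending constraint, Eqs. (6.5) and (6.8)."""
    kb = 2.0 * T_in * Kv * Ko * Km / (kr * kms)   # Eq. (6.8)
    Sn = 250.0 * H1                               # assumed endurance limit, psi
    return kb * P1 / (b1 * lewis_J(P1 * dp1) * dp1 * Sn * surface_factor(H1))

# First appendix design column for T = 120 lb-in, e = 0.05:
g1 = bending_g1(P1=19, dp1=0.8331, b1=0.7258, H1=500, T_in=120.0)
print(round(g1, 3))   # close to 1.0: the constraint is active at the optimum
```

That g1 evaluates to roughly 0.99 at the tabulated optimum suggests the bending constraint is active there, as one would expect of a minimum-volume design.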


6.7.2 Shaft Torsional Stress Constraints :

The shafts connecting the gears through the stages undergo torsional stresses due to the

tangential load on the gear teeth. This stress should not exceed the maximum shear stress for the

shaft material. The constraint is formulated as,

g4(x) = kτ dg1 / (dp1 ds1³) ≤ 1.0 (Connecting Stages 1 & 2) (6.11)

g5(x) = kτ dg1 dg2 / (dp1 dp2 ds2³) ≤ 1.0 (Connecting Stages 2 & 3) (6.12)

where the lumped constant kτ is given by,

kτ = 16 Tin / (π τmax) (6.13)
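A minimal sketch of Eqs. (6.11)-(6.13), using illustrative (not tabulated) dimensions; the function name is our own.

```python
import math

def shaft_constraints(dp1, dg1, dp2, dg2, ds1, ds2, T_in, tau_max=25000.0):
    """Normalized torsional stress constraints on the two connecting shafts."""
    k_tau = 16.0 * T_in / (math.pi * tau_max)       # Eq. (6.13)
    g4 = k_tau * dg1 / (dp1 * ds1**3)               # Eq. (6.11)
    g5 = k_tau * dg1 * dg2 / (dp1 * dp2 * ds2**3)   # Eq. (6.12)
    return g4, g5

# Illustrative 1:3 stages with half-inch-class shafts:
g4, g5 = shaft_constraints(1.0, 3.0, 1.0, 3.0, ds1=0.5, ds2=0.65, T_in=120.0)
print(g4 <= 1.0 and g5 <= 1.0)   # True: both shafts are within the limit
```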

6.7.3 Face Width Constraints :

This is a geometrical constraint on the face width of the gears to ensure that the gears are

not too thick or too thin. For each of the three stages, the constraint equations are given below.

g6(x) = 9 / (b1 P1) ≤ 1.0 (Stage 1, Minimum Face Width) (6.14)

g7(x) = 9 / (b2 P2) ≤ 1.0 (Stage 2, Minimum Face Width) (6.15)

g8(x) = 9 / (b3 P3) ≤ 1.0 (Stage 3, Minimum Face Width) (6.16)

g9(x) = b1 P1 / 14 ≤ 1.0 (Stage 1, Maximum Face Width) (6.17)

g10(x) = b2 P2 / 14 ≤ 1.0 (Stage 2, Maximum Face Width) (6.18)

g11(x) = b3 P3 / 14 ≤ 1.0 (Stage 3, Maximum Face Width) (6.19)
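Together, Eqs. (6.14)-(6.19) simply bracket each face width between 9/P and 14/P; a minimal check (the helper name is our own):

```python
def face_width_ok(b, P):
    """True when 9/P <= b <= 14/P, i.e. both normalized constraints hold."""
    g_min = 9.0 / (b * P)     # minimum face width, g <= 1.0
    g_max = b * P / 14.0      # maximum face width, g <= 1.0
    return g_min <= 1.0 and g_max <= 1.0

print(face_width_ok(0.5, 22))   # True: b*P = 11 lies inside [9, 14]
print(face_width_ok(0.3, 20))   # False: b*P = 6 is below the minimum
```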

6.7.4 Interference Constraints :

With meshing gears it is essential that there is no "interference" in operation. This

requires that the point of tangency between the pinion and gear remain on the involute profile of

the gear, outside of the base circle. This factor is a function of the center distance between the

gears and the pressure angle (f) and the constraint is expressed as,

g12(x) = 4(P1 dg1 + 1) / (P1² sin²φ dp1 (dp1 + 2 dg1)) ≤ 1.0 (Stage 1) (6.15)

g13(x) = 4(P2 dg2 + 1) / (P2² sin²φ dp2 (dp2 + 2 dg2)) ≤ 1.0 (Stage 2) (6.16)

g14(x) = 4(P3 dp1 dp2 dp3 + e dg1 dg2) / (P3² sin²φ dp3 (e dg1 dg2 dp3 + 2 dp1 dp2 dp3)) ≤ 1.0 (Stage 3) (6.17)


6.7.5 Minimum Pinion Tooth Number Constraints :

It is a general design rule that the minimum number of teeth on a pinion should be 16.

This constraint is expressed for the pinion gear in all the three stages.

g15(x) = 16 / (P1 dp1) ≤ 1.0 (Pinion for Stage 1) (6.18)

g16(x) = 16 / (P2 dp2) ≤ 1.0 (Pinion for Stage 2) (6.19)

g17(x) = 16 / (P3 dp3) ≤ 1.0 (Pinion for Stage 3) (6.20)

6.8 Design Goal (Objective Function)

We have identified two objectives for our gear design problem. The first objective is to

minimize the overall volume (weight) of the gearbox. For a given set of input parameters (Tin, e)

we can obtain an optimal design in terms of the design variables listed earlier, which satisfy all

the constraints. This design set shall result in a specific value of the surface fatigue life factor

(CLi). Our second objective is to minimize this factor, since it shall result in a greater life (See

Fig. 6.1) for the gearbox. However, these are conflicting objectives and we cannot improve one

without worsening the other.

Hence we propose to generate a set of "optimal" solutions for a given set of input

parameters. These optimal solutions define the Pareto Optimal Curve and a designer can choose

any one of the designs on these curves. To generate this curve, we use the Archimedean

Approach (Weighted Sum) and define a scalar valued goal, given by the expression,


f(x) = αV f0(x) / VTV + αC (f1(x) + f2(x) + f3(x)) / CTV (6.21)

where f0 is the total volume of the gearbox and f1, f2, f3 are the surface fatigue life factors for

the gears in the three stages. αV and αC are weights for the two objectives, which can be varied

to obtain the Pareto Optimal Curve. VTV is a target value for the gearbox volume, as in the

spring design problem. CTV is a target value for the surface fatigue life factor.
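The weight sweep behind the Archimedean approach can be illustrated on a toy problem. In the sketch below the gearbox objectives are replaced, purely for illustration, by f0(x) = x² and f1(x) = (x − 2)²; Pareto points are traced by minimizing the weighted sum over a grid as the weights vary, just as αV and αC are varied above.

```python
def weighted_sum_minimum(alpha_v, alpha_c, grid):
    """Minimize alpha_v*f0 + alpha_c*f1 over the grid; toy objectives only."""
    best_x = min(grid, key=lambda x: alpha_v * x**2 + alpha_c * (x - 2.0)**2)
    return best_x**2, (best_x - 2.0)**2

grid = [i / 100.0 for i in range(0, 201)]            # x in [0, 2]
weights = (0.1, 0.3, 0.5, 0.7, 0.9)
pareto = [weighted_sum_minimum(w, 1.0 - w, grid) for w in weights]

# As the weight on f0 grows, f0 falls and f1 rises along the tradeoff curve.
for f0, f1 in pareto:
    print(round(f0, 2), round(f1, 2))
```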

We first obtain a solution to the same optimization problem with continuous variables.

This serves a two-fold purpose. Firstly, the design vector obtained at the end of this optimization

can be used as a starting vector for the Branch and Bound Algorithm. As illustrated in Chapter 4,

this reduces the iterations required for the Branch and Bound Algorithm. This technique is

especially useful in a problem such as this one, which has a large number of design variables and

a number of non-linear constraints. Our formulation of the problem for the Compromise

Decision Support Problem solution may also result in some of the constraints becoming ill-conditioned. However, the normalization technique used, wherein all the constraints are

expressed as g(x) ≤ 1, attempts to avoid this.

Secondly, the optimal value of the gearbox volume obtained at this stage can be used as a

target value for the mixed variable problem. We can expect that the mixed variable problem

solution shall result in a value that is greater than the continuous variable solution, with all the

constraints remaining the same. Thus a deviation variable d1 can be defined from this target

value (VTV) and our objective would be to minimize this deviation. The same argument can be

applied for minimizing the Surface Fatigue Life Factor. A second deviation variable d2 can be


defined to quantify the deviation from the value (CTV) obtained from the continuous variable

solution.

The volume of the gearbox can be expressed as,

f0(x) = (π/4) [ (dp1² + dg1²) b1 + (dp2² + dg2²) b2 + (dp3² + (dp1 dp2 dp3 / (e dg1 dg2))²) b3 + (ds1² + ds2²) Ls ] (6.22)

where, Ls is the fixed shaft length.
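Equation (6.22) can be checked against the appendix data. In this sketch (the function name and argument order are our own; Ls = 4.0 in. for the three-stage train, per Table 6.2), the first design column for T = 120 lb-in, e = 0.05 reproduces the tabulated volume.

```python
import math

def gearbox_volume(dp1, dg1, b1, dp2, dg2, b2, dp3, b3, ds1, ds2, e, Ls):
    """Total gearbox volume of Eq. (6.22); d_g3 is eliminated via Eq. (6.2)."""
    dg3 = dp1 * dp2 * dp3 / (e * dg1 * dg2)
    return (math.pi / 4.0) * ((dp1**2 + dg1**2) * b1
                              + (dp2**2 + dg2**2) * b2
                              + (dp3**2 + dg3**2) * b3
                              + (ds1**2 + ds2**2) * Ls)

# First appendix column (T = 120 lb-in, e = 0.05), Ls = 4.0 in. (3-stage):
vol = gearbox_volume(0.8331, 2.4245, 0.7258, 1.3741, 3.6792, 0.7764,
                     1.9099, 1.0760, 0.4144, 0.5754, e=0.05, Ls=4.0)
print(round(vol, 2))   # 38.12, matching the tabulated 38.1219
```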

The surface fatigue life factors can be expressed in terms of the design variables for each

of the three stages as,

f1(x) = ks (dp1 + dg1) / (b1 dp1² dg1) (Surface Fatigue Life Factor for Stage 1) (6.23)

f2(x) = ks dg1 (dp2 + dg2) / (dp1 b2 dp2² dg2) (Surface Fatigue Life Factor for Stage 2) (6.24)

f3(x) = ks dg1 dg2 (e dg1 dg2 + dp1 dp2) / (dp1² dp2² b3 dp3²) (Surface Fatigue Life Factor for Stage 3) (6.25)

where the lumped constant ks is given by,

ks = 4 Cp² Kv Ko Km Tin / (Sfe² CR² sin φ cos φ) (6.26)
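Equations (6.23)-(6.26) can likewise be checked against the appendix. In this sketch (the function name is our own; constants are taken from Table 6.2), the mean of the three stage factors for the first T = 120 lb-in, e = 0.05 design comes out at about 1.903, matching the tabulated CLi of 1.9032.

```python
import math

def life_factors(dp1, dg1, b1, dp2, dg2, b2, dp3, b3, e, T_in,
                 Cp=2300.0, Kv=2.0, Ko=1.0, Km=1.6,
                 Sfe=190000.0, CR=1.0, phi=math.radians(20.0)):
    """Stage surface fatigue life factors, Eqs. (6.23)-(6.25), with ks of (6.26)."""
    ks = (4.0 * Cp**2 * Kv * Ko * Km * T_in
          / (Sfe**2 * CR**2 * math.sin(phi) * math.cos(phi)))
    f1 = ks * (dp1 + dg1) / (b1 * dp1**2 * dg1)
    f2 = ks * dg1 * (dp2 + dg2) / (dp1 * b2 * dp2**2 * dg2)
    f3 = (ks * dg1 * dg2 * (e * dg1 * dg2 + dp1 * dp2)
          / (dp1**2 * dp2**2 * b3 * dp3**2))
    return f1, f2, f3

# First appendix column (T = 120 lb-in, e = 0.05):
f = life_factors(0.8331, 2.4245, 0.7258, 1.3741, 3.6792, 0.7764,
                 1.9099, 1.0760, e=0.05, T_in=120.0)
print(round(sum(f) / 3.0, 4))   # about 1.9032, the tabulated CLi
```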


As mentioned earlier, a smaller surface fatigue life factor (CLi) means a longer life for the

gearbox, so these functions are to be minimized directly. The weights associated with the two

objectives (αV and αC) are systematically varied to generate the Pareto Optimal Curves.

Finally, the goal for the Compromise Decision Support Problem solution can be

expressed as,

F : αV (f0(x)/VTV − 1 − d1) + αC ((f1(x) + f2(x) + f3(x))/CTV − 1 − d2) = 0 (6.27)

The value of the deviation variables thus obtained shall give us an estimate as to how

much the discrete variable solution has deviated from the continuous one. However, in this

particular study, our scope is to obtain the Pareto Optimal Curves for the two conflicting

objectives, and hence that shall be the main subject of discussion henceforth.

6.9 Branch and Bound Technique

With the above-mentioned problem formulation, we can solve the problem using the

Branch and Bound Algorithm. In this case, we have three design variables (Diametral Pitch for

the 3 stages) that have integer values. However, there are no discrete valued variables. Hence we

do not need to define the x_status vector as in the case of the spring design problem. The original

Branch and Bound Algorithm can solve such integer-valued problems.
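The branching step can be sketched on a toy integer problem. The sketch below is our own illustration, not the gearbox model: it enumerates the floor/ceil branches of each relaxed variable without the bound-pruning step a full implementation would add, and the continuous minimizer seeds the search, as the Compromise Decision Support Problem solution does here.

```python
import math

def branch_and_bound(f, relaxed_opt):
    """Branch each relaxed value to floor/ceil; keep the best all-integer point."""
    best_val, best_point = math.inf, None
    stack = [()]                                  # partial integer assignments
    while stack:
        fixed = stack.pop()
        if len(fixed) == len(relaxed_opt):        # all variables are integer
            val = f(*fixed)
            if val < best_val:
                best_val, best_point = val, fixed
            continue
        r = relaxed_opt[len(fixed)]               # branch on the next variable
        for v in (math.floor(r), math.ceil(r)):
            stack.append(fixed + (v,))
    return best_val, best_point

# Toy objective whose continuous optimum is (3.6, 2.2):
f = lambda x, y: (x - 3.6)**2 + (y - 2.2)**2
val, point = branch_and_bound(f, relaxed_opt=(3.6, 2.2))
print(point)   # (4, 2): the integer optimum, which simple truncation misses
```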

6.10 Results and Discussion

We present the Pareto Optimal Curves for the Compromise Decision Support Problem

formulation. There are two cases under study. In the first case, the input torque (Tin) is kept constant


at 120 lb-in. Then we obtain different Pareto Optimal Curves for varying values of the overall

speed reduction ratio (e). The plot below shows the Pareto Optimal Curves for speed reduction

ratios of 0.15, 0.1, 0.0667, and 0.05.

[Figure: tradeoff curves for constant torque (T = 120 lb-in), plotting Surface Life Factor against Volume (cubic in.) for e = 0.05, 0.0667, 0.1, and 0.15]

Fig. 6.4 Pareto Optimal Curves for Constant Torque


T = 120 lb-in. e = 0.05

Vol. 38.1219 38.8847 38.9422 40.9414 41.4795 45.0417 54.3028 56.4096 67.9248

CLi 1.9032 1.8507 1.7898 1.6938 1.5859 1.4753 1.3156 1.2281 1.1172

e = 0.0667

Vol. 28.1000 28.8442 30.9366 32.7868 33.8369 35.9518 41.4818 44.5735 48.1748

CLi 1.9542 1.8015 1.7590 1.6103 1.5413 1.4305 1.2599 1.1973 1.1275

e = 0.1

Vol. 18.7143 18.8249 19.8443 21.4528 23.5694 24.2719 25.4360 27.0994 36.7726

CLi 1.9344 1.9000 1.8398 1.8182 1.7186 1.6673 1.3945 1.3124 1.1700

e = 0.15

Vol. 12.8155 13.4837 14.5645 15.0679 16.4025 17.1901 18.9693 21.0447 22.9703

CLi 1.8499 1.7604 1.6686 1.6046 1.5149 1.4611 1.2256 1.1327 1.0798

Table 6.3 Pareto Optimal Solutions for Constant Torque

We can draw some conclusions from the Pareto Optimal Curves above. It can be

expected that for a greater speed reduction (smaller e), the weight of the gearbox shall be greater

than that for a lesser speed reduction. Thus we observe that for a given surface fatigue life factor,

the volume (and hence weight) of the gearbox increases as the speed ratio e decreases.


Another observation is that for a given speed reduction ratio e, the surface fatigue life

factor has an inverse relationship with the overall weight of the gearbox. As seen from Fig. 6.1, a

lower value of surface fatigue life factor is desirable for higher life of the gearbox. However, we

observe that as surface fatigue life factor decreases, the weight of the gearbox increases. This

means that a lighter gearbox has a larger surface fatigue life factor and hence a smaller lifetime.

It is intuitive to expect that a heavier gearbox would mean lower operating stresses in the gears,

and hence lower fatigue.

Another comparison can be made with the results obtained by Thompson et al [5] and

those presented here. A general observation can be made that for a given surface fatigue life

factor, the gearbox volume obtained by Thompson et al is slightly lower than those obtained

using the Branch and Bound Algorithm. This result is to be expected, since Thompson's solution

is a continuous variable solution and that presented here is a mixed variable solution. However,

the increase in volume for a given surface fatigue life factor is not significant, and this justifies

our expectation that the mixed variable problem solution shall lie in the vicinity of the continuous

variable solution. This validates the technique of using the design vector obtained from the

continuous variable solution as a starting point for the Branch and Bound Algorithm.

For the second case, we keep the overall speed reduction ratio constant at e = 0.1. We

then proceed to obtain the Pareto Optimal Curves for different values of input torque, Tin = 80,

120, 180, and 270 lb-in. The plots obtained are illustrated in the figure below.


[Figure: tradeoff curves for constant reduction ratio (e = 0.1), plotting Surface Life Factor against Volume (cubic in.) for T = 80, 120, 180, and 270 lb-in]

Fig. 6.5 Pareto Optimal Curves for Constant Speed Ratio


e = 0.1

T = 80 lb-in.

Vol. 12.9733 13.2944 14.4805 15.8081 16.9753 18.6938 20.9040 21.8406 24.5240

CLi 1.7749 1.7449 1.6944 1.5451 1.3707 1.1919 1.0777 0.9925 0.8990

T = 120 lb-in.

Vol. 18.7143 18.8249 19.8443 21.4528 23.5694 24.2719 25.4360 27.0994 36.7726

CLi 1.9344 1.9000 1.8398 1.8182 1.7186 1.6673 1.3945 1.3124 1.1700

T = 180 lb-in.

Vol. 27.6574 29.9579 31.6027 33.3165 37.9896 41.9605 47.0838 52.4715 56.4649

CLi 1.9817 1.8782 1.7409 1.5704 1.3752 1.2478 1.1584 1.0978 1.0221

T = 270 lb-in.

Vol. 40.8690 42.3795 45.6200 53.9178 59.8515 61.3885 67.4679 71.3577 75.4346

CLi 2.0389 1.9816 1.6219 1.4306 1.3479 1.3024 1.1745 1.0656 1.0221

Table 6.4 Pareto Optimal Solutions for Constant Speed Ratio

Again, we observe similar trends for the Pareto Optimal Curves. It can be expected that a

gearbox transmitting greater torque (Tin) shall have a greater weight than one transmitting a lower

torque, all other parameters held constant. Here too, we observe that surface fatigue life factor


has an inverse relationship with the volume (weight) of the gearbox. A similar reasoning can be

applied in this case too to explain the relationship.

One important difference between the results found in Thompson et al [5] and the Pareto

Optimal Curves presented here lies in the actual shape of the curve. Thompson et al have

obtained fairly smooth curves by systematically varying αV and αC. Since those are continuous

variable solutions, we can observe a clear inverse quadratic relationship between the surface

fatigue life factor and the volume of the gearbox. However, this is not the case in our results. We

find that the solutions do not trace a smooth curve; the results tend to deviate from an

apparent trend line. This fact brings out the importance of actually solving a mixed variable

problem as opposed to solving the simplified continuous variable problem and then rounding off

the design variables to a desired value. Such a rounding off may not always result in the optimal

solution. A rounding down could result in some of the constraints being violated and hence the

design rendered infeasible. This discreteness is why the Pareto Optimal Curves obtained here are

not smooth.
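A toy case (our own illustration, unrelated to the gearbox model) makes the rounding pitfall concrete: the continuous optimum can sit exactly on a constraint boundary, so the nearest integer is infeasible and only a proper mixed variable search reaches the feasible integer optimum.

```python
def feasible(x):
    """Toy constraint: 3x >= 10, i.e. x >= 10/3."""
    return 3 * x >= 10

x_cont = 10.0 / 3.0          # continuous optimum sits on the boundary
x_round = round(x_cont)      # nearest integer is 3
print(feasible(x_round))     # False: rounding violates the constraint
print(feasible(4))           # True: the feasible integer optimum is x = 4
```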


Chapter 7

Conclusion

7.1 Conclusion

We have illustrated an integrated approach for solving mixed variable, multi-objective

constrained non-linear optimization problems. Such a problem encompasses the whole gamut of

existing optimization problems. Problems such as linear programming, unconstrained

optimization, and single objective optimization are subsets of the mixed variable, multi-objective

constrained non-linear optimization problem. The method used combines the Compromise

Decision Support Problem Technique and a Branch and Bound Algorithm to arrive at a final

optimal solution. The approach used here can be justified for a variety of reasons. The

Compromise Decision Support Problem Technique is a Goal Programming technique, wherein

the deviation between a desired objective (target value) and achievable value is minimized. Thus,

our focus shifts from the values of the actual design variables themselves to the objective

function. In many actual engineering examples, this is advantageous, since our ultimate goal is to

minimize the objective function as long as the design variables satisfy the given problem

constraints.

As discussed earlier, the bounding function for the Branch and Bound Algorithm is the

most important parameter as far as its efficiency and convergence are concerned. The lower

bound for the algorithm sets a limit for the end solution obtained; hence it becomes important to

choose an appropriate bounding value. The solution obtained by solving the same problem with

continuous variables serves as an ideal bounding function for the Branch and Bound Algorithm.

It is an acceptable criterion, since the solution to the mixed variable minimization problem


cannot be lower than the one obtained for the continuous variable problem. Hence the deviation

from the continuous variable optimization problem serves as an objective function to be

minimized. The Branch and Bound Algorithm is shown to be more efficient when the starting

design vector is the optimal solution obtained by solving the continuous variable problem.

The Branch and Bound Algorithm is a systematic search procedure for solving problems

with mixed variables. The approach is adapted to both integer and discrete variable problems.

The solution to the mixed variable problem becomes increasingly important when designing

mechanical components that conform to certain specific standard dimensions. The optimization

and standardization process becomes integrated and no further computation is necessary after the

optimal solution is obtained.

7.2 Future Work

We have illustrated an integrated optimization approach as applied to designing

mechanical components such as the helical compression spring and the gear train. The approach needs

to be adapted to solving optimization problems from other fields of engineering such as control

systems, circuit design, and process engineering. Each problem presents the designer with

new challenges regarding problem formulation and seeking an optimal solution.

The solution to a multi-objective optimization problem is not a single optimal design, but

usually a large set of non-dominated or Pareto Optimal solutions. It remains the designer's job to

choose a solution from this Pareto Optimal Set that best suits the requirement. Such a decision

methodology remains an open research area. Though an optimization technique presents the


designer with a solution, the ultimate decision of selection still rests with the designer.

Thus optimization is merely a tool in assisting the designer in the quest for an ideal solution.

The Compromise Decision Support Problem Technique combined with the Branch and

Bound Algorithm is another approach to solving optimization problems. However, the actual

optimization process involves more than just the numerical solution to the optimization problem.

Problem formulation, choice of an initial design vector, and the minimization algorithm are some of

the factors that affect the end solution. These aspects of the optimization process remain largely a

trial and error procedure. Before solving a problem, the designer needs to utilize his experience

to make smart decisions, which would simplify the solution process.


References

[1] Mistree, F., J. Allen, H. Karandikar, J. Shupe, and E. Bascaran, Learning how to Design: A Minds-on, Hands-on, Decision Based Approach, 1995 (http://www.srl.gatech.edu/education/ME3110/textbook/textbook.pdf)

[2] Arora, J., Introduction to Optimum Design, New York: McGraw-Hill, 1989

[3] Sandgren, E., "Nonlinear integer and discrete programming in mechanical design," Advances in Design Automation, vol. 14, pp. 95-105, 1988

[4] Kannan, B., and S. Kramer, "Augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, Transactions of the ASME, vol. 116, no. 2, pp. 405-411, 1994

[5] Thompson, D., S. Gupta, and A. Shukla, "Tradeoff analysis in minimum volume design of multi-stage spur gear reduction units," Mechanism and Machine Theory, vol. 35, no. 5, pp. 609-627, 2000

[6] Marston, M., J. Allen, and F. Mistree, "Decision support problem technique: Integrating descriptive and normative approaches in decision based design," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 107-129, 2000

[7] Bras, B., and F. Mistree, "Compromise decision support problem for axiomatic and robust design," Journal of Mechanical Design, Transactions of the ASME, vol. 117, no. 1, pp. 10-19, 1995

[8] Sultan, A., Linear Programming: An Introduction with Applications, Boston: Academic Press, pp. 394-403, 1993

[9] Balling, R., "Pareto sets in decision-based design," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 189-198, 2000

[10] Kurapati, A., and S. Azarm, "Immune network simulation with multiobjective genetic algorithms for multidisciplinary design optimization," Engineering Optimization, vol. 33, no. 2, pp. 245-260, 2000

[11] El-Sayed, M., B. Ridgely, and E. Sandgren, "Nonlinear structural optimization using goal programming," Computers and Structures, vol. 32, no. 1, pp. 69-73, 1989

[12] Kasprzak, E., and K. Lewis, "Approach to facilitate decision tradeoffs in Pareto solution sets," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 173-187, 2000

[13] Kasprzak, E., and K. Lewis, "Pareto analysis in multiobjective optimization using the collinearity theorem and scaling method," Structural and Multidisciplinary Optimization, vol. 22, no. 3, pp. 208-218, 2001

[14] Gal, T., T. Stewart, and T. Hanne, Multicriteria Decision Making: Advances in MCDM Models, Algorithms, Theory, and Applications, Boston: Kluwer Academic, 1999

[15] Ignizio, J., Goal Programming and Extensions, Lexington, Mass.: Lexington Books, 1976

[16] Athan, T., and P. Papalambros, "Note on weighted criteria methods for compromise solutions in multi-objective optimization," Engineering Optimization, vol. 27, no. 2, pp. 155-176, 1996

[17] Mistree, F., and J. Allen, "Position paper optimization in decision-based design," Proceedings of the Conference on Optimization in Industry, pp. 135-142, 1997

[18] Chen, W., K. Lewis, and L. Schmidt, "Open workshop on decision-based design: Origin, status, promise, and future," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 2, pp. 57-66, 2000

[19] Wierzbicki, A., "Decision support methods and applications: The cross-sections of economics and engineering or environmental issues," Annual Reviews in Control, vol. 24, pp. 9-19, 2000

[20] Chen, W., K. Lewis, and L. Schmidt, "Decision-based design: An emerging design perspective," Journal of Engineering Valuation and Cost Analysis, vol. 3, no. 1, pp. 57-66, 2000

[21] Sierksma, G., Linear and Integer Programming: Theory and Practice, New York: Marcel Dekker, pp. 213-220, 326-329, 2000

[22] Mavrotas, G., and D. Diakoulaki, "Branch and bound algorithm for mixed zero-one multiple objective linear programming," European Journal of Operational Research, vol. 107, no. 3, pp. 530-541, 1998

[23] Kesavan, P., and P. Barton, "Generalized branch-and-cut framework for mixed-integer nonlinear optimization problems," Computers and Chemical Engineering, vol. 24, no. 2, pp. 1361-1366, 2000

[24] Yokota, T., T. Taguchi, and M. Gen, "Solution method for optimal weight design problem of the gear using genetic algorithms," Computers & Industrial Engineering, vol. 35, no. 3-4, pp. 523-526, 1998

[25] Pomrehn, L., and P. Papalambros, "Discrete optimal design formulations with application to gear train design," Journal of Mechanical Design, Transactions of the ASME, vol. 117, no. 3, pp. 419-424, 1995

[26] Savage, M., S. Lattime, J. Kimmel, and H. Coe, "Optimal design of compact spur gear reductions," Journal of Mechanical Design, Transactions of the ASME, vol. 116, no. 3, pp. 690-696, 1994

[27] Wang, H., and H. Wang, "Optimal engineering design of spur gear sets," Mechanism & Machine Theory, vol. 29, no. 7, pp. 1071-1080, 1994

[28] Andrews, G., and J. Argent, "Computer-aided optimal gear design," American Society of Mechanical Engineers, Design Engineering Division (Publication) DE, vol. 43, no. 1, pp. 391-39, 1992

[29] Juvinall, R., and K. Marshek, Fundamentals of Machine Component Design, New York: John Wiley, 2000


Appendix 1

The complete solution set for the Gear Train Design Problem presented in Chapter 6 is

listed here. The complete design vector is given as,

x = [ dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2 ]

where,

dp : Pinion Diameter for each stage

dg : Gear Diameter for each stage.

b : Face Width of the mating gears in each stage.

P : Diametral Pitch of the mating gears in each stage.

H : Brinell Hardness of the mating gears in each stage.

ds : Shaft Diameter.

Tin : Input Torque to the gear-box.

e : Overall Speed Reduction Ratio of the gear-box.

Np : Number of teeth for the pinion in each stage.

CLi : Surface Fatigue Life Factor.


T = 120 lb-in, e = 0.05

dp1 0.8331 0.8356 1.1262 1.1568 1.1687 1.0981 1.4152 1.4400 1.4451

dg1 2.4245 2.4600 3.0434 3.1622 2.9593 2.9731 3.9269 3.9320 4.2646

b1 0.7258 0.7214 0.4831 0.4732 0.5641 0.5913 0.4127 0.4342 0.4311

P1 19 19 19 19 25 24 23 25 25

H1 500 500 500 500 500 500 500 500 500

dp2 1.3741 1.4758 1.1595 1.4189 1.3462 1.3280 1.5097 1.5881 1.8118

dg2 3.6792 3.8221 3.2714 3.9249 3.7896 3.9211 4.4139 4.5362 5.3487

b2 0.7764 0.7416 1.0124 0.7449 0.8268 0.8082 0.7711 0.7719 0.6409

P2 12 12 14 13 15 13 15 16 14

H2 500 500 500 500 500 500 500 500 500

dp3 1.9099 1.6375 1.6558 1.9560 1.6010 2.2526 2.6603 2.6098 3.4319

b3 1.0760 1.4323 1.4008 1.0405 1.4008 0.9654 0.8614 0.8604 0.7472

P3 8 10 10 9 10 9 11 11 12

H3 500 500 500 500 500 500 500 500 500

ds1 0.4144 0.4160 0.4042 0.4058 0.3956 0.4045 0.4078 0.4056 0.4163

ds2 0.5754 0.5712 0.5712 0.5696 0.5586 0.5803 0.5832 0.5756 0.5972

Vol. 38.1219 38.9422 38.8847 40.9414 41.4795 45.0417 54.3028 56.4096 67.9248

CLi 1.9032 1.8507 1.7898 1.6938 1.5859 1.4753 1.3156 1.2281 1.1172

Np1 16 16 21 22 29 26 32 36 36

Np2 16 18 16 18 20 17 22 26 26

Np3 16 16 16 17 16 21 28 28 42


T = 120 lb-in, e = 0.0667

dp1 0.8331 1.0833 1.0279 1.2716 1.5070 1.3818 1.5328 1.4482 1.4400

dg1 2.4245 2.3792 2.4488 2.8585 3.0583 3.0521 3.4885 3.4046 3.5062

b1 0.7258 0.5570 0.5601 0.4409 0.3884 0.4329 0.3832 0.4293 0.4342

P1 19 21 19 20 23 23 23 25 25

H1 500 500 500 500 500 500 500 500 500

dp2 1.3741 1.0819 1.1121 1.1664 1.3501 1.3003 1.6267 1.6106 1.9270

dg2 3.6792 2.8793 2.9346 3.1285 3.6717 3.4567 4.3073 4.2235 4.9184

b2 0.7764 0.9452 0.9703 0.9067 0.6588 0.8274 0.6131 0.6810 0.5420

P2 12 15 14 15 15 17 16 17 17

H2 500 500 500 500 500 500 500 500 500

dp3 1.9099 1.4985 2.0220 1.7214 1.5694 2.0137 2.4104 2.6265 2.8181

b3 1.0760 1.3112 0.9099 1.2042 1.1797 0.9171 0.7793 0.7387 0.7045

P3 8 11 10 12 11 11 12 12 13

H3 500 500 500 500 500 500 500 500 500

ds1 0.4144 0.3773 0.3876 0.3802 0.3674 0.3780 0.3818 0.3859 0.3904

ds2 0.5754 0.5228 0.5356 0.5282 0.5129 0.5236 0.5281 0.5322 0.5336

Vol. 28.1000 28.8442 30.9366 32.7868 33.8369 35.9518 41.4818 44.5735 48.1748

CLi 1.9542 1.8015 1.7590 1.6103 1.5413 1.4305 1.2599 1.1973 1.1275

Np1 16 23 20 26 35 32 36 36 36

Np2 16 16 16 18 20 22 26 28 32

Np3 16 16 20 20 17 22 28 32 36


T = 120 lb-in, e = 0.1

dp1 0.9872 1.0369 1.1549 0.9182 1.0409 0.9984 1.4408 1.4400 1.7600

dg1 1.7439 1.7914 2.0339 1.7473 2.0817 1.8951 2.4196 2.5330 2.9132

b1 0.6072 0.5832 0.4748 0.6766 0.5265 0.6353 0.4337 0.4342 0.3600

P1 20 15 19 21 18 22 25 25 25

H1 500 306 500 500 500 500 500 500 410

dp2 1.1650 0.9982 1.1446 1.1498 1.1949 1.1845 1.3087 1.4587 1.4624

dg2 2.7112 2.3598 2.7403 2.7195 2.8864 2.8697 3.0583 3.4744 3.4830

b2 0.6556 0.8734 0.6771 0.7250 0.7056 0.7122 0.6991 0.5894 0.6398

P2 14 16 14 14 13 14 20 18 22

H2 500 500 500 500 500 500 500 500 500

dp3 1.4569 1.5407 1.6662 1.6712 1.9855 2.1157 1.7685 2.1396 1.8098

b3 0.9757 0.8667 0.8333 1.0206 0.9440 0.7324 0.7947 0.6877 0.9045

P3 11 10 11 13 15 12 12 13 15

H3 500 500 500 500 500 500 500 500 500

ds1 0.3508 0.3483 0.3505 0.3597 0.3657 0.3593 0.3450 0.3503 0.3433

ds2 0.4649 0.4639 0.4689 0.4792 0.4906 0.4826 0.4578 0.4679 0.4585

Vol. 18.7143 18.8249 19.8443 21.4528 23.5694 24.2719 25.4360 27.0994 36.7726

CLi 1.9344 1.9000 1.8398 1.8182 1.7186 1.6673 1.3945 1.3124 1.1700

Np1 20 16 22 19 19 22 36 36 44

Np2 16 16 16 16 16 17 26 26 32

Np3 16 16 18 22 29 26 22 28 28


T = 120 lb-in, e = 0.15

dp1 1.2033 1.1496 1.0024 1.0031 1.3272 1.2715 1.7600 1.7600 1.9200

dg1 1.4692 1.4655 1.3371 1.3745 1.8799 1.8383 2.2520 2.3595 2.4931

b1 0.5573 0.5891 0.7457 0.7391 0.4266 0.4648 0.3600 0.3600 0.3600

P1 16 16 16 19 21 22 25 25 25

H1 258 260 262 285 500 500 410 412 333

dp2 0.8947 1.1520 1.0610 1.1933 1.3271 1.4951 1.4522 1.4647 1.4466

dg2 2.0262 2.4730 2.4197 2.3831 2.6714 2.9715 3.1102 3.0875 3.2326

b2 0.7683 0.5480 0.5968 0.6672 0.5734 0.4860 0.5015 0.5626 0.5586

P2 18 16 15 21 20 19 22 25 25

H2 500 500 500 500 500 500 500 500 500

dp3 1.3528 1.3341 1.4482 1.5323 1.5636 1.5611 1.6029 1.8611 2.2783

b3 0.7610 0.8095 0.9186 0.6897 0.7856 0.8406 0.8015 0.7045 0.5442

P3 12 13 15 13 15 17 17 18 18

H3 500 500 500 500 500 500 500 500 500

ds1 0.3102 0.3147 0.3195 0.3224 0.3259 0.3282 0.3151 0.3200 0.3166

ds2 0.4074 0.4060 0.4205 0.4059 0.4115 0.4126 0.4061 0.4103 0.4140

Vol. 12.8155 13.4837 14.5645 15.0679 16.4025 17.1901 18.9693 21.0447 22.9703

CLi 1.8499 1.7604 1.6686 1.6046 1.5149 1.4611 1.2256 1.1327 1.0798

Np1 20 18 16 19 28 28 44 44 48

Np2 16 19 16 25 26 28 32 36 36

Np3 16 17 22 20 24 26 28 34 40


e = 0.1, T = 80 lb-in

dp1 1.0096 1.0378 0.9532 0.8342 1.2800 1.4400 1.5200 1.6800 1.8400

dg1 1.5852 1.6599 1.6249 1.5130 2.1937 2.3382 2.4819 2.5717 2.7760

b1 0.5199 0.4838 0.5409 0.6662 0.3600 0.3600 0.3600 0.3600 0.3600

P1 18 19 20 21 25 25 25 25 25

H1 258 286 289 283 405 354 290 250 250

dp2 0.9184 0.8672 1.0057 1.0031 1.1508 1.4140 1.5257 1.5923 1.6800

dg2 2.3176 2.1900 2.5442 2.5405 2.6971 3.3534 3.7232 3.9699 4.3006

b2 0.6251 0.7143 0.5660 0.6053 0.6151 0.4475 0.4039 0.3771 0.3600

P2 17 18 16 16 23 23 22 24 25

H2 500 500 500 500 500 500 500 500 500

dp3 1.1506 1.1985 1.5586 1.8208 1.5206 1.5035 1.7337 1.6877 1.8854

b3 1.0050 0.9870 0.7252 0.6585 0.7801 0.8096 0.7305 0.7384 0.6786

P3 14 14 13 14 16 17 18 19 20

H3 500 500 500 500 500 500 500 500 500

ds1 0.2947 0.2965 0.3029 0.3092 0.3034 0.2980 0.2986 0.2922 0.2908

ds2 0.4012 0.4038 0.4127 0.4215 0.4030 0.3974 0.4019 0.3962 0.3978

Vol. 12.9733 13.2944 14.4805 15.8081 16.9753 18.6938 20.9040 21.8406 24.5240

CLi 1.7749 1.7449 1.6944 1.5451 1.3707 1.1919 1.0777 0.9925 0.8990

Np1 18 20 19 17 32 36 38 42 46

Np2 16 16 16 16 26 32 34 38 42

Np3 16 17 21 26 24 26 32 32 38


e = 0.1, T = 120 lb-in

dp1 0.9872 1.0369 1.1549 0.9182 1.0409 0.9984 1.4408 1.4400 1.7600

dg1 1.7439 1.7914 2.0339 1.7473 2.0817 1.8951 2.4196 2.5330 2.9132

b1 0.6072 0.5832 0.4748 0.6766 0.5265 0.6353 0.4337 0.4342 0.3600

P1 20 15 19 21 18 22 25 25 25

H1 500 306 500 500 500 500 500 500 410

dp2 1.1650 0.9982 1.1446 1.1498 1.1949 1.1845 1.3087 1.4587 1.4624

dg2 2.7112 2.3598 2.7403 2.7195 2.8864 2.8697 3.0583 3.4744 3.4830

b2 0.6556 0.8734 0.6771 0.7250 0.7056 0.7122 0.6991 0.5894 0.6398

P2 14 16 14 14 13 14 20 18 22

H2 500 500 500 500 500 500 500 500 500

dp3 1.4569 1.5407 1.6662 1.6712 1.9855 2.1157 1.7685 2.1396 1.8098

b3 0.9757 0.8667 0.8333 1.0206 0.9440 0.7324 0.7947 0.6877 0.9045

P3 11 10 11 13 15 12 12 13 15

H3 500 500 500 500 500 500 500 500 500

ds1 0.3508 0.3483 0.3505 0.3597 0.3657 0.3593 0.3450 0.3503 0.3433

ds2 0.4649 0.4639 0.4689 0.4792 0.4906 0.4826 0.4578 0.4679 0.4585

Vol. 18.7143 18.8249 19.8443 21.4528 23.5694 24.2719 25.4360 27.0994 36.7726

CLi 1.9344 1.9000 1.8398 1.8182 1.7186 1.6673 1.3945 1.3124 1.1700

Np1 20 16 22 19 19 22 36 36 44

Np2 16 16 16 16 16 17 26 26 32

Np3 16 16 18 22 29 26 22 28 28


e = 0.1, T = 180 lb-in

dp1 1.0835 1.2179 1.3095 1.5192 1.5321 1.5930 1.5880 1.6805 2.0766

dg1 1.9737 2.2604 2.3874 2.6472 2.7243 2.8592 2.9489 3.0669 3.6962

b1 0.7011 0.5769 0.5540 0.4883 0.5282 0.5322 0.5573 0.5366 0.3900

P1 17 16 17 18 21 23 24 25 23

H1 500 500 500 500 500 500 500 500 500

dp2 1.3475 1.1703 1.2450 1.4393 1.8406 1.7219 2.0147 1.8508 2.0092

dg2 3.0738 2.7346 2.9066 3.3287 4.1515 3.8782 4.6741 4.2418 4.5327

b2 0.7580 1.0240 0.9681 0.7990 0.5916 0.7506 0.5673 0.7195 0.6682

P2 12 14 14 15 15 19 16 19 21

H2 500 500 500 500 500 500 500 500 500

dp3 1.7735 1.9861 1.7734 1.7224 2.2066 2.0908 2.4641 2.4400 2.4076

b3 0.9981 0.9408 1.2014 1.2057 0.8324 1.0423 0.9583 0.9488 0.9356

P3 9 10 11 12 11 13 15 15 15

H3 500 500 500 500 500 500 500 500 500

ds1 0.4058 0.4083 0.4059 0.3998 0.4025 0.4037 0.4084 0.4060 0.4026

ds2 0.5341 0.5418 0.5384 0.5287 0.5279 0.5292 0.5406 0.5353 0.5281

Vol. 27.6574 29.9579 31.6027 33.3165 37.9896 41.9605 47.0838 52.4715 56.4649

CLi 1.9817 1.8782 1.7409 1.5704 1.3752 1.2478 1.1584 1.0978 1.0221

Np1 18 19 22 28 32 36 38 42 48

Np2 16 16 18 22 28 32 32 36 42

Np3 16 19 20 20 24 28 36 36 36


e = 0.1, T = 270 lb-in

dp1 1.2616 1.3515 1.7277 1.8065 1.6544 1.8768 2.0201 2.1874 2.0640

dg1 2.3770 2.4680 2.7888 3.1481 3.0191 3.4645 3.6934 3.8579 3.6813

b1 0.7121 0.6759 0.6231 0.5699 0.6795 0.5280 0.5166 0.5097 0.5922

P1 13 13 19 18 19 17 19 21 23

H1 500 500 500 500 500 500 500 500 500

dp2 1.3623 1.3353 1.5347 1.8619 1.7961 1.9415 2.3752 2.2911 2.6598

dg2 3.0386 3.0454 3.5718 4.2519 4.1778 4.3498 5.3703 5.0758 5.9982

b2 1.1507 1.1608 0.9766 0.8064 0.9074 0.8696 0.6297 0.7637 0.5731

P2 12 12 14 14 14 15 14 18 16

H2 500 500 500 500 500 500 500 500 500

dp3 1.7977 1.9619 1.6946 2.3828 2.4346 2.5521 2.7692 2.5515 2.7884

b3 1.4739 1.2263 1.4827 0.9988 1.2107 1.0735 1.0025 1.1163 1.0480

P3 9 8 9 9 12 11 12 13 13

H3 500 500 500 500 500 500 500 500 500

ds1 0.4697 0.4648 0.4461 0.4577 0.4647 0.4665 0.4650 0.4595 0.4612

ds2 0.6137 0.6119 0.5912 0.6027 0.6158 0.6104 0.6104 0.5990 0.6048

Vol. 40.869 42.3795 45.62 53.9178 59.8515 61.3885 67.4679 71.3577 75.4346

CLi 2.0389 1.9816 1.6219 1.4306 1.3479 1.3024 1.1745 1.0656 1.0221

Np1 16 18 32 32 32 32 38 46 48

Np2 16 16 22 26 26 30 34 42 42

Np3 16 16 16 22 28 28 32 32 36


Appendix 2

This appendix lists all the Matlab® files necessary to run the Compromise Decision

Support Problem Method and the Branch and Bound Algorithm. They are arranged into four

sections.

1. This section contains three files: spring_main.m, spring_fun.m and spring_con.m. These

files solve the general constrained non-linear optimization problem using fmincon. Run

spring_main.m to obtain the solution. The design vector resulting from this optimization

routine is used as a starting vector to the Branch and Bound Algorithm, and the optimal

objective function is used as a bounding function.

2. This section contains four files: spring_bnb.m, spring_fun.m, spring_con.m and bnb.m.

These files solve the Compromise Decision Support Problem with the Branch

and Bound Algorithm. Run spring_bnb.m, which calls bnb.m to solve the mixed variable

optimization problem.

3. This section contains three files: gear_main.m, gear_fun.m and gear_con.m. These files

solve the general constrained non-linear optimization problem using fmincon. Run

gear_main.m to obtain the solution.

4. This section contains four files: gear_bnb.m, gear_fun.m, gear_con.m and bnb.m. These

files solve the Compromise Decision Support Problem with the Branch and

Bound Algorithm. Run gear_bnb.m, which calls bnb.m to solve the mixed variable

optimization problem.
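The two-stage workflow the four sections describe, solving the continuous relaxation first and then warm-starting the Branch and Bound search from its solution, can be sketched as follows. This is hypothetical driver code, not one of the thesis listings; it assumes the problem data (A, b, Aeq, beq, lb, ub, x_status and options) have already been set up as in the files below.

```matlab
% Stage 1 (Section 1): continuous relaxation of the spring problem.
% All design variables are treated as real-valued.
x_cont = fmincon('spring_fun', x0, A, b, Aeq, beq, lb, ub, 'spring_con', options);

% Stage 2 (Section 2): mixed-variable problem. The relaxed optimum seeds the
% Branch and Bound search and its objective value acts as an initial bound;
% x_status flags which entries of the design vector must be integer.
[errmsg, Z, X, t, c, fail] = bnb('spring_fun', x_cont, x_status, lb, ub, ...
                                 A, b, Aeq, beq, 'spring_con');
```

Note that in the actual spring listings the Stage 2 design vector is longer than the Stage 1 vector (it appends the 0-1 wire-diameter selection variables y), so the relaxed solution must be augmented with those entries before the bnb call, as spring_bnb.m does when it builds x0.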


Section 1

% filename : spring_main.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the general constrained
%            non-linear optimization spring design problem. The objective is
%            to minimize the weight of the spring subject to constraints
%            listed in Chapter 5. It calls spring_fun.m and spring_con.m

clc; clear all; close all;

global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta delta_p k

S = 189000;        % Allowable Shear Stress (psi)
G = 1.15e8;        % Shear Modulus
Fmax = 1000;       % Maximum Working Load (lb)
lmax = 14;         % Maximum Free Length (in)
dmin = 0.2;        % Minimum Wire Diameter (in)
Dmax = 3.0;        % Maximum Outer Diameter (in)
Fp = 300;          % Preload (lb)
delta_pm = 6.0;    % Maximum deflection under preload (in)
delta_w = 1.25;    % Maximum deflection for working load (in)

x0 = [2.0 0.375 10.0];   % Initial Guess Vector
% x0(1) is the Coil Diameter (in)
% x0(2) is the Wire Diameter (in)
% x0(3) is the number of Turns

disp('The starting design vector is :');
disp(x0);

f_ini = spring_fun(x0);   % Evaluate the Objective Function at the beginning
disp('The starting objective function is :');
disp(f_ini);

x = x0;
C = x(1)/x(2);                       % Spring Index
K = (4*C - 1)/(4*C - 4) + 0.615/C;   % Curvature Factor
k = (x(2)^4)*G/(8*(x(1)^3)*x(3));    % Spring Rate (lb/in)
delta = Fmax/k;                      % Deflection under maximum working load (in)
lf = delta + 1.05*(x(3) + 2)*x(2);   % Free Length (in)
delta_p = Fp/k;                      % Deflection under preload (in)

A = []; b = [];       % No linear inequality constraints
Aeq = []; beq = [];   % No linear equality constraints
lb = [1 0.2 3];       % Lower bound on design variables
ub = [3 0.5 20];      % Upper bound on design variables

options = optimset('Display','iter');   % Show progress after each iteration

x = fmincon('spring_fun', x0, A, b, Aeq, beq, lb, ub, 'spring_con', options);

disp('The optimized design vector is :');
disp(x);

f_fin = spring_fun(x);   % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);
% End


% filename : spring_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function

function f = spring_fun(x)

f = pi^2*x(1)*(x(2)^2)*(x(3) + 2)/4;
% End


% filename : spring_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file contains the non-linear constraints

function [g, h] = spring_con(x)

global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta lf delta_p k

C = x(1)/x(2);
K = (4*C - 1)/(4*C - 4) + 0.615/C;
k = (x(2)^4)*G/(8*(x(1)^3)*x(3));
delta = Fmax/k;
lf = delta + 1.05*(x(3) + 2)*x(2);
delta_p = Fp/k;

% Inequality Constraints
g(1) = 8*K*Fmax*x(1)/(S*3.142*((x(2))^3)) - 1;
g(2) = lf/lmax - 1;
g(3) = (x(1) + x(2))/Dmax - 1;
g(4) = 1 - C/3;
g(5) = delta_p/delta_pm - 1;
g(6) = (delta_p - (Fmax - Fp)/k - 1.05*(x(3) + 2)*x(2))/lf - 1;
g(7) = (Fmax - Fp)/(k*delta_w) - 1;

% Equality Constraints
h = [];
% End


Section 2

% filename : spring_bnb.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the Compromise DSP and
%            Branch and Bound Formulation of the spring design problem. It
%            calls bnb.m which is the Branch and Bound program, subject to
%            constraints specified in spring_con.m and optimizes the
%            objective function in spring_fun.m

clc; clear all; close all;

global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta delta_p k

S = 189000;        % Allowable Shear Stress (psi)
G = 1.15e8;        % Shear Modulus
Fmax = 1000;       % Maximum Working Load (lb)
lmax = 14;         % Maximum Free Length (in)
dmin = 0.2;        % Minimum Wire Diameter (in)
Dmax = 3.0;        % Maximum Outer Diameter (in)
Fp = 300;          % Preload (lb)
delta_pm = 6.0;    % Maximum deflection under preload (in)
delta_w = 1.25;    % Maximum deflection for working load (in)

% Allowable wire diameters :
wire_diameters = [ 0.207 0.225 0.244 0.263 0.283 0.307 0.331 0.362 0.394 0.437 0.500 ];
n = length(wire_diameters);

y0 = [0; 0; 0; 1; 0; 0; 0; 0; 0; 0; 0];
x0 = [2.0000; 0.375; 10.0000; y0];   % Initial Guess Vector
% x0(1) is the Coil Diameter (in)
% x0(2) is the Wire Diameter (in)
% x0(3) is the number of Turns

% x_status should be a column vector such that:
% x_status(i) = 0 if x(i) is continuous,
% x_status(i) = 1 if x(i) is integer,
x_status = [ 0; 0; 1; ones(n,1) ];

disp('The starting design vector is :');
disp(x0(1:3));

f_ini = spring_fun(x0);   % Evaluate the Objective Function at the beginning
disp('The starting objective function is :');
disp(f_ini);

x = x0;
C = x(1)/x(2);                       % Spring Index
K = (4*C - 1)/(4*C - 4) + 0.615/C;   % Curvature Factor
k = (x(2)^4)*G/(8*(x(1)^3)*x(3));    % Spring Rate (lb/in)
delta = Fmax/k;                      % Deflection under maximum working load (in)
lf = delta + 1.05*(x(3) + 2)*x(2);   % Free Length (in)
delta_p = Fp/k;                      % Deflection under preload (in)

A = []; b = [];   % Linear inequality constraints
Aeq = [ 0, -1, 0, wire_diameters; 0, 0, 0, ones(1,n) ];
beq = [ 0; 1 ];   % Linear equality constraints

ymin = zeros(n,1);
lb = [1; 0.2; 3; ymin];    % Lower bound on design variables
ymax = ones(n,1);
ub = [3; 0.5; 20; ymax];   % Upper bound on design variables

options.MaxFunEvals = 1e6;

[errmsg,Z,X,t,c,fail] = bnb('spring_fun',x0,x_status,lb,ub,A,b,Aeq,beq,'spring_con');

disp('The optimized design vector is :');
disp(X(1:3));

f_fin = spring_fun(X);   % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);

disp('The number of Branch and Bound Cycles are :');
disp(c);

disp('The time Branch and Bound Algorithm ran is :');
disp(t);
% End


% filename : spring_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function

function f = spring_fun(x)

f = pi^2*x(1)*(x(2)^2)*(x(3) + 2)/4;
% End


% filename : spring_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file contains the non-linear constraints

function [g, h] = spring_con(x)

global S G Fmax lmax dmin Dmax Fp delta_pm delta_w C K delta lf delta_p k

C = x(1)/x(2);
K = (4*C - 1)/(4*C - 4) + 0.615/C;
k = (x(2)^4)*G/(8*(x(1)^3)*x(3));
delta = Fmax/k;
lf = delta + 1.05*(x(3) + 2)*x(2);
delta_p = Fp/k;

% Inequality Constraints
g(1,1) = 8*K*Fmax*x(1)/(S*3.142*((x(2))^3)) - 1;
g(2,1) = lf/lmax - 1;
g(3,1) = (x(1) + x(2))/Dmax - 1;
g(4,1) = 1 - C/3;
g(5,1) = delta_p/delta_pm - 1;
g(6,1) = (delta_p - (Fmax - Fp)/k - 1.05*(x(3) + 2)*x(2))/lf - 1;
g(7,1) = (Fmax - Fp)/(k*delta_w) - 1;

% Equality Constraints
h = [];
% End


function [errmsg,Z,X,t,c,fail] = bnb(fun,x0,xstat,xl,xu,a,b,aeq,beq,nonlc,setts,opts,varargin);
% BNB20 Finds the constrained minimum of a function of several possibly integer variables.
% Usage: [errmsg,Z,X,t,c,fail] =
%    bnb(fun,x0,x_status,lb,ub,A,B,Aeq,Beq,nonlcon,settings,options,P1,P2,...)
%
% BNB solves problems of the form:
%    Minimize F(x) subject to: lb <= x0 <= ub
%    A*x <= B      Aeq*x = Beq
%    C(x) <= 0     Ceq(x) = 0
%    x(i) is continuous for xstatus(i) = 0
%    x(i) integer for xstatus(i) = 1
%    x(i) fixed for xstatus(i) = 2
%
% fun is the function to be minimized and should return a scalar. F(x) = feval(fun,x).
% x0 is the starting point for x. x0 should be a column vector.
% x_status is a column vector describing the status of every variable x(i).
% lb and ub are column vectors with lower and upper bounds for x.
% A and Aeq are matrices for the linear constraints.
% B and Beq are column vectors for the linear constraints.
% nonlcon is the function for the non-linear constraints.
% [C(x);Ceq(x)] = feval(nonlcon,x). Both C(x) and Ceq(x) should be column vectors.
%
% errmsg is a string containing an error message if BNB found an error in the input.
% Z is the scalar result of the minimization,
% X the values of the accompanying variables.
% t is the time elapsed while the algorithm BNB has run and
% c is the number of BNB cycles.
% fail is the number of nonconvergent leaf sub-problems.
%
% settings is a row vector with settings for BNB:
% settings(1) (standard 0) if 1: if the sub-problem does not converge do not branch it and
%    raise fail by one. Normally BNB will always branch a nonconvergent sub-problem so
%    it can try again to find a solution.
%    A sub-problem that is a leaf of the branch-and-bound tree cannot be branched. If such
%    a problem does not converge it will be considered infeasible and fail will be raised by
%    one.

global maxSQPiter;

% STEP 0 CHECKING INPUT
Z=[]; X=[]; t=0; c=0; fail=0;
if nargin<2, errmsg='BNB needs at least 2 input arguments.'; return; end;
if isempty(fun), errmsg='No fun found.'; return;
elseif ~ischar(fun), errmsg='fun must be a string.'; return; end;
if isempty(x0), errmsg='No x0 found.'; return;
elseif ~isnumeric(x0) | ~isreal(x0) | size(x0,2)>1
   errmsg='x0 must be a real column vector.'; return;
end;
xstatus=zeros(size(x0));
if nargin>2 & ~isempty(xstat)
   if isnumeric(xstat) & isreal(xstat) & all(size(xstat)<=size(x0))
      if all(xstat==round(xstat) & 0<=xstat & xstat<=2)
         xstatus(1:size(xstat))=xstat;
      else errmsg='xstatus must consist of the integers 0, 1 and 2.'; return; end;
   else errmsg='xstatus must be a real column vector the same size as x0.'; return; end;
end;
lb=zeros(size(x0));
lb(find(xstatus==0))=-inf;
if nargin>3 & ~isempty(xl)
   if isnumeric(xl) & isreal(xl) & all(size(xl)<=size(x0))
      lb(1:size(xl,1))=xl;
   else errmsg='lb must be a real column vector the same size as x0.'; return; end;
end;
if any(x0<lb), errmsg='x0 must be in the range lb <= x0.'; return;
elseif any(xstatus==1 & (~isfinite(lb) | lb~=round(lb)))
   errmsg='lb(i) must be an integer if x(i) is an integer variable.'; return;
end;
lb(find(xstatus==2))=x0(find(xstatus==2));
ub=ones(size(x0));
ub(find(xstatus==0))=inf;
if nargin>4 & ~isempty(xu)
   if isnumeric(xu) & isreal(xu) & all(size(xu)<=size(x0))
      ub(1:size(xu,1))=xu;
   else errmsg='ub must be a real column vector the same size as x0.'; return; end;
end;
if any(x0>ub), errmsg='x0 must be in the range x0 <= ub.'; return;
elseif any(xstatus==1 & (~isfinite(ub) | ub~=round(ub)))
   errmsg='ub(i) must be an integer if x(i) is an integer variable.'; return;
end;
ub(find(xstatus==2))=x0(find(xstatus==2));
A=[];
if nargin>5 & ~isempty(a)
   if isnumeric(a) & isreal(a) & size(a,2)==size(x0,1), A=a;
   else errmsg='Matrix A not correct.'; return; end;
end;
B=[];
if nargin>6 & ~isempty(b)
   if isnumeric(b) & isreal(b) & all(size(b)==[size(A,1) 1]), B=b;
   else errmsg='Column vector B not correct.'; return; end;
end;
if isempty(B) & ~isempty(A), B=zeros(size(A,1),1); end;
Aeq=[];
if nargin>7 & ~isempty(aeq)
   if isnumeric(aeq) & isreal(aeq) & size(aeq,2)==size(x0,1), Aeq=aeq;
   else errmsg='Matrix Aeq not correct.'; return; end;
end;
Beq=[];
if nargin>8 & ~isempty(beq)
   if isnumeric(beq) & isreal(beq) & all(size(beq)==[size(Aeq,1) 1]), Beq=beq;
   else errmsg='Column vector Beq not correct.'; return; end;
end;
if isempty(Beq) & ~isempty(Aeq), Beq=zeros(size(Aeq,1),1); end;
nonlcon='';
if nargin>9 & ~isempty(nonlc)
   if ischar(nonlc), nonlcon=nonlc;
   else errmsg='nonlcon must be a string.'; return; end;
end;
settings = [0 0];
if nargin>10 & ~isempty(setts)
   if isnumeric(setts) & isreal(setts) & all(size(setts)<=size(settings))
      settings(setts~=0)=setts(setts~=0);
   else errmsg='settings should be a row vector of length 1 or 2.'; return; end;
end;
maxSQPiter=1000;
% 10/17/01 KMS: BEGIN
% options=optimset('fmincon');
options = optimset( optimset('fmincon'), 'MaxSQPIter', 1000);
% 10/17/01 KMS: END
if nargin>11 & ~isempty(opts)
   if isstruct(opts)
      if isfield(opts,'MaxSQPIter')
         if isnumeric(opts.MaxSQPIter) & isreal(opts.MaxSQPIter) & ...
               all(size(opts.MaxSQPIter)==1) & opts.MaxSQPIter>0 & ...
               round(opts.MaxSQPIter)==opts.MaxSQPIter
            maxSQPiter=opts.MaxSQPIter;
            opts=rmfield(opts,'MaxSQPIter');
         else errmsg='options.MaxSQPIter must be an integer >0.'; return; end;
      end;
      options=optimset(options,opts);
   else errmsg='options must be a structure.'; return; end;
end;
evalreturn=0;
eval(['z=',fun,'(x0,varargin{:});'],'errmsg=''fun caused error.''; evalreturn=1;');
if evalreturn==1, return; end;
if ~isempty(nonlcon)
   eval(['[C, Ceq]=',nonlcon,'(x0,varargin{:});'],'errmsg=''nonlcon caused error.''; evalreturn=1;');
   if evalreturn==1, return; end;
   if size(C,2)>1 | size(Ceq,2)>1, errmsg='C and Ceq must be column vectors.'; return; end;
end;

% STEP 1 INITIALIZATION
currentwarningstate=warning;
warning off;
tic;
lx = size(x0,1);
z_incumbent=inf;
x_incumbent=inf*ones(size(x0));
I = ceil(sum(log2(ub(find(xstatus==1))-lb(find(xstatus==1))+1))+size(find(xstatus==1),1)+1);
stackx0=zeros(lx,I);
stackx0(:,1)=x0;
stacklb=zeros(lx,I);
stacklb(:,1)=lb;
stackub=zeros(lx,I);
stackub(:,1)=ub;
stackdepth=zeros(1,I);
stackdepth(1,1)=1;
stacksize=1;
xchoice=zeros(size(x0));
if ~isempty(Aeq)
   j=0;
   for i=1:size(Aeq,1)
      if Beq(i)==1 & all(Aeq(i,:)==0 | Aeq(i,:)==1)
         J=find(Aeq(i,:)==1);
         if all(xstatus(J)~=0 & xchoice(J)==0 & lb(J)==0 & ub(J)==1)
            if all(xstatus(J)~=2) | all(x0(J(find(xstatus(J)==2)))==0)
               j=j+1;
               xchoice(J)=j;
               if sum(x0(J))==0, errmsg='x0 not correct.'; return; end;
            end;
         end;
      end;
   end;
end;
errx=optimget(options,'TolX');
handleupdate=[];
if ishandle(settings(2))
   taghandlemain=get(settings(2),'Tag');
   if strcmp(taghandlemain,'main BNB GUI')
      handleupdate=guiupd;
      handleupdatemsg=findobj(handleupdate,'Tag','updatemessage');
      bnbguicb('hide main');
      drawnow;
   end;
end;
optionsdisplay=getfield(options,'Display');
if strcmp(optionsdisplay,'iter') | strcmp(optionsdisplay,'final')
   show=1;
else show=0; end;

% STEP 2 TERMINATION
while stacksize>0
   c=c+1;

   % STEP 3 LOADING OF CSP
   x0=stackx0(:,stacksize);
   lb=stacklb(:,stacksize);
   ub=stackub(:,stacksize);
   x0(find(x0<lb))=lb(find(x0<lb));
   x0(find(x0>ub))=ub(find(x0>ub));
   depth=stackdepth(1,stacksize);
   stacksize=stacksize-1;
   percdone=round(100*(1-sum(0.5.^(stackdepth(1:(stacksize+1))-1))));

   % UPDATE FOR USER
   if ishandle(handleupdate) & strcmp(get(handleupdate,'Tag'),'update BNB GUI')
      t=toc;
      updatemsg={ ...
         sprintf('searched %3d %% of tree',percdone) ...
         sprintf('Z : %12.4e',z_incumbent) ...
         sprintf('t : %12.1f secs',t) ...
         sprintf('c : %12d cycles',c-1) ...
         sprintf('fail : %12d cycles',fail)};
      set(handleupdatemsg,'String',updatemsg);
      drawnow;
   else
      disp(sprintf('*** searched %3d %% of tree',percdone));
      disp(sprintf('*** Z : %12.4e',z_incumbent));
      disp(sprintf('*** t : %12.1f secs',t));
      disp(sprintf('*** c : %12d cycles',c-1));
      disp(sprintf('*** fail : %12d cycles',fail));
   end;

   % STEP 4 RELAXATION
   [x z convflag]=fmincon(fun,x0,A,B,Aeq,Beq,lb,ub,nonlcon,options,varargin{:});

   % STEP 5 FATHOMING
   K = find(xstatus==1 & lb~=ub);
   separation=1;
   if convflag<0 | (convflag==0 & settings(1))
      % FC 1
      separation=0;
      if show, disp('*** branch pruned'); end;
      if convflag==0
         fail=fail+1;
         if show, disp('*** not convergent'); end;
      elseif show, disp('*** not feasible'); end;
   elseif z>=z_incumbent & convflag>0
      % FC 2
      separation=0;
      if show
         disp('*** branch pruned');
         disp('*** ghosted');
      end;
   elseif all(abs(round(x(K))-x(K))<errx) & convflag>0
      % FC 3
      z_incumbent = z;
      x_incumbent = x;
      separation = 0;
      if show
         disp('*** branch pruned');
         disp('*** new best solution found');
      end;
   end;

   % STEP 6 SELECTION
   if separation == 1 & ~isempty(K)
      dzsep=-1;
      for i=1:size(K,1)
         dxsepc = abs(round(x(K(i)))-x(K(i)));
         if dxsepc>=errx | convflag==0
            xsepc = x;
            xsepc(K(i))=round(x(K(i)));
            dzsepc = abs(feval(fun,xsepc,varargin{:})-z);
            if dzsepc>dzsep
               dzsep=dzsepc;
               ixsep=K(i);
            end;
         end;
      end;

      % STEP 7 SEPARATION
      if xchoice(ixsep)==0
         % XCHOICE==0
         branch=1;
         domain=[lb(ixsep) ub(ixsep)];
         sepdepth=depth;
         while branch==1
            xboundary=(domain(1)+domain(2))/2;
            if x(ixsep)<xboundary
               domainA=[domain(1) floor(xboundary)];
               domainB=[floor(xboundary+1) domain(2)];
            else
               domainA=[floor(xboundary+1) domain(2)];
               domainB=[domain(1) floor(xboundary)];
            end;
            sepdepth=sepdepth+1;
            stacksize=stacksize+1;
            stackx0(:,stacksize)=x;
            stacklb(:,stacksize)=lb;
            stacklb(ixsep,stacksize)=domainB(1);
            stackub(:,stacksize)=ub;
            stackub(ixsep,stacksize)=domainB(2);
            stackdepth(1,stacksize)=sepdepth;
            if domainA(1)==domainA(2)
               stacksize=stacksize+1;
               stackx0(:,stacksize)=x;
               stacklb(:,stacksize)=lb;
               stacklb(ixsep,stacksize)=domainA(1);
               stackub(:,stacksize)=ub;
               stackub(ixsep,stacksize)=domainA(2);
               stackdepth(1,stacksize)=sepdepth;
               branch=0;
            else
               domain=domainA;
               branch=1;
            end;
         end;
      else
         % XCHOICE~=0
         L=find(xchoice==xchoice(ixsep));
         M=intersect(K,L);
         [dummy,N]=sort(x(M));
         part1=M(N(1:floor(size(N)/2)));
         part2=M(N(floor(size(N)/2)+1:size(N)));
         sepdepth=depth+1;
         stacksize=stacksize+1;
         stackx0(:,stacksize)=x;
         O = (1-sum(stackx0(part1,stacksize)))/size(part1,1);
         stackx0(part1,stacksize)=stackx0(part1,stacksize)+O;
         stacklb(:,stacksize)=lb;
         stackub(:,stacksize)=ub;
         stackub(part2,stacksize)=0;
         stackdepth(1,stacksize)=sepdepth;
         stacksize=stacksize+1;
         stackx0(:,stacksize)=x;
         O = (1-sum(stackx0(part2,stacksize)))/size(part2,1);
         stackx0(part2,stacksize)=stackx0(part2,stacksize)+O;
         stacklb(:,stacksize)=lb;
         stackub(:,stacksize)=ub;
         stackub(part1,stacksize)=0;
         stackdepth(1,stacksize)=sepdepth;
      end;
   elseif separation==1 & isempty(K)
      fail=fail+1;
      if show
         disp('*** branch pruned');
         disp('*** leaf not convergent');
      end;
   end;
end;

% STEP 8 OUTPUT
t=toc;
Z = z_incumbent;
X = x_incumbent;
errmsg='';
if ishandle(handleupdate)
   taghandleupdate=get(handleupdate,'Tag');
   if strcmp(taghandleupdate,'update BNB GUI')
      close(handleupdate);
   end;
end;
eval(['warning ',currentwarningstate]);


Section 3

% filename : gear_main.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the general constrained
%            non-linear optimization gear train design problem. The objective
%            is to minimize the weight of the gearbox subject to constraints
%            listed in Chapter 6. It calls gear_fun.m and gear_con.m

clc; clear all; close all;

global kr Cp kms Km Ko phi Ls2 Ls3 Sfe Cr tau_max Kv e Tin dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2 ks kb kt J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3

kr = 0.814;        % Bending reliability factor (99.0%)
Cp = 2300;         % Elastic coefficient
kms = 1.4;         % Mean stress factor
Km = 1.6;          % Mounting factor
Ko = 1.0;          % Overload factor
phi = pi*20/180;   % Pressure angle
Ls2 = 8.0;         % Shaft length (N = 2)
Ls3 = 4.0;         % Shaft length (N = 3)
Sfe = 190000;      % Surface fatigue strength
Cr = 1.0;          % Surface reliability factor (99.0%)
tau_max = 25000;   % Torsional stress limit - shaft
Kv = 2.0;          % Velocity factor
e = 0.1;           % Overall speed ratio
Tin = 120;         % Input torque

% Lumped Parameters
ks = (4*Cp^2*Kv*Ko*Km*Tin)/(cos(phi)*sin(phi)*Sfe^2*Cr^2);
kb = (2*Tin*Kv*Ko*Km)/(kr*kms);
kt = (16*Tin)/(pi*tau_max);

% x = [dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2]   % Design Vector
% x(1)=dp1    % Diameter of pinion of set 1
% x(2)=dg1    % Diameter of gear of set 1
% x(3)=b1     % Width of set 1
% x(4)=P1     % Diametral pitch of set 1
% x(5)=H1     % Brinell Hardness #
% x(6)=dp2    % Diameter of pinion of set 2
% x(7)=dg2    % Diameter of gear of set 2
% x(8)=b2     % Width of set 2
% x(9)=P2     % Diametral pitch of set 2
% x(10)=H2    % Brinell Hardness #
% x(11)=dp3   % Diameter of pinion of set 3
% x(12)=b3    % Width of set 3
% x(13)=P3    % Diametral pitch of set 3
% x(14)=H3    % Brinell Hardness #
% x(15)=ds1   % Shaft Diameter
% x(16)=ds2   % Shaft Diameter

% Initial Guess Vector
x0 = [0.8193 1.6 0.71 19.53 250 1.1404 2.25 0.73 14.03 250 1.3548 1.13 11.81 250 0.5 0.5];

disp('The starting design vector is :');
disp(x0);

% Evaluate the Objective Function at the beginning
f_ini = gear_fun(x0);
disp('The starting objective function is :');
disp(f_ini);

x = x0;
dg3 = x(1)*x(6)*x(11)/(x(2)*x(7)*e);
Ft1 = 2*Tin/x(1);
Ft2 = 2*Tin*x(2)/(x(6)*x(1));
Ft3 = (2*Tin*x(2)*x(7))/(x(11)*x(6)*x(1));

A = []; b = [];       % No linear inequality constraints
Aeq = []; beq = [];   % No linear equality constraints

lb = [0 0 0 0 200 0 0 0 0 200 0 0 0 200 0 0];
% Lower bound on design variables
ub = [10 10 20 25 500 10 10 20 25 500 10 20 25 500 10 10];
% Upper bound on design variables

options = optimset('Display','iter');   % Show progress after each iteration
options.MaxFunEvals=1e6;
options.MaxIter=1e3;

x = fmincon('gear_fun', x0, A, b, Aeq, beq, lb, ub, 'gear_con', options);

disp('The optimized design vector is :');
disp(x);

f_fin = gear_fun(x);   % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);

f0 = pi/4*((x(1)^2 + x(2)^2)*x(3) + (x(6)^2 + x(7)^2)*x(8) + ...
     (1 + ((x(1)*x(6))/(e*x(2)*x(7)))^2)*x(11)^2*x(12) + (x(15)^2 + x(16)^2)*Ls3);
f1 = ks*(x(1) + x(2))/(x(3)*x(1)^2*x(2));
f2 = ks*((x(6) + x(7))*x(2))/(x(8)*x(6)^2*x(7)*x(1));
f3 = ks*(((x(6)*x(1) + e*x(7)*x(2)))*x(7)*x(2))/(x(12)*(x(11)*x(6)*x(1))^2);
cli = (f1+f2+f3)/3;

disp('The optimal gearbox weight is :');
disp(f0);
disp('The optimal Surface Fatigue Life Factor is :');
disp(cli);

N1 = x(4)*x(1);
N2 = x(9)*x(6);
N3 = x(13)*x(11);
disp('The no. of teeth on pinion for stage 1 are :');
disp(N1);
disp('The no. of teeth on pinion for stage 2 are :');
disp(N2);
disp('The no. of teeth on pinion for stage 3 are :');
disp(N3);
% End


% filename : gear_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function

function f = gear_fun(x)

global e Ls2 Ls3 ks

% Weighting factors for the two objectives
a0 = 1;     % Gearbox Volume weighting factor
acli = 1;   % Surface Fatigue Life weighting factor

% f0 is the volume of the gearbox
f0 = pi/4*((x(1)^2 + x(2)^2)*x(3) + (x(6)^2 + x(7)^2)*x(8) + ...
     (1 + ((x(1)*x(6))/(e*x(2)*x(7)))^2)*x(11)^2*x(12) + (x(15)^2 + x(16)^2)*Ls3);

% f1, f2, f3 are Surface Fatigue Life Factors
f1 = ks*(x(1) + x(2))/(x(3)*x(1)^2*x(2));
f2 = ks*((x(6) + x(7))*x(2))/(x(8)*x(6)^2*x(7)*x(1));
f3 = ks*(((x(6)*x(1) + e*x(7)*x(2)))*x(7)*x(2))/(x(12)*(x(11)*x(6)*x(1))^2);

% Weighted sum of the objective functions
f = a0*f0 + acli*(f1 + f2 + f3);
% End


% filename : gear_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file lists the non-linear constraints

function [g, h] = gear_con(x)

global kb kt e J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3 phi

% Number of teeth on pinion at each stage
N1 = x(4)*x(1);
N2 = x(9)*x(6);
N3 = x(13)*x(11);

% Lewis Geometry Factor for Each Stage
J1 = 0.00000021756022*N1^3 - 0.00007902097902*N1^2 + 0.00793512043512*N1 + 0.22383333333333;
J2 = 0.00000021756022*N2^3 - 0.00007902097902*N2^2 + 0.00793512043512*N2 + 0.22383333333333;
J3 = 0.00000021756022*N3^3 - 0.00007902097902*N3^2 + 0.00793512043512*N3 + 0.22383333333333;

% Fatigue Strength Surface Factor
Cs1 = -0.00083333333333*x(5) + 0.93333333333333;
Cs2 = -0.00083333333333*x(10) + 0.93333333333333;
Cs3 = -0.00083333333333*x(14) + 0.93333333333333;

% Inequality Constraints
g(1) = kb*x(4) - x(3)*J1*x(1)*250*x(5)*Cs1;
g(2) = kb*x(2)*x(9) - x(8)*J2*x(6)*x(1)*250*x(10)*Cs2;
g(3) = kb*x(13)*x(7)*x(2) - x(12)*J3*x(11)*x(6)*x(1)*250*x(14)*Cs3;
g(4) = kt*x(2) - x(15)^3*x(1);
g(5) = kt*x(2)*x(7) - x(16)^3*x(1)*x(6);
g(6) = -x(3)*x(4) + 9;
g(7) = -x(8)*x(9) + 9;
g(8) = -x(12)*x(13) + 9;
g(9) = x(3)*x(4) - 14;
g(10) = x(8)*x(9) - 14;
g(11) = x(12)*x(13) - 14;
g(12) = x(4)*x(2) - (sin(phi))^2*x(4)^2*x(1)*(2*x(2) + x(1))/4 + 1;
g(13) = x(9)*x(7) - (sin(phi))^2*x(9)^2*x(6)*(2*x(7) + x(6))/4 + 1;
g(14) = x(13)*x(11)*x(6)*x(1) - (sin(phi))^2*x(13)^2*x(11)^2*(2*x(6)*x(1) + e*x(7)*x(2)) + e*x(7)*x(2);
g(15) = -x(4)*x(1) + 16;
g(16) = -x(9)*x(6) + 16;
g(17) = -x(13)*x(11) + 16;

% Equality Constraints
h = [];
% End


Section 4

% filename : gear_bnb.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file is the main file for solving the Compromise DSP and
%            Branch and Bound Formulation of the gear design problem. It
%            calls bnb.m which is the Branch and Bound program, subject to
%            constraints specified in gear_con.m and optimizes the
%            objective function in gear_fun.m

clc; clear all; close all;

global kr Cp kms Km Ko phi Ls2 Ls3 Sfe Cr tau_max Kv e Tin dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2 ks kb kt J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3

kr = 0.814;        % Bending reliability factor (99.0%)
Cp = 2300;         % Elastic coefficient
kms = 1.4;         % Mean stress factor
Km = 1.6;          % Mounting factor
Ko = 1.0;          % Overload factor
phi = pi*20/180;   % Pressure angle
Ls2 = 8.0;         % Shaft length (N = 2)
Ls3 = 4.0;         % Shaft length (N = 3)
Sfe = 190000;      % Surface fatigue strength
Cr = 1.0;          % Surface reliability factor (99.0%)
tau_max = 25000;   % Torsional stress limit - shaft
Kv = 2.0;          % Velocity factor
e = 0.1;           % Overall speed ratio
Tin = 120;         % Input torque

% Lumped Parameters
ks = (4*Cp^2*Kv*Ko*Km*Tin)/(cos(phi)*sin(phi)*Sfe^2*Cr^2);
kb = (2*Tin*Kv*Ko*Km)/(kr*kms);
kt = (16*Tin)/(pi*tau_max);

% x = [dp1 dg1 b1 P1 H1 dp2 dg2 b2 P2 H2 dp3 b3 P3 H3 ds1 ds2]   % Design Vector
% x(1)=dp1    % Diameter of pinion of set 1
% x(2)=dg1    % Diameter of gear of set 1
% x(3)=b1     % Width of set 1
% x(4)=P1     % Diametral pitch of set 1
% x(5)=H1     % Brinell Hardness #
% x(6)=dp2    % Diameter of pinion of set 2
% x(7)=dg2    % Diameter of gear of set 2
% x(8)=b2     % Width of set 2
% x(9)=P2     % Diametral pitch of set 2
% x(10)=H2    % Brinell Hardness #
% x(11)=dp3   % Diameter of pinion of set 3
% x(12)=b3    % Width of set 3
% x(13)=P3    % Diametral pitch of set 3
% x(14)=H3    % Brinell Hardness #
% x(15)=ds1   % Shaft Diameter
% x(16)=ds2   % Shaft Diameter

x0 = [1.0369; 1.7914; 0.5832; 15.4311; 305.8953; 0.9982; 2.3598; 0.8734; 16.0287; 500.0000; 1.5407; 0.8667; 10.3846; 500.0000; 0.3483; 0.4639];   % Initial Guess Vector

% x_status should be a column vector such that:
% x_status(i) = 0 if x(i) is continuous,
% x_status(i) = 1 if x(i) is integer,
x_status = [ 0; 0; 0; 1; 0; 0; 0; 0; 1; 0; 0; 0; 1; 0; 0; 0 ];

disp('The starting design vector is :');
disp(x0);

f_ini = gear_fun(x0);   % Evaluate the Objective Function at the beginning
disp('The starting objective function is :');
disp(f_ini);

x = x0;
dg3 = x(1)*x(6)*x(11)/(x(2)*x(7)*e);
Ft1 = 2*Tin/x(1);
Ft2 = 2*Tin*x(2)/(x(6)*x(1));
Ft3 = (2*Tin*x(2)*x(7))/(x(11)*x(6)*x(1));

A = []; b = [];       % No linear inequality constraints
Aeq = []; beq = [];   % No linear equality constraints

lb = [0; 0; 0; 0; 200; 0; 0; 0; 0; 200; 0; 0; 0; 200; 0; 0];
% Lower bound on design variables
ub = [10; 10; 20; 25; 500; 10; 10; 20; 25; 500; 10; 20; 25; 500; 10; 10];
% Upper bound on design variables

options.MaxFunEvals=1e6;

[errmsg,Z,X,t,c,fail] = bnb('gear_fun',x0,x_status,lb,ub,A,b,Aeq,beq,'gear_con');
disp(errmsg);

disp('The optimized design vector is :');
disp(X);

f_fin = gear_fun(X);   % Evaluate the Objective Function at the end
disp('The optimal objective function value is :');
disp(f_fin);

disp('The number of Branch and Bound Cycles are :');
disp(c);
disp('The time Branch and Bound Algorithm ran is :');
disp(t);

f0 = pi/4*((X(1)^2 + X(2)^2)*X(3) + (X(6)^2 + X(7)^2)*X(8) + ...
     (1 + ((X(1)*X(6))/(e*X(2)*X(7)))^2)*X(11)^2*X(12) + (X(15)^2 + X(16)^2)*Ls3);
f1 = ks*(X(1) + X(2))/(X(3)*X(1)^2*X(2));
f2 = ks*((X(6) + X(7))*X(2))/(X(8)*X(6)^2*X(7)*X(1));
f3 = ks*(((X(6)*X(1) + e*X(7)*X(2)))*X(7)*X(2))/(X(12)*(X(11)*X(6)*X(1))^2);
cli = (f1+f2+f3)/3;

disp('The optimal gearbox weight is :');
disp(f0);
disp('The optimal Surface Fatigue Life Factor is :');
disp(cli);

N1 = X(4)*X(1);
N2 = X(9)*X(6);
N3 = X(13)*X(11);
disp('The no. of teeth on pinion for stage 1 are :');
disp(N1);
disp('The no. of teeth on pinion for stage 2 are :');
disp(N2);
disp('The no. of teeth on pinion for stage 3 are :');
disp(N3);
% End


% filename : gear_fun.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file calculates the objective function

function f = gear_fun(x)

global e Ls2 Ls3 ks

% Weighting factors for the two objectives
a0 = 1;     %Gearbox Volume weighting factor
acli = 1;   %Surface Fatigue Life weighting factor

% f0 is the volume of the gearbox
f0 = pi/4*((x(1)^2 + x(2)^2)*x(3) + (x(6)^2 + x(7)^2)*x(8) + (1 + ((x(1)*x(6))/(e*x(2)*x(7)))^2)*x(11)^2*x(12) + (x(15)^2 + x(16)^2)*Ls3);

% f1, f2, f3 are Surface Fatigue Life Factors
f1 = ks*(x(1) + x(2))/(x(3)*x(1)^2*x(2));
f2 = ks*((x(6) + x(7))*x(2))/(x(8)*x(6)^2*x(7)*x(1));
f3 = ks*((x(6)*x(1) + e*x(7)*x(2))*x(7)*x(2))/(x(12)*(x(11)*x(6)*x(1))^2);

% Weighted sum of the objective functions
f = a0*f0 + acli*(f1 + f2 + f3);
% End
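The weighted-sum scalarization performed at the end of gear_fun.m can be illustrated generically. The following Python sketch is not part of the thesis code; the toy objectives and weights are hypothetical stand-ins showing how several objective callables collapse to a single scalar for the optimizer, as f = a0*f0 + acli*(f1 + f2 + f3) does above.

```python
# Illustrative sketch of weighted-sum scalarization (hypothetical objectives).

def weighted_sum(objectives, weights):
    """Combine several objective callables into one scalar objective."""
    def scalarized(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))
    return scalarized

# Toy volume-like and fatigue-like objectives of a single variable.
volume = lambda x: x ** 2
fatigue = lambda x: 1.0 / x

f = weighted_sum([volume, fatigue], [1.0, 1.0])  # both weights 1, as in the thesis
print(f(2.0))  # 2.0**2 + 1/2.0 = 4.5
```

A drawback of this formulation, implicit in the code, is that the relative magnitudes of the objectives determine the trade-off unless the weights are scaled accordingly.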


% filename : gear_con.m
% Written by : Dinar Deshmukh
% Date : 11 July 2002
% Comments : This file lists the non-linear constraints

function [g, h] = gear_con(x)

global kb kt e J1 J2 J3 Cs1 Cs2 Cs3 N1 N2 N3 phi

% Number of teeth on pinion at each stage
N1 = x(4)*x(1);
N2 = x(9)*x(6);
N3 = x(13)*x(11);

% Lewis Geometry Factor for Each Stage
J1 = 0.00000021756022*N1^3 - 0.00007902097902*N1^2 + 0.00793512043512*N1 + 0.22383333333333;
J2 = 0.00000021756022*N2^3 - 0.00007902097902*N2^2 + 0.00793512043512*N2 + 0.22383333333333;
J3 = 0.00000021756022*N3^3 - 0.00007902097902*N3^2 + 0.00793512043512*N3 + 0.22383333333333;

% Fatigue Strength Surface Factor
Cs1 = -0.00083333333333*x(5) + 0.93333333333333;
Cs2 = -0.00083333333333*x(10) + 0.93333333333333;
Cs3 = -0.00083333333333*x(14) + 0.93333333333333;

% Inequality Constraints
g(1,1) = kb*x(4)/(x(3)*J1*x(1)*250*x(5)*Cs1) - 1;
g(2,1) = kb*x(2)*x(9)/(x(8)*J2*x(6)*x(1)*250*x(10)*Cs2) - 1;
g(3,1) = kb*x(13)*x(7)*x(2)/(x(12)*J3*x(11)*x(6)*x(1)*250*x(14)*Cs3) - 1;
g(4,1) = kt*x(2)/(x(15)^3*x(1)) - 1;
g(5,1) = kt*x(2)*x(7)/(x(16)^3*x(1)*x(6)) - 1;
g(6,1) = 9/(x(3)*x(4)) - 1;
g(7,1) = 9/(x(8)*x(9)) - 1;
g(8,1) = 9/(x(12)*x(13)) - 1;
g(9,1) = x(3)*x(4)/14 - 1;
g(10,1) = x(8)*x(9)/14 - 1;
g(11,1) = x(12)*x(13)/14 - 1;
g(12,1) = x(4)*x(2)/((sin(phi))^2*x(4)^2*x(1)*(2*x(2) + x(1))/4 - 1) - 1;
g(13,1) = x(9)*x(7)/((sin(phi))^2*x(9)^2*x(6)*(2*x(7) + x(6))/4 - 1) - 1;
g(14,1) = x(13)*x(11)*x(6)*x(1)/((sin(phi))^2*x(13)^2*x(11)^2*(2*x(6)*x(1) + e*x(7)*x(2)) - e*x(7)*x(2)) - 1;
g(15,1) = 16/(x(4)*x(1)) - 1;
g(16,1) = 16/(x(9)*x(6)) - 1;
g(17,1) = 16/(x(13)*x(11)) - 1;


% Equality Constraints
h = [];
% End
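gear_con.m follows the fmincon sign convention: each inequality constraint is returned as an entry of g with g(i) <= 0 meaning feasible, and equality constraints go in h (empty here). A generic feasibility check under that convention can be sketched as follows; this Python fragment is illustrative only, with hypothetical toy constraints rather than the gear constraints above.

```python
# Feasibility check under the fmincon-style convention g(x) <= 0, h(x) == 0.
# The constraint callables below are hypothetical examples.

def is_feasible(x, ineq_cons, eq_cons=(), tol=1e-8):
    """Return True if all g(x) <= 0 and all h(x) == 0 (within tol)."""
    return (all(g(x) <= tol for g in ineq_cons)
            and all(abs(h(x)) <= tol for h in eq_cons))

# Toy constraints: x >= 1 written as 1 - x <= 0, and x <= 4 as x - 4 <= 0.
cons = [lambda x: 1 - x, lambda x: x - 4]
print(is_feasible(2.0, cons))   # True
print(is_feasible(5.0, cons))   # False
```

Writing every constraint as "expression/limit - 1", as the thesis code does, has the side benefit of normalizing constraint magnitudes for the solver.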


function [errmsg,Z,X,t,c,fail] = bnb(fun,x0,xstat,xl,xu,a,b,aeq,beq,nonlc,setts,opts,varargin)
% BNB Finds the constrained minimum of a function of several possibly integer variables.
% Usage: [errmsg,Z,X,t,c,fail] = ...
%     bnb(fun,x0,x_status,lb,ub,A,B,Aeq,Beq,nonlcon,settings,options,P1,P2,...)
%
% BNB solves problems of the form:
%     Minimize F(x) subject to:  lb <= x <= ub
%                                A*x <= B,   Aeq*x = Beq
%                                C(x) <= 0,  Ceq(x) = 0
%     x(i) is continuous for xstatus(i) = 0
%     x(i) is integer    for xstatus(i) = 1
%     x(i) is fixed      for xstatus(i) = 2
%
% fun is the function to be minimized and should return a scalar: F(x) = feval(fun,x).
% x0 is the starting point for x; x0 should be a column vector.
% x_status is a column vector describing the status of every variable x(i).
% lb and ub are column vectors with lower and upper bounds for x.
% A and Aeq are matrices for the linear constraints.
% B and Beq are column vectors for the linear constraints.
% nonlcon is the function for the non-linear constraints:
%     [C(x);Ceq(x)] = feval(nonlcon,x). Both C(x) and Ceq(x) should be column vectors.
%
% errmsg is a string containing an error message if BNB found an error in the input.
% Z is the scalar result of the minimization; X holds the values of the accompanying variables.
% t is the time elapsed while the algorithm BNB has run, and
% c is the number of BNB cycles.
% fail is the number of nonconvergent leaf sub-problems.
%
% settings is a row vector with settings for BNB:
%     settings(1) (default 0): if 1, do not branch a sub-problem that does not
%     converge; instead raise fail by one. Normally BNB will always branch a
%     nonconvergent sub-problem so it can try again to find a solution.
%     A sub-problem that is a leaf of the branch-and-bound tree cannot be branched.
%     If such a problem does not converge, it is considered infeasible and fail is
%     raised by one.
global maxSQPiter;

% STEP 0 CHECKING INPUT
Z = []; X = []; t = 0; c = 0; fail = 0;
if nargin < 2, errmsg = 'BNB needs at least 2 input arguments.'; return; end;
if isempty(fun), errmsg = 'No fun found.'; return;
elseif ~ischar(fun), errmsg = 'fun must be a string.'; return; end;
if isempty(x0), errmsg = 'No x0 found.'; return;
elseif ~isnumeric(x0) | ~isreal(x0) | size(x0,2) > 1
    errmsg = 'x0 must be a real column vector.'; return;


end;

xstatus = zeros(size(x0));
if nargin > 2 & ~isempty(xstat)
    if isnumeric(xstat) & isreal(xstat) & all(size(xstat) <= size(x0))
        if all(xstat == round(xstat) & 0 <= xstat & xstat <= 2)
            xstatus(1:size(xstat)) = xstat;
        else errmsg = 'xstatus must consist of the integers 0, 1 and 2.'; return; end;
    else errmsg = 'xstatus must be a real column vector the same size as x0.'; return; end;
end;

lb = zeros(size(x0));
lb(find(xstatus == 0)) = -inf;
if nargin > 3 & ~isempty(xl)
    if isnumeric(xl) & isreal(xl) & all(size(xl) <= size(x0))
        lb(1:size(xl,1)) = xl;
    else errmsg = 'lb must be a real column vector the same size as x0.'; return; end;
end;
if any(x0 < lb), errmsg = 'x0 must be in the range lb <= x0.'; return;
elseif any(xstatus == 1 & (~isfinite(lb) | lb ~= round(lb)))
    errmsg = 'lb(i) must be an integer if x(i) is an integer variable.'; return;
end;
lb(find(xstatus == 2)) = x0(find(xstatus == 2));

ub = ones(size(x0));
ub(find(xstatus == 0)) = inf;
if nargin > 4 & ~isempty(xu)
    if isnumeric(xu) & isreal(xu) & all(size(xu) <= size(x0))
        ub(1:size(xu,1)) = xu;
    else errmsg = 'ub must be a real column vector the same size as x0.'; return; end;
end;
if any(x0 > ub), errmsg = 'x0 must be in the range x0 <= ub.'; return;
elseif any(xstatus == 1 & (~isfinite(ub) | ub ~= round(ub)))
    errmsg = 'ub(i) must be an integer if x(i) is an integer variable.'; return;
end;
ub(find(xstatus == 2)) = x0(find(xstatus == 2));

A = [];
if nargin > 5 & ~isempty(a)
    if isnumeric(a) & isreal(a) & size(a,2) == size(x0,1), A = a;
    else errmsg = 'Matrix A not correct.'; return; end;
end;
B = [];
if nargin > 6 & ~isempty(b)
    if isnumeric(b) & isreal(b) & all(size(b) == [size(A,1) 1]), B = b;
    else errmsg = 'Column vector B not correct.'; return; end;
end;
if isempty(B) & ~isempty(A), B = zeros(size(A,1),1); end;
Aeq = [];
if nargin > 7 & ~isempty(aeq)


    if isnumeric(aeq) & isreal(aeq) & size(aeq,2) == size(x0,1), Aeq = aeq;
    else errmsg = 'Matrix Aeq not correct.'; return; end;
end;
Beq = [];
if nargin > 8 & ~isempty(beq)
    if isnumeric(beq) & isreal(beq) & all(size(beq) == [size(Aeq,1) 1]), Beq = beq;
    else errmsg = 'Column vector Beq not correct.'; return; end;
end;
if isempty(Beq) & ~isempty(Aeq), Beq = zeros(size(Aeq,1),1); end;

nonlcon = '';
if nargin > 9 & ~isempty(nonlc)
    if ischar(nonlc), nonlcon = nonlc;
    else errmsg = 'nonlcon must be a string.'; return; end;
end;

settings = [0 0];
if nargin > 10 & ~isempty(setts)
    if isnumeric(setts) & isreal(setts) & all(size(setts) <= size(settings))
        settings(setts ~= 0) = setts(setts ~= 0);
    else errmsg = 'settings should be a row vector of length 1 or 2.'; return; end;
end;

maxSQPiter = 1000;
% 10/17/01 KMS: BEGIN
% options = optimset('fmincon');
options = optimset(optimset('fmincon'), 'MaxSQPIter', 1000);
% 10/17/01 KMS: END
if nargin > 11 & ~isempty(opts)
    if isstruct(opts)
        if isfield(opts,'MaxSQPIter')
            if isnumeric(opts.MaxSQPIter) & isreal(opts.MaxSQPIter) & ...
               all(size(opts.MaxSQPIter) == 1) & opts.MaxSQPIter > 0 & ...
               round(opts.MaxSQPIter) == opts.MaxSQPIter
                maxSQPiter = opts.MaxSQPIter;
                opts = rmfield(opts,'MaxSQPIter');
            else errmsg = 'options.MaxSQPIter must be an integer > 0.'; return; end;
        end;
        options = optimset(options,opts);
    else errmsg = 'options must be a structure.'; return; end;
end;

evalreturn = 0;
eval(['z=',fun,'(x0,varargin{:});'],'errmsg=''fun caused error.''; evalreturn=1;');
if evalreturn == 1, return; end;


if ~isempty(nonlcon)
    eval(['[C, Ceq]=',nonlcon,'(x0,varargin{:});'],'errmsg=''nonlcon caused error.''; evalreturn=1;');
    if evalreturn == 1, return; end;
    if size(C,2) > 1 | size(Ceq,2) > 1, errmsg = 'C and Ceq must be column vectors.'; return; end;
end;

% STEP 1 INITIALIZATION
currentwarningstate = warning;
warning off;
tic;
lx = size(x0,1);
z_incumbent = inf;
x_incumbent = inf*ones(size(x0));
I = ceil(sum(log2(ub(find(xstatus==1)) - lb(find(xstatus==1)) + 1)) + size(find(xstatus==1),1) + 1);
stackx0 = zeros(lx,I);
stackx0(:,1) = x0;
stacklb = zeros(lx,I);
stacklb(:,1) = lb;
stackub = zeros(lx,I);
stackub(:,1) = ub;
stackdepth = zeros(1,I);
stackdepth(1,1) = 1;
stacksize = 1;
xchoice = zeros(size(x0));
if ~isempty(Aeq)
    j = 0;
    for i = 1:size(Aeq,1)
        if Beq(i) == 1 & all(Aeq(i,:) == 0 | Aeq(i,:) == 1)
            J = find(Aeq(i,:) == 1);
            if all(xstatus(J) ~= 0 & xchoice(J) == 0 & lb(J) == 0 & ub(J) == 1)
                if all(xstatus(J) ~= 2) | all(x0(J(find(xstatus(J) == 2))) == 0)
                    j = j+1;
                    xchoice(J) = j;
                    if sum(x0(J)) == 0, errmsg = 'x0 not correct.'; return; end;
                end;
            end;
        end;
    end;
end;

errx = optimget(options,'TolX');
handleupdate = [];
if ishandle(settings(2))
    taghandlemain = get(settings(2),'Tag');
    if strcmp(taghandlemain,'main BNB GUI')
        handleupdate = guiupd;
        handleupdatemsg = findobj(handleupdate,'Tag','updatemessage');


        bnbguicb('hide main');
        drawnow;
    end;
end;
optionsdisplay = getfield(options,'Display');
if strcmp(optionsdisplay,'iter') | strcmp(optionsdisplay,'final'), show = 1;
else show = 0; end;

% STEP 2 TERMINATION
while stacksize > 0
    c = c+1;

    % STEP 3 LOADING OF CSP
    x0 = stackx0(:,stacksize);
    lb = stacklb(:,stacksize);
    ub = stackub(:,stacksize);
    x0(find(x0 < lb)) = lb(find(x0 < lb));
    x0(find(x0 > ub)) = ub(find(x0 > ub));
    depth = stackdepth(1,stacksize);
    stacksize = stacksize-1;
    percdone = round(100*(1 - sum(0.5.^(stackdepth(1:(stacksize+1)) - 1))));

    % UPDATE FOR USER
    if ishandle(handleupdate) & strcmp(get(handleupdate,'Tag'),'update BNB GUI')
        t = toc;
        updatemsg = { ...
            sprintf('searched %3d %% of tree',percdone) ...
            sprintf('Z    : %12.4e',z_incumbent) ...
            sprintf('t    : %12.1f secs',t) ...
            sprintf('c    : %12d cycles',c-1) ...
            sprintf('fail : %12d cycles',fail)};
        set(handleupdatemsg,'String',updatemsg);
        drawnow;
    else
        disp(sprintf('*** searched %3d %% of tree',percdone));
        disp(sprintf('*** Z    : %12.4e',z_incumbent));
        disp(sprintf('*** t    : %12.1f secs',t));
        disp(sprintf('*** c    : %12d cycles',c-1));
        disp(sprintf('*** fail : %12d cycles',fail));
    end;

    % STEP 4 RELAXATION
    [x z convflag] = fmincon(fun,x0,A,B,Aeq,Beq,lb,ub,nonlcon,options,varargin{:});

    % STEP 5 FATHOMING


    K = find(xstatus == 1 & lb ~= ub);
    separation = 1;
    if convflag < 0 | (convflag == 0 & settings(1))
        % FC 1
        separation = 0;
        if show, disp('*** branch pruned'); end;
        if convflag == 0
            fail = fail+1;
            if show, disp('*** not convergent'); end;
        elseif show, disp('*** not feasible'); end;
    elseif z >= z_incumbent & convflag > 0
        % FC 2
        separation = 0;
        if show
            disp('*** branch pruned');
            disp('*** ghosted');
        end;
    elseif all(abs(round(x(K)) - x(K)) < errx) & convflag > 0
        % FC 3
        z_incumbent = z;
        x_incumbent = x;
        separation = 0;
        if show
            disp('*** branch pruned');
            disp('*** new best solution found');
        end;
    end;

    % STEP 6 SELECTION
    if separation == 1 & ~isempty(K)
        dzsep = -1;
        for i = 1:size(K,1)
            dxsepc = abs(round(x(K(i))) - x(K(i)));
            if dxsepc >= errx | convflag == 0
                xsepc = x;
                xsepc(K(i)) = round(x(K(i)));
                dzsepc = abs(feval(fun,xsepc,varargin{:}) - z);
                if dzsepc > dzsep
                    dzsep = dzsepc;
                    ixsep = K(i);
                end;
            end;
        end;

        % STEP 7 SEPARATION


        if xchoice(ixsep) == 0
            % XCHOICE == 0
            branch = 1;
            domain = [lb(ixsep) ub(ixsep)];
            sepdepth = depth;
            while branch == 1
                xboundary = (domain(1) + domain(2))/2;
                if x(ixsep) < xboundary
                    domainA = [domain(1) floor(xboundary)];
                    domainB = [floor(xboundary + 1) domain(2)];
                else
                    domainA = [floor(xboundary + 1) domain(2)];
                    domainB = [domain(1) floor(xboundary)];
                end;
                sepdepth = sepdepth + 1;
                stacksize = stacksize + 1;
                stackx0(:,stacksize) = x;
                stacklb(:,stacksize) = lb;
                stacklb(ixsep,stacksize) = domainB(1);
                stackub(:,stacksize) = ub;
                stackub(ixsep,stacksize) = domainB(2);
                stackdepth(1,stacksize) = sepdepth;
                if domainA(1) == domainA(2)
                    stacksize = stacksize + 1;
                    stackx0(:,stacksize) = x;
                    stacklb(:,stacksize) = lb;
                    stacklb(ixsep,stacksize) = domainA(1);
                    stackub(:,stacksize) = ub;
                    stackub(ixsep,stacksize) = domainA(2);
                    stackdepth(1,stacksize) = sepdepth;
                    branch = 0;
                else
                    domain = domainA;
                    branch = 1;
                end;
            end;
        else
            % XCHOICE ~= 0
            L = find(xchoice == xchoice(ixsep));
            M = intersect(K,L);
            [dummy,N] = sort(x(M));
            part1 = M(N(1:floor(size(N)/2)));
            part2 = M(N(floor(size(N)/2)+1:size(N)));
            sepdepth = depth + 1;
            stacksize = stacksize + 1;


            stackx0(:,stacksize) = x;
            O = (1 - sum(stackx0(part1,stacksize)))/size(part1,1);
            stackx0(part1,stacksize) = stackx0(part1,stacksize) + O;
            stacklb(:,stacksize) = lb;
            stackub(:,stacksize) = ub;
            stackub(part2,stacksize) = 0;
            stackdepth(1,stacksize) = sepdepth;
            stacksize = stacksize + 1;
            stackx0(:,stacksize) = x;
            O = (1 - sum(stackx0(part2,stacksize)))/size(part2,1);
            stackx0(part2,stacksize) = stackx0(part2,stacksize) + O;
            stacklb(:,stacksize) = lb;
            stackub(:,stacksize) = ub;
            stackub(part1,stacksize) = 0;
            stackdepth(1,stacksize) = sepdepth;
        end;
    elseif separation == 1 & isempty(K)
        fail = fail + 1;
        if show
            disp('*** branch pruned');
            disp('*** leaf not convergent');
        end;
    end;
end;

% STEP 8 OUTPUT
t = toc;
Z = z_incumbent;
X = x_incumbent;
errmsg = '';
if ishandle(handleupdate)
    taghandleupdate = get(handleupdate,'Tag');
    if strcmp(taghandleupdate,'update BNB GUI')
        close(handleupdate);
    end;
end;
eval(['warning ',currentwarningstate]);
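The control flow of BNB's main loop (STEP 2 through STEP 7) can be summarized compactly. The following Python fragment is a simplified illustration of depth-first branch and bound on one integer variable, not a translation of bnb.m: the relaxation is solved in closed form for a toy objective (x - 2.7)^2, subproblems are fathomed by bound or by integrality of the relaxed solution, and otherwise the domain is split around the fractional value. All names and the objective are hypothetical.

```python
import math

def relax(lo, hi, target=2.7):
    """Continuous relaxation: minimize (x - target)^2 on [lo, hi] in closed form."""
    x = min(max(target, lo), hi)
    return x, (x - target) ** 2

def branch_and_bound(lo=0, hi=5):
    best_z, best_x = math.inf, None
    stack = [(lo, hi)]                        # subproblems as bound intervals
    while stack:                              # cf. STEP 2 TERMINATION
        lo, hi = stack.pop()
        if lo > hi:
            continue                          # infeasible subproblem
        x, z = relax(lo, hi)                  # cf. STEP 4 RELAXATION
        if z >= best_z:
            continue                          # fathom: bound no better than incumbent
        if abs(x - round(x)) < 1e-9:
            best_z, best_x = z, round(x)      # fathom: relaxation already integral
            continue
        stack.append((lo, math.floor(x)))     # branch: x <= floor(x*)
        stack.append((math.floor(x) + 1, hi)) # branch: x >= floor(x*) + 1
    return best_x, best_z

print(branch_and_bound())
```

With these toy numbers the search settles on x = 3, the integer nearest the unconstrained minimizer 2.7. bnb.m follows the same skeleton but solves each relaxation with fmincon and branches on the integer variable whose rounding changes the objective most.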